Columns: id, title, abstract, authors, published_date, link, markdown
2308.13125
Exploring the Use of Generative AI in the Search for Extraterrestrial Intelligence (SETI)
The search for extraterrestrial intelligence (SETI) is a field that has long been within the domain of traditional signal processing techniques. However, with the advent of powerful generative AI models, such as GPT-3, we are now able to explore new ways of analyzing SETI data and potentially uncover previously hidden signals. In this work, we present a novel approach for using generative AI to analyze SETI data, with a focus on data processing and machine learning techniques. Our proposed method uses a combination of deep learning and generative models to analyze radio telescope data, with the goal of identifying potential signals from extraterrestrial civilizations. We also discuss the challenges and limitations of using generative AI in SETI, as well as potential future directions for this research. Our findings suggest that generative AI has the potential to significantly improve the efficiency and effectiveness of the search for extraterrestrial intelligence, and we encourage further exploration of this approach in the SETI community. (Disclosure: For the purpose of demonstration, the abstract and title were generated by ChatGPT and slightly modified by the lead author.)
John Hoang, Zihe Zheng, Aiden Zelakiewicz, Peter Xiangyuan Ma, Bryan Brzycki
2023-08-25T00:36:37Z
http://arxiv.org/abs/2308.13125v1
# Exploring the Use of Generative AI in the Search for Extraterrestrial Intelligence (SETI) ###### Abstract The search for extraterrestrial intelligence (SETI) is a field that has long been within the domain of traditional signal processing techniques. However, with the advent of powerful generative AI models, such as GPT-3, we are now able to explore new ways of analyzing SETI data and potentially uncover previously hidden signals. In this work, we present a novel approach for using generative AI to analyze SETI data, with a focus on data processing and machine learning techniques. Our proposed method uses a combination of deep learning and generative models to analyze radio telescope data, with the goal of identifying potential signals from extraterrestrial civilizations. We also discuss the challenges and limitations of using generative AI in SETI, as well as potential future directions for this research. Our findings suggest that generative AI has the potential to significantly improve the efficiency and effectiveness of the search for extraterrestrial intelligence, and we encourage further exploration of this approach in the SETI community. (**Disclosure**: For the purpose of demonstration, the abstract and title were generated by ChatGPT and slightly modified by the lead author.) ## 1 Introduction The Breakthrough Listen project has been searching for technosignatures in our universe using powerful radio telescopes around the world, including the Green Bank Telescope, Parkes Telescope, Allen Telescope Array, and MeerKAT. The most popular technique in radio SETI involves looking for narrow-band signals in the time-frequency spectrogram data (often referred to as dynamic spectra or waterfall plots in existing literature). In recent years, machine learning (ML) algorithms have been employed to classify image-like spectrograms [1], and setigen, an open-source Python library, was created to synthesize mock labelled radio SETI training data sets [2]. Since setigen is meant to be a general-purpose heuristic framework, there is room for improvement if one wishes to speed up data generation in specialized cases. The method used in this work is the Generative Adversarial Network (GAN) [3]. A GAN consists of two competing deep neural networks: the generator network and the discriminator network. The generator generates images from random noise, and the generated images are fed into the discriminator along with real images. The discriminator then classifies the images as either real or fake, and both the generator and discriminator models are updated according to the result. At equilibrium, the generator will produce images that look similar to the training data and, as a consequence, the discriminator will no longer be able to distinguish between generated and real images. The output of the discriminator indicates whether an image is similar to the training data, which can be useful in determining whether the image is normal or anomalous. Therefore, a GAN can also be used to detect outliers. In astronomy, GANs have been used to simulate galaxy images [4] and gamma-ray Cherenkov air-shower signals [5] and to detect outliers [6], to name but a few applications.
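The adversarial update described above can be summarized in a short sketch. The following is a minimal illustration only, assuming a PyTorch-style setup; the `generator` and `discriminator` modules, the optimizers, and all hyperparameters are placeholders rather than the architecture actually trained in this work.

```python
import torch
import torch.nn as nn

def train_step(generator, discriminator, real_plots, g_opt, d_opt, latent_dim=100):
    """One adversarial training step (illustrative sketch only).

    `generator` maps latent noise to waterfall plots; `discriminator` outputs
    the probability that an input plot is real (a sigmoid output is assumed).
    """
    bce = nn.BCELoss()
    batch = real_plots.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # Discriminator update: classify real plots as real and generated plots as fake.
    z = torch.randn(batch, latent_dim)
    fake_plots = generator(z).detach()
    d_loss = bce(discriminator(real_plots), real_labels) + \
             bce(discriminator(fake_plots), fake_labels)
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator label generated plots as real.
    z = torch.randn(batch, latent_dim)
    g_loss = bce(discriminator(generator(z)), real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

At the equilibrium described above, the discriminator loss stalls near the value corresponding to a discriminator that can no longer separate real from generated plots.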
GANs and their family of related generative frameworks, such as the autoencoder, are part of what is colloquially known as DeepFake, a portmanteau of deep learning and fake. ## 2 Experiments We employ the setigen software to generate waterfall plots used for training. Each generated set includes 15000 waterfall plots, each having 128 pixels in time and 128 pixels in frequency. Background noise is randomly chosen from either Gaussian noise or Chi-squared noise, with mean equal to 10 and standard deviation (for Gaussian noise only) equal to 1. Injected narrow-band signals are specified by the starting point, drift rate, and width in each data set. The widths of the signals all range from 50 to 60 pixels. Random walk signals are injected in some of the data sets to simulate radio-frequency interference (RFI), which is frequently seen in real observations. The 4 data sets used in the experiments are shown in Fig.1. ### Simple drifting signals We first test the GAN's ability to generate waterfall plots containing narrow-band signals with a single drift rate. As seen from Fig.2, the GAN successfully reproduces this data set after training. ### Simple drifting signals with varying drift rates We then expand the variety of the data by changing the drift rates and starting points of signals in the waterfall plots, which results in the 6-type data set. We train the same vanilla GAN network architecture on the 6-type data set shown in Fig.1 for 1000 epochs. However, the network fails to produce realistic waterfall plots. The conditional GAN [7] is then selected as our next candidate architecture among several GAN variants since it allows the network to generate images conditioned on the class of the data. The conditional GAN achieves this by adding labels to the data according to their drift rates, which are fed into the network along with the plots themselves. We labeled the 6-type data set with 0, 1, 2, 3, 4, 5 according to the 6 different drift rates, and trained the conditional GAN on the data set for 1000 epochs. The generator from the conditional GAN is able to generate waterfall plots with a single signal at various starting points in each class. The results, in comparison with the traditional GAN, are shown in Fig.3. ### Cadence signals Due to the prevalence of radio frequency interference, a typical radio SETI observation consists of pointing the telescope alternately at ON-OFF sky locations, and a potential narrow-band ETI signal candidate should only show up in the ON observations. To simulate this, three horizontal noisy bands of size \(21\times 128\), \(21\times 128\), and \(23\times 128\) pixels are added to each waterfall plot to mimic the ON-OFF cadence in real observations. Successful results are shown in Fig.4. ### Cadence signals with RFI We finally test our generator on its resistance to RFI in the dynamic spectra. We trained our conditional GAN model on the Cadence-RFI data set, which contains injected RFI noise. The result is shown in Fig.5. The generator ignores the presence of RFI and does not reproduce it in the OFF observations. In addition, the generator also avoids a common failure mode called mode collapse, whereby a GAN learns to output only one particularly plausible output and nothing else.
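The label-conditioning mechanism used above can be sketched as follows. This is a minimal illustration assuming a PyTorch-style implementation; the fully connected layer sizes are placeholders, not the network trained in this work.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 6     # six drift-rate labels, as in the 6-type data set
LATENT_DIM = 100    # illustrative latent dimension

class ConditionalGenerator(nn.Module):
    """Minimal conditional generator: latent noise + drift-rate label -> 128x128 plot."""
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + NUM_CLASSES, 512),
            nn.LeakyReLU(0.2),
            nn.Linear(512, 128 * 128),
            nn.Tanh(),  # pixel values scaled to [-1, 1]
        )

    def forward(self, z, labels):
        # Concatenate the latent vector with the embedded class label so that the
        # generated waterfall plot is conditioned on the requested drift rate.
        x = torch.cat([z, self.label_emb(labels)], dim=1)
        return self.net(x).view(-1, 1, 128, 128)

# Usage: request one plot for each of the six drift-rate classes.
gen = ConditionalGenerator()
z = torch.randn(NUM_CLASSES, LATENT_DIM)
labels = torch.arange(NUM_CLASSES)
plots = gen(z, labels)  # shape (6, 1, 128, 128)
```

Requesting one plot per label, as in the last lines, produces samples analogous to the per-class columns shown in Fig.5.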
Fig.5 shows a degree of variability within a single class: although all the waterfall plots in the same column have the same drift rate (label), they vary in intensity and location within the frame. Figure 1: The 4 narrow-band data sets used for training in this work, with increasing complexity. Top row: narrow-band signals with a constant drift rate; narrow-band signals with 6 drift rates. Bottom row: ON-OFF cadence observations with narrow-band ETI-like signals in the ON observations only; ON-OFF cadence observations with narrow-band ETI-like signals in the ON observations only and random narrow-band RFI-like signals in the OFF observations. Figure 4: The conditional GAN is able to learn the more complex ON-OFF signal patterns mimicking typical SETI observations. Figure 3: The vanilla GAN struggles to learn the distinct features among different drift rates. However, the conditional GAN solves the issue by conditioning the training data on the slope label. Figure 2: The vanilla GAN is able to reproduce the training data from setigen (left); the right plot contains GAN-generated waterfall plots with random background noise levels. ### Generator's scores Because the discriminator learns to distinguish realistic from unrealistic waterfall plots during training, the trained discriminator can subsequently be used as a classifier. We train our conditional GAN on the cadence data set in Fig.1. To test the trained discriminator, we used the 4 data sets shown in Fig.6, each containing 15000 waterfall plots: only noise background, only cadence background, GAN-produced data, and the cadence-RFI data. It is clear that waterfall plots that are less similar to the training data receive lower scores from the discriminator than those that are similar to the training data set. The blue and green distributions have similar means, confirming that both the generator and the discriminator disregard RFI in the training data. ### Timing To illustrate the advantage of using a pre-trained model, we show in Tab.1 the time it takes our GAN to generate and write to storage \(10^{2}\), \(10^{3}\), \(10^{4}\), and \(10^{5}\) waterfall plots. Compared with the clearly linear scaling of setigen, the GAN's speed is superior when a large data set is involved, and we are limited only by the availability of GPU RAM and storage I/O. ## 3 Possible extensions It is possible to further modify GANs into a more advanced bi-directional architecture so that we can reverse-engineer images into meaningful latent vectors. The proposed Bidirectional Conditional GAN (BiCoGAN) [8] includes an additional encoder network that is trained simultaneously with the generator and the discriminator, and can provide inverse mappings of data samples to both intrinsic and extrinsic vectors. The benefit of BiCoGAN is two-fold: the reverse-engineered latent vectors of hitherto unseen signals can be compared with those of known signals to identify anomalies, and anomalous latent vectors can be slightly perturbed and subsequently fed into the generator to synthesize new anomalous signals useful for further training. This remains a work in progress, and an example of hand-written digits generated by BiCoGAN is shown in Fig.7. ## 4 Limitations For the purpose of demonstrating the limitations of generative AI, let us examine this paper's title and abstract, which were largely produced by ChatGPT (Chat Generative Pre-trained Transformer). While it reads convincingly at first, the generated abstract certainly contains errors.
Notably, the relationship between natural language processing and radio data analysis makes little sense, if any. Nonetheless, after a handful of minor edits by an expert, the hybrid version of the abstract sounds even more convincing (see Fig.8). In a similar vein, radio spectra produced by a GAN are realistic overall, but they require more careful benchmarking when high precision is required or when statistics are low.

| # plots | File size | setigen G (s) | setigen G&W (s) | GAN G (s) | GAN G&W (s) |
| --- | --- | --- | --- | --- | --- |
| \(10^{2}\) | 12.5 MB | 0.578 | 0.613 | 3.335 | 3.378 |
| \(10^{3}\) | 125.3 MB | 5.09 | 6.287 | 3.784 | 4.181 |
| \(10^{4}\) | 1.2 GB | 51.279 | 55.723 | 3.807 | 7.724 |
| \(10^{5}\) | 12.2 GB | 515.802 | 563.833 | 4.358 | 95.597 |

Table 1: CPU/GPU time to generate (G) and write (W) to storage as a function of the number of waterfall plots, using both the heuristic setigen on a CPU and the generative GAN on a GPU. The GAN excels when a large number of waterfall plots is involved, improving the speed by more than 2 orders of magnitude. Figure 5: The conditional GAN is able to ignore RFI in the training data. For clarity, in the right plot, waterfall plots in each column have the same label (drift rate). Figure 6: Distribution of scores assigned by the discriminator to the different data sets. Figure 7: MNIST images synthesized by a BiCoGAN's generator with random latent vectors (top row), compared with the original training data (middle row) and with images generated from latent vectors reconstructed by the encoder network (bottom row). Fundamentally, by virtue of the universal approximation theorems, neural networks are function approximators (often polynomial approximators if simple activation functions are used) of a multidimensional distribution to arbitrary precision. In essence, this is analogous to the Weierstrass approximation theorem, which is as follows: suppose \(f(x)\) is a continuous real-valued function defined on the real interval \([a,b]\); then \[\forall\varepsilon>0,\ \exists\,p(x)\quad\text{s.t.}\quad\forall x\in[a,b],\ |f(x)-p(x)|<\varepsilon. \tag{1}\] In the case of ChatGPT or any generative AI, the interval \([a,b]\) is the domain in which the training data \(f(x)\) is contained. Since the domain of the function is a specific _closed interval_ instead of the entire real number line, the generator can approximate or interpolate the distribution _within the domain_ very well using a polynomial \(p(x)\). In other words, the output is a sufficiently good polynomial approximation of the training data contained therein. In the case of this work's abstract, since ChatGPT most likely obtained training data from publications, it outputs "paper" instead of the more natural-sounding "work" in the context of an abstract for a conference presentation. However, _outside the domain_, the _extrapolated_ output is sometimes only tangential, and requires further scrutiny. The ideal training data set should include both theory-based data and real-life examples, although this is hardly achievable in many cases where new phenomena are being investigated. ## 5 Conclusions We have explored the potential of deep generative networks for generating dynamic spectra and their ability to detect outliers. The main advantage of such generative AI is that it can encode the complex representations of the training data and rapidly decode them by leveraging GPUs.
The work is relevant not only to radio SETI but can be readily extended to SETI searches in other parts of the electromagnetic spectrum. Furthermore, since the AI models used in this work are still far from industrial-grade models such as the one used by ChatGPT, there is much room for further experimentation with more sophisticated AI architectures. However, as with other applications of generative AI, we caution against over-reliance on their outputs, as they are sometimes only tangential to reality at best and nonsensical at worst. The code used in this work is publicly available at [https://github.com/zzheng18/SETIGAN](https://github.com/zzheng18/SETIGAN). ## Acknowledgements We thank the Breakthrough Prize Foundation and the University of California, Berkeley, for their support. Special thanks to Andrew Siemion, Steve Croft, and Yuhong Chen for several fruitful discussions. This work is funded by the Breakthrough Listen Initiative and the National Science Foundation.
2310.06004
Characteristic Modes of Frequency-Selective Surfaces and Metasurfaces from S-parameter Data
Characteristic modes of arbitrary two-dimensional periodic systems are analyzed using scattering parameter data. This approach bypasses the need for periodic integral equations and allows for characteristic modes to be computed from generic simulation or measurement data. Example calculations demonstrate the efficacy of the method through comparison against a periodic method of moments formulation for a simple, single-layer conducting unit cell. The effect of vertical structure and electrical size on the number of modes is studied and its discrete nature is verified with example calculations. A multiband polarization-selective surface and a beamsteering metasurface are presented as additional examples.
Kurt Schab, Frederick Chen, Lukas Jelinek, Miloslav Capek, Johan Lundgren, Mats Gustafsson
2023-10-09T16:05:25Z
http://arxiv.org/abs/2310.06004v1
# Characteristic Modes of Frequency-Selective Surfaces and Metasurfaces from S-parameter Data ###### Abstract Characteristic modes of arbitrary two-dimensional periodic systems are analyzed using scattering parameter data. This approach bypasses the need for periodic integral equations and allows for characteristic modes to be computed from generic simulation or measurement data. Example calculations demonstrate the efficacy of the method through comparison against a periodic method of moments formulation for a simple, single-layer conducting unit cell. The effect of vertical structure and electrical size on the number of modes is studied and its discrete nature is verified with example calculations. A multiband polarization-selective surface and a beamsteering metasurface are presented as additional examples. Antenna theory, eigenvalues and eigenfunctions, frequency-selective surfaces, metasurface, scattering. ## I Introduction Characteristic mode analysis is a widely studied method of decomposing electromagnetic scattering problems into a basis with convenient properties [1, 2, 3, 4, 5]. Though an impedance-based formulation of characteristic modes [6, 7, 8] is most widely used in the literature, scattering-based definitions predate that method [9, 10] and afford the ability to compute the characteristic modes of arbitrary structures without the requirement of numerical methods based on integral equations [11, 12, 13]. This scattering-based approach to characteristic mode analysis has previously been demonstrated and validated using method of moments (MoM), finite element method (FEM), finite-difference time-domain (FDTD), and hybrid methods [13]. Unlike the vast majority of scatterers studied using characteristic mode analysis, frequency-selective surfaces (FSS) and metasurfaces are typically modeled in a periodic setting [14] using numerical methods incorporating periodic boundary conditions, _e.g._, periodic integral equations [15]. Previous work on characteristic modes in periodic systems utilizes the impedance-based approach in conjunction with periodic integral equation methods [16, 17, 18, 19, 20, 21, 22]. Within such a periodic formulation, it becomes apparent that radiating characteristic modes are directly tied to propagating Floquet modes [20]. Because existing methods for computing the characteristic modes of periodic systems require surface or volume integral formulations over the unit cell, features such as layered substrates can be included but the implementation and computational complexity can grow rapidly with increasing unit cell inhomogeneity. Another, more common strategy is to analyze the characteristic modes of a unit cell in free space before studying its scattering behavior within a periodic lattice [23, 24]. While this approach has been demonstrated as effective in particular applications, it relies on approximate relationships between the behavior of a unit cell in isolation and its behavior within a periodic setting where inter-element coupling may be significant. In this work, we adopt a scattering formulation of characteristic modes to the study of periodic systems. The added flexibility of this approach enables the study of arbitrary unit cells using a variety of numerical methods, including FEM. The scattering formulation requires only the calculation of S-parameter data, making the technique straightforward to implement using measured data or any solver capable of calculating the S-parameters of a single unit cell within a periodic setting. 
## II Scattering-based Periodic Characteristic Modes The scattering-based eigenvalue problem used to compute characteristic modes for an object in free space reads [11] \[\mathbf{S}\mathbf{a}_{n}=(2t_{n}+1)\mathbf{a}_{n}, \tag{1}\] where \(\mathbf{S}\) is the scattering matrix mapping incoming to outgoing waves [25], shown schematically in the top left panel of Fig. 1. In this eigenvalue problem, the vectors \(\mathbf{a}_{n}\) have the dual interpretation of incoming wave coefficients (_i.e._, characteristic excitations) and scattered wave coefficients (_i.e._, characteristic far fields), both in an appropriate basis [9, 11, 13]. Following conventions from microwave circuit theory, the wave coefficients are normalized to have units of \(\mathrm{W}^{1/2}\), see Appendix A for details. The values \(t_{n}\) are the characteristic mode eigenvalues associated with the transition matrix, which maps regular waves to outgoing waves [25]. The absolute value of the eigenvalue \(t_{n}\) is equal to modal significance [11], and that quantity is used throughout the remainder of this paper, as opposed to the eigenvalues \(s_{n}=2t_{n}+1\) in (1), which exhibit unit modulus due to the unitarity of the matrix \(\mathbf{S}\). For lossless scatterers, equivalence between the scattering formulation in (1) and formulations based on several integral equation methods is demonstrated in [11]. Interpretation of characteristic modes for lossy structures is nuanced, and the equivalence of scattering and impedance formulations depends on selected orthogonality properties [8, 12]. For this reason, we consider problems involving only lossless scatterers and lossless background media. By virtue of the assumption that the obstacle being studied in (1) exists "in free space", the scattering matrix with no object present is an identity matrix [10]. Let this "background" scattering matrix, sketched in the top right panel of Fig. 1, be denoted as \(\mathbf{S}_{0}\). For free-space scattering problems, the eigenvalue problem in (1) can therefore be written as (\(\mathbf{S}_{0}\) is an identity matrix) \[\mathbf{S}\mathbf{a}_{n}=(2t_{n}+1)\mathbf{S}_{0}\mathbf{a}_{n}, \tag{2}\] gaining the slightly different interpretation of finding incident field configurations \(\mathbf{a}_{n}\) which produce the same scattered fields via the total and background scattering matrices, to within a complex scaling factor \(2t_{n}+1\). Equivalence of (1) and (2) for free space problems suggests that the form of (2) may be the appropriate choice when studying problems with non-trivial background scattering matrices, such as the situation encountered in the analysis of periodic systems. This generalization is supported by further analysis appearing later in this section and Appendix A showing the equivalence of (2) with standard impedance-based methods of computing characteristic modes. Unlike the free-space scattering scenario, the scattering by a periodic system is closer to that of the multi-port network sketched in the middle row of Fig. 1. Consider the special case of a two-port system consisting of two multi-mode measurement ports which, in the absence of any other scattering network, are connected by a lossless, zero-length, perfectly matched transmission line. In this case, the background scattering matrix reads \[\mathbf{S}_{0}=\begin{bmatrix}\mathbf{0}&\mathbf{1}\\ \mathbf{1}&\mathbf{0}\end{bmatrix}, \tag{3}\] where \(\mathbf{1}\) denotes an identity matrix. 
This choice of background problem in two-port networks is particularly well-suited to the analysis of planar1 periodic systems (_e.g._, metasurfaces or FSS) with S-parameters de-embedded to a common reference plane, as sketched in the bottom row of Fig. 1. There the S-parameters of the system represent scattering into propagating Floquet modes above and below the periodic surface. Footnote 1: Throughout this work, the term _planar periodic system_ refers to an arbitrary three-dimensional unit cell infinitely repeated in two dimensions. By the unitary nature of this background matrix, we can rewrite the eigenvalue problem in (2) as \[\tilde{\mathbf{S}}\mathbf{a}_{n}=2t_{n}\mathbf{a}_{n}. \tag{4}\] where \[\tilde{\mathbf{S}}=\mathbf{S}_{0}^{\mathbf{H}}\mathbf{S}-\mathbf{1}. \tag{5}\] The matrix \(\tilde{\mathbf{S}}\) has the physical interpretation of mapping incident positive- and negative-going waves onto the scattered component of the total field in those directions. The physical interpretation and motivation for this background matrix rests on the inherent connection between the upper and lower half-spaces when the periodic structure is not present. Leveraging this interpretation, it can be shown that the eigenvalues \(t_{n}\) in (4) are explicitly related to those obtained by decomposition of an integral impedance operator in the usual form of characteristic modes, see Appendix A, \[\mathbf{X}\mathbf{I}_{n}=\lambda_{n}\mathbf{R}\mathbf{I}_{n}, \tag{6}\] with the relation \[t_{n}=-\frac{1}{1+\mathrm{j}\lambda_{n}}. \tag{7}\] In showing this equivalence, the periodic impedance operator \(\mathbf{Z}=\mathbf{R}+\mathrm{j}\mathbf{X}\) represents free-space scattering by currents induced within the periodic environment for a prescribed angle of incidence (phase shift per unit cell). Because of their connection to radiated and reactive power [6], the matrices \(\mathbf{R}\) and \(\mathbf{X}\) are referred to as the radiation and reactance matrices, respectively. For objects in free space, the transpose and Hermitian symmetry of these matrices guarantees that characteristic currents \(\mathbf{I}_{n}\) and excitations \(\mathbf{a}_{n}\) are real-valued. In the periodic case with oblique incidence, these matrices are Hermitian symmetric, and the eigenvectors of either characteristic mode eigenvalue problem are no longer real-valued. Fig. 1: (top left) Scattering by an object \(\Omega\) in free space and (top right) the corresponding background case. (middle left) Scattering by a two-port network and (middle right) the choice of a zero-length through connection as the background case. (bottom left) Scattering between Floquet harmonics by a periodic surface and (bottom right) the corresponding background case of uninterrupted plane wave propagation. Inclusion of background media (_e.g._, material present when the designable portion of the structure is removed) requires the adoption of a problem-specific Green's function or, equivalently, selection of an appropriate background matrix \(\mathbf{S}_{0}\) representing scattering by the fixed background media. For example, in problems involving the design of patterned conducting elements on a fixed structural substrate, the matrix \(\mathbf{S}_{0}\) may be defined by the scattering behavior of the substrate alone, while the matrix \(\mathbf{S}\) describes the complete system including both the substrate and conducting elements. 
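The computation defined by (4), (5), and (7) reduces to a small dense eigenvalue problem once the S-parameters are available. The following is a minimal numerical sketch, assuming NumPy; the matrices in the usage lines are random placeholders standing in for exported solver or measurement data, not results from this work.

```python
import numpy as np

def characteristic_modes(S, S0):
    """Scattering-based characteristic modes, cf. (4)-(5) and (7).

    S  : complex scattering matrix with the periodic structure present
    S0 : background scattering matrix (structure absent)
    Both are assumed square, lossless (unitary), and truncated to the
    propagating Floquet modes.
    """
    S_tilde = S0.conj().T @ S - np.eye(S.shape[0])   # (5)
    w, a = np.linalg.eig(S_tilde)                    # (4): S_tilde a_n = 2 t_n a_n
    t = w / 2.0
    order = np.argsort(-np.abs(t))                   # sort by modal significance |t_n|
    t, a = t[order], a[:, order]
    lam = 1j * (1.0 + 1.0 / t)                       # invert (7): lambda_n from t_n
    return t, lam, a

# Usage sketch with placeholder 4x4 data (one propagating Floquet harmonic,
# two ports, two polarizations). A random unitary matrix stands in for S.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
S = np.linalg.qr(A)[0]
S0 = np.block([[np.zeros((2, 2)), np.eye(2)],
               [np.eye(2), np.zeros((2, 2))]])       # background matrix (3)
t_n, lambda_n, a_n = characteristic_modes(S, S0)
print(np.abs(t_n))                                   # modal significances
```

With measured data, `S` and `S0` would simply be the two sets of S-parameters taken with and without the structure present, referenced to common planes as described above.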
Definition of the background problem, and thus the matrix \(\mathbf{S}_{0}\) is not unique and can be selected to synthesize modes with different interpretations and properties. This is analogous to the definition of controllable and uncontrollable regions in the formulation of substructure characteristic modes [26]. For simplicity, throughout this paper, we assume a free-space background scattering matrix defined by (3). The above analysis demonstrates that the characteristic modes of an arbitrary periodic system can be computed using measurements of two sets of S-parameters: \(\mathbf{S}\) (structure present) and \(\mathbf{S}_{0}\) (background measurement, no structure present). Deembedding both sets of S-parameters to a common reference plane makes for a simpler connection to the zero-length, matched, transmission line background case in the center-right panel of Fig. 1, though it is not necessary. Should the reference planes be shifted (embedded or de-embedded) via propagation through lossless transmission lines, the two pairs of S-parameters become [27, SS4.3] \[\mathbf{S}^{\prime}=\mathbf{\Phi}\mathbf{S}\mathbf{\Phi}\quad\mathrm{and} \quad\mathbf{S}^{\prime}_{0}=\mathbf{\Phi}\mathbf{S}_{0}\mathbf{\Phi}, \tag{8}\] where \(\mathbf{\Phi}\) is a diagonal matrix of complex exponentials. Substitution of these parameters into (4) and (5) leads to identical eigenvalues \(t_{n}\) and eigenvectors \(\mathbf{\Phi}\mathbf{a}_{n}\) embedded to the new reference planes. Hence the characteristic modes can be computed using two sets of S-parameters with arbitrary reference planes, so long as the reference planes are unchanged between the two measurements. Note that this behavior is predicated on the fact that the scattering matrices \(\mathbf{S}\) and \(\mathbf{S}_{0}\) contain only propagating modes, which is consistent with the equivalence with impedance-based characteristic modes demonstrated in Appendix A. ### _Number of radiating CMs_ The number of propagating Floquet harmonics is determined by the size of the unit cell, the Floquet mode index, and the specular wavenumber (phase shift per unit cell). For rectangular unit cells of dimension \(T_{x}\times T_{y}\), the \(\gamma=\{u,v\}\) Floquet harmonic is propagating if its corresponding longitudinal wavenumber \(k_{\gamma z}\) is real [15], that is, when \[k^{2}\geq\left(k_{\mathrm{i}x}+\frac{2\pi u}{T_{x}}\right)^{2}+\left(k_{iy}+ \frac{2\pi v}{T_{y}}\right)^{2}. \tag{9}\] For low frequencies and near-normal incidence, only the specular \(\gamma=\{0,0\}\) mode propagates, leading to a scattering matrix of dimension four2. Increasing frequency or the incident wavevector components can enable higher-order Floquet harmonics to propagate, associated with the onset of grating lobes. Regardless of the frequency or incident wavevector, the condition in (9) determines _a priori_ the number of propagating Floquet modes \(K\), before any electromagnetic modeling or measurement takes place. Footnote 2: The total number arises from the two ports, two polarizations, and one propagating Floquet harmonic. Devices consisting of a single, patterned, two-dimensional current-supporting sheet cannot produce radiation selectively in the \(\pm z\) directions, meaning the matrix \(\mathbf{\tilde{S}}\) has a maximum rank of \(2K\). 
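Because (9) involves only the lattice periods and the specular wavevector, the count \(K\), and hence the \(2K\) bound just stated, can be evaluated before any electromagnetic modeling. A minimal sketch, assuming a rectangular lattice and NumPy (the example values are illustrative only):

```python
import numpy as np

def count_propagating_harmonics(freq_hz, Tx, Ty, theta_deg=0.0, phi_deg=0.0, max_order=10):
    """Count propagating Floquet harmonics K for a rectangular unit cell, cf. (9).

    freq_hz            : frequency in Hz
    Tx, Ty             : unit-cell periods in meters
    theta_deg, phi_deg : incidence angles defining the specular wavevector
    max_order          : search bound on the Floquet indices u, v
    """
    c0 = 299792458.0
    k = 2 * np.pi * freq_hz / c0
    theta, phi = np.radians(theta_deg), np.radians(phi_deg)
    kix = k * np.sin(theta) * np.cos(phi)
    kiy = k * np.sin(theta) * np.sin(phi)
    K = 0
    for u in range(-max_order, max_order + 1):
        for v in range(-max_order, max_order + 1):
            kx = kix + 2 * np.pi * u / Tx
            ky = kiy + 2 * np.pi * v / Ty
            if k**2 >= kx**2 + ky**2:   # condition (9): real longitudinal wavenumber
                K += 1
    return K

# Example: a 10 mm square cell at 20 GHz under 20-degree oblique incidence.
print(count_propagating_harmonics(20e9, 0.01, 0.01, theta_deg=20.0))
```

The truncated matrices \(\mathbf{S}\) and \(\mathbf{S}_{0}\) then have dimension \(4K\) (two ports and two polarizations per propagating harmonic), consistent with the rank bounds discussed in this subsection.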
In contrast, devices supporting currents which vary in the \(z\) direction (_e.g._, dielectrics of finite thickness, multiple conducting sheets) allow for directional scattering, leading to a scattering matrix with a maximum rank of \(4K\). The rank-limited nature of the scattering matrix \(\mathbf{S}\) and the radiation matrix \(\mathbf{R}\) leads to a fixed number of radiating characteristic modes. These characteristic modes have finite eigenvalues \(\lambda_{n}\), corresponding to non-zero eigenvalues \(t_{n}\). Sums over the small set of radiating characteristic modes exactly reconstruct the total scattered fields by the unit cell. These trends were previously observed in impedance-based approaches to characteristic modes [20], but in the scattering-based approach, the underlying eigenvalue problem itself is reduced to the order of the number of radiating Floquet harmonics. This is markedly different from the impedance-based approach, where the matrices involved in the characteristic mode eigenvalue problem retain a high number of spatial degrees of freedom (corresponding to the number of basis functions) regardless of the sparsity of radiating characteristic modes. Hence, the _a priori_ truncation of the scattering matrices \(\mathbf{S}\) and \(\mathbf{S}_{0}\) to include only propagating modes is again justified by equivalent sparsity produced by matrix systems of much larger dimensions. ### _Decoupling of both temporal and spatial frequencies_ In the preceding analysis, the scattering operator \(\mathbf{S}\) and impedance operator \(\mathbf{Z}\) are calculated for specific temporal frequency \(\omega\) and incident wave vector (spatial frequency) \(\mathbf{k}_{\mathrm{i}}\). Hence the characteristic modes produced by either formulation depend on both of these parameters. Despite the additional dependence on spatial wavenumber, this does not contradict the off-cited feature that characteristic modes are independent of any prescribed excitation. Rather, it further decouples characteristic modes and the excitations which may induce them into subsets of finer granularity than exists in the finite scatterer case. Consider a scatterer of finite extent characterized by the impedance matrix \(\mathbf{Z}(\omega)\). The frequency dependence of the impedance matrix is tied to the assumption that any excitation applied to the system carries a time dependence \(\mathrm{e}^{\mathrm{i}\omega t}\). Hence the frequency-dependent characteristic modes can be interpreted as the modes that are excitable only by incident fields sharing the same temporal frequency \(\omega\). For excitations comprised of multiple temporal frequencies, the union of characteristic modes over those multiple frequencies must be considered, though only sets of modes and excitations sharing the same temporal frequencies need to be considered in expanding a total solution into characteristic modes. This decoupling of modes and their corresponding excitations has added dimensionality in the case of periodic scatterers analyzed using Floquet methods. There, operators have an explicit dependence on both temporal and spatial frequencies, _e.g_., the impedance matrix takes the form \(\mathbf{Z}(\omega,\mathbf{k}_{i})\). Hence the characteristic modes produced by such an impedance or scattering matrix can only be excited by incident fields sharing the same temporal and spatial frequencies. 
For general incident fields with multiple temporal and spatial frequencies, the union of corresponding sets of characteristic modes must be considered. As in the simpler case of the finite scatterer, however, the expansion of any total solution into characteristic modes is block diagonalized by temporal and spatial frequencies. In summary, the property of "excitation independence" of characteristic modes carries over into the periodic analysis presented in this work. However, closer examination shows that, even in the case of finite scatterers, this independence is only superficial since there exists an implicit restriction that characteristic modes share the temporal frequency of a selected excitation. In the case of periodic scatterers, this restriction is extended to spatial frequencies as well. In both scenarios, the union of sets of characteristic modes spanning multiple frequencies (or a continuum of frequencies) can be constructed when considering more complex incident fields. However, it should be stressed that coupling between excitations and induced modes occurs only when both share the same temporal and spatial frequency characteristics, greatly simplifying the analysis of systems under complex excitation. ### _Reconstruction of modal currents and fields_ In the scattering-based formulation (4), the characteristic mode eigenvector \(\mathbf{a}_{n}\) represents a modal excitation consisting of incident plane waves from all directions, Floquet harmonics, and polarizations. Applying this excitation leads to scattered fields of the form \(t_{n}\mathbf{a}_{n}\), _i.e_., maintaining the same shape as the excitation. Additionally, fields or currents induced by a modal excitation can be taken as alternative representations of characteristic modes. For example, applying a modal excitation \(\mathbf{a}_{n}\) to a unit cell containing a PEC scatterer induces a surface current distribution which, when represented in an appropriate basis, corresponds to the characteristic mode current distribution \(\mathbf{I}_{n}\) from (6). This process is demonstrated in an example in Sec. IV-A. ### _Expansion of S-parameters into CMs_ Collecting the eigenvectors \(\mathbf{a}_{n}\) as the columns of a matrix \(\mathbf{Q}\) and constructing a diagonal matrix \(\mathbf{\Lambda}\) containing the values \(2t_{n}\) allows for (2) to be rewritten as \[\mathbf{S}\mathbf{Q}=\mathbf{S}_{0}\mathbf{Q}(\mathbf{\Lambda}+\mathbf{1}). \tag{10}\] Because both matrices \(\mathbf{S}\) and \(\mathbf{S}_{0}\) are unitary under the assumption of losslessness used throughout this paper, the matrix \(\mathbf{Q}\) is also unitary [28, Ch. 10]. Rearranging the above expression, the S-parameters of the system can be written in terms of the characteristic modes as \[\mathbf{S}=\mathbf{S}_{0}\mathbf{Q}\mathbf{\Lambda}\mathbf{Q}^{\mathrm{H}}+\mathbf{S}_{0}. \tag{11}\] For this particular definition of the background problem, multiplication with the matrix \(\mathbf{S}_{0}\) on the right-hand side serves only to reorder the entries in a vector or matrix. Let the effect of this reordering on the eigenvector matrix be denoted \(\mathbf{S}_{0}\mathbf{Q}=\hat{\mathbf{Q}}\), with the columns of \(\hat{\mathbf{Q}}\) being the reordered vectors \(\hat{\mathbf{a}}_{n}\). With this notation, the individual entries of the S-parameter matrix can be written as a sum of modal contributions \[S_{ij}=\sum_{n}S_{ij}^{n}+S_{0,ij}, \tag{12}\] where \[S_{ij}^{n}=2\hat{a}_{n,i}a_{n,j}^{*}t_{n} \tag{13}\] are the modal scattering parameters.
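The expansion (12)-(15) is straightforward to evaluate numerically. Below is a minimal sketch, assuming NumPy and reusing the quantities `S0`, `t_n`, and `a_n` from the earlier eigenvalue sketch; it illustrates the bookkeeping only and is not code from this work.

```python
import numpy as np

def modal_s_matrices(S0, t, a):
    """Modal S-parameter terms S_n = 2 t_n S0 a_n a_n^H, cf. (15)."""
    return [2 * t[n] * S0 @ np.outer(a[:, n], a[:, n].conj())
            for n in range(len(t))]

def reconstruct_S(S0, t, a):
    """Total S-parameters as the modal sum plus the background term, cf. (14)."""
    return sum(modal_s_matrices(S0, t, a)) + S0

# Usage sketch: for lossless data with non-degenerate eigenvalues, the
# reconstruction error should be near machine precision.
# S_rebuilt = reconstruct_S(S0, t_n, a_n)
# print(np.max(np.abs(S_rebuilt - S)))
```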
Similarly to the expansion of fields and currents in traditional characteristic mode analysis, the presence of the eigenvalue \(t_{n}\) represents a dependence on the modal significance, while the terms \(a_{n,j}^{*}\) and \(\hat{a}_{n,i}\) represent the projection of each characteristic mode on incoming and outgoing characteristic field configurations. Equivalent to (11), the above expressions can be written in matrix form as \[\mathbf{S}=\sum_{n}\mathbf{S}_{n}+\mathbf{S}_{0}. \tag{14}\] where \[\mathbf{S}_{n}=2t_{n}\mathbf{S}_{0}\mathbf{a}_{n}\mathbf{a}_{n}^{\mathrm{H}}. \tag{15}\] The separation of (12) and (14) into modal and background contributions is not mandatory. By the unitary nature of the matrix \(\mathbf{Q}\), the relation (14) can be rearranged to yield \[\mathbf{S}=\sum_{n}(2t_{n}+1)\mathbf{S}_{0}\,\mathbf{a}_{n}\,\mathbf{a}_{n}^{ \mathrm{H}}, \tag{16}\] which is mathematically identical, but its interpretation varies slightly from that in (14). Whereas in (14) the scattering parameters consist of a modal sum and a background contribution, in (16), the background contribution itself is decomposed into characteristic modes and joined with the other modal terms. The choice of using (14) or (16) to define modal S-parameters is arbitrary, as the two approaches differ only in how the background scattering parameter is treated. Previous single-mode analysis of periodic systems below the onset of grating lobes in [17, SS5.3.2] utilizes a definition equivalent to (16). In all subsequent examples, we adopt the definition in (14). Neither definition is necessarily more correct than the other, and it may be advantageous in future work to study whether one approach affords more insight in the analysis or design of particular applications. ## III Modal tracking When truncated to include only propagating Floquet harmonics, the dimension of the matrices \(\mathbf{S}\) and \(\mathbf{S}_{0}\) change abruptly at the onset of additional propagating Floquet harmonics. Because of this matrix discontinuity, there is no guarantee [29] that any modal quantities are continuous across these boundaries in frequency and scan-angle. Hence we focus here only on modal tracking within zones defined by these boundaries. Consider _blocks_ of frequencies3 which share the same number of propagating Floquet harmonics. Within these blocks, characteristic modes can be tracked using any of a number of standard methods (see [13, SSV] for a summary of references). Because the eigenvectors \(\mathbf{a}_{n}\) represent scattered fields, far-field orthogonality can be utilized to carry out far-field tracking when the frequency step size is sufficiently small. For linking the \(n^{\mathrm{th}}\) characteristic mode at the \(k^{\mathrm{th}}\) frequency to its corresponding data point at the \((k+1)^{\mathrm{th}}\) frequency, far-field tracking consists of simply selecting the mode which maximizes the correlation coefficient magnitude Footnote 3: Here “frequency” is used to refer generally to the three-dimensional parameter of frequency and two-dimensional transverse incident wavenumber. \[\left|\rho_{mn}\right|=\left|\left(\mathbf{a}_{m}^{(k+1)}\right)^{\mathrm{H}} \mathbf{a}_{n}^{k}\right|. \tag{17}\] In the study of finite structures, this form of tracking is highly effective but its implementation can be complicated by the "appearance" of modes at certain frequencies due to shifts in the order of modal significances or in modal eigenvalues crossing the threshold of numerical noise. 
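Within one such block, the far-field tracking step (17) amounts to picking, for each mode at the current frequency, the not-yet-assigned mode at the next frequency with the largest correlation magnitude. A minimal greedy sketch, assuming NumPy arrays of eigenvectors at two adjacent frequency samples:

```python
import numpy as np

def track_modes(a_prev, a_next):
    """Greedy far-field tracking between adjacent frequency samples, cf. (17).

    a_prev, a_next : complex arrays of shape (dim, n_modes) whose columns are
    the characteristic eigenvectors at the k-th and (k+1)-th frequency.
    Returns a permutation so that a_next[:, perm] lines up with a_prev.
    """
    rho = np.abs(a_prev.conj().T @ a_next)   # correlation magnitudes |rho_mn|
    n_modes = a_prev.shape[1]
    perm = np.full(n_modes, -1, dtype=int)
    available = list(range(a_next.shape[1]))
    for n in range(n_modes):
        m = max(available, key=lambda j: rho[n, j])  # best remaining match
        perm[n] = m
        available.remove(m)
    return perm

# Usage sketch: reorder the next frequency sample to follow the previous one.
# perm = track_modes(a_k, a_k_plus_1)
# t_k_plus_1, a_k_plus_1 = t_k_plus_1[perm], a_k_plus_1[:, perm]
```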
For the periodic systems considered here, the fact that the number of propagating modes is fixed within each block removes these issues entirely. ## IV Example calculations In this section, we consider several examples designed to demonstrate critical features of scattering-based characteristic mode analysis for periodic structures. ### _Equivalency with impedance-based methods_ To demonstrate the equivalency of the impedance-based and scattering-based formulations, we analyze the scattering properties of a single-layer PEC FSS using both methods. For this example, a square unit cell has dimension \(T\) and contains a rectangular PEC patch of size \(0.4T\times 0.5T\). For the scattering-based method, the matrix \(\tilde{\mathbf{S}}\) is generated using S-parameters computed from a finite element solver (Ansys HFSS [30]). Characteristic mode eigenvalues \(t_{n}\) are then computed by (4). For the impedance-based method, the EFIE impedance matrix data are generated using the method of moments [15, 20]. In this case, characteristic mode eigenvalues \(\lambda_{n}\) are computed by (6) and then converted into the eigenvalues \(t_{n}\) via (7). Figs. 2 and 3 show the modal significance \(\left|t_{n}\right|\) and characteristic angle \(\alpha_{n}=\angle t_{n}\) over a range of frequencies, normalized as electrical size \(T/\lambda=kT/(2\pi)\), for normal (\(\theta=0^{\circ},\ \phi=0^{\circ}\)) and oblique (\(\theta=20^{\circ},\ \phi=0^{\circ}\)) incidence. The eigenvalues \(t_{n}\) from both methods are tracked using the block-wise correlation method discussed in Sec. III. Changes in the trace colors correspond to block boundaries and the onset of additional propagating modes. Fig. 4 shows the four most significant characteristic modes of the same single-layer unit cell under normal excitation. The four panels show the modal surface current distribution over the conducting patch computed using FEM (Ansys HFSS [30]), where the structure is illuminated with a superposition of plane waves with weights governed by the characteristic mode eigenvector \(\mathbf{a}_{n}\), see Sec. II-C. Bold arrows have been added to facilitate identification of the dominant surface current orientations. Modes 1 and 2 are near resonance and have small positive (inductive) and negative (capacitive) eigenvalues, respectively. Modes 3 and 4 are further away from resonance and exhibit multipole capacitive and loop-like inductive behavior, respectively. It is important to stress that Fig. 4 shows only the periodic current distribution on a single unit cell. The fields and interactions between adjacent unit cells also significantly impact the reactive nature of each mode, _i.e._, their eigenvalues and modal significance. Additionally, the modal currents shown here are the periodic modal currents without the progressive phase shift present in the true current distribution covering the entire periodic surface; see (18) in Appendix A for further details. ### _Vertical structure and the number of propagating modes_ Based on the discussion in Sec. II, a structure with only one infinitely thin conducting layer can only scatter in a symmetric fashion above and below the plane of the unit cell. It follows that for such structures the frequency-dependent number of radiating characteristic modes is equal to \(2K\), where \(K\) is the number of propagating Floquet harmonics at each frequency. In contrast, a two-layer structure is capable of supporting unidirectional scattering, and this additional degree of freedom increases the number of radiating characteristic modes to \(4K\).
Adding further vertical structure does not afford further scattering diversity, and a structure with three or more layers will also exhibit \(4K\) radiating characteristic modes. To verify the effect of vertical structure on the predictable number of radiating characteristic modes, we examine the set of systems depicted in Fig. 5, consisting of one, two, or three stacked, rectangular, PEC patches of dimension \(0.4T\times 0.5T\) centered within a square unit cell of dimension \(T\times T\). In the cases involving two or three patches, adjacent patches are aligned in the \(xy\) plane and separated by a vertical distance \(\Delta=T/10\) in the \(z\) direction. For this example, we consider the incidence angle (\(\phi=0^{\circ}\), \(\theta=20^{\circ}\)). The frequency-dependent number of characteristic modes for structures with one, two, and three layers is shown in Fig. 5, where perfect alignment with the analytic predictions of either \(2K\) or \(4K\) is observed. Results shown in Fig. 5 were calculated using the impedance formulation (6) and data produced by periodic MoM. Identical results were obtained using the scattering formulation (4) and data from FEM. Note that the single-layer markers and \(2K\) curve in Fig. 5 align with the number of traces within each block of Fig. 3. ### _Analysis of a circular polarization-selective surface (CPSS)_ As a practical example of applying characteristic mode analysis to a more complex design, we consider the CPSS reported in [31]. The CPSS is designed to achieve polarization-selective operation in two bands. In the lower \(17.7\)-\(20.2\,\mathrm{GHz}\) band, the CPSS passes left-hand circular polarization (LHCP) and reflects right-hand circular polarization (RHCP). The opposite behavior occurs in the higher \(27.5\)-\(30.0\,\mathrm{GHz}\) band. The design consists of six metallic meander-like patterns separated by a combination of dielectric substrates, low-permittivity spacers, and bonding layers. For the purposes of characteristic mode analysis, all materials are assumed to be lossless. The S-parameters of the CPSS are simulated for normal incidence over a broad bandwidth below the onset of grating lobes (\(K=1\)) using the CST FEM solver with \(60^{\circ}\) rhombus unit cells [32]. The design for one unit cell is illustrated in Fig. 6. Two data sets are collected and stored as \(\mathbf{S}\) and \(\mathbf{S}_{0}\), corresponding to the S-parameters with and without the CPSS present in the simulated unit cell, respectively. In defining circular polarizations at each port, a \(60^{\circ}\) offset between Cartesian coordinate systems on either side of the CPSS is used to maintain inverse symmetry of the system. Fig. 4: Characteristic surface current magnitude associated with the four modes with the largest eigenvalue magnitudes \(|t_{n}|\) at the frequency \(kT/(2\pi)=1.2\), calculated using an FEM (HFSS) S-parameter simulation of the single-layer unit cell drawn in Fig. 2 under normal incidence illumination. Dark (light) colors denote the maxima (minima) of the normalized surface current density magnitude. Arrows depict the dominant surface current orientations. Fig. 5: The effect of vertical structure and electrical size on the number of radiating characteristic modes. The frequency-dependent number of radiating characteristic modes for unit cells containing one, two, or three rectangular PEC patches is shown alongside scaled analytic calculations of the number of propagating Floquet harmonics, \(K\). Shaded regions correspond to the blocks of tracked characteristic modes in Fig. 3.
Both sets of S-parameters are de-embedded to a common reference plane. The characteristic modes are then computed via (4) and tracked using the correlation method in (17). Modal significances \(|t_{n}|\) for the four radiating characteristic modes on the structure are shown in Fig. 7. Modal and background contributions to the S-parameters are then calculated using (13). LHCP and RHCP transmission parameters (denoted generically as \(S_{ij}\)) are plotted in Figs. 8 and 9. In Fig. 8, the S-parameter magnitude (thick, black) and its modal (solid) and background (dashed) components are plotted as functions of frequency. The two operating bands are highlighted with gray shading. Fig. 9 shows the complex total, background, and modal S-parameter data over those two bands. These data offer several points of interpretation regarding the total device behavior in terms of background and modal contributions. In the passband of each polarization (LHCP / lower and RHCP / upper), the modal contributions destructively combine with one another, leaving the transmission dominated by the background transmission parameter, with a slight modification to the overall transmission phase arising from the weakly excited modes. The contrary occurs in the stopband of each polarization, where the modal contributions destructively interact with the background. Low transmission is synthesized by a superposition of one (\(S^{4}\) for RHCP) or two (\(S^{2}\) and \(S^{3}\) for LHCP) modal transmission coefficients which cancel the transmission due to the background. These complex values and combinations can be further seen in Fig. 9. ### _Analysis of a periodic beamsteering metasurface_ The previous CPSS example was designed to operate below the onset of grating lobes with reflection and transmission confined to specular directions. To enhance coupling into non-specular directions above the onset of grating lobes, many designs employ electrically large supercells consisting of multiple dissimilar elements, each tailored for particular scattering phase and polarization characteristics. This periodic metasurface design approach is widespread in the design of microwave and optical devices [33]. Large, non-periodic structures can also be designed using this general approach [34, 35], but here we focus on the analysis of periodic systems consisting of large, variable-element supercells. As an example, we consider the six-element beamsteering supercell reported in [36]. Fig. 8: Total (black, thick), background (red, dashed), and modal (thin) transmission scattering parameter magnitude \(|S_{ij}|\) for LHCP (top) and RHCP (bottom). Lower (LHCP pass, RHCP stop) and upper (RHCP pass, LHCP stop) operational bands are highlighted in gray. Fig. 6: Illustration of the tilted layered metal meander-based dual-band CPSS design [31]. RHCP (red) and LHCP (blue) waves incident from the negative \(z\)-direction are reflected and transmitted, respectively, by the periodic structure. Fig. 7: Modal significances \(|t_{n}|\) computed for the CPSS structure shown in Fig. 6 and described further in [31]. Lower and upper operational bands are highlighted in gray. Fig. 9: Complex transmission scattering parameter \(S_{ij}\) for LHCP (top row) and RHCP (bottom row) in the lower (left column) and upper (right column) operational bands highlighted in Fig. 8. Trace colors and styles follow the legend of Fig. 8.
Star and circle markers denote the lowest and highest frequency of each band, respectively. This design, shown in Fig. 10, consists of a supercell containing six unique elements, each constructed from four PEC patches and three dielectric support layers. The design is optimized to convert a normally incident \(20\) GHz plane wave into a transmitted plane wave exiting the structure at an angle of \(\theta=30^{\circ}\). For this particular supercell size, the normal and \(\theta=30^{\circ}\) waves correspond to the \(u=0\) and \(u=1\) Floquet harmonics, respectively. Scattering parameters describing normal-normal (\(S_{00}\)) and normal-\(30^{\circ}\) (\(S_{10}\)) transmission through the surface are shown in Fig. 11 at the design frequency of \(20\) GHz. Black markers indicate the total scattering parameters, with near-zero normal-normal transmission and near-unity normal-\(30^{\circ}\) transmission. The background scattering parameters, indicated by red markers, are essentially the opposite, since in the absence of the beamsteering surface the incident normal plane wave propagates directly through the system with no mode conversion or reflection. However, by (12), the total scattering parameters are obtained through the sum of the background parameters with the modal scattering parameters, marked in blue. In the case of normal-normal transmission, the background S-parameter is cancelled through the sum of many weakly contributing modal S-parameters sitting to the left of the origin. Conversely, the nearly full normal-\(30^{\circ}\) transmission is produced by the coherent sum of many weakly contributing modal S-parameters sitting to the right of the origin. In this particular example, the total system response contains contributions from many radiating characteristic modes, making the interpretation more challenging than in the sparser CPSS example, _cf._ Fig. 9. Nevertheless, detailed examination of modal field and current distributions may elucidate how alterations to the structure could be made to alter its performance, _e.g_., shifting the operating frequency or switching to an alternate transmitted Floquet harmonic, as in the study of finite objects. ## V Conclusions In this work, the characteristic modes of periodic systems, such as frequency- or polarization-selective surfaces, are formulated using a scattering-based approach. The proposed method requires only knowledge of a system's scattering parameters. This lack of dependency on integral equations greatly expands the application of characteristic modes to unit cells consisting of arbitrary materials. The method also allows for the use of measured scattering parameters to compute characteristic mode data, though this is limited to the modal eigenvalues and not field quantities, such as modal currents. Due to the generally low number of characteristic modes, modal tracking is computationally efficient and straightforward to implement over blocks of frequencies away from the onset of additional grating lobes. Further, the vertical structure of a unit cell plays a crucial role in determining the number of radiating modes. Finally, the expansion of S-parameters into modal contributions again follows the theme of characteristic modes, which is to represent complex system responses as a sum of simpler modal responses.
For the selected CPSS example, expansion into characteristic modes illustrates that the transmission in each pass-band is dominated by the background contribution, while each stop-band can be understood as a cancellation of that background contribution by one or two modal terms. Similar features are present in the case of the overmoded beamsteering metasurface, though in that example the modal sums are composed of many weak contributions from nearly all characteristic modes. As in the study of finite objects, the impact of lossy materials on the interpretation and utility of characteristic modes must be carefully considered [12, 37], especially when the objects under study are specifically designed to dissipate large amounts of incident energy, _e.g._, absorbing metasurfaces. Adaptation of the proposed method to lossy systems, along with a detailed study of the properties of the resulting characteristic modes, is left as future work. Fig. 11: Transmission S-parameters describing normal-normal (left) and normal-\(30^{\circ}\) (right) scattering from the supercell in Fig. 10 at \(20\) GHz. Fig. 10: Six-element supercell design from [36] forming a unit cell in the \(xy\)-plane with dimensions \(T_{x}=T_{y}=\). The supercell consists of PEC patches (orange) on and within a Taconic TILY-5 substrate (blue, \(\epsilon_{\mathrm{r}}=2.2\)). The size of elements within the supercell increases along the \(x\)-axis, with the rightmost element being very small with respect to the others. The surface converts incoming normally incident plane waves (red, \(u=0\)) into a transmitted plane wave propagating in the \(\theta=30^{\circ}\) direction (blue, \(u=1\)). ## Appendix A Equivalence of scattering and impedance-based characteristic modes Equivalence between the scattering- and impedance-based formulations is demonstrated here in four steps. First, the scattered Floquet mode coefficient \(b_{m}\) is written as a product of expansion coefficients \(\mathbf{I}\) describing a current density within a unit cell and a matrix \(\mathbf{U}_{m}\) representing projections of Bloch modes onto the basis functions used. Second, the projection of an incident plane wave onto the selected basis is shown to be representable in terms of the same operator \(\mathbf{U}_{m}\). The mappings between incident field, induced current density, and scattered fields are combined to relate the matrix \(\tilde{\mathbf{S}}\) to an impedance matrix \(\mathbf{Z}\). Finally, that relation is used to demonstrate equivalency between the eigenvalue problems in (5) and (6) along with the eigenvalue conversion (7). Consider a periodic system supporting an equivalent volumetric current density of the form \[\boldsymbol{J}(x,y,z)=\boldsymbol{j}(x,y,z)\mathrm{e}^{-\mathrm{j}k_{\mathrm{i}x}x}\mathrm{e}^{-\mathrm{j}k_{\mathrm{i}y}y} \tag{18}\] where \(\boldsymbol{k}_{\mathrm{i}}\) is an incident wavevector (governing phase shift per unit cell) and \(\boldsymbol{j}(x,y,z)\) is a periodic function that is identical within each unit cell. A rectangular unit cell of dimension \(T_{x}\times T_{y}\) is considered below without loss of generality. The Fourier components of the periodic current are given by \[\tilde{\boldsymbol{\jmath}}_{\gamma}(z)=\frac{1}{T_{x}T_{y}}\int\boldsymbol{j}(x,y,z)\mathrm{e}^{\mathrm{j}\kappa_{\gamma x}x}\mathrm{e}^{\mathrm{j}\kappa_{\gamma y}y}\,\mathrm{d}x\,\mathrm{d}y \tag{19}\] where \[\kappa_{\gamma x}=\frac{2\pi u}{T_{x}},\quad\kappa_{\gamma y}=\frac{2\pi v}{T_{y}}, \tag{20}\] and \(\gamma\) denotes a combination of the Floquet indices \(u\) and \(v\).
The current density \(\boldsymbol{J}\) is then given by \[\boldsymbol{J}\left(x,y,z\right)=\sum_{\gamma}\tilde{\boldsymbol{\jmath}}_{ \gamma}(z)\mathrm{e}^{-\mathrm{j}k_{\gamma x}x}\mathrm{e}^{-\mathrm{j}k_{ \gamma y}y} \tag{21}\] with \[k_{\gamma x}=k_{\mathrm{i}x}+\kappa_{\gamma x},\quad k_{\gamma y}=k_{\mathrm{ i}y}+\kappa_{\gamma y}. \tag{22}\] Above (\(+\)) and below (\(-\)) the planes delimiting the extent of the periodic scatterer, the scattered field can be written in terms of a superposition of plane waves corresponding to Floquet harmonics \[\boldsymbol{E}_{\pm}^{\mathrm{sc}}\left(x,y,z\right)=\sum_{\gamma}\tilde{ \boldsymbol{e}}_{\{\pm,\gamma\}}^{\mathrm{sc}}(z)\mathrm{e}^{-\mathrm{j}k_{ \gamma x}x}\mathrm{e}^{-\mathrm{j}k_{\gamma y}y}. \tag{23}\] The function \(\tilde{\boldsymbol{e}}_{\{\pm,\gamma\}}^{\mathrm{sc}}\left(z\right)\) associated with each scattered plane wave can be decomposed into two orthogonal polarizations (TE / TM or parallel / perpendicular) which are here denoted by unit vectors \(\{\boldsymbol{\hat{e}}_{p}\}\). Performing the same form of factorization as in (18) to the scattered field (23) and separating polarizations \(p\), leads to a further expansion of the periodic scattered field \[\boldsymbol{e}_{\pm}^{\mathrm{sc}}\left(\boldsymbol{r}\right)=k\sqrt{\eta} \sum_{\gamma,p}b_{\{\pm,\gamma,p\}}\boldsymbol{u}_{\{\pm,\gamma,p\}}\left( \boldsymbol{r}\right), \tag{24}\] where \[\boldsymbol{u}_{\{\pm,\gamma,p\}}\left(\boldsymbol{r}\right)=\boldsymbol{ \hat{e}}_{p}\sqrt{\frac{1}{kT_{x}T_{y}k_{\gamma z}}}\mathrm{e}^{-\mathrm{j}( \kappa_{\gamma x}x+\kappa_{\gamma y}y\pm k_{\gamma z}z)}. \tag{25}\] are Bloch functions and \[b_{\{\pm,\gamma,p\}}=\sqrt{\frac{T_{x}T_{y}k_{\gamma z}}{k\eta}}\boldsymbol{ \hat{e}}_{p}\cdot\tilde{\boldsymbol{e}}_{\{\pm,\gamma\}}^{\mathrm{sc}}(z) \mathrm{e}^{\pm\mathrm{j}k_{\gamma z}z}. \tag{26}\] The particular normalization terms are selected such that the Bloch functions \(\boldsymbol{u}_{\{\pm,\gamma,p\}}\) are unitless and the coefficients \(b_{\{\pm,\gamma,p\}}\) are related to the cycle mean power radiated per unit cell in the \(\pm z\) direction via \[P_{\pm}^{\mathrm{sc}}=\frac{1}{2}\sum_{\gamma,p}\left|b_{\{\pm,\gamma,p\}} \right|^{2}. \tag{27}\] In the language of microwave circuit theory [27], this choice makes the coefficients \(b_{\{\pm,\gamma,p\}}\) represent outgoing power waves. To connect the coefficients \(b_{\{\pm,\gamma,p\}}\) to the periodic current \(\boldsymbol{j}\), we write the scattered field \(\tilde{\boldsymbol{e}}_{\{\pm,\gamma\}}^{\mathrm{sc}}\) in terms of the corresponding Fourier component of the current density and a dyadic Green's function [25] \[\tilde{\boldsymbol{e}}_{\{\pm,\gamma\}}^{\mathrm{sc}}(z)=-\mathrm{j}k\eta\int \boldsymbol{g}_{\{\pm,\gamma\}}(z,z^{\prime})\cdot\tilde{\boldsymbol{\jmath}}_ {\gamma}(z^{\prime})\,\mathrm{d}z^{\prime}. \tag{28}\] Above and below the scatterer, the dyadic Green's function can be factored as (_cf._[25, Eq. 
(10.49)]) \[\boldsymbol{g}_{\{\pm,\gamma\}}(z,z^{\prime})=\tilde{\boldsymbol{g}}_{\{\pm, \gamma\}}\mathrm{e}^{\mp\mathrm{j}k_{\gamma z}z}\mathrm{e}^{\pm\mathrm{j}k_{ \gamma z}z^{\prime}} \tag{29}\] which reduces (28) to \[\tilde{\boldsymbol{e}}_{\{\pm,\gamma\}}^{\mathrm{sc}}(z)=-\mathrm{j}k\eta \mathrm{e}^{\mp\mathrm{j}k_{\gamma z}z}\int\tilde{\boldsymbol{g}}_{\{\pm, \gamma\}}\cdot\tilde{\boldsymbol{\jmath}}_{\gamma}(z^{\prime})\mathrm{e}^{ \pm\mathrm{j}k_{\gamma z}z^{\prime}}\mathrm{d}z^{\prime}, \tag{30}\] where the longitudinal wavenumber is given by \[k_{\gamma z}=\sqrt{k^{2}-k_{\gamma x}^{2}-k_{\gamma y}^{2}}. \tag{31}\] Expanding the periodic current density in a set of basis functions \(\{\boldsymbol{\psi}_{\alpha}\}\) \[\boldsymbol{j}(\boldsymbol{r})=\sum_{\alpha}I_{\alpha}\boldsymbol{\psi}_{ \alpha}(\boldsymbol{r}) \tag{32}\] and combining (19), (30), (25), (26) leads to the desired expression of \(b_{\{\pm,\gamma,p\}}\) in terms of the current expansion coefficients \(\mathbf{I}\), \[b_{\{\pm,\gamma,p\}}=-\sum_{\alpha}I_{\alpha}U_{\{\pm,\gamma,p\}}^{\alpha}=- \mathbf{U}_{\{\pm,\gamma,p\}}\mathbf{I}. \tag{33}\] The matrix \(\mathbf{U}_{\{\pm,\gamma,p\}}\) collects the projections of Bloch functions (25) onto basis functions \(\boldsymbol{\psi}\), _i.e._, \[U_{\{\pm,\gamma,p\}}^{\alpha}=\frac{k\sqrt{\eta}}{2}\int\boldsymbol{u}_{\{\pm, \gamma,p\}}^{*}\left(\boldsymbol{r}\right)\cdot\boldsymbol{\psi}_{\alpha}( \boldsymbol{r})\,\mathrm{d}\boldsymbol{r}. \tag{34}\] This last relation assumes that \(k_{\gamma z}\) is real valued (propagating Bloch modes) and utilizes the fact that the vectors \(\{\boldsymbol{\hat{e}}_{p}\}\) have the eigenvalue property \[\boldsymbol{\hat{e}}_{p}\cdot\tilde{\boldsymbol{g}}_{\{\pm,\gamma\}}=-\frac{ \mathrm{j}}{2k_{\gamma z}}\boldsymbol{\hat{e}}_{p}. \tag{35}\] For simplified notation in the remainder of the paper, the multi-index \(m=\{\pm,\gamma,p\}\) is adopted to completely define the propagation direction, Floquet harmonic, and polarization of a given Bloch wave, _e.g._, \(\mathbf{U}_{\{\pm,\gamma,p\}}=\mathbf{U}_{m}\). To show that operator \(\mathbf{U}_{m}\) can be used to represent incident plane wave excitation, consider the electric field integral equation (EFIE) relating the periodic current density to an incident field distribution via an appropriate surface or volumetric impedance operator. Utilizing the expansion in (32) and applying Galerkin testing leads to the method of moments (MoM) representation of the EFIE \[\mathbf{V}=\mathbf{Z}\mathbf{I} \tag{36}\] where \(\mathbf{Z}\) is the impedance matrix and \(\mathbf{V}\) contains the projec tion of the periodic incident field onto the selected basis \[V^{\alpha}=\int\mathbf{e}^{\mathrm{i}}\left(\mathbf{r}\right)\cdot\mathbf{\psi}_{\alpha} \left(\mathbf{r}\right)\mathrm{d}\mathbf{r}. \tag{37}\] Let the periodic part of incident field take the form similar to (24), _i.e._, \[\mathbf{e}_{\pm}^{\mathrm{i}}\left(\mathbf{r}\right)=k\sqrt{\eta}\sum_{m}a_{m}\mathbf{u}_{ m}\left(\mathbf{r}\right). \tag{38}\] Substitution into (37) and comparing with (34) leads to \[\mathbf{V}=2\sum_{m}a_{m}\mathbf{U}_{m}^{\mathrm{H}}. \tag{39}\] Similarly to scattered power (27), the cycle mean incident power passing through a rectangular patch of size \(T_{x}\times T_{y}\) which is normal to \(z\) can be written as \[P^{\mathrm{i}}=\frac{1}{2}\sum_{m}\left|a_{m}\right|^{2}. 
\tag{40}\] Using (36), (39) and (33) yields a relation between the impedance matrix \(\mathbf{Z}\) and the elements of the matrix \(\tilde{\mathbf{S}}\) \[\tilde{S}_{mm^{\prime}}=\frac{b_{m}}{a_{m^{\prime}}}=-2\mathbf{U}_{m}\mathbf{ Z}^{-1}\mathbf{U}_{m^{\prime}}^{\mathrm{H}}. \tag{41}\] The cycle mean scattered power per unit cell can be also written in terms of the radiation part of the impedance matrix or as the sum over all scattered powers, _i.e._, \[P_{\mathrm{sc}}=\frac{1}{2}\mathbf{I}^{\mathrm{H}}\mathbf{R}\mathbf{I}=\frac{ 1}{2}\mathbf{I}^{\mathrm{H}}\left(\sum_{m}\mathbf{U}_{m}^{\mathrm{H}}\mathbf{ U}_{m}\right)\mathbf{I}. \tag{42}\] Because the above expression holds for all currents \(\mathbf{I}\), it holds that the two matrices within the quadratic forms are equal. The impedance-based characteristic mode eigenvalue problem in (6) can be rearranged to \[\mathbf{Z}^{-1}\mathbf{R}\mathbf{I}_{n}=\frac{1}{1+\mathrm{j}\lambda_{n}} \mathbf{I}_{n} \tag{43}\] and substituting the alternate representation of the matrix \(\mathbf{R}\) from (42) into the above expression gives \[\mathbf{Z}^{-1}\left(\sum_{m}\mathbf{U}_{m}^{\mathrm{H}}\mathbf{U}_{m}\right) \mathbf{I}_{n}=\frac{1}{1+\mathrm{j}\lambda_{n}}\mathbf{I}_{n}. \tag{44}\] Rearranging the above expression and left multiplying with \(\mathbf{U}_{m}\) leads to \[\sum_{m^{\prime}}\mathbf{U}_{m}\mathbf{Z}^{-1}\mathbf{U}_{m^{\prime}}^{ \mathrm{H}}\mathbf{U}_{m^{\prime}}\mathbf{I}_{n}=\frac{1}{1+\mathrm{j}\lambda _{n}}\mathbf{U}_{m}\mathbf{I}_{n} \tag{45}\] which, by (41) and (33) reduces to \[\sum_{m^{\prime}}\tilde{S}_{mm^{\prime}}b_{n,m^{\prime}}=-\frac{2}{1+\mathrm{ j}\lambda_{n}}b_{n,m}. \tag{46}\] Collecting this form of equation for all indices \(m\) yields the eigenvalue problem in (4) and the relation between impedance- and scattering-based eigenvalues in (7).
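
To illustrate how the scattering-based eigenvalue problem can be evaluated in practice, the short sketch below extracts characteristic-mode eigenvalues from a modal scattering matrix using the relation in (46). It is only a minimal numerical illustration: it assumes that the matrix \(\tilde{\mathbf{S}}\) has already been assembled over the propagating Floquet harmonics and polarizations, and the function and variable names are illustrative rather than taken from any particular implementation.

```python
import numpy as np

def characteristic_modes_from_S(S_tilde):
    """Characteristic modes of a periodic unit cell from its modal
    scattering matrix S_tilde (square, complex), cf. (41) and (46).

    Returns eigenvalues lambda_n and modal expansion vectors b_n
    (as columns), ordered from most to least significant.
    """
    # Eigenvalues s_n of S_tilde satisfy s_n = -2 / (1 + j*lambda_n), cf. (46)
    s_n, b_n = np.linalg.eig(S_tilde)
    lam_n = 1j * (1.0 + 2.0 / s_n)        # invert the relation for lambda_n
    # Modal significance |1 / (1 + j*lambda_n)| equals |s_n| / 2
    order = np.argsort(np.abs(s_n))[::-1]
    return lam_n[order], b_n[:, order]
```

For a lossless structure the eigenvalues \(\lambda_n\) obtained this way are expected to be (nearly) real, which provides a simple sanity check on a computed or measured \(\tilde{\mathbf{S}}\).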
2301.10127
Improving Open-Set Semi-Supervised Learning with Self-Supervision
Open-set semi-supervised learning (OSSL) embodies a practical scenario within semi-supervised learning, wherein the unlabeled training set encompasses classes absent from the labeled set. Many existing OSSL methods assume that these out-of-distribution data are harmful and put effort into excluding data belonging to unknown classes from the training objective. In contrast, we propose an OSSL framework that facilitates learning from all unlabeled data through self-supervision. Additionally, we utilize an energy-based score to accurately recognize data belonging to the known classes, making our method well-suited for handling uncurated data in deployment. We show through extensive experimental evaluations that our method yields state-of-the-art results on many of the evaluated benchmark problems in terms of closed-set accuracy and open-set recognition when compared with existing methods for OSSL. Our code is available at https://github.com/walline/ssl-tf2-sefoss.
Erik Wallin, Lennart Svensson, Fredrik Kahl, Lars Hammarstrand
2023-01-24T16:46:37Z
http://arxiv.org/abs/2301.10127v3
# Improving Open-Set Semi-Supervised Learning with Self-Supervision ###### Abstract Open-set semi-supervised learning (OSSL) is a realistic setting of semi-supervised learning where the unlabeled training set contains classes that are not present in the labeled set. Many existing OSSL methods assume that these out-of-distribution data are harmful and put effort into excluding data from unknown classes from the training objective. In contrast, we propose an OSSL framework that facilitates learning from all unlabeled data through self-supervision. Additionally, we utilize an energy-based score to accurately recognize data belonging to the known classes, making our method well-suited for handling uncurated data in deployment. We show through extensive experimental evaluations on several datasets that our method shows overall unmatched robustness and performance in terms of closed-set accuracy and open-set recognition compared with state-of-the-art for OSSL. Our code will be released upon publication. ## 1 Introduction Using a combination of labeled and unlabeled data for training a model, so-called _semi-supervised learning_ (SSL), has been a well-studied field of machine learning for a long time [29, 39, 19, 32, 30]. Large labeled datasets are often difficult or expensive to acquire, but extensive amounts of unlabeled data are more readily available. However, semi-supervised learning is typically studied in a _closed-set context_, where labeled and unlabeled data are assumed to follow the same distribution. Though in practice, one can expect that the labeled set is of a much more curated character, _e.g_., hand-picked examples from known classes, compared to its unlabeled counterpart, which may contain outliers or corrupted data. Semi-supervised learning where the unlabeled set contains more classes than the labeled set (see Fig. 1) is referred to as _open-set semi-supervised learning_ (OSSL). The classes present in the labeled set are considered in-distribution (ID), whereas other classes are recognized as out-of-distribution (OOD). A common approach for training models in OSSL is to use a standard SSL objective but only include unlabeled data that are predicted as ID [5, 9, 11, 40]. Other unlabeled data are, _e.g_., discarded [4] or given less importance [9]. This approach is motivated by an assumption that the training signal from OOD data hurts the performance. However, the obvious drawback is that they overlook utilizing all available data by restricting learning to samples predicted as ID. Some methods take steps towards better employment of unlabeled data in OSSL. For example, [26] uses a consistency regularization on all unlabeled data, [15] similarly introduces a self-supervised rotation loss that is used for all unlabeled data, and [16] identifies OOD data semantically similar to ID data that can be "recycled" as such. However, these methods are still centered around an SSL objective evaluated for predicted ID data. In this work, we propose a **S**elf-supervision-centric **F**ramework for **O**pen-set **S**emi-**S**upervised learning (Se-FOSS). Our proposed framework unconditionally promotes learning from all unlabeled data by utilizing a self-supervised consistency loss, which is effective in the traditional closed-set SSL setting [34]. Compared to previous methods focusing on predicted inliers through an SSL objective, we argue that making self-supervision the primary source of learning is key, not only for data efficiency but also for robustness in OSSL. 
The impact of classifying unlabeled data incorrectly as ID, or as the incorrect class, is intuitively much less significant for a self-supervised training objective. Moreover, we prioritize the final model's capacity for open-set recognition (predicting data as ID or OOD).

Figure 1: Comparison between open- and closed-set semi-supervised learning.

To this end, we resort to the recently popularized _free-energy score_ [22]. This theoretically founded approach for open-set recognition (OSR) is shown to be well-performing with the added benefit of not requiring any architectural modifications to the classification model. We conduct thorough experiments across wide-ranging datasets when evaluating our framework and comparing it to existing methods. The results show that SeFOSS gives unmatched overall performance in terms of closed-set accuracy and open-set recognition compared to the state-of-the-art. We also show that the training of SeFOSS is stable, avoiding the need for additional validation data for selecting the best model. Furthermore, our extensive experiments show that the previous assumption [9, 5, 11] that OOD data significantly hurt the closed-set accuracy of traditional SSL is not always valid. On the contrary, the seminal SSL method FixMatch [30] outperforms all evaluated OSSL methods in terms of closed-set accuracy. Thus, we suggest that designated methods for OSSL are mainly important when performance for open-set recognition is vital. Our main contributions are:

* a self-supervision-centric OSSL framework for accurate closed-set performance and OSR.
* an extensive evaluation across a wide range of open-set scenarios showing that our framework achieves SOTA results compared to existing methods.
* a challenge to the prior assumption that OOD data in the unlabeled set significantly harms the closed-set performance of traditional SSL methods, indicating that research efforts should be put elsewhere.

## 2 Related work This section covers the most relevant background for SSL and OSR. Additionally, we summarize existing works in open-set semi-supervised learning. ### Semi-supervised learning Research in semi-supervised learning has a long history [29, 8, 1, 39, 19, 25]. Recently, frameworks such as FixMatch [30] and UDA [35] introduced a new paradigm for semi-supervised learning combining pseudo-labeling with consistency regularization using data augmentations. These works emphasize the importance of strong domain-specific data augmentations for high performance in SSL. In the image domain, these augmentations are, _e.g_., RandAugment [6] and Cutout [7]. The effectiveness of FixMatch and UDA sparked a new wave of research trying to extend or improve these frameworks. For example, FlexMatch [42], Dash [37], SimMatch [44], CCSSL [38], DP-SSL [36], and DoubleMatch [34] all propose ways to improve the strategies for pseudo-labeling and consistency regularization of UDA and FixMatch. For this work, we take particular inspiration from DoubleMatch [34], which highlighted the effectiveness of enabling learning from _all_ unlabeled data. DoubleMatch is motivated by the fact that UDA and FixMatch restrict learning from unlabeled data to samples for which the model produces confident predictions. In this regard, DoubleMatch adds a self-supervised cosine-similarity loss on the feature predictions across augmentations of all unlabeled data. By promoting prediction consistency also for uncertain unlabeled data, DoubleMatch sees improvements in terms of both final accuracy and training speed. 
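
To make the feature-level consistency idea concrete, the sketch below computes a cosine-similarity objective between projected features of strongly augmented views and (fixed) features of weakly augmented views, in the spirit of the DoubleMatch-style loss later given in (2). It is a minimal NumPy illustration with illustrative names, not the implementation used by any of the cited methods.

```python
import numpy as np

def cosine_self_supervision(feat_strong, feat_weak, project):
    """Negative mean cosine similarity between projected features of
    strongly augmented samples and features of weakly augmented samples.

    feat_strong : (B, d) features of strongly augmented unlabeled data
    feat_weak   : (B, d) features of weakly augmented data, treated as
                  constant targets (no gradient) during training
    project     : callable implementing a trainable linear map h
    """
    v = project(feat_strong)
    z = feat_weak
    cos = (v * z).sum(axis=1) / (np.linalg.norm(v, axis=1) * np.linalg.norm(z, axis=1))
    return -cos.mean()
```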
In the context of OSSL, we suggest that this ability to safely learn from all unlabeled data, without inferring class predictions, is of particular interest as it can enable the model to learn from outlier data. For this reason, we include the self-supervision proposed by DoubleMatch as a core part of SeFOSS. ### Open-set recognition The ability to identify previously unseen classes is an important safety feature in many machine learning applications. The task of predicting if a class belongs to a pre-defined set of classes or not is often referred to as open-set recognition and is a widely studied problem [28, 2, 12, 13, 18, 20, 21]. Some existing solutions are based on, _e.g_., modeling class conditional feature distributions [20], using ensembles of models [18], or analyzing the model predictions under perturbations of input data [21]. Recently, Li _et al_. [22] proposed to use the _free-energy score_ for OSR. The free-energy score for a data point \(\mathbf{x}\) is obtained by viewing the logits \(f_{y}(\mathbf{x})\) for each class \(y\) as negative energies: \(E(\mathbf{x},y)=-f_{y}(\mathbf{x})\). The free-energy score is then given by \[F(\mathbf{x})=-\frac{1}{\beta}\log\sum_{y^{\prime}=1}^{C}e^{-\beta E(\mathbf{x },y^{\prime})}, \tag{1}\] where \(\beta\) is a hyperparameter and \(C\) is the number of classes. The free-energy score is theoretically aligned with the marginal distribution for ID data, \(p(\mathbf{x})\), so that we can expect \(F(\mathbf{x}_{\text{a}})<F(\mathbf{x}_{\text{b}})\) for an \(\mathbf{x}_{\text{a}}\) that is ID and an \(\mathbf{x}_{\text{b}}\) that is OOD. Worth noting is that for large \(\beta\), we get \(F(\mathbf{x})\approx\min_{y^{\prime}}E(\mathbf{x},y^{\prime})\), _i.e_., the maximum logit score, which also has been used successfully for OSR [33]. For SeFOSS, we utilize the free-energy score to determine which samples are ID or OOD. Our main motivation is its simplicity, _i.e_., not requiring architectural modifications or significant computational complexity, while still being a powerful discriminant. Another benefit of using a method that can be "plugged in" to any existing model is that it allows for easy and fair comparisons of results. ### Open-set semi-supervised learning In open-set semi-supervised learning, we study the case where the unlabeled training set, and sometimes also the test set, contain additional classes not present in the labeled set. Compared to (closed-set) SSL, this is a more general and much less studied problem, and compared to _open-set domain adaptation_ (OSDA) [24, 27], there are no assumptions regarding domain-shifts in the training sets. Many proposed methods for OSSL classify the unlabeled data as either ID or OOD and use only confident ID samples for the training objective in a traditional SSL scheme. One such example is **UASD**[5], where unlabeled data are classified as inliers based on thresholding the maximum of predicted softmax distribution. The method averages multiple predictions of the same unlabeled data from different time steps during training for increased predictive calibration. The framework **MTCF**[40] uses a similar strategy but employs a separate OSR head for predicting the probability of a sample being ID. The OSR head trains in an SSL fashion where the OSR head uses model predictions as optimization targets. In **SAFE-STUDENT**[11], the model is trained using both pseudo-in- and outliers, which are predicted from unlabeled data by using energy discrepancy. 
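
Returning to the free-energy score in (1), the snippet below shows a minimal NumPy version computed directly from classifier logits; the function name and interface are illustrative and not taken from the cited implementations.

```python
import numpy as np

def free_energy_score(logits, beta=1.0):
    """Free-energy score F(x) of Eq. (1) from per-class logits f_y(x),
    where each logit is viewed as a negative energy E(x, y) = -f_y(x).
    ID samples are expected to receive lower scores than OOD samples."""
    z = beta * np.asarray(logits)           # shape (N, C)
    m = z.max(axis=1, keepdims=True)        # subtract the max for stability
    logsumexp = m[:, 0] + np.log(np.exp(z - m).sum(axis=1))
    return -logsumexp / beta                # F(x) = -(1/beta) log sum_y exp(beta f_y)
```

A threshold on this score then gives an ID/OOD decision, and for large \(\beta\) the score approaches the (negative) maximum-logit score mentioned above.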
**OpenMatch**[26] and **T2T**[15] use a slightly different strategy built around a one-vs-all framework. In both cases, we have one head for each class predicting the probability of a sample belonging to the corresponding class. Predicted inliers are used for SSL objectives based on FixMatch (OpenMatch) or UDA (T2T). Both T2T and OpenMatch take steps towards utilizing all unlabeled data: OpenMatch through a consistency loss for its one-vs-all predictions, and T2T through a self-supervised rotation loss. In the **DS3L** method [9], the focus is on preserving closed-set performance by solving the OSSL problem via a bi-level optimization task. The inner task is to learn the classifier based on a standard two-term SSL optimization but scaling the loss for unlabeled samples using a data-dependent weight function. The outer task is thus to learn the weight function that minimizes the _labeled_ loss term of the inner task. This way, the outer optimization steers the training by using the labeled set as a proxy so that the model never drops in closed-set performance. Lastly, **TOOR**[16] proposes to identify "recyclable" OOD data,, semantically close to one of the ID classes. The method projects the recyclable OOD data on the space of ID data by domain adaptation. The "recycled" data, together with unlabeled data predicted as ID, is then used for training through an SSL objective. In summary, most existing contributions for OSSL are based on the assumption that OOD data are "harmful" and focus on detecting ID data from the unlabeled set to use in an SSL training objective. In SeFOSS, we instead facilitate learning from all unlabeled data while also learning to distinguish between ID and OOD. Additionally, in contrast to many prior works, we do not require extra model heads for classifying data as ID or OOD. Using the free energy score, we avoid additional model parameters or heavy computational complexity for solving the OSR task. ## 3 Method This section describes SeFOSS, our proposed method for open-set semi-supervised learning. The main philosophy behind this method and how it differs from existing works is that we encourage learning from _all_ unlabeled data, whether it is ID or OOD. Our proposed method achieves this by applying the self-supervised loss of DoubleMatch [34] on all unlabeled samples in each training batch. We complement the self-supervision with losses on unlabeled data confidently predicted as ID or OOD to improve OSR performance. For samples confidently predicted as ID, we apply a pseudo-labeling loss similar to those of FixMatch [30] and UDA [35]. For unlabeled data confidently predicted as outliers, we instead use energy regularization to increase the model's confidence that these are OOD. Figure 2 illustrates how SeFOSS treats unlabeled data. As per common practice in SSL, losses on unlabeled data are combined with a standard supervised cross-entropy loss on labeled data. The different parts of SeFOSS are detailed in the sections below. ### Self-supervision on all unlabeled data The central source of learning from unlabeled data in SeFOSS is self-supervision. 
To this end, we use the loss proposed by DoubleMatch [34],, a cosine similarity between feature predictions for different augmentations of unlabeled data: \[l_{s}=-\frac{1}{\mu B}\sum_{i=1}^{\mu B}\frac{h(\mathbf{v}_{i})\cdot\mathbf{z }_{i}}{\|h(\mathbf{v}_{i})\|\|\mathbf{z}_{i}\|}, \tag{2}\] where \(\mu B\) is the number of unlabeled samples in each batch, \(\mathbf{v}_{i}\) and \(\mathbf{z}_{i}\) are \(d\)-dimensional feature vectors from the penultimate network layer for weak and strong augmentations of sample \(i\), respectively. The operator \(\|\cdot\|\) is the \(l_{2}\) norm. The mapping \(h:\mathds{R}^{d}\rightarrow\mathds{R}^{d}\) is a trainable linear projection to allow for differences in feature predictions for weak and strong augmentations. When evaluating the gradient of \(l_{s}\), we consider \(\mathbf{z}_{i}\) as constant. A principal difference between the self-supervision of (2) and the losses for all unlabeled data in T2T [15] and OpenMatch [26], is that (2) makes use of strong data-augmentation whereas the corresponding losses of T2T and OpenMatch use weak data-augmentation only. ### Pseudo-labeling loss for pseudo-inliers SeFOSS uses the free-energy score [22] as defined in (1) to predict if unlabeled data belongs to one of the known classes. For convenience, we define the following equiv alent function \(s:\mathds{R}^{C}\rightarrow\mathds{R}\) that operates on the \(C\)-dimensional logits \(\mathbf{\sigma}\) as \[s(\mathbf{\sigma})=-\log\sum_{y^{\prime}=1}^{C}e^{\sigma_{y^{\prime}}}, \tag{3}\] where \(\sigma_{y^{\prime}}=f_{y^{\prime}}(\mathbf{x})\) is the logit associated with class \(y^{\prime}\) for data point \(\mathbf{x}\). For data that are confidently ID (pseudo-inliers), we expect \(s(\mathbf{\sigma})\) to be low. To amplify this confidence, we apply the pseudo-labeling loss \[l_{p}=\frac{1}{\mu B}\sum_{i=1}^{\mu B} \mathds{1}\{s(\mathbf{w}_{i})<\tau_{\text{id}}\} \tag{4}\] \[\times H(\operatorname*{argmax}(\mathbf{w}_{i}),\operatorname*{ softmax}(\mathbf{q}_{i})),\] where \(\tau_{\text{id}}\) is a confidence threshold. We let \(\operatorname*{argmax}:\mathds{R}^{C}\rightarrow\mathds{R}^{C}\) so that it returns a one-hot vector. Similarly, we define \(\operatorname*{softmax}:\mathds{R}^{C}\rightarrow\mathds{R}^{C}\) so that the \(j\):th element is \[\operatorname*{softmax}(\mathbf{\sigma})_{j}=\frac{e^{\sigma_{j}}}{\sum_{y^{ \prime}=1}^{C}e^{\sigma_{y^{\prime}}}},\text{ for }j=1,\dots,C. \tag{5}\] The inputs \(\mathbf{w}_{i}\) and \(\mathbf{q}_{i}\) in (4) are the predicted logits for unlabeled sample \(i\) for weak and strong data augmentations, respectively. Lastly, \(\mathds{1}\{\cdot\}\) is the indicator function and \(H(\cdot,\cdot)\) is the cross entropy between two discrete probability distributions \(\mathbf{p}^{a}\), \(\mathbf{p}^{b}\) given by \[H(\mathbf{p}^{a},\mathbf{p}^{b})=-\sum_{i=1}^{C}p_{i}^{a}\log p_{i}^{b}, \tag{6}\] where \(p_{i}^{a}\) and \(p_{i}^{b}\) are the \(i\):th elements of \(\mathbf{p}^{a}\in\mathds{R}^{C}\) and \(\mathbf{p}^{b}\in\mathds{R}^{C}\), respectively. As for \(l_{s}\) from (2), we consider the predictions on weakly augmented data as constant when evaluating the gradient of \(l_{p}\). Worth noting is that \(l_{p}\) is equivalent to the pseudo-labeling loss in FixMatch [30] with the exception that FixMatch selects pseudo-labels based on thresholding the maximum value of the predicted softmax distributions. 
We choose pseudo-labels instead based on thresholding the free-energy score, which is a better score for OSR than the maximum softmax probability [22]. ### Energy regularization for pseudo-outliers Many existing methods for OSSL [5, 9, 40] focus on identifying unlabeled data that are confidently ID. To boost the separation of OSR predictions between ID and OOD, we suggest including a loss for unlabeled data confidently predicted as OOD, _i.e_., pseudo-outliers. Inspired by [22], we employ a hinge loss on the free-energy score of the pseudo-outliers to stimulate the model to raise the free-energy score to a given margin. This energy regularization is given by \[l_{e}=\frac{\sum_{i=1}^{\mu B}\mathds{1}\{s(\mathbf{w}_{i})>\tau_{\text{ood}} \}\max(0,m_{\text{ood}}-s(\mathbf{w}_{i}))^{2}}{\sum_{i=1}^{\mu B}\mathds{1}\{ s(\mathbf{w}_{i})>\tau_{\text{ood}}\}}, \tag{7}\] where \(\tau_{\text{ood}}\in\mathds{R}\) is the confidence threshold for OOD data and \(m_{\text{ood}}\in\mathds{R}\) is the margin for the hinge loss. Note that this loss only uses the predictions for weakly augmented data because we empirically found that using predictions from strongly augmented data in \(l_{e}\) led to instabilities in training. ### Adaptive confidence thresholds In SeFOSS, we select pseudo-in- and outliers from unlabeled data, cf. (4) and (7), based on thresholding the free-energy score \(s\) defined in (3). The free-energy score is non-probabilistic and thus unbounded, so selecting these thresholds is not trivial. We propose adaptively calculating \(\tau_{\text{id}},\tau_{\text{ood}}\) and \(m_{\text{ood}}\) based on the distribution of \(s\) given our labeled training set at the end of a pre-training phase. During this pre-training phase, the model is trained only with a supervised loss on labeled data and the self-supervised loss given by (2) on unlabeled data. Following the pre-training phase, we compute \(s\) for the complete (unaugmented) labeled training set. The energy scores are evaluated using an exponential moving average of the model parameters for stability. Given the set of energy scores \(\{S_{i}:i=1,\dots,M\}\), where \(M\) is the number of labeled training data, we compute the median \(S_{m}\) and the interquartile range \(S_{iqr}\). The confidence thresholds and the margin for the energy regularization are then set as \[\tau_{\text{id}} \gets S_{m}-S_{iqr}\cdot\zeta_{\text{id}} \tag{8}\] \[\tau_{\text{ood}} \gets S_{m}+S_{iqr}\cdot\zeta_{\text{ood}}\] \[m_{\text{ood}} \gets S_{m}+S_{iqr}\cdot\zeta_{\text{ood}},\] Figure 2: Sorting of _unlabeled data_ in SeFOSS. Predicted outliers are used for energy regularization in \(l_{e}\). Predicted inliers are used for a pseudo-labeling loss in \(l_{p}\). All unlabeled data are used for self-supervision in \(l_{s}\). where \(\zeta_{\text{id}}\), \(\zeta_{\text{ood}}\), and \(\xi_{\text{ood}}\) are scalar hyperparameters. By using these adaptive metrics, we expect a tuned set of the hyperparameters \(\zeta_{\text{id}}\), \(\zeta_{\text{ood}}\), and \(\xi_{\text{ood}}\) to work for a wider set of problems, than if we instead would have tuned \(\tau_{\text{id}}\), \(\tau_{\text{ood}}\), and \(m_{\text{ood}}\) directly. ### Full training objective The full training loss is a weighted sum of five terms: \[l=l_{l}+w_{p}l_{p}+w_{s}l_{s}+w_{e}l_{e}+w_{w}l_{w}, \tag{9}\] where \(w_{p}\), \(w_{s}\), \(w_{e}\), and \(w_{w}\) are scalar hyperparameters for controlling the relative importance of each of the terms. 
Here, \(l_{s}\) is the self-supervision loss in (2), \(l_{p}\) is the pseudo-labeling loss in (4), and \(l_{e}\) is the energy regularization term in (7). We also use a supervised loss for the labeled data: \[l_{l}=\frac{1}{B}\sum_{i=1}^{B}H(\mathbf{y}_{i},\mathrm{softmax}(\mathbf{o}_{ i})), \tag{10}\] where \(B\) is the number of labeled data in each batch, \(\mathbf{y}_{i}\in\mathds{R}^{C}\) is the one-hot label vector for labeled sample \(i\), and \(\mathbf{o}_{i}\in\mathds{R}^{C}\) is the predicted logits for weakly augmented labeled sample \(i\). Lastly, we add weight-regularization \[l_{w}=\frac{1}{2}\|\boldsymbol{\theta}\|^{2}, \tag{11}\] where \(\boldsymbol{\theta}\) is the vector of all trainable model weights (excluding biases). ### Data augmentation and optimization Our method utilizes weak or strong data augmentation for both labeled and unlabeled data during training, where we follow one of the proposed augmentation strategies from FixMatch [30]. For the weak augmentations, we use a stochastic flip-and-shift strategy. The strong augmentation stacks the weak augmentation with two randomly selected transformations from RandAugment [6] followed by a final Cutout [7] transformation. For optimization, we use stochastic gradient descent with Nesterov momentum [31]. We define a scheme for the learning rate \(\eta\) where the learning rate stays constant in the pre-training phase and follows a cosine decay in the subsequent training phase: \[\eta(k)=\begin{cases}\eta_{0}&\text{for}\quad k<K_{p}\\ \eta_{0}\cos\left(\gamma\frac{\pi(k-K_{p})}{2(K-K_{p})}\right)&\text{otherwise} \end{cases}, \tag{12}\] where \(\eta_{0}\) is the initial learning rate, \(\gamma\) is a hyperparameter that controls the decay rate, \(k\) is the current training step, \(K_{p}\) is the number of pre-training steps, and \(K\) is the total number of training steps. Our training procedure is summarized in Algorithm 1, and each training step is detailed in Algorithm 2, where we denote the trainable parts of our model as \(f\), \(g\), and \(h\). The backbone model \(f:\mathds{R}^{D}\rightarrow\mathds{R}^{d}\) maps the input images of dimension \(D\) to the feature predictions of dimension \(d\). The classification head, \(g:\mathds{R}^{d}\rightarrow\mathds{R}^{C}\) is a linear head that predicts logits given feature predictions. Finally, \(h:\mathds{R}^{d}\rightarrow\mathds{R}^{d}\) is a projection head that performs a linear transformation on the feature predictions of strongly augmented data to the feature space of weakly augmented data, cf. (2). 
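
As a complement to the algorithm listings below, the following sketch shows how the adaptive thresholds of (8) could be computed from the energy scores of the labeled training set at the end of the pre-training phase. The function and argument names are illustrative, and the score array is assumed to have been evaluated with the energy score \(s\) from (3).

```python
import numpy as np

def adaptive_energy_thresholds(labeled_scores, zeta_id, zeta_ood, xi_ood):
    """Adaptive thresholds of Eq. (8).

    labeled_scores : 1-D array of energy scores s evaluated on the whole
        (unaugmented) labeled training set after the pre-training phase.
    Returns (tau_id, tau_ood, m_ood)."""
    s_m = np.median(labeled_scores)                    # median S_m
    q75, q25 = np.percentile(labeled_scores, [75, 25])
    s_iqr = q75 - q25                                  # interquartile range S_iqr
    tau_id = s_m - s_iqr * zeta_id    # scores below: confident pseudo-inliers
    tau_ood = s_m + s_iqr * zeta_ood  # scores above: confident pseudo-outliers
    m_ood = s_m + s_iqr * xi_ood      # margin for the energy hinge loss (7)
    return tau_id, tau_ood, m_ood
```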
``` 0: Strong augmentation \(\beta\), weak augmentation \(\alpha\), labeled batch \(\{(\mathbf{x}_{1},\mathbf{y}_{1}),\dots,(\mathbf{x}_{B},\mathbf{y}_{B})\}\), unlabeled batch \(\{\tilde{\mathbf{x}}_{1},\dots,\tilde{\mathbf{x}}_{B}\}\), scaling parameters \(w_{p}\), \(w_{s}\), \(w_{e}\), and \(w_{w}\), thresholds \(\tau_{\text{id}}\), \(\tau_{\text{ood}}\), and \(m_{\text{ood}}\), backbone model \(f\), prediction layer \(g\), projection layer \(h\) 1:\(\triangleright\) Cross-entropy loss for (weakly augmented) labeled data 2:for\(i=1,\cdots,B\)do 3:\(\mathbf{o}_{i}=g(f(\alpha(\mathbf{x}_{i})))\) 4:endfor 5:\(l_{l}=\frac{1}{B}\sum_{i=1}^{B}H(\mathbf{y}_{i},\mathrm{softmax}(\mathbf{o}_{ i}))\) 6:\(\triangleright\) Predictions on unlabeled data 7:for\(i=1,\cdots,\mu B\)do 8:\(\mathbf{z}_{i}=f(\alpha(\tilde{\mathbf{x}}_{i}))\)\(\triangleright\) Weak augmentation 9:\(\mathbf{v}_{i}=f(\beta(\tilde{\mathbf{x}}_{i}))\)\(\triangleright\) Strong augmentation 10:\(\mathbf{q}_{i}=g(\mathbf{v}_{i})\) 11:\(\mathbf{w}_{i}=g(\mathbf{z}_{i})\) 12:endfor 13: Compute \(l_{s}\), \(l_{p}\), \(l_{e}\), and \(l_{w}\) according to (2), (4), (7), and (11)return\(l_{l}+w_{p}l_{p}+w_{s}l_{s}+w_{e}l_{e}+w_{w}l_{w}\) ``` **Algorithm 2** SeFOSS training step ## 4 Experiments In this section, we report results from extensive experimental evaluations of SeFOSS together with multiple other methods for SSL and OSSL across a wide range of datasets. The OSSL methods that we compare with, MTCF [40], OpenMatch [26], and T2T [15], all seek to produce well-performing models in terms of both closed-set accuracy and OSR. For comparison, we include the widely used closed-set SSL method FixMatch [30] and a fully supervised baseline trained using only the labeled subset. We evaluate closed-set accuracy and OSR performance for each method, where the latter is measured as the AUROC for classifying data as ID or OOD. We calculate the AUROC exactly by evaluating pairwise comparisons as \[\text{AUROC}=\frac{\sum_{i=1}^{M_{\text{id}}}\sum_{j=1}^{M_{\text{ mod}}}\mathds{1}\{S(\mathbf{x}_{i}^{\text{id}})>S(\mathbf{x}_{j}^{\text{ood}})\}}{M _{\text{id}}M_{\text{ood}}}, \tag{13}\] where \(S(\cdot)\) is the method-specific ID score, \(x_{i}^{\text{id}}\) and \(x_{i}^{\text{ood}}\) are samples from the ID and OOD test sets, whereas \(M_{\text{id}}\) and \(M_{\text{ood}}\) are the number of samples in the ID and OOD test sets, respectively. For our method, FixMatch, and fully supervised, we use the free-energy score from (3) as the (negative) ID score \(S(\cdot)\), whereas for the others we use the method proposed in the respective paper. ### Datasets We use several datasets with different characteristics for a complete performance assessment. For ID data, we use CIFAR-10, CIFAR-100 [17] and ImageNet-30 [14]. The CIFAR sets are of size \(32^{2}\) and comprise 10 and 100 classes, respectively. Both have training sets of 50,000 images and test sets of size 10,000. When CIFAR-10/100 is used as ID, we use SVHN [23], uniform noise, and the corresponding CIFAR set as OOD. SVHN consists of images showing house numbers divided into 73,257 images in the training set and 26,032 for the test set. The uniform noise dataset has 50,000 training images and 10,000 test images. ImageNet-30 is a 30-class subset of ImageNet, selected such that there is no overlap between the classes. Following [26], we use the first 20 classes as ID and the last 10 classes as OOD. Each class has 1,300 training images and 100 test images. 
The images of ImageNet-30 are first resized so that the shortest side gets a length of 256 while keeping the aspect ratio. Each image is then center-cropped to size \(224^{2}\). The different open-set scenarios presented above pose distinct challenges. CIFAR-10 and CIFAR-100 originate from the same source set [17] and contain semantically similar classes, _e.g_., CIFAR-10 contains _dogs_ whereas CIFAR-100 has images of _wolves_, making for a challenging OSR problem. On the other hand, the similarities could potentially increase the possibility of learning useful features from the OOD set. Conversly, we have OOD data constituting pure noise images containing no semantic content or learnable features. However, they could cause unexpected behavior if misclassified and thus generate unwanted training signals. The in-between scenario is the SVHN set, which contains real images with learnable features but is semantically very different from CIFAR-10 and CIFAR-100. ImageNet-30 showcases how the methods perform on more realistic data similar to that used in practical applications. Example images are shown in Fig. 3. ### Limitations The experiments in this work only consider OSSL problems where the OOD data in training and testing follow the same distributions. Furthermore, we do not evaluate our method for very low-label regimes (_i.e_., only a few labeled training samples per class). Lastly, we only use ID sets that are balanced in terms of classes. ### Implementation details The architectures used for the experiments are WRN-28-2 [41] (ID: CIFAR-10), WRN-28-8 [41] (ID: CIFAR-100), and ResNet-18 [10] (ImageNet-30). In SeFOSS, when CIFAR-10 is ID, we use \(w_{e}=10^{-4},\ w_{s}=5.0,\ \eta_{0}=0.03,\gamma=7/8,B=64,\mu=7,\xi_{\text{id}}=0.2, \xi_{\text{ood}}=1.3,\zeta_{\text{ood}}=1.9,K_{p}=5\cdot 10^{4},K=4\cdot 10^{5},w_{d}=5 \cdot 10^{-4}\), and SGD momentum 0.9. When CIFAR-100 is ID we use \(w_{s}=15.0,w_{d}=10^{-4}\) and \(\gamma=5/8\) (following [34]), keeping the other hyperparameters the same. For ImageNet-30, we use the same hyperparameters as for CIFAR-10 except that we use \(K=2\cdot 10^{5}\) because of the more expensive training steps. We evaluate SeFOSS using an exponential moving average of the model parameters (with momentum 0.999). For T2T and OpenMatch, we use the original authors' implementations and hyperparameters. For a fair comparison, we implement MTCF with the FixMatch backbone (with original FixMatch hyperparameters). Our experiments with FixMatch use hyperparameters from the original work. The fully supervised baseline is trained for 50,000 steps, uses batch size 256, and a learning rate from (12) with \(\gamma=7/8\) and \(\eta_{0}=0.03\). ### OSSL performance **CIFAR-10/100:** The results from using CIFAR-10 and CIFAR-100 as ID data are shown in Tab. 1. For CIFAR-10, we use labeled sets of sizes 1,000 and 4,000, whereas, for CIFAR-100, the sets have sizes 2,500 and 10,000. The Figure 3: A few representative examples from the datasets used in our experiments. top row for each method shows closed-set accuracy in %, and the bottom row shows AUROC for OSR. We evaluate each combination of ID and OOD datasets for each method using five different sub-samplings of the complete labeled data. The reported numbers are the mean and standard deviation from these five training sessions. The results from each session are evaluated by taking median performance values from the last five model evaluations during training. From Tab. 
1, we see that SeFOSS is the only method that reaches good performances for both closed-set accuracy and AUROC across all scenarios. MTCF shows overall good AUROC but generally performs worse than SeFOSS on closed-set accuracy. T2T reaches good results on a few scenarios (CIFAR-100 with 10,000 labels with noise as OOD) but does not consistently perform well across scenarios. A drawback of T2T, in particular, is that it displays high variance in AUROC for many scenarios. OpenMatch shows very good results in terms of closed-set accuracy when CIFAR-10 is ID. OpenMatch does however seem unable to handle noise as OOD data since it displays poor and high-variance AUROC for these scenarios. OpenMatch also drops drastically in performance in the CIFAR-100 experiments for both closed-set accuracy and AUROC, where it is outperformed by the fully supervised lower bound for some scenarios. Slightly surprisingly, the pure closed-set SSL method FixMatch obtains the highest closed-set accuracy in nearly all scenarios. More expected, however, is that FixMatch consistently gives poor AUROC since it can freely assign pseudo-labels to OOD samples, leading to high confidence on these data. **ImageNet-30:** For the results on ImageNet-30, we compare with numbers reported by [26]. The number of labeled data used here is 2,600. To make our results comparable with [26], we report the test performance also for SeFOSS at the point of best validation performance given a labeled validation set of 1,000 images. The reported numbers are means and standard deviations over three runs. The results in Tab. 2 show that SeFOSS reaches better results than MTCF and OpenMatch in terms of both AUROC and closed-set accuracy. Note also that the hyperparameters used for SeFOSS are the same as for CIFAR-10, indicating that SeFOSS scales well to high-resolution data. **Avoiding collapse using validation data:** To present the fairest possible evaluation of OpenMatch, we note that the poor results displayed in Tab. 1 when CIFAR-100 is ID are many times a result of a training collapse from a much better performance. This collapse can be avoided by using a labeled validation set and selecting the model that yields the best performance on the validation set during training. The official code of [26] uses 50 images per class for this purpose. The results from evaluating OpenMatch using this procedure are shown in Tab. 3, where we see that OpenMatch displays much better results, although it does not solve the poor AUROC for noise OOD. However, for fairness, as SeFOSS does not suffer from training collapse and, thus, has no need for a validation set, it is free to use the additional data during training instead, resulting in a significant boost to closed-set accuracy. 
As SSL methods are meant for situations where labeled data are scarce or expen \begin{table} \begin{tabular}{c|c c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{3}{c}{CIFAR-100: 1,000 labels} & \multicolumn{3}{c}{CIFAR-10: 4,000 labels} & \multicolumn{3}{c}{CIFAR-100: 2,500 labels} & \multicolumn{3}{c}{CIFAR-100: 10,000 labels} \\ & CIFAR-100 & SVHN & Noise & CIFAR-100 & SVHN & Noise & CIFAR-10 & SVHN & Noise \\ \cline{2-11} Fully supervised & 54.51\(\pm\)1.82 & 54.03\(\pm\)2.05 & 55.39\(\pm\)2.67 & 75.57\(\pm\)1.78 & 76.70\(\pm\)2.27 & 77.62\(\pm\)1.79 & 34.62\(\pm\)1.43 & 33.19\(\pm\)1.80 & 4.03\(\pm\)1.50 & 59.12\(\pm\)0.91 & 60.32\(\pm\)0.54 & 59.40\(\pm\)2.11 \\ & 0.62\(\pm\)0.01 & 0.61\(\pm\)0.04 & 0.56\(\pm\)0.22 & 0.74\(\pm\)0.22 & 0.80\(\pm\)0.00 & 0.78\(\pm\)0.15 & 0.61\(\pm\)0.01 & 0.57\(\pm\)0.05 & 0.34\(\pm\)0.19 & 0.71\(\pm\)0.01 & 0.76\(\pm\)0.08 & 0.77\(\pm\)0.17 \\ \hline \multirow{2}{*}{FixMatch [30]} & 92.70\(\pm\)0.14 & 94.80\(\pm\)0.19 & 95.02\(\pm\)0.10 & 94.07\(\pm\)0.15 & 94.93\(\pm\)0.22 & 95.38\(\pm\)0.07 & 71.95\(\pm\)0.40 & 69.39\(\pm\)0.14 & 70.89\(\pm\)0.42 & 77.72\(\pm\)0.32 & 75.89\(\pm\)0.39 & 77.04\(\pm\)0.24 \\ & 0.66\(\pm\)0.00 & 0.67\(\pm\)0.03 & 0.71\(\pm\)0.05 & 0.69\(\pm\)0.01 & 0.67\(\pm\)0.02 & 0.73\(\pm\)0.01 & 0.46\(\pm\)0.01 & 0.49\(\pm\)0.03 & 0.67\(\pm\)0.17 & 0.51\(\pm\)0.01 & 0.48\(\pm\)0.03 & 0.65\(\pm\)0.04 \\ \hline \multirow{2}{*}{MTCF [40]} & 82.96\(\pm\)1.08 & 90.49\(\pm\)0.79 & 89.32\(\pm\)0.65 & 89.87\(\pm\)0.21 & 92.72\(\pm\)0.04 & 92.01\(\pm\)0.02 & 40.46\(\pm\)1.40 & 53.55\(\pm\)1.24 & 46.56\(\pm\)0.66 & 62.88\(\pm\)0.02 & 66.10\(\pm\)0.63 & 63.80\(\pm\)0.75 \\ & 0.81\(\pm\)0.00 & 1.00\(\pm\)0.00 & 1.00\(\pm\)0.00 & 1.00\(\pm\)0.00 & 0.82\(\pm\)0.01 & 1.00\(\pm\)0.00 & 1.00\(\pm\)0.00 & 0.80\(\pm\)0.00 & 1.00\(\pm\)0.00 & 1.00\(\pm\)0.00 \\ \cline{2-11} & - & - & - & - & - & - & - & - & - & - & - & - \\ \cline{2-11} & 58.69\(\pm\)1.09 & 91.83\(\pm\)1.30 & 91.13\(\pm\)1.12 & 86.11\(\pm\)1.91 & 92.16\(\pm\)1.00 & 92.91\(\pm\)0.57 & 38.30\(\pm\)0.27 & 58.44\(\pm\)1.81 & 53.33\(\pm\)0.39 & 62.02\(\pm\)3.73 & 70.93\(\pm\)4.78 & 73.01\(\pm\)0.37 \\ & 0.57\(\pm\)0.02 & 0.96\(\pm\)0.07 & 0.72\(\pm\)0.03 & 0.57\(\pm\)0.04 & 0.80\(\pm\)0.34 & 0.90\(\pm\)0.19 & 0.63\(\pm\)0.08 & 0.80\(\pm\)0.00 & 1.00\(\pm\)0.00 & 0.59\(\pm\)0.03 & 0.66\(\pm\)0.04 & 1.00\(\pm\)0.00 \\ \cline{2-11} & **92.20\(\pm\)**0.15 & **94.12\(\pm\)**0.34 & **94.07\(\pm\)**0.08 & **94.82\(\pm\)**0.21 & **94.73\(\pm\)**0.09 & **94.76\(\pm\)**0.15 & 20.84\(\pm\)**0.85 & 18.66\(\pm\)**2.59 & 16.10\(\pm\)0.30 & 40.95\(\pm\)0.34 & 32.69\(\pm\)0.68 & 21.19\(\pm\)0.55 \\ \cline{2-11} & 0.93\(\pm\)0.00 & 0.98\(\pm\)0.03 & 0.68\(\pm\)0.20 & 0.96\(\pm\)0.00 & 1.00\(\pm\)0.00 & 0.58\(\pm\)0.23 & 0.66\(\pm\)0.05 & 0.69\(\pm\)0.10 & 0.85\(\pm\)0.18 & 0.77\(\pm\)0.01 & 0.68\(\pm\)0.19 & 0.50\(\pm\)0.23 \\ \cline{2-11} & 0.91\(\pm\)0.16 & 91.16\(\pm\)0.27 & 92.78\(\pm\)1.00 & 93.73\(\pm\)0.73 & 92.60\(\pm\)0.04 & 94.14\(\pm\)0.06 & **68.48\(\pm\)**0.26 & **62.99\(\pm\)**0.39 & **64.54\(\pm\)**1.00 & **77.63\(\pm\)**0.21 & **73.60\(\pm\)**0.20 & **75.25\(\pm\)**0.34 \\ \cline{2-11} & 0.90\(\pm\)0.01 & 0.99\(\pm\)0.01 & 1.00\(\pm\)0.00 & 0.92\(\pm\)0.00 & 1.00\(\pm\)0.00 & 0.79\(\pm\)0.01 & 1.00\(\pm\)0.00 & 1.00\(\pm\)0 sive, assuming the presence of a sufficiently large labeled validation set during training goes against this philosophy. ### Influence of OOD data on SSL methods From Tab. 
1, we see that FixMatch displays high closed-set accuracies for all datasets. These results contradict prior works where OOD data in SSL are assumed to significantly harm the closed-set performance [9, 11]. To investigate this further, we study how a few SSL methods perform when trained using unlabeled data containing different fractions of OOD data. The SSL methods that we evaluate are FixMatch, UDA [35], and MixMatch [3]. We also include SeFOSS for comparison. The dataset used consists of CIFAR-10 with 4,000 labels as ID data and CIFAR-100 as OOD data. The unlabeled datasets with different fractions of OOD data are created by adding CIFAR-100 data (up to 0.5) or removing CIFAR-10 data (above 0.5). For OSR, the SSL methods are evaluated using the free-energy score (3). The results are shown in Fig. 4. Most notable from the results is that FixMatch and UDA show no significant drop in closed-set accuracy when the fraction of OOD data is below 0.4. MixMatch loses closed-set accuracy faster, likely due to its use of mixup [43] augmentations, since it should intuitively not handle OOD data well. For AUROC, we see a quick drop in performance for all SSL methods as OOD data gets added to the unlabeled set. It is also here we see a significant difference in the performance of SeFOSS and the traditional SSL methods. Our framework consistently shows high AUROC (around 0.9) when the fraction of OOD data is below 0.7. ### Ablation SeFOSS uses three loss functions to learn from unlabeled data, see Fig. 2. To study the importance of these terms, we conduct experiments using 1) only self-supervision on unlabeled data, 2) self-supervision and energy regularization, and 3) self-supervision and pseudo-labeling. Additionally, we evaluate the OSR performance of case 1) using the maximum softmax confidence to confirm that the free-energy score gives better performance. The results in Tab. 4 show that using only the self-supervision gives nearly as good results as using the entire framework. Adding pseudo-labeling and energy regularization barely affects the closed-set accuracy but gives better AUROC by a couple of percentage points. This indicates that self-supervision alone, at least under these conditions, is a strong and safe training signal for OSSL. Moreover, we study the effect of manually adjusting \(\tau_{\text{id}}\). In Tab. 5 we see an increase in accuracy at the cost of a reduction in AUROC when lowering \(\tau_{\text{id}}\). The increase in accuracy can be explained by the model assigning pseudo-labels to more data. However, when \(\tau_{\text{id}}\) is low, many predicted inliers are likely outliers, causing weaker OSR performance. These experiments are done with \(w_{e}=0\) to isolate the effect of \(\tau_{\text{id}}\). ## Acknowledgment This work was supported by Saab AB, the Swedish Foundation for Strategic Research, and Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. The experiments were enabled by resources provided by the Swedish National Infrastructure for Computing (SNIC) at Chalmers Centre for Computational Science and Engineering (C3SE), and National Supercomputer Centre (NSC) at Linkoping University.
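
As a final illustration of the evaluation protocol, a small sketch of the exact pairwise AUROC computation in (13) is given below; array names are illustrative, and when the free-energy score is used the ID score is its negative, as noted above.

```python
import numpy as np

def pairwise_auroc(id_scores, ood_scores):
    """Exact AUROC of Eq. (13): the fraction of (ID, OOD) test pairs for
    which the ID sample receives the higher ID score S(x)."""
    id_scores = np.asarray(id_scores)[:, None]     # shape (M_id, 1)
    ood_scores = np.asarray(ood_scores)[None, :]   # shape (1, M_ood)
    wins = (id_scores > ood_scores).sum()          # strict inequality, ties count as 0
    return wins / (id_scores.size * ood_scores.size)
```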
2308.04459
MCTS guided Genetic Algorithm for optimization of neural network weights
In this research, we investigate the possibility of applying a search strategy to genetic algorithms to explore the entire genetic tree structure. Several methods aid in performing tree searches; however, simpler algorithms such as breadth-first, depth-first, and iterative techniques are computation-heavy and often result in a long execution time. Adversarial techniques are often the preferred mechanism when performing a probabilistic search, yielding optimal results more quickly. The problem we are trying to tackle in this paper is the optimization of neural networks using genetic algorithms. Genetic algorithms (GA) form a tree of possible states and provide a mechanism for rewards via the fitness function. Monte Carlo Tree Search (MCTS) has proven to be an effective tree search strategy given states and rewards; therefore, we will combine these approaches to optimally search for the best result generated with genetic algorithms.
Akshay Hebbar
2023-08-07T18:11:51Z
http://arxiv.org/abs/2308.04459v1
# MCTS guided Genetic Algorithm for optimization of neural network weights ###### Abstract In this research, we investigate the possibility of applying a search strategy to genetic algorithms to explore the entire genetic tree structure. Several methods aid in performing tree searches; however, simpler algorithms such as breadth-first, depth-first, and iterative techniques are computation-heavy and often result in a long execution time. Adversarial techniques are often the preferred mechanism when performing a probabilistic search, yielding optimal results more quickly. The problem we are trying to tackle in this paper is the optimization of neural networks using genetic algorithms. Genetic algorithms (GA) form a tree of possible states and provide a mechanism for rewards via the fitness function. Monte Carlo Tree Search (MCTS) has proven to be an effective tree search strategy given states and rewards; therefore, we will combine these approaches to optimally search for the best result generated with genetic algorithms. genetic algorithm, mcts, optimization, reinforcement learning, neural network ## I Introduction Genetic algorithms belong to a subclass of evolutionary algorithms that provide a method for optimization based on genetic selection principles [8]. They serve a purpose in machine learning and research development, in addition to search tool optimization systems. The approach used in genetic algorithms is analogous to biological concepts of chromosome generation, with operators such as selection, crossover, mutation, and recombination. GA is a population-based approach that aims to provide a solution to successive generations. The process of evolution using GA involves starting with a random population and evolving it using crossover and mutation operators to generate children. The best-fit solution is then filtered, and the genetic process is repeated until the objective is achieved. We can observe that in the process of genetic algorithms, a tree of possible solutions is generated, and the best-fit solution is picked for successive iteration, which limits the search space and computational resources. Genetic algorithms are used for various problem domains such as decision trees, segmentation, classification, etc. However, in this paper, our focus will be on the application of GA in optimizing neural network weights. The Monte Carlo Tree Search approach was developed in 2006 as an application to game-tree search. This tree search works on the principles of cumulated reward calculated from children's nodes and uses Q and N values to balance between exploration and expansion approaches. The exploration approach considers the number of nodes visited and uses a quantitative approach to discovering child nodes that have not been visited. The expansion approach follows a qualitative strategy to discovering child nodes with Q-value indicating the cumulative sum of rewards. Figure 1: Simple Genetic Algorithm Figure 2: Outline of a Monte Carlo Tree Search A sufficient policy for MCTS - from prior research - has been found to be UCT (upper confidence trees). UCT provides an upper confidence bound to tree search. This policy helps the search with balancing exploration vs exploitation and navigate the search space optimally. Given that the MCTS is an adversarial tree search strategy, we may be able to apply it to the entire GA tree landscape to find optimal solutions rather than exploration based on the fitness alone. 
Thus, we discuss the novel approach of MCTS-GA for the optimization of neural network weights in this research. ## II Problem and Data Description ### _GA for Neural Network weights_ The first problem to consider when applying GA to optimizing the neural network weights is to confirm the validity of the child nodes generated in the process. The crossover and mutation operators applied on these weights may result in a suboptimal solution in an early generation, but these solutions may evolve to become better solutions in later generations. Conversely, a solution which had the highest fitness in earlier generations may end up providing a sub-optimal result in a later generation due to the nature of the genetic operators used. Thus, the only way to identify the overall best solution is to let the entire tree expand and calculate the fitness of each child node until a valuable solution is found. However, this approach is computation heavy, and an exhaustive tree search is rarely optimal even in cases of smaller trees. The number of nodes in a tree with branching factor b and height h, which makes up the search space, is given by \(\frac{b^{h}-1}{b-1}\). A quick calculation for a tree with branching factor of 10 and depth 10 shows the search space to be 1,111,111,111. Genetic algorithms often perform better when the size of the tree is large, and with an increase in tree size the number of nodes generated, and thus the search space itself, increases exponentially. ### _Maintaining the Integrity of the weights_ A well-known issue with genetic algorithms is the problem of competing conventions, wherein the child nodes generated through evolution are not viable and have decreased fitness. An example in the case of a neural network is a crossover operator applied to the weights of the neural network. The operator shuffles the weights and applies a random mutation which can propagate between layers and dimensions. The modified weights may not be appropriate for optimizing the loss function of the given neural network and may lead to an invalid solution. ### _Data Description_ The diabetes dataset is chosen to forecast the onset of diabetes mellitus. This dataset is originally from the National Institute of Diabetes and Digestive and Kidney Diseases. The objective of the dataset is to diagnostically predict whether a patient has diabetes, based on certain diagnostic measurements included in the dataset [1]. The dataset has been preprocessed by balancing using under sampling and random shuffling. A minmax scaler is used to scale the data.

Fig. 4: Heatmap of the diabetes dataset

For this classification problem we have developed a feedforward neural network with 4 hidden layers. There are 8 input nodes and (16-84-1) hidden nodes respectively in each layer. The neural network uses a sigmoid activation function, binary cross entropy loss function and the Adam optimizer with a 0.01 learning rate. The network is trained for 200 epochs with a batch size of 10. The weights of this neural network are used as the point of optimization for our MCTS-GA approach. The weights of each layer are vectorized, combined, and labelled to form an individual upon which the algorithm is applied. ## III Approach In our approach we try to address both the issues mentioned above by combining the approaches of MCTS and GA. 
We can take advantage of the genetic algorithm structure, as it generates a tree with a mechanism to evaluate the reward in terms of the fitness of the individual. MCTS uses the same underlying tree structure, along with the fitness function, to calculate the Q-value, which indicates the cumulative sum of rewards; the N-value, which indicates the visit count of each node, is also maintained for the calculation of upper bounds. The overall structure follows the complete expansion of child nodes using GA, with MCTS under the UCT policy searching for the optimal nodes and the fitness function assigning rewards. The process of evolution using GA starts with a random population, which in our case is the weights of the neural network. We generate the initial population P\({}_{0}\) of a certain size by adding random uniform noise to each dimension of the weights of the neural network, creating children with random weights. Next, we introduce the concept of _genetic action_ for MCTS, as illustrated by the dashed box in Figure 5. The genetic action consists of the following three genetic operations applied to the parent population: selection, crossover and mutation. Selection: Various selection strategies, such as roulette wheel, tournament, rank and steady state, are available for filtering individuals of higher fitness. The selection mechanism used in this approach is k-way tournament selection, which draws k random individuals from the population and selects the fittest among them. Crossover: A crossover operator is then applied to the population to generate children. 1-point crossover is applied, restricted to each layer of the neural network, to mitigate the problem of competing conventions and maintain the integrity of the weights of the neural network. Mutation: The mutation operator is used to introduce random mutation in the population. For our approach, we apply a random mutation to each layer of the neural network by swapping the weight values of two randomly chosen individuals (these operators are sketched in code below). The subsequent child nodes are selected with the UCT policy of MCTS from the root node until a leaf node is found. A child of the leaf node is then selected at random, and its corresponding Q and N values are backpropagated. The UCT policy uses the standard UCB1 bound. Fig. 5: Application of Monte Carlo Tree Search applied to Genetic Algorithm \[\text{UCB1}_{i}=\frac{Q_{i}}{N_{i}}+c\sqrt{\frac{\ln N}{N_{i}}}\,,\] where \(Q_{i}\) and \(N_{i}\) are the cumulative reward and visit count of the \(i^{\text{th}}\) child node, \(N\) is the visit count of the parent node and \(c\) is the exploration constant. _Equation 1. Upper confidence bounds_ The next step in the process is the application of the concept of _rollout_ (simulation) based on the MCTS approach. For the selected child node, an _evolutionary rollout_ operation is applied, wherein the individual is evolved using a \((\mu+\lambda)\) evolutionary strategy, unlike the regular random-action simulations performed in prior research experiments [3]. The reason for applying the evolutionary rollout is to find out whether the genetically mutated individual has the highest fitness possible for its phenotype. We define this process as _ageing_ the individual, by analogy with the biological phenomenon of ageing. Thus, the ageing process introduced in the rollout determines the best possible age (generation) at which the individual would be best suited to genetically mutate again.
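The three genetic operators making up the genetic action can be illustrated with a short, self-contained sketch. It treats an individual as a list of per-layer weight vectors, as described in Section II; the tournament size, the crossover point and the swap-based mutation are illustrative readings of the description above, not code released with the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# An "individual" is a list of per-layer weight vectors (each with >= 2 weights),
# i.e. the vectorized weights of each layer of the feedforward network.
def tournament_selection(population, fitnesses, k=3):
    """k-way tournament: draw k individuals at random, keep the fittest."""
    idx = rng.choice(len(population), size=k, replace=False)
    best = max(idx, key=lambda i: fitnesses[i])
    return population[best]

def layerwise_crossover(parent_a, parent_b):
    """1-point crossover restricted to each layer, so weights never cross
    layer boundaries (mitigating the competing-conventions problem)."""
    child = []
    for layer_a, layer_b in zip(parent_a, parent_b):
        point = rng.integers(1, layer_a.size)          # crossover point within the layer
        child.append(np.concatenate([layer_a[:point], layer_b[point:]]))
    return child

def swap_mutation(population, layer_idx=None):
    """Swap the weights of one layer between two randomly chosen individuals
    (one reading of the mutation described in the text)."""
    i, j = rng.choice(len(population), size=2, replace=False)
    layer = layer_idx if layer_idx is not None else rng.integers(len(population[i]))
    population[i][layer], population[j][layer] = (
        population[j][layer].copy(),
        population[i][layer].copy(),
    )
    return population
```

In the full algorithm, these operators are applied together as the genetic action that expands each tree node.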
As noted above, the rollout/ageing process replaces the individual if a better fitness is found in later generations of the evolutionary strategy. This process is repeated until the specified tree depth is reached. This approach provides computational flexibility in terms of configurable parameters such as the tree height, the tournament selection, the number of rollout generations and the branching factor. These parameters can be used in combination with each other according to the computational capacity available. Thus, the application of genetic action and evolutionary rollout in combination with MCTS provides the basis for the approach discussed in this paper. ## IV Results The MCTS-GA approach is a novel mechanism for optimization and aims to be data-agnostic. The genetic algorithm representation can be configured in multiple ways, which makes this approach suitable for a wide range of optimization problems. In this experiment, MCTS-GA was run using UCT for 20 generations (tree depth), with rollouts configured to 10 generations and a branching factor of 5. The GA was run for 200 generations and the neural network was trained for 200 epochs. The results obtained from initial testing are positive. The MCTS-GA approach was able to optimize the neural network weights for better classification of the diabetes data and obtained better accuracy results. A comparison of the results with those of the original neural network and the canonical genetic algorithm is shown below. The classification accuracy can also be seen from the ROC-AUC curves indicated below. The confusion matrices for the three approaches compared are also shown. ## V Discussion The experiment confirms that MCTS-GA works for the optimization of neural network weights. The optimization of the weights, and thus the classification, is better achieved by MCTS-GA than by the genetic algorithm and the feedforward neural network approaches. Although the improvement is not large, MCTS-GA runs in a time comparable to the other two techniques. There is scope for improvement of the algorithm discussed here, and representations of different problems remain to be tested. Fig. 6: ROC-AUC curves for the neural network, genetic algorithm and MCTS-GA, respectively. Fig. 7: Confusion matrices for the neural network, genetic algorithm and MCTS-GA, respectively. In all, we discussed a novel approach that can prove to be a strong and valid optimization technique in the future.
2305.19392
DuoSearch: A Novel Search Engine for Bulgarian Historical Documents
Search in collections of digitised historical documents is hindered by a two-prong problem, orthographic variety and optical character recognition (OCR) mistakes. We present a new search engine for historical documents, DuoSearch, which uses ElasticSearch and machine learning methods based on deep neural networks to offer a solution to this problem. It was tested on a collection of historical newspapers in Bulgarian from the mid-19th to the mid-20th century. The system provides an interactive and intuitive interface for the end-users allowing them to enter search terms in modern Bulgarian and search across historical spellings. This is the first solution facilitating the use of digitised historical documents in Bulgarian.
Angel Beshirov, Suzan Hadzhieva, Ivan Koychev, Milena Dobreva
2023-05-30T20:10:44Z
http://arxiv.org/abs/2305.19392v1
# DuoSearch: A Novel Search Engine for Bulgarian Historical Documents ###### Abstract Search in collections of digitised historical documents is hindered by a two-prong problem, orthographic variety and optical character recognition (OCR) mistakes. We present a new search engine for historical documents, DuoSearch, which uses ElasticSearch and machine learning methods based on deep neural networks to offer a solution to this problem. It was tested on a collection of historical newspapers in Bulgarian from the mid-19th to the mid-20th century. The system provides an interactive and intuitive interface for the end-users allowing them to enter search terms in modern Bulgarian and search across historical spellings. This is the first solution facilitating the use of digitised historical documents in Bulgarian. Keywords:Historical newspapers search engine, Orthographic variety, Post-OCR text correction, BERT ## 1 Introduction and Related Work Many libraries and archives digitised sizeable collections and made an additional step towards datafication applying Optical Character Recognition (OCR). However, digitised information is not easily accessible by end-users due to two main hindering blocks. First, the OCR process introduces errors in recognition because of challenging layouts and orthographic variety due to the nine language reforms applied to the Bulgarian language. Second, because the collections of historical documents include texts in a mixture of orthographic conventions the users should be able to use the historical forms of the search keywords. We are applying a novel approach that builds upon the automated techniques for post-OCR text correction in combination with spelling conversion and extended searching to tackle both issues (OCR errors and orthographic variety). Our search engine was used for a case study with a historical newspaper collection from The National Library "Ivan Vazov" (NILV) in Plovdiv. The purpose of our research was to build a prototype search engine which addresses the two issues mentioned above. DuoSearch can be used for all kinds of digitised historical Bulgarian documents within the same period where OCR was not followed by quality control. The tool uses dictionary for Bulgarian but can be modified and adapted for other languages. The paper would be useful for anyone who is developing search tools for historical collections of texts with errors and/or linguistic variance. The system can perform a search in a collection of documents that are written in different spellings and have a relatively high number of erroneous tokens. It was implemented using ElasticSearch and uses machine learning techniques to improve the quality of the indexed data. It also provides an intuitive and interactive web interface, export of the results and easy navigation across the returned documents. The interest in developing digital resources and tools which answer historians' needs for searching across collections of newspapers developed over the last decades [4]. Some researchers focused on newspaper collections metadata as a solution for improved search [5]. Many challenges arise when working with cultural heritage, like historical newspapers, some of which are described in [6]. Common interests around these challenges in different disciplines have led to projects such as the European project NewsEye [1]. It uses data in a few languages and provides enhanced access to historical newspapers for a wide range of users. 
In Bulgaria, there is already a collection of digitised historical newspapers, but access to it is cumbersome due to OCR errors and the multiple language reforms. There was a competition [2] supported by NewsEye whose purpose was to compare and evaluate automatic approaches for correcting OCR-ed texts from historical documents. The competition provides data in 10 European languages, including Bulgarian. ## 2 System Design The design of the system is shown in Figure 1. The system uses a three-tier architecture, with the user interface, the business logic and the data as independent modules. The presentation layer is responsible for the direct interaction between the user and the system, sending requests to the system and visualizing results. The search API component transforms and proxies the request to and from ElasticSearch in order to retrieve the relevant documents for the search query. To handle the linguistic variance, the Search API component contains a converter, which transforms the text entered by the user into the historical spelling and sends two requests. The processor component does the data preprocessing before sending it to ElasticSearch for indexing. The preprocessing includes correcting mistakes in the post-OCRed text and removing redundant metadata from the pages. The post-OCR text correction process can be divided into two phases: error detection and error correction. For error detection, we have used the pretrained multilingual BERT model [3], which is context-aware and helps in identifying both syntactical and grammatical mistakes. The data is first tokenized into words, which are then transformed into BERT sub-tokens. Afterwards, we feed these sub-tokens to the BERT model, and the output for each sub-token is passed to convolutional layers with different kernel sizes: 2, 3, 4, 5 and 6, each one with 32 filters. After the convolutional layers, we have maxpool layers with a stride of 1, which intuitively represents the information exchange between the n-grams. The outputs from these layers are then concatenated and fed to the last layer, which is linear and does the final classification of the sub-token. If more than one BERT sub-token is predicted to be erroneous, then the whole token is erroneous. For error correction, we use a character-level sequence-to-sequence model, which takes the tokens classified as erroneous by the error detection model and corrects them. For further improvement of the text correction, we have used the CLaDA-BG dictionary [7], which contains over 1.1 million unique words in all of their correct forms. To address the different orthographic varieties during post-OCR text correction, we need to know the version of the language in which the document is written; we then apply a converter to the dictionary and use it in our model. DuoSearch supports two search types: regular search and extended search. The regular search is a full-text search using the standard match query from ElasticSearch. The extended search supports additional options, like proximity searching, boolean operators and others. It is implemented using the query DSL of ElasticSearch. For testing the current search engine prototype, we have used a set of Bulgarian newspapers provided by NLIV, which amounts to around 4 GB from the period 1882-1930. The source code is available on GitHub1 and a live demo version on AWS.2.
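The dual-request idea described above can be sketched as follows. This is a minimal illustration, not the DuoSearch source: the index name, field name and the spelling-conversion rules are placeholders, and the query uses the plain ElasticSearch REST `_search` endpoint with a standard match query.

```python
import requests

ES_URL = "http://localhost:9200/newspapers/_search"   # assumed index name

def to_historical_spelling(term: str) -> str:
    """Toy converter: a real implementation would apply the orthographic rules
    of the pre-1945 reforms (e.g. restoring dropped yat and yer characters)."""
    return term  # placeholder conversion

def duo_search(term: str, size: int = 10):
    """Search both the modern-spelling term and its historical-spelling variant."""
    hits = []
    for variant in {term, to_historical_spelling(term)}:
        query = {"query": {"match": {"text": variant}}, "size": size}
        response = requests.post(ES_URL, json=query, timeout=10)
        hits.extend(response.json().get("hits", {}).get("hits", []))
    # De-duplicate by document id, keeping the highest-scoring hit per document.
    best = {h["_id"]: h for h in sorted(hits, key=lambda h: h["_score"])}
    return sorted(best.values(), key=lambda h: h["_score"], reverse=True)
```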
Footnote 1: [https://github.com/angelbeshirov/DuoSearch](https://github.com/angelbeshirov/DuoSearch) Footnote 2: [http://ec2-18-225-9-151.us-east-2.compute.amazonaws.com:8080](http://ec2-18-225-9-151.us-east-2.compute.amazonaws.com:8080) Figure 1: System architecture ## 3 Evaluation We have used the Bulgarian dataset provided by the organizers of the ICDAR 2019 competition [2] to evaluate our post-OCR text correction model. It contains 200 files with around 32,000 unique words from the period before 1945. As evaluation metrics for our text correction model, we have used F-measure and % of improvement. Our results are shown in Table 1, compared with the Clova AI model. Our model is similar to that of the Clova AI team from the competition, with some improvements for the Bulgarian dataset. We managed to achieve an improvement of 9% over it, using the additional dictionary [7]. The % of improvement is measured by comparing the weighted sum of the Levenshtein distances between the raw OCR text and the corrected one with the gold standard. We afterwards used the trained model to correct the mistakes in the PDF documents before indexing by the search engine. In the future, additional evaluation will be done with the users of the system. ## 4 Conclusion and Contributions In this paper, we have described a new search engine that combines various technologies to allow for fast searching across a collection of historical newspapers. This contributes to addressing the overall problem in Bulgaria of improving access to historical documents. It has been acknowledged by Europeana as an example of a successful partnership between universities and libraries. In the future, we will work on the text correction by developing a model which predicts the language revision in which a text is written and applying different models for each orthography, as some words are flagged as incorrect when they are written in a language version different from the one the model was trained on. The system also has to be installed on the servers in the library and index the whole collection of newspapers, which is around 200 GB. ## Acknowledgements This research is partially supported by Project UNITe BG05M2OP001-1.001-0004 funded by the OP "Science and Education for Smart Growth" and co-funded by the EU through the ESI Funds. The contribution of M. Dobreva is supported by the project KP-06-DB/6 DISTILL funded by the NSF of Bulgaria. \begin{table} \begin{tabular}{c c c} \hline Model & F-score & \% of improvement \\ \hline Clova AI & 0.77 & 9\% \\ DuoSearch & 0.79 & 18.7\% \\ \hline \end{tabular} \end{table} Table 1: Evaluation results
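The "% of improvement" metric reported in Table 1 can be sketched roughly as below: the summed Levenshtein distance to the gold standard is compared before and after correction. The exact token weighting used in the ICDAR 2019 evaluation is not reproduced here; this only conveys the general idea.

```python
def levenshtein(a: str, b: str) -> int:
    """Standard edit distance via dynamic programming (two-row variant)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def percent_improvement(raw_tokens, corrected_tokens, gold_tokens) -> float:
    """Reduction in total edit distance to the gold standard, in percent."""
    before = sum(levenshtein(r, g) for r, g in zip(raw_tokens, gold_tokens))
    after = sum(levenshtein(c, g) for c, g in zip(corrected_tokens, gold_tokens))
    return 100.0 * (before - after) / before if before else 0.0
```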
2310.12071
Higgs-Induced Screening Mechanisms in Scalar-Tensor Theories
We consider the theory of a light conformally coupled scalar field, i.e., one that is coupled directly to the Ricci scalar of the gravitational sector. This theory can be written equivalently as one of a light scalar that is coupled to the Standard Model of particle physics with a particular combination of Higgs-portal couplings. When the conformal coupling function contains terms that are linear and quadratic in the conformally coupled scalar, we find that the effective mass of the light propagating mode and its coupling to matter fields, obtained after expanding around a minimum of the classical potential, depend on the energy density of the background environment. This is despite the absence of non-linear terms in the original equation of motion for the light conformally coupled field. Instead, we find that the non-linearities of the prototype Higgs potential are communicated to the light mode. In this way, we present a novel realisation of screening mechanisms, in which light degrees of freedom coupled to the Standard Model are able to avoid experimental constraints through environmental and thin-shell effects.
Clare Burrage, Peter Millington
2023-10-18T16:06:23Z
http://arxiv.org/abs/2310.12071v1
# Higgs-Induced Screening Mechanisms in Scalar-Tensor Theories ###### Abstract We consider the theory of a light conformally coupled scalar field, i.e., one that is coupled directly to the Ricci scalar of the gravitational sector. This theory can be written equivalently as one of a light scalar that is coupled to the Standard Model of particle physics with a particular combination of Higgs-portal couplings. When the conformal coupling function contains terms that are linear and quadratic in the conformally coupled scalar, we find that the effective mass of the light propagating mode and its coupling to matter fields, obtained after expanding around a minimum of the classical potential, depend on the energy density of the background environment. This is despite the absence of non-linear terms in the original equation of motion for the light conformally coupled field. Instead, we find that the non-linearities of the prototype Higgs potential are communicated to the light mode. In this way, we present a novel realisation of screening mechanisms, in which light degrees of freedom coupled to the Standard Model are able to avoid experimental constraints through environmental and thin-shell effects. _Keywords--_ Scalar Fields, Higgs Portal, Fifth Forces, Dark Energy, Dark Matter, Modified Gravity Introduction Light scalar fields are a popular candidate for physics beyond the Standard Model (SM), with significant motivation coming from theories of dark matter [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], dark energy [13, 14, 15] and modified gravity [16, 17, 18, 19]. However, there is, as yet, no evidence of new, light scalars coupled to the SM particles. One way to explain the lack of evidence of new scalars is to tune the coupling of the scalar to the SM to be small [20]. If we wish to avoid this tuning, there are currently two options available. The first is to couple the scalar field conformally to a fully scale-invariant SM Lagrangian. In this case, a symmetry suppresses all interactions between the scalar field and the fermions of the SM [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33]. However, to preserve scale invariance, the theory requires an unusual approach to renormalisation [34, 35, 36, 37, 38]. A second option is offered by theories with environmentally dependent screening, where observable effects, such as fifth forces, can be naturally suppressed in the neighbourhood of experiments [39, 40, 41, 14]. The cost paid for this behaviour is that the equations of motion of the theory must be non-linear. These non-linearities can involve non-trivial self-interactions of the scalar, non-linear matter couplings or non-canonical kinetic terms, or a combination of all three. Renormalisable self-interactions are not forbidden for scalar field theories. Indeed, the one scalar field that we have observed -- the Higgs field -- is thought to possess non-trivial quartic self-interactions, which, along with the quadratic term of the Higgs potential, are vital for electroweak symmetry breaking. Theories of non-linear light scalar fields with environmentally dependent behaviour are often referred to as screened scalars. The commonly studied models with screening, including chameleon [42, 43], symmetron [44, 45] and Vainshtein screening [46, 47, 48], have all, to varying degrees, faced challenges about their naturalness, and whether the light masses can be protected from corrections due to interactions with heavier fields. 
In this work, we attempt to address these challenges to screened theories by considering a scalar field with a small mass, which couples to the SM conformally, i.e., via a non-minimal coupling to the Ricci scalar. This means that SM particles move on geodesics of a metric that is conformally re-scaled by a function of the additional scalar field. Such couplings naturally arise in UV theories with extra dimensions, e.g., string-theory dilatons [49, 50, 51, 52], which may be screened [54, 55], as well as theories of modified gravity, such as \(f(R)\) theories [53]. In previous work, and by virtue of the scale-symmetry breaking provided by the quadratic term in the Higgs potential, we have shown how models involving conformally coupled scalars can be rewritten as Higgs-portal models [56], being related by the Weyl rescaling of the metric from the so-called Jordan frame to the Einstein frame. This is to say that there is a field basis in the Einstein frame in which the scalar only interacts directly with the Higgs (at dimension four) and has no direct couplings to the fermions of the SM. Fifth-force couplings of the light degree of freedom to the SM fermions can then be seen to arise as a result of mixing with the Higgs, or after diagonalising this mixing [56]. An equivalent result can be obtained directly in the Jordan frame, wherein the fifth-force coupling to SM fermions arises after diagonalising the kinetic mixing of the conformally coupled scalar and the graviton [57]. The Higgs portal offers the lowest-dimension, renormalisable portal by which to couple new fields (also known as hidden sectors) to the SM [58, 59, 60, 61]. Light scalars coupled through the Higgs portal have received much recent attention [62, 63], but the possibilities of screening through non-linearities, which are naturally present, have largely been overlooked, with a few exceptions (see, e.g., Ref. [64]). In the following section, we introduce our model, both in terms of its conformal and Higgs-portal couplings. In Section 3, we see the first signs of environmental dependence through the expectation value of the conformally coupled scalar field, which will be seen to depend on the local energy density. We derive the effective equation of motion for the light scalar mode by expanding to leading order in fluctuations around a density-dependent minimum of the classical potential. In Section 4, we then show how this environmental dependence leads to suppression of the interactions between the light mode and matter, and how this leads to dynamical screening of the fifth forces sourced by massive compact objects. We discuss the implications and limitations of these results further in Section 5, and conclude in Section 6. ## 2 Conformal couplings and the Higgs portal In our previous work, Ref. [56] (see also Refs. [65, 57]), we showed that conformally coupled theories were equivalent, at tree level, to Higgs-portal models. We started with a generic action for a conformally coupled scalar-tensor theory, written in the Einstein frame as \[S\ =\ \int\!{\rm d}^{4}x\,\sqrt{-\,\tilde{g}}\bigg{[}\frac{M_{\rm Pl}^{2}}{2} \,\tilde{\cal R}\,-\,\frac{1}{2}\,\tilde{g}^{\mu\nu}\,\partial_{\mu}\chi\, \partial_{\nu}\chi-\,V(\chi)\bigg{]}\,+\,S_{\rm SM}\big{[}A^{2}(\chi)\tilde{g} _{\mu\nu},\{\psi\}\big{]}\;, \tag{1}\] where the light scalar \(\chi\) has a canonical kinetic term and a potential \(V(\chi)\). 
\(\tilde{\cal R}\) denotes the Ricci scalar for the Einstein frame metric \(\tilde{g}_{\mu\nu}\), and \(M_{\rm Pl}\) is the Planck mass. The term \(S_{\rm SM}\) is the SM action, whose fields are indicated by \(\{\psi\}\). These fields move on geodesics of the Jordan-frame metric \(g_{\mu\nu}=A^{2}(\chi)\tilde{g}_{\mu\nu}\). We work throughout with signature convention \((-,+,+,+)\). We write a toy SM (with one fermion \(\psi\) and a real prototype of the Higgs field \(\phi\)) in terms of the Jordan-frame metric as \[S_{\rm SM}[g_{\mu\nu},\{\psi\}]\ =\ \int\!{\rm d}^{4}x\,\sqrt{-\,g}\bigg{[}-\, \frac{1}{2}\,g^{\mu\nu}\,\partial_{\mu}\phi\,\partial_{\nu}\phi\,+\,\frac{1}{ 2}\,\mu^{2}\,\phi^{2}\,-\,\frac{\lambda}{4!}\,\phi^{4}\] \[\ We note here the explicit appearance of the coupling function \(A(\chi)\). After redefining the Higgs and fermion fields according to their classical scaling dimensions as \[\tilde{\phi}\ \equiv\ A(\chi)\phi\;,\qquad\tilde{\psi}\ \equiv\ A^{3/2}(\chi)\psi\;, \tag{4}\] our toy SM Lagrangian becomes \[\tilde{\cal L}\ =\ -\ \frac{1}{2}\,\tilde{g}^{\mu\nu}\,\partial_{\mu} \tilde{\phi}\,\partial_{\nu}\tilde{\phi}\;+\;\tilde{g}^{\mu\nu}\,\tilde{\phi} \,\partial_{\mu}\tilde{\phi}\,\partial_{\nu}\ln A(\chi)\] \[\ \ \ \ -\ \frac{1}{2}\,\tilde{g}^{\mu\nu}\,\tilde{\phi}^{2}\, \partial_{\mu}\ln A(\chi)\,\partial_{\nu}\ln A(\chi)\] \[\ \ \ \ +\ \frac{1}{2}\,\mu^{2}\,A^{2}(\chi)\,\tilde{\phi}^{2}\ -\ \frac{\lambda}{4!}\,\tilde{\phi}^{4}\ -\ \frac{3}{2}\,A^{4}(\chi)\,\frac{\mu^{4}}{\lambda}\] \[\ \ \ \ -\ \tilde{\psi}\,\tilde{i}\,\overleftarrow{\tilde{\partial}} \,\tilde{\psi}\,-\ y\,\tilde{\psi}\tilde{\phi}\tilde{\psi}\;, \tag{5}\] where \(\tilde{\not{\partial}}\equiv\tilde{e}^{\mu}_{a}\gamma^{a}\partial_{\mu}=A^{- 1}(\chi)e^{\mu}_{a}\gamma^{a}\partial_{\mu}\) and the antisymmetrisation of the kinetic term with \(\overleftarrow{\partial}_{\mu}=\frac{1}{2}(\overrightarrow{\partial_{\mu}}- \overleftarrow{\partial_{\mu}})\) avoids the appearance of the spin connection in the Lagrangian (see, e.g., Ref. [24]). From equation (5), we see that the light scalar does not couple directly to fermions, and instead only couples to the Higgs through the 'Higgs portal', a coupling which depends on the Higgs mass \(\mu\). This is unsurprising as the Higgs mass is the only explicit mass scale in our toy SM. In previous work, we showed how this coupling leads to tree-level fifth forces due to the mixing between the light scalar and the Higgs, and how this fifth force can be suppressed if part, or all, of the Higgs mass scale arises dynamically.[56] A similar result can be obtained directly in the Jordan frame.[57] In order to study this theory further, we write the coupling function as a power series of the form \[A^{2}(\chi)\ =\ 1\ +\ b\,\frac{\chi}{M}\ +\ c\,\frac{\chi^{2}}{M^{2}}\ +\ {\cal O} \biggl{(}\frac{\chi^{3}}{M^{3}}\biggr{)}\;, \tag{6}\] where \(b\) and \(c\) are dimensionless constants, and \(M\) is a mass scale. The latter controls the strength of the interaction between the scalar, the \(\chi\) field and matter, and could be considered as the cut-off of the theory. Equation (6) can be considered as the leading-order approximation to the true form of the coupling function. For example, in dilaton models, the coupling function would be a series of powers of exponential functions. However, as long as \(\chi\ll M\), our calculations will remain valid. We also include a mass term in the potential for the \(\chi\) field, taking \[V(\chi)\ =\ \frac{1}{2}\,\mu_{\chi}^{2}\,\chi^{2}\;. 
\tag{7}\] More complicated potentials are, of course, allowed and may lead to a more varied phenomenology. However, we will see that even with this minimal choice, which might naively be expected to lead to linear equations of motion for the conformally coupled scalar, the interactions that the conformally coupled scalar obtains with the Higgs field will lead to non-linearities that are sufficient to induce screening mechanisms for the fifth-force mediating light degree of freedom. Defining \[\tilde{\chi}\ \equiv\ \biggl{(}1\ +\ \frac{b^{2}\tilde{\phi}^{2}}{4M^{2}} \biggr{)}^{1/2}\chi\;, \tag{8}\] to approach canonical normalisation for the \(\chi\) field, we have (keeping terms up to order \(\tilde{\chi}^{2}/M^{2}\) and \(\tilde{\phi}^{2}/M^{2}\)) \[\tilde{\cal L} = -\,\frac{1}{2}\,\tilde{g}^{\mu\nu}\,\partial_{\mu}\tilde{\chi}\, \partial_{\nu}\tilde{\chi}\,-\,\frac{1}{2}\,\tilde{g}^{\mu\nu}\,\partial_{\mu} \tilde{\phi}\,\partial_{\nu}\tilde{\phi}\] (9) \[\mbox{}+\,\frac{1}{2}\,\tilde{g}^{\mu\nu}\,\frac{\tilde{\phi}}{M }\left(b+2c\,\frac{\tilde{\chi}}{M}-b^{2}\,\frac{\tilde{\chi}}{2M}\right)\, \partial_{\mu}\tilde{\phi}\,\partial_{\nu}\tilde{\chi}\] \[\mbox{}+\,\frac{1}{2}\,\mu^{2}\,\tilde{\phi}^{2}\left(1+b\,\frac{ \tilde{\chi}}{M}+c\,\frac{\tilde{\chi}^{2}}{M^{2}}\right)\,-\,\frac{\lambda} {4!}\,\tilde{\phi}^{4}\] \[\mbox{}-\,\frac{3}{2}\,\frac{\mu^{4}}{\lambda}\left(1+2b\,\frac{ \tilde{\chi}}{M}+2c\,\frac{\tilde{\chi}^{2}}{M^{2}}+b^{2}\,\frac{\tilde{\chi} ^{2}}{M^{2}}\right)\] \[\mbox{}-\,\frac{1}{2}\mu_{\chi}^{2}\tilde{\chi}^{2}\left(1-\frac {b^{2}\tilde{\phi}^{2}}{4M^{2}}\right)\bar{\tilde{\psi}}\,\stackrel{{ \leftrightarrow}}{{\partial}}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! the light mode to the matter source (here, the single Dirac fermion). We will see, however, that the effective mass and matter coupling strengths of this light mode depend on the ambient matter density, as a result of the original mixing between the Higgs and \(\chi\) fields. In this way, the non-linearities of the Higgs potential are communicated to the dynamics of the fifth-force mediator, leading to the screening effects that we describe in Section 4. 
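Before turning to the minima of the potential, the density dependence can be made quantitative with a small numeric sketch. It evaluates the effective mass, matter coupling and critical density that are derived in the following subsections (equations (23), (24) and (26)); the parameter values and the constant converting natural units to g/cm³ are illustrative assumptions, not numbers taken from this paper.

```python
# Illustrative numeric sketch (not from this paper): density dependence of the
# light mode's effective mass and matter coupling, and the critical density.
EV4_TO_G_PER_CM3 = 2.32e-19   # 1 eV^4 expressed as a mass density in g/cm^3 (hbar = c = 1)

def rho_crit_g_cm3(mu_chi_eV, M_GeV, b, c):
    """Critical density rho_crit = 2 mu_chi^2 M^2 / (2c - b^2), in g/cm^3."""
    mu2M2_eV4 = mu_chi_eV**2 * (M_GeV * 1e9)**2
    return 2.0 * mu2M2_eV4 / (2.0 * c - b**2) * EV4_TO_G_PER_CM3

def m_eff_sq_eV2(rho_g_cm3, mu_chi_eV, M_GeV, b, c):
    """Effective mass squared: mu_chi^2 + (4c - b^2) rho / (4 M^2)."""
    rho_eV4 = rho_g_cm3 / EV4_TO_G_PER_CM3
    return mu_chi_eV**2 + (4.0 * c - b**2) * rho_eV4 / (4.0 * (M_GeV * 1e9)**2)

def beta(rho_g_cm3, mu_chi_eV, M_GeV, b, c):
    """Matter coupling 2 b mu_chi^2 M^2 / (2 mu_chi^2 M^2 + (2c - b^2) rho)."""
    rho_eV4 = rho_g_cm3 / EV4_TO_G_PER_CM3
    M_eV = M_GeV * 1e9
    return 2.0 * b * mu_chi_eV**2 * M_eV**2 / (
        2.0 * mu_chi_eV**2 * M_eV**2 + (2.0 * c - b**2) * rho_eV4
    )

# Illustrative parameter point: b = c = 1, mu_chi = 1e-10 eV, M = 1000 GeV.
b, c, mu_chi, M = 1.0, 1.0, 1e-10, 1e3
print("rho_crit [g/cm^3]:", rho_crit_g_cm3(mu_chi, M, b, c))
for rho in (1e-26, 6.0):   # roughly interstellar-medium vs. terrestrial density
    print(rho, "g/cm^3 -> m_eff^2 [eV^2]:", m_eff_sq_eV2(rho, mu_chi, M, b, c),
          " beta:", beta(rho, mu_chi, M, b, c))
```

At this parameter point the coupling is of order one well below the critical density and strongly suppressed above it, illustrating the screening behaviour discussed in Section 4.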
### Minima of the potential The full Einstein-frame potential for the fields \(\tilde{\chi}\), \(\tilde{\phi}\) and \(\tilde{\psi}\) in the Lagrangian of equation (9) is \[\tilde{V}(\tilde{\chi},\tilde{\phi},\tilde{\psi})= -\ \frac{1}{2}\,\mu^{2}\,\tilde{\phi}^{2}\left(1+b\,\frac{\tilde{ \chi}}{M}+c\,\frac{\tilde{\chi}^{2}}{M^{2}}\right)\:+\:\frac{\lambda}{4!}\, \tilde{\phi}^{4}\] \[\qquad+\:\frac{3}{2}\,\frac{\mu^{4}}{\lambda}\left(1+2b\,\frac{ \tilde{\chi}}{M}+2c\,\frac{\tilde{\chi}^{2}}{M^{2}}+b^{2}\,\frac{\tilde{\chi}^ {2}}{M^{2}}\right)\] \[\qquad+\:\frac{1}{2}\mu_{\chi}^{2}\tilde{\chi}^{2}\left(1-\frac{ b^{2}\tilde{\phi}^{2}}{4M^{2}}\right)\:+\:y\,\tilde{\psi}\tilde{\phi}\tilde{ \psi}\;. \tag{13}\] We now assume that we are working in an environment with a background density of fermions. When these fermions are non-relativistic, their energy-momentum tensor can be related directly to the mass term in the Lagrangian, such that we can write \(\rho_{\psi}=y\tilde{\phi}_{m}\langle\tilde{\bar{\psi}}\tilde{\psi}\rangle\), where \(\tilde{\phi}_{m}\) is the value of \(\tilde{\phi}\) at the minimum of the potential. This expression can be interpreted as a mean-field approximation for the non-relativistic limit of the fermion energy-momentum tensor, valid when taking the classical limit in the case of high occupation numbers. After making this assumption for the behaviour of the fermions, we can study the behaviour of the scalar fields in this environment. Varying equation (13) with respect to \(\tilde{\phi}\) and \(\tilde{\chi}\), we find equations for the values \(\tilde{\phi}_{m}\) and \(\tilde{\chi}_{m}\) of the fields at the minima of the potential. These are \[\frac{\mu^{2}\tilde{\phi}_{m}^{4}}{v^{2}}-\tilde{\phi}_{m}^{2}\left[\mu^{2} \left(1+\frac{b\tilde{\chi}_{m}}{M}+\frac{c\tilde{\chi}_{m}^{2}}{M^{2}}\right) +\frac{b\mu_{\chi}^{2}\tilde{\chi}_{m}^{2}}{4M^{2}}\right]+\rho_{\psi}=0\;, \tag{14}\] \[\tilde{\chi}_{m}\left[\frac{\mu^{2}v^{2}}{M^{2}}\left(c-\frac{c\tilde{\phi}_{ m}^{2}}{v^{2}}+\frac{b^{2}}{2}\right)+\mu_{\chi}^{2}\left(1-\frac{b^{2}\tilde{ \phi}_{m}^{2}}{4M^{2}}\right)\right]=\frac{b\mu^{2}v^{2}}{2M}\left(1-\frac{ \tilde{\phi}_{m}^{2}}{v^{2}}\right)\;, \tag{15}\] where we have set \(v^{2}=6\mu^{2}/\lambda\). Keeping terms only to order \(1/M^{2}\), assuming that \(v\ll M\) and taking the mass scale of the light scalar to be much smaller than the mass scale of the Higgs, i.e., \(\mu_{\chi}\ll\mu\), we can solve these equations to find \[\tilde{\phi}_{m}^{2} =v^{2}\left[1+\frac{b\tilde{\chi}_{m}}{M}+\frac{c\tilde{\chi}_{ m}^{2}}{M^{2}}+\frac{b^{2}\mu_{\chi}^{2}\tilde{\chi}_{m}^{2}}{4\mu^{2}M^{2}}\right.\] \[\left.-\:\frac{\rho_{\psi}}{\mu^{2}v^{2}}\left(1-\frac{b\tilde{ \chi}_{m}}{M}-\frac{c\tilde{\chi}_{m}^{2}}{M^{2}}+\frac{b^{2}\tilde{\chi}_{m} ^{2}}{M^{2}}-\frac{b^{2}\mu_{\chi}^{2}\tilde{\chi}_{m}^{2}}{4\mu^{2}M^{2}} \right)\right]\;, \tag{16}\] \[\frac{\tilde{\chi}_{m}}{M}=-\frac{b\rho_{\psi}}{2\mu_{\chi}^{2}M^{2}+(2c-b^{2} )\rho_{\psi}}\;. \tag{17}\] We note that one might expect the terms proportional to \(\rho_{\psi}/(\mu^{2}v^{2})\) inside the square bracket in equation (16) to be negligibly small. In fact, it is important to keep them in order to determine the minimum for \(\tilde{\chi}\) correctly (and for computing the effective potential for the light mode in the next section), as leading-order terms cancel. 
The cancellation of the leading-order terms is a direct consequence of our choice to set the contribution of the Higgs field to the cosmological constant to zero -- notice that there is no contribution to \(\tilde{\chi}_{m}/M\) proportional to \(\mu^{2}v^{2}\), as one would otherwise expect. In equation (17), we see the first signs of environmental dependence in this theory, as the minimum for \(\tilde{\chi}\) varies significantly depending on whether the environmental density is greater or smaller than a critical density \(\rho_{\rm crit}=2\mu_{\chi}^{2}M^{2}/(2c-b^{2})\). The limiting cases are \[\frac{\tilde{\chi}_{m}}{M}=\left\{\begin{array}{ll}-\frac{b\rho_{\psi}}{2M^{ 2}\mu_{\chi}^{2}}\;,&\mbox{ if }\rho_{\psi}\ll\rho_{\rm crit}\;,\\ \frac{b}{b^{2}-2c}\;,&\mbox{ if }\rho_{\psi}\gg\rho_{\rm crit}\;.\end{array}\right. \tag{18}\] We note that a small tuning of our dimensionless constants \(b\) and \(c\) is required to ensure that \(\tilde{\chi}_{m}<M\) and our theory remains well defined in high-density environments. This means that it is not possible for the coupling function \(A^{2}(\chi)\) to be a pure exponential, as if \(b=1\) and \(c=1/2\) in equation (6), then equation (18) implies that \(\tilde{\chi}_{m}\) diverges in high density environments. We will explore the two regimes of behaviour that can be seen in equation (18) further below. It is important to recognise, however, that this phenomenology is only possible if \(\mu_{\chi}\) and therefore \(\rho_{\rm crit}\) are non-zero. For the field to remain truly massless requires a symmetry, e.g., scale or shift symmetry. In the absence of such a symmetry, a mass for the light scalar will be generated by quantum effects, and the calculations presented in this work will apply. ### Equation of motion Many experiments that search for light scalars, e.g., fifth-force experiments, are performed at energies well below that of the electroweak scale. At these low energies, we can expand around the minima of the classical potential and ignore fluctuations of heavy modes, with masses of order the electroweak scale. We will do this by performing a mean-field expansion, under the assumption that the heavy field is slowly varying, and will consider the equation of motion for fluctuations of the light mode to first order. This is to say that we perform both a zeroth-order semi-classical approximation and a zeroth-order gradient expansion. The former amounts to neglecting corrections generated by integrating out the heavy fluctuations.b The latter amounts to neglecting gradients in the mean fields and terms with higher-order derivatives. Footnote b: Integrating out heavy fluctuations around the classical (saddle-point) configurations will induce radiative corrections, and generate effective operators involving the light mode that carry additional electroweak-scale suppression. The latter are expected to be subdominant in the low-energy limit relevant to fifth forces. One could proceed to compute the evolution of the reduced density matrix for the light mode using the Feynman-Vernon influence functional to derive the relevant master equation. Integrating out the fluctuations in the heavy mode would then lead to non-local effective operators. In Ref. 
[66], we performed this calculation for a closely related model, finding that corrections to the field evolution beyond the semi-classical contributions can be associated with the expected loop We perturb the \(\tilde{\phi}\) and \(\tilde{\chi}\) fields around the field values that minimise the potential, given in equations (16) and (17), writing \(\tilde{\chi}=\tilde{\chi}_{m}+\delta\tilde{\chi}\) and \(\tilde{\phi}=\tilde{\phi}_{m}+\delta\tilde{\phi}\). We keep terms in the equations of motion only to first order in perturbations. We find that mass terms in the equations of motion mix fluctuations of the two fields, meaning that the heavy mode of the theory -- the "Higgs" boson -- does not directly correspond to fluctuations of the Higgs field \(\delta\tilde{\phi}\). The mixing between the fields can be expressed in terms of a mixing angle \(\theta\). Assuming that the mixing angle \(\theta\) is small (large mixing angles are excluded by collider searches[62]), we find that, keeping terms only to order \(1/M^{2}\), \[\theta\approx-\frac{v}{2M}\left[b+\frac{(4c-b^{2})\tilde{\chi}_{m}}{2M}+\frac {\rho_{\psi}}{\mu^{2}v^{2}}\left(b+\frac{2c\tilde{\chi}_{m}}{M}-\frac{5b^{2} \tilde{\chi}_{m}}{2M}\right)\right]\;. \tag{19}\] Herein, we have again neglected terms of order \(\mu_{\chi}^{2}/\mu^{2}\), but we keep terms to first order in \(\rho_{\psi}/(\mu^{2}v^{2})\), since leading-order terms cancel in the calculation of the effective mass and coupling constant for the light mode. We can now identify the heavy and light mass eigenstates in our theory, \(h\) and \(s\), respectively (working in this field basis removes non-derivative interactions between the fields in the equations of motion). These are defined as \[h =\delta\tilde{\phi}\cos\theta+\delta\tilde{\chi}\sin\theta\;, \tag{20}\] \[s =\delta\tilde{\chi}\cos\theta-\delta\tilde{\phi}\sin\theta\;. \tag{21}\] We will obtain an effective potential for the light mode \(s\) by inverting equations (20) and (21) to write \(\delta\tilde{\phi}\) and \(\delta\tilde{\chi}\) in terms of \(h\) and \(s\), and substituting these expressions into the equations of motion for the fields. We then neglect derivatives of \(h\), assuming that we are considering sufficiently low-energy experiments that the heavy mode is not perturbed from the minimum of the field potential. The resulting equation of motion for \(s\) is \[\left(1+\frac{b^{2}v^{2}}{8M^{2}}\right)\Box s=m_{\rm eff}^{2}s+\frac{\beta( \rho_{\psi})}{M}\delta\rho_{\psi}\;. \tag{22}\] We note that there is no density dependence in the leading corrections to the kinetic terms. Hereafter, we omit terms that are suppressed by \(v^{2}/M^{2}\), which could otherwise be re-scaled into the effective mass and coupling constants \[m_{\rm eff}^{2}=\mu_{\chi}^{2}+\frac{(4c-b^{2})\rho_{\psi}}{4M^{2}}+{\cal O} \left(\frac{v^{2}}{M^{2}}\right)\;, \tag{23}\] and \[\beta(\rho_{\psi})=\frac{2b\mu_{\chi}^{2}M^{2}}{2\mu_{\chi}^{2}M^{2}+(2c-b^{2 })\rho_{\psi}}+{\cal O}\left(\frac{\rho_{\psi}}{\mu^{2}v^{2}}\right)\;. \tag{24}\] In low-density environments, the mass of the light scalar remains small in the leading semi-classical approximation. 
This is a consequence of the choice to set the Higgs contribution to the cosmological constant to zero in the Jordan frame, resulting in the subtraction of \(3\mu^{4}/(2\lambda)\) from the Higgs potential in equation (2).c If we had not subtracted the Higgs contribution to the Jordan-frame cosmological constant, the light scalar mass would have received corrections of order \(\mu^{2}\).d When the density exceeds the critical density, the mass of the light mode increases as \(\sim\sqrt{\rho_{\psi}}/M\). It is also important to notice that the strength of the coupling of the light mode to matter perturbations (which will control the strength of the scalar-mediated fifth force) varies with the environmental density and is suppressed when the density exceeds the critical density. Footnote c: This article does not offer a solution to the cosmological constant problem, and we work under the assumption that a mechanism that sets the contribution from the electroweak minimum of the Higgs potential must be present. Footnote d: This observation is reminiscent of the ideas behind Higgs-dilaton models, [27, 30] where the dilaton potential is generated by the Jordan frame cosmological constant in order to realise a quintessence-like scenario. ## 4 Screening Considering the equation of motion for the light mode in equation (22), we see that the coupling of the light scalar to matter is suppressed in regions of high density (above the critical density \(\rho_{\rm crit}=2\mu_{\chi}^{2}M^{2}/(2c-b^{2})\)). This comes from the variation of \(\tilde{\chi}_{m}\) with the density of the environment. The light mode mediates a fifth force, on a test particle with unit mass, of strength \[F_{s}=-\frac{\beta(\rho_{\psi})}{M}\nabla s\;. \tag{25}\] As the coupling \(\beta(\rho_{\psi})\) varies with the environment, so will the strength of the scalar-mediated fifth force. This Section explores the phenomenological consequences of this environmental dependence. ### Environmental screening We first consider the situation where the fifth force mediated by the light mode is suppressed because the environment in which we make our observations has a density that exceeds the critical density. The characteristic scale over which \(s\) can vary is given by the Compton wavelength \(\sim 1/m_{\rm eff}\). For the fifth force to be suppressed, or screened, the density must exceed the critical density \[\frac{\rho_{\psi}}{{\rm gcm}^{-3}}\gtrsim\frac{0.46}{2c-b^{2}}\left(\frac{ \mu_{\chi}}{{\rm eV}}\right)^{2}\left(\frac{M}{{\rm GeV}}\right)^{2}\;, \tag{26}\] over a region of spatial extent at least as large as the Compton wavelength. We should take care when applying this requirement, because in regions of high density (above the critical density), the mass of the light field will increase and the Compton wavelength will shorten. Above the critical density, the coupling function becomes \[\beta(\rho_{\psi})\approx\frac{2b\mu_{\chi}^{2}M^{2}}{(2c-b^{2})\rho_{\psi}}\;, \tag{27}\] and the coupling is dynamically suppressed compared to our naive expectation of \(\beta\sim 1\). Two useful examples of this condition for environmental screening are: * The density of the interstellar medium is \(\sim 10^{-26}\ {\rm g/cm}^{3}\). This exceeds the critical density if \[\frac{1}{(2c-b^{2})^{1/2}}\left(\frac{\mu_{\chi}}{\rm eV}\right)\left(\frac{M} {\rm GeV}\right)\lesssim 2.1\times 10^{-13}\;.\] (28) * The density of the Earth is \(\sim 6\ {\rm g/cm}^{3}\). 
This exceeds the critical density if \[\frac{1}{(2c-b^{2})^{1/2}}\left(\frac{\mu_{\chi}}{\rm eV}\right)\left(\frac{M} {\rm GeV}\right)<5.1\;.\] (29) If the first of these conditions is satisfied, we expect the fifth force to be suppressed within the solar system. If the second, weaker, bound is satisfied, the fifth force will be suppressed within the Earth, but this may not be a sufficient condition to suppress the effects of the scalar in all terrestrial experiments. ### Thin-shell screening of fifth forces If the environment in which an experiment is performed is not dense enough to exceed the critical density, it is still possible that the force sourced by large objects may be suppressed through the so called 'thin-shell' effect.[42, 43] To see this, we return to working in terms of the field \(\tilde{\chi}\), whose background value in a region of density \(\rho_{\psi}\) is given by equation (17). Fluctuations around this value are given by \(\delta\chi=s\cos\theta\), and the perturbations \(s\) have a density-dependent mass given by equation (23). The thin-shell effect can occur when \(\rho_{\rm out}<\rho_{\rm crit}\) but \(\rho_{\rm in}>\rho_{\rm crit}\). We consider the profile of the field around a spherical compact object. We centre our spherical coordinate system on the centre of the object and assume that the object has a constant density \(\rho_{\rm in}\) when \(r\leq R\), where \(R\) is the radius of the object. The object is embedded in a background of constant density \(\rho_{\rm out}\). Assuming that \(\rho_{\rm in}\gg\rho_{\rm crit}\) and \(\rho_{\rm out}\ll\rho_{\rm crit}\), we find \[\frac{\tilde{\chi}}{M}=\left\{\begin{array}{ll}-\frac{b}{2c-b^{2}}\left(1- \frac{2(1+m_{\rm out}R)}{m_{\rm in}r}e^{-m_{\rm in}R}\sinh m_{\rm in}r\right) \;,&\mbox{ if }r\leq R\;,\\ -\frac{b\rho_{\rm out}}{2M^{2}\mu_{\chi}^{2}}-\frac{bR}{(2c-b^{2})r}e^{m_{\rm out }(R-r)}\;,&\mbox{ if }r>R\;,\end{array}\right. \tag{30}\] where \(m_{\rm in}\) and \(m_{\rm out}\) indicate the effective mass of the light scalar mode evaluated at the densities \(\rho_{\rm in}\) and \(\rho_{\rm out}\), respectively. We have additionally assumed that \(m_{\rm in}R\gg 1\) and will confirm when this condition holds shortly. Substituting the expression for the field profile, equation (30), around the compact object into the expression for the fifth force on a test particle, equation (25), we find that the fifth force on a test particle of unit mass at some \(r>R\) is \[F_{5}=-\frac{b^{2}R}{(b^{2}-2c)r^{2}}(1+m_{\rm out}r)e^{m_{\rm out}(R-r)}\;. \tag{31}\] This can be compared to the strength of the fifth force of a canonical light scalar with Yukawa couplings to matter controlled by the energy scale \(M\) and with mass \(m_{\rm out}\approx\mu_{\chi}\), given by \[F_{\rm Yuk}=-\frac{M_{c}}{M^{2}r^{2}}e^{-m_{\rm out}r}\;, \tag{32}\] where \(M_{c}=4\pi\rho_{\rm in}R^{3}/3\). We find \[\frac{F_{5}}{F_{\rm Yuk}}=\frac{b^{2}RM^{2}}{(b^{2}-2c)M_{c}}(1+m_{\rm out}r)e ^{m_{\rm out}R}\;. \tag{33}\] If \(m_{\rm out}R\ll 1\) then \[\frac{F_{5}}{F_{\rm Yuk}} \approx\frac{b^{2}RM^{2}}{(b^{2}-2c)M_{c}}\] \[\approx\frac{3b^{2}(4c-b^{2})}{16\pi(b^{2}-2c)(m_{\rm in}R)^{2}}\;, \tag{34}\] and the fifth force is suppressed if \[\frac{M_{c}}{R}\gg\frac{|b^{2}-2c|M^{2}}{b^{2}}\;. 
\tag{35}\] When the condition in equation (35) is satisfied, the source is sufficiently compact that the fifth force it sources is suppressed and constraints on the model parameters from fifth-force searches are weakened, similarly to chameleon [42, 43] and symmetron models [44, 45] (for earlier related work, see Refs. [52, 68, 69, 70, 71]). Consistency of our solution requires the closely related condition \[m_{\rm in}^{2}R^{2}=\frac{3(4c-b^{2})}{16\pi M^{2}}\frac{M_{c}}{R}\gg 1\;. \tag{36}\] As an example, we consider the Earth embedded in the Interstellar Medium (ISM). We assume there is no environmental screening, so that the density of the ISM does not exceeds the critical density, but the density of the Earth does exceed the critical density. This allows for the possibility of thin-shell screening. Combining equations (28) and (29), such a situation can occur when \[2.1\times 10^{-13}\lesssim\frac{1}{(2c-b)^{1/2}}\left(\frac{\mu_{\chi}}{\rm eV }\right)\left(\frac{M}{\rm GeV}\right)\lesssim 5.1\;. \tag{37}\] Then the fifth force sourced by the Earth is screened through a thin-shell effect if equation (35) is satisfied for the Earth, which requires \[\frac{M}{\rm GeV}\ll\frac{(3\times 10^{14})b}{|b^{2}-2c|^{1/2}}\;. \tag{38}\] ## 5 Discussion We have presented a theory in which a light scalar field is added to the SM, but the long-range fifth forces mediated by this scalar can be suppressed through environmental screening. The screening occurs because of non-linearities in the scalar potential. In this way, the screening is similar to a number of commonly studied models. The existence of a critical density above which screening can occur means the phenomenology of the theory is particularly similar to that of symmetron models of screening. However, the key difference between our model and those in the existing literature is the source of the non-linearities. Prior to this work, it was assumed that screening could only occur if non-linear terms were added to the Lagrangian of the light scalar. Here, we have shown that if the light scalar has a potential which contains only a mass term, and couples to matter through the Higgs portal, then the self-interactions in the Higgs potential are sufficient to induce screening at low energies, where the Higgs field (or, more precisely, the heavy mode of the coupled theory) can be assumed to be non-dynamical. In this work, we have only kept terms in the conformal coupling up to order \(1/M^{2}\). This led to an effective theory, where the mass of the light scalar and its coupling to matter depend on the environmental density. The kinetic term for the light mode in our theory is also re-scaled, as can be seen in equation (22), but this is independent of the environment. It is possible that if we were to extend the calculation to include terms of higher order in \(1/M\), we would find environmental dependence occurring in the kinetic sector as well, opening up the possibility of additional Vainshtein-like screening occurring if it becomes more challenging for the light mode to propagate in regions of high density. We have only studied a toy model Lagrangian, with a prototype real-valued Higgs field and a single fermion. This fails to capture the dynamics of the QCD sector, potentially a significant failing, since the QCD binding energy provides approximately 99% of the mass of nucleons. 
In our previous work,[56] following earlier references,[72, 73, 74] we showed that an interaction between a conformallly coupled scalar and baryons does arise, mediated through the Higgs field and the conformal anomaly. This allows baryonic matter to act as a source of energy density in the equations of motion for the light mode of our model in the same way as the fermionic density we have used in the above calculation. Even so, the differing origins of the interactions may lead to an effective violation of the weak equivalence principle between the SM leptons and hadrons.[56] It is important to recognise that the analysis presented here considers only tree-level interactions. The potential generated for the light degree of freedom, as described here, will be subject to radiative corrections. A detailed study of these radiative corrections is beyond the scope of this article and may be presented elsewhere. In addition, we leave a detailed analysis of experimental constraints for future work. As well as allowing a theory to avoid existing constraints, screening also introduces novel observational signatures, and these would be smoking-gun signatures of this type of new physics. For example, long-lived environmentally dependent scalars could have different displaced vertices in the ATLAS and CMS detectors because of the differing design and construction of the detectors.[75] ## 6 Conclusions We have studied light conformally coupled scalar fields, a widely considered type of new physics beyond the SM. We have assumed that the bare potential for these light scalars only contains a mass term. After a series of field redefinitions, we have shown that such a theory is equivalent to a Higgs-portal model, with a particular combination of Higgs-portal couplings. This combination of couplings may not seem intuitive when viewed as a Higgs-portal model, but we have seen how this arises naturally from the conformal coupling. This, of course, does not preclude large radiative corrections that would require fine-tuning. In the case of a toy SM, we proceeded by expanding in fluctuations of the scalar fields around a classical minimum of the Einstein-frame potential, and derived the effective equation of motion for the light mode -- the fifth-force mediator of the scalar-tensor theory. Choosing the would-be electroweak minimum to have vanishing potential, so as to eliminate any contribution to the Jordan-frame cosmological constant, the mass of the light mode does not receive large electroweak-scale corrections at tree-level in the Einstein frame. In all models of screening to date, the screening of fifth forces has occurred because of non-linearities in the equation of motion of the additional scalar field. In contrast, in the model we study here, there are no terms in the Lagrangian, which involve powers of the light field higher than quadratic, and so its equations of motion are linear. The only field with non-trivial self interactions is the Higgs. In the leading semi-classical approximation, we find that these non-linearities are communicated to the fifth-force mediator, resulting in an environmentally dependent effective theory for the light mode. This environmental dependence appears as density-dependent effective masses and couplings, leading to the screening of the light mode and the fifth force that it mediates. We find that in different regions of parameter space, the effects of the scalar near the surface of the Earth could be screened by the environment, or by a thin-shell effect. 
This occurs, despite an absence of non-linear terms in the original Jordan-frame Lagrangian for the light field, as a result of the conformal coupling to the Ricci scalar and the non-linearities in the Higgs potential. ## Acknowledgments This work was supported by a Research Leadership Award from the Leverhulme Trust (CB), a United Kingdom Research and Innovation (UKRI) Future Leaders Fellowship [Grant No. MR/V021974/2] (PM), and the Science and Technology Facilities Council (STFC) [Grant Nos. ST/T000732/1 (CB) and ST/X00077X/1 (PM)]. For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. ## Data Access Statement No data were created or analysed in this study. ## Author Contribution Both authors have contributed equally to the original ideas and calculations in this work, and to the writing of the text. ## Competing Interests Neither author has any competing interest that may affect or be affected by the research reported in this paper.
2303.12149
SPARTAN: Self-supervised Spatiotemporal Transformers Approach to Group Activity Recognition
In this paper, we propose a new, simple, and effective Self-supervised Spatio-temporal Transformers (SPARTAN) approach to Group Activity Recognition (GAR) using unlabeled video data. Given a video, we create local and global Spatio-temporal views with varying spatial patch sizes and frame rates. The proposed self-supervised objective aims to match the features of these contrasting views representing the same video to be consistent with the variations in spatiotemporal domains. To the best of our knowledge, the proposed mechanism is one of the first works to alleviate the weakly supervised setting of GAR using the encoders in video transformers. Furthermore, using the advantage of transformer models, our proposed approach supports long-term relationship modeling along spatio-temporal dimensions. The proposed SPARTAN approach performs well on two group activity recognition benchmarks, including NBA and Volleyball datasets, by surpassing the state-of-the-art results by a significant margin in terms of MCA and MPCA metrics.
Naga VS Raviteja Chappa, Pha Nguyen, Alexander H Nelson, Han-Seok Seo, Xin Li, Page Daniel Dobbs, Khoa Luu
2023-03-06T16:58:27Z
http://arxiv.org/abs/2303.12149v4
# SPARTAN: Self-supervised Spatiotemporal Transformers Approach to Group Activity Recognition ###### Abstract In this paper, we propose a new, simple, and effective Self-supervised Spatio-temporal Transformers (SPARTAN) approach to Group Activity Recognition (GAR) using unlabeled video data. Given a video, we create local and global Spatio-temporal views with varying spatial patch sizes and frame rates. The proposed self-supervised objective aims to match the features of these contrasting views representing the same video to be consistent with the variations in spatiotemporal domains. To the best of our knowledge, the proposed mechanism is one of the first works to alleviate the weakly supervised setting of GAR using the encoders in video transformers. Furthermore, using the advantage of transformer models, our proposed approach supports long-term relationship modeling along spatio-temporal dimensions. The proposed SPARTAN approach performs well on two group activity recognition benchmarks, including NBA and Volleyball datasets, by surpassing the state-of-the-art results by a significant margin in terms of MCA and MPCA metrics1. Footnote 1: The implementation of SPARTAN is available at [https://github.com/uark-cviu/SPARTAN](https://github.com/uark-cviu/SPARTAN) ## 1 Introduction Group Activity Recognition (GAR) aims to classify the collective actions of individuals in a video clip. This field has gained significant attention due to its diverse applications such as sports video analysis, video monitoring, and interpretation of social situations. Far apart from conventional action recognition methods that focus on understanding individual actions [11, 59, 62, 51], GAR requires a thorough and exact knowledge of interactions between several actors, which poses fundamental challenges such as actor localisation and modelling their spatiotemporal relationships. Most existing methods for GAR [16, 20, 26, 28, 37, 43, 45, 69, 64, 67, 69] require ground-truth bounding boxes of individual actors for training and testing, as well as their action class labels for training. The bounding box labels, in particular, are used to extract features of individual actors, such as _RoIPool_[52] and _RoIAlign_[25], and precisely discover their spatio-temporal relations; such actor features are aggregated while considering the relationships between actors to form a group-level video representation, which is then fed to a group activity classifier. Despite the fact that these approaches performed admirably on the difficult task, their reliance on bounding boxes at inference and substantial data labelling annotations makes them unworkable and severely limits their application. To overcome this problem, one approach is to simultaneously train person detection and group activity recognition using bounding box labels [7, 72]. This method estimates the bounding boxes of actors in inference. However, this method calls for individ Figure 1: Visualization of attention captured by the model. (i) The attention in this example focuses on how the relationship is established between the actors. Original sequence from NBA dataset [68] (top), Attention captured by DFNSGRAR [30] (middle), and SPARTAN model (bottom). Red-colored actors are the irrelevant information to determine the group activity, whereas green-colored actors, including their positions, are the most relevant. 
(ii) illustrates that DFNSGR predicts the category wrong due to the effects shown in (i) whereas SPARTAN is more confident in the prediction, which is further justified by the t-SNE plot as shown in Fig. 5. ual actor ground-truth bounding boxes for training videos. Yan [68] presented the Weakly Supervised GAR (WS-GAR) learning approach, which does not need actor-level labels in both training and inference, to further lower the annotation cost. They generate actor box suggestions using a detector that has been pre-trained on an external dataset in order to solve the absence of bounding box labels. They then learn to eliminate irrelevant possibilities. Recently, Kim [30] introduced a detector-free method for WS-GAR task which captures the actor information using partial contexts of the token embeddings. However, the previous methods [30, 68] have various drawbacks as follows. First, a detector [68] often leads to missing detection of people in case of occlusion, which minimizes overall accuracy. Second, partial contexts [30] can only learn if and only if there is movement in consecutive frames. This can be inferred from the illustration in Fig. 1. Third, the temporal information among the tokens must be consistent, and [30] does not consider different tokens. In this paper, we introduce a new simple but effective Self-Supervised **S**patio-temporal **T**ransformers (SPARTAN) approach to the task of Group Action Recognition that is independent of ground-truth bounding boxes, labels during pre-training, and object detector. Our mechanism only exploits the motion as a supervisory signal from the RGB data modality. As seen in Fig. 1 (i), our model captures not only the key actors but also their positions, which shows that our method is more effective in group activity classification than DFWSGAR [30]. Our approach is designed to benefit from varying spatial and temporal details within the same deep network. We use a video transformer [8] based approach to handle varying temporal resolutions within the same architecture. Furthermore, the self-attention mechanism in video transformers can capture local and global long-range dependencies in both space and time, offering much larger receptive fields compared to standard convolutional kernels [42]. The contributions of this work can be summarized as follows. * Instead of considering only motion features across consecutive frames [30], we introduce the first training approach to GAR by exploiting spatial-temporal correspondences. The proposed method varies the space-time features of the inputs to learn long-range dependencies in spatial and temporal domains. * A new self-supervised learning strategy is performed by jointly learning the inter-frame, i.e., frame-level _temporal_, and intra-frame, i.e., patch-level _spatial_, correspondences further forming into _Inter Teacher-Inter Student loss_ and _Inter Teacher-Intra Student loss_. In particular, the spatiotemporal features global, from the entire sequence, and local from the sampled sequence are matched by the learning objectives of the frame level and the patch level in the latent space. * With extensive experiments on NBA [68], and Volleyball [28] datasets, the proposed method shows the State-of-the-Art (SOTA) performance results using only RGB inputs. ## 2 Related Work ### Group Activity Recognition (GAR) Due to the wide range of applications, GAR has recently gained more attention. 
The initial approaches in the field utilized probabilistic graphical methods [1, 2, 3, 32, 63, 63] and AND-OR grammar methods [4, 54] to process the extracted features. As deep learning evolved over the years, methods involving Convolutional Neural Networks (CNN) [7, 28], Recurrent Neural Networks (RNN) [7, 13, 27, 28, 38, 46, 55, 60] achieved outstanding performance thanks to their learning power of high-level information and temporal context. Recent methods for identifying group actions [16, 20, 26, 37, 45, 68, 70] typically utilize attention-based models and require explicit character representations to model spatial-temporal relations in group activities. Graph convolution networks, as described in [64, 70], are used to learn spatial and temporal information of actors by constructing relational graphs, while Rui [68] suggest building spatial and temporal relation graphs to infer actor links. Kirill [20] use a transformer encoder-based technique with different backbone networks to extract features for learning actor interactions from multimodal inputs. Li [37] use a clustered attention approach to capture contextual spatial-temporal information. Mingfei [23] proposed MAC-Loss which is a combination of spatial and temporal transformers in two complimentary orders to enhance the learning effectiveness of actor interactions and preserve actor consistency at the frame and video levels. **Weakly supervised group activity recognition (GAR).** Several techniques have been developed to tackle weakly supervised group activity recognition (GAR) with less supervision, such as using bounding boxes to train built-in detectors or activity maps. One approach is WSGAR, which does not rely on bounding box annotations during training or inference and incorporates an off-the-shelf item detector into the model. Another technique, proposed by Zhang et al. [73], uses activity-specific characteristics to improve WSGAR, but is not specifically designed for GAR. Kim et al. [30] proposed a detector-free method that uses transformer encoders to extract motion features. Our proposed method is a self-supervised training approach dedicated to WSGAR, which does not require actor-level annotations, object detectors, or labels. **Transformers in Vision**. Vaswani [58] introduced the transformer architecture for sequence-to-sequence machine translation. This architecture has since been widely adopted to many various natural processing tasks. Dosovitskiy [15] introduced a transformer architecture that is not based on convolution for image recognition tasks. For different downstream computer vision tasks, these works [36, 40, 61, 71] used transformer architecture as a general backbone to make exceptional performance progress. In the video domain, many works [5, 9, 17, 22, 34, 48, 50, 17, 44] exploited spatial and temporal self-attention to learn video representation efficiently. Patrick [44] introduce a self-attention block that focuses on the trajectory, which tracks the patches of space and time in a video transformer. ## 3 Spartan The proposed method aims to recognize a group activity in a given video without using person-bounding boxes or a detector. The general architecture of our self-supervised training within the teacher-student framework for group activity recognition is illustrated in Fig. 2. Unlike the other contrastive learning methods, we process two clips from the same video by changing their spatial-temporal characteristics, which do not rely on the memory banks. 
The proposed loss formulation matches the features of the two dissimilar clips to impose consistency in motion and spatial changes in the same video. The proposed SPARTAN framework will be discussed further in the following sections. ### Self-Supervised Training Given the high temporal dimensionality of videos, motion and spatial characteristics of the group activity, such as 3p.-succ. (from the NBA dataset [68]) or l-spike (from the Volleyball dataset [28]), will be learned over the course of the video. Thus, several video clips with different motion characteristics can be sampled from a single video. A key novelty of the proposed approach involves predicting these different video clips with varying temporal characteristics from each other in the feature space. It leads to learning contextual information that defines the underlying distribution of videos and makes the network invariant to motion, scale, and viewpoint variations. Thus, self-supervision for video representation learning is formulated as a motion prediction problem that has three key components: **a)** We generate multiple temporal views consisting of different numbers of clips with varying motion characteristics from the same video as in Sec. 3.1.1, **b)** In addition to motion, we also vary the spatial characteristics of these views by generating local, i.e., smaller spatial field, and global, i.e., larger spatial field, versions of the sampled clips as in Sec. 3.1.2, and **c)** We introduce a loss function in Sec. 3.2 that matches the varying views across spatial and temporal dimensions in the latent space. Figure 2: **The proposed SPARTAN Framework** samples a given input video into global and local views. The sampling strategy for video clips results in different frame rates and spatial characteristics between global views and local views, which are subject to spatial augmentations and have limited fields of view. The teacher model processes global views (\(\mathbf{g}_{t}\)) to generate a target, while the student model processes local views (\(\mathbf{l}_{t}\) & \(\mathbf{l}_{s}\)), where \(K_{l}\leq K_{g}\). The network weights are updated by matching the online student local views to the target teacher global views, which involves _cross-view correspondences_ and _motion correspondences_. Our approach utilizes a standard ViT backbone with separate space-time attention [8] and an MLP to predict target features from online features. #### 3.1.1 Motion Prediction as Self-supervision Learning The frame rate is a crucial aspect of a video, as it can significantly alter the motion context of the content. For instance, the frame rate can affect the perception of actions, such as walking slowly versus walking quickly, and can capture subtle nuances, such as the slight body movements in walking. Traditionally, video clips are sampled at a fixed frame rate [47, 65]. However, when comparing views with different frame rates, i.e., varying numbers of clips, predicting one view from another in feature space requires explicitly modeling object motion across clips. Furthermore, predicting subtle movements captured at high frame rates compels the model to learn contextual information about motion from a low frame rate input. **Temporal Views:** We refer to a collection of clips sampled at a specific video frame rate as a temporal view. We generate different views by sampling at different frame rates, producing temporal views with varying resolutions. The number of temporal tokens (\(T\)) input to the ViT varies across views.
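To make the sampling of temporal views concrete, the following is a minimal sketch, assuming a simple tensor layout of \([T, C, H, W]\) and evenly spaced (segment-based) frame selection; the helper name and sizes are illustrative and are not taken from the authors' released code.

```python
import torch

def sample_temporal_view(video: torch.Tensor, num_frames: int) -> torch.Tensor:
    """Sample `num_frames` frames from `video` of shape [T, C, H, W].

    Fewer frames over the same time span correspond to a lower effective
    frame rate, so views with different `num_frames` carry different
    motion characteristics.
    """
    total = video.shape[0]
    idx = torch.linspace(0, total - 1, num_frames).long()  # segment-based indices
    return video[idx]

# Toy example: one "video" of 72 frames of size 3 x 224 x 224.
video = torch.randn(72, 3, 224, 224)
g_view = sample_temporal_view(video, num_frames=18)  # K_g frames (global view)
l_view = sample_temporal_view(video, num_frames=8)   # K_l <= K_g frames (local view)
print(g_view.shape, l_view.shape)
```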
Our proposed method enforces the correspondences between such views, which allows for capturing different motion characteristics of the same action. We randomly sampled these views to create motion differences among them. Our ViT models process these views, and we predict one view from the other in the latent space. In addition to varying temporal resolution, we also vary the resolution of clips across the spatial dimension within these views. It means that the spatial size of a clip can be lower than the maximum spatial size (224), which can also decrease the number of spatial tokens. Similar sampling strategies have been used [18, 29] but under multi-network settings, while our approach handles such variability in temporal resolutions with a single ViT model by using vanilla positional encoding [58]. #### 3.1.2 Cross-View Correspondences Our training strategy aims to learn the relationships between a given video's temporal and spatial dimensions. To this end, we propose novel cross-view correspondences by altering the field of view during sampling. We generated global and local temporal views from a given video to achieve this. **Global Temporal Views (\(\mathbf{g_{t}}\)):** We randomly sample \(K_{g}\) (is equal to \(T\)) frames from a video clip with spatial size fixed to \(W_{global}\) and \(H_{global}\). These views are fed into the teacher network which yields an output denoted by \(\mathbf{\tilde{f}_{\mathbf{g_{t}}}}\). **Local Spatiotemporal Views (\(\mathbf{l_{t}}\) and \(\mathbf{l_{s}}\)):** Local views cover a limited portion of the video along both spatial and temporal dimensions. We generate local temporal views by randomly sampling several frames \(K_{l}\) (\(\leq K_{g}\)) with a spatial size fixed to \(W_{local}\) and \(H_{local}\). These views are fed into the student network which yields two outputs denoted by \(\mathbf{\tilde{f}_{\mathbf{l_{t}}}}\) and \(\mathbf{\tilde{f}_{\mathbf{l_{s}}}}\) respectively. **Augmentations:** We apply different data augmentation techniques to the spatial dimension, that is, to the clips sampled for each view. Specifically, we apply color jittering and gray scaling with probability 0.8 and 0.2, respectively, to all temporal views. We apply Gaussian blur and solarization with probability 0.1 and 0.2, respectively, to global temporal views. Our approach is based on the intuition that learning to predict a global temporal view of a video from a local temporal view in the latent space can help the model capture high-level contextual information. Specifically, our method encourages the model to model both spatial and temporal context, where the spatial context refers to the possibilities surrounding a given spatial crop and the temporal context refers to possible previous or future clips from a given temporal crop. It is important to note that spatial correspondences also involve a temporal component, as our approach attempts to predict a global view at timestamp \(t=j\) from a local view at timestamp \(t=i\). To enforce these cross-view correspondences, we use a similarity objective that predicts different views from each other. ### The Proposed Objective Function Our model is trained with an objective function that predicts different views from each other. These views represent different spatial-temporal variations that belong to the same video. 
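As an illustration of how global and local spatio-temporal views could be produced, the sketch below applies the augmentations described above (colour jittering and grayscaling for all views; Gaussian blur and solarization for global views only) to a clip tensor with torchvision. The jitter strengths, crop parameters, and function names are assumptions for the sketch rather than the paper's implementation.

```python
import torch
from torchvision import transforms as T

def make_view(clip: torch.Tensor, size: int, global_view: bool) -> torch.Tensor:
    """Produce one spatio-temporal view from a clip of shape [K, C, H, W].

    The same random crop and photometric change are applied to every frame,
    so views differ from each other in field of view, not within themselves.
    """
    aug = [
        T.RandomResizedCrop(size),                                   # spatial field of view
        T.RandomApply([T.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),   # all views
        T.RandomGrayscale(p=0.2),                                    # all views
    ]
    if global_view:
        aug += [
            T.RandomApply([T.GaussianBlur(kernel_size=23)], p=0.1),  # global views only
            T.RandomSolarize(threshold=0.5, p=0.2),                  # global views only
        ]
    return T.Compose(aug)(clip)

clip = torch.rand(18, 3, 256, 320)                       # toy clip with K = 18 frames
g_t = make_view(clip, size=224, global_view=True)        # global view, 224 x 224
l_s = make_view(clip[::2], size=96, global_view=False)   # local view, 96 x 96
```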
Given a video \(\mathbf{X}=\{\mathbf{x}_{t}\}_{t=1}^{T}\), where \(T\) represents the number of frames, let \(\mathbf{g}_{t}\), \(\mathbf{l}_{t}\) and \(\mathbf{l}_{s}\) represent global temporal views, local temporal and spatial views such that \(\mathbf{g}_{t}=\{\mathbf{x}_{t}\}_{t=1}^{K_{g}}\) and \(\mathbf{l}_{t}=\mathbf{l}_{s}=\{\mathbf{x}_{t}\}_{t=1}^{K_{t}}\), where \(\mathbf{g}_{t}\), \(\mathbf{l}_{t}\) and \(\mathbf{l}_{s}\) are subsets of video \(\mathbf{X}\) and \(K_{l}\leq K_{g}\) where \(K_{g}\) and \(K_{l}\) are the number of frames for teacher and student (global and local) inputs. We randomly sample \(K_{g}\) global and \(K_{l}\) local temporal views as in Sec. 3.1.2. These temporal views are passed through the student and teacher models to get the corresponding class tokens or feature \(\mathbf{f}_{g}\) and \(\mathbf{f}_{l}\). These class tokens are normalized as follows. \[\mathbf{\tilde{f}}^{(i)}=\frac{\text{exp}(\mathbf{f}^{(i)})/\tau}{\sum_{i=1}^{n}\text{ exp}(\mathbf{f}^{(i)})/\tau}, \tag{1}\] where \(\tau\) is a temperature parameter used to control the sharpness of the exponential function [10] and \(\mathbf{f}^{(i)}\) is each element in \(\mathbf{\tilde{f}^{(i)}}\in\mathbb{R}^{n}\). **Inter Teacher-Inter Student Loss:** Our \(\mathbf{g}_{t}\) have the same spatial size but differ in temporal content because the number of clips/frames is randomly sampled for each view. One of the \(\mathbf{g}_{t}\) always passes through the teacher model that serves as the target label. We map the student's \(\mathbf{l}_{t}\) with the teacher's \(\mathbf{g}_{t}\) to create a global-to-local temporal loss as in Eqn. (2). \[\mathcal{L}_{\mathbf{g}_{t}\!-\!l_{t}}=-\mathbf{\tilde{f}}_{\mathbf{g}_{t}}*log(\mathbf{ \tilde{f}}_{t_{t}}), \tag{2}\] where \(\mathbf{\tilde{f}_{g_{t}}}\) and \(\mathbf{\tilde{f}_{t_{t}}}\) are the tokens of the class for \(\mathbf{g}_{t}\) and \(\mathbf{l}_{t}\) produced by the teacher and student, respectively. **Inter Teacher-Intra Student Loss:** Our \(\mathbf{l}_{t}\) have a limited field of vision along the spatial and temporal dimensions compared to the \(\mathbf{g}_{t}\). However, the number of local views is four times higher than that of global views. All \(\mathbf{l}_{s}\) are passed through the student model and mapped to \(\mathbf{g}_{t}\) from the teacher model to create the loss function as in Eq. (3). \[\mathcal{L}_{g_{t}-l_{s}}=\sum_{n=1}^{q}-\mathbf{\tilde{f}_{g_{t}}}*log(\mathbf{\tilde {f}_{l_{s}}^{(n)}}), \tag{3}\] where \(\mathbf{\tilde{f}_{l_{s}}}\) are the tokens of the class for \(\mathbf{l}_{s}\) produced by the student and \(q\) represents the number of local temporal views set to sixteen in all our experiments. The overall loss to train our model is simply a linear combination of both losses, as in Eqn. (2) and Eqn. (3), given as in Eqn. (4). \[\mathcal{L}=\mathcal{L}_{g_{t}-l_{t}}+\mathcal{L}_{g_{t}-l_{s}} \tag{4}\] ### Inference Fig. 3 illustrates our inference framework. During this stage, fine-tuning of the trained self-supervised model is performed. We use the pre-trained SPARTAN model and fine-tune the model with the available labels, followed by a linear classifier. We use this on downstream tasks to improve performance. ## 4 Experiments ### Datasets **Volleyball Dataset**[28] comprises 3,493 training and 1,337 testing clips, totaling 4,830 labeled clips, from 55 videos. The dataset contains annotations for eight group activity categories and nine individual action labels with corresponding bounding boxes. 
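Before the experimental details, the objective in Eqns. (1)-(4) above can be sketched in a few lines. The sketch assumes the class tokens have already been produced by the teacher (global view) and the student (local views); the temperature values and the stop-gradient on the teacher output are common choices for this style of self-distillation, not figures quoted from the paper.

```python
import torch
import torch.nn.functional as F

def sharpen(tokens, tau):
    """Eq. (1): temperature-scaled softmax over the feature dimension."""
    return F.softmax(tokens / tau, dim=-1)

def cross_view_loss(f_gt, f_lt, f_ls_list, tau_teacher=0.04, tau_student=0.1):
    """Overall loss of Eq. (4) = Eq. (2) + Eq. (3).

    f_gt      : teacher class token for the global temporal view g_t
    f_lt      : student class token for the local temporal view l_t
    f_ls_list : student class tokens for the q local spatio-temporal views l_s
    """
    target = sharpen(f_gt, tau_teacher).detach()  # teacher output acts as the target
    # Eq. (2): Inter Teacher-Inter Student term.
    loss = -(target * torch.log(sharpen(f_lt, tau_student))).sum(dim=-1).mean()
    # Eq. (3): Inter Teacher-Intra Student term, summed over the local views.
    for f_ls in f_ls_list:
        loss = loss - (target * torch.log(sharpen(f_ls, tau_student))).sum(dim=-1).mean()
    return loss

# Toy usage: random "class tokens" of dimension 256 for a batch of 4 clips.
f_gt = torch.randn(4, 256)
f_lt = torch.randn(4, 256, requires_grad=True)
f_ls = [torch.randn(4, 256, requires_grad=True) for _ in range(16)]  # q = 16
print(cross_view_loss(f_gt, f_lt, f_ls))
```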
However, in our WSGAR experiments, we only use the group activity labels and ignore the individual action annotations. For evaluation, we use Multi-class Classification Accuracy (MCA) and Merged MCA metrics, where the latter merges the right set and right pass classes into right pass-set and the left set and left pass classes into left pass-set, as in previous works such as SAM [68] and DFWSGAR [30]. This is done to ensure a fair comparison with existing methods. **NBA Dataset**[68] in our experiment comprises a total of 9,172 labeled clips from 181 NBA videos, with 7,624 clips used for training and 1,548 for testing. Each clip is annotated with one of nine group activities, but there is no information on individual actions or bounding boxes. In evaluating the model, we use the Multi-class Classification Accuracy (MCA) and Mean Per Class Accuracy (MPCA) metrics, with MPCA used to address the issue of class imbalance in the dataset. ### Deep Network Architecture Our video processing approach uses a vision transformer (ViT) [8] to apply individual attention to both the temporal and spatial dimensions of the input video clips. The ViT consists of 12 encoder blocks and can process video clips of size \((B\times T\times C\times W\times H)\), where \(B\) and \(C\) represent the batch size and the number of color channels, respectively. The maximum spatial and temporal sizes are \(W=H=224\) and \(T=18\), respectively, meaning that we sample 18 frames from each video and rescale them to \(224\times 224\). Our network architecture (see Fig. 2) is designed to handle variable input resolution during training, such as differences in frame rate, number of frames in a video clip, and spatial size. However, each ViT encoder block processes a maximum of 196 spatial and 16 temporal tokens, and each token has an embedding dimension of \(\mathbb{R}^{m}\)[15]. Along with these spatial and temporal input tokens, we also use a single classification token as a characteristic vector within the architecture [14]. This classification token represents the standard features learned by the ViT along the spatial and temporal dimensions of a given video. During training, we use variable spatial and temporal resolutions that are \(W\leq 224\), \(H\leq 224\), and \(T\leq 18\), which result in various spatial and temporal tokens. Finally, we apply a projection head to the class token of the final ViT encoder [10, 21]. **Self-Distillation.** In our approach (shown in Fig. 2), we adopt a teacher-student setup for self-distillation inspired by [10, 21]. The teacher model has the same architecture as the student model, including the ViT backbone and predictor MLP, but it does not undergo direct training. Instead, during each training step of the student model, we update the teacher weights using an exponential moving average (EMA) of the student weights [10]. This approach enables us to use a single shared network to process multiple input clips. ### Implementation Details For both the NBA and Volleyball datasets, frames are sampled at a rate of T (\(K_{g}\)) using segment-based sampling [59]. The frames are then resized to \(W_{g}=224\) & Figure 3: **Inference**. We uniformly sample the video clip and pass it through a shared network and generate feature vectors (class tokens). These vectors are fed to the downstream task classifier. \(H_{g}=224\) for the teacher input and \(W_{l}=96\) & \(H_{l}=96\) for the student input, respectively. 
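The self-distillation described above never trains the teacher directly; its parameters track an exponential moving average (EMA) of the student parameters. A minimal sketch of that update is given below, with the momentum value and module shapes chosen purely for illustration.

```python
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, momentum: float = 0.996) -> None:
    """Teacher parameters become an exponential moving average of student parameters."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.data.mul_(momentum).add_(p_s.data, alpha=1.0 - momentum)

# Toy usage: in practice the modules would be the ViT backbone plus projection head.
student = nn.Sequential(nn.Linear(768, 256), nn.GELU(), nn.Linear(256, 256))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)      # the teacher is never trained directly

# ... after each optimiser step on the student ...
ema_update(teacher, student)
```

In practice the momentum is usually scheduled towards 1 over the course of training; the constant used here is only a placeholder.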
For the Volleyball dataset, we use \(K_{g}=5\) (\(K_{l}\in\{3,5\}\)), while for the NBA dataset, we use \(K_{g}=18\) (\(K_{l}\in\{2,4,8,16,18\}\)). We randomly initialize the weights relevant to temporal attention, while spatial attention weights are initialized using a ViT model trained self-supervised on ImageNet-1K [53]. This initialization setup allows us to achieve faster convergence of the space-time ViT, similar to the supervised setting [8]. We use an Adam optimizer [31] with a learning rate of \(5\times 10^{-4}\), scaled using a cosine schedule with a linear warm-up for five epochs [12, 56]. We also use weight decay scaled from 0.04 to 0.1 during training. For the **downstream task**, we train a linear classifier on our pretrained SPARTAN backbone. During training, the backbone is frozen, and the classifier is trained for 100 epochs with a batch size of 32 on a single NVIDIA-V100 GPU using SGD with an initial learning rate of 1e-3 and a cosine decay schedule. We also set the momentum to 0.9. ### Comparison with state-of-the-art methods **NBA dataset.** We compare our approach to the state-of-the-art in GAR and WSGAR, which leverage bounding box proposals produced by SAM [68], as well as to current video backbones in the weakly supervised learning setting, using the NBA dataset. We exclusively utilise RGB frames as input for each approach, including the video backbones, to ensure a fair comparison. Table 1 lists the results. Note that our reproduction of SAM [68] achieves higher scores than those reported in the original paper. The proposed method outperforms existing GAR and WSGAR methods by a significant margin of 6.3%p in MCA and 1.6%p in MPCA. Additionally, our approach is compared with two current video backbones used in conventional action recognition, ResNet-18 TSM [39] and VideoSwin-T [41]. These strong backbones perform well in WSGAR, but our method performs best. **Volleyball dataset.** For the Volleyball dataset, we compare our approach to the most recent GAR and WSGAR approaches at two supervision levels: fully supervised and weakly supervised. The two settings differ in whether actor-level labels, such as individual action class labels and ground-truth bounding boxes, are used in training and inference. For a fair comparison, we report the results of previous methods [69, 67, 37, 46, 6] using only the RGB input, and the reproduced results [70, 45, 64, 20, 66] using the ResNet-18 backbone. Note that the former are taken from the original papers, and the latter are the MCA values reported in [70]. In the weakly supervised setting, we eliminate the individual action classification head and substitute an object detector trained on an external dataset for the ground-truth bounding boxes. Table 2 presents the results. Results from earlier techniques in the fully supervised and weakly supervised settings are displayed in the first and second sections, respectively. In the weakly supervised setting, our technique, using the ViT-Base backbone, significantly outperforms all GAR and WSGAR models, surpassing them by 2.4%p in MCA and 1.2%p in Merged MCA. Our technique even outperforms several recent GAR methods, such as [66, 7, 20, 45], that employ more thorough actor-level supervision. ### Ablation Study We perform a comprehensive analysis of the different components that contribute to the effectiveness of our method.
Specifically, we evaluate the impact of five individ \begin{table} \begin{tabular}{l|c c} \hline Method & MCA & MPCA \\ \hline \multicolumn{3}{c}{**Video backbone**} \\ \hline TSM [39] & 66.6 & 60.3 \\ VideoSwin [41] & 64.3 & 60.6 \\ \hline \multicolumn{3}{c}{**GAR model**} \\ \hline ARG [64] & 59.0 & 56.8 \\ AT [20] & 47.1 & 41.5 \\ SACRF [45] & 56.3 & 52.8 \\ DIN [70] & 61.6 & 56.0 \\ SAM [68] & 54.3 & 51.5 \\ DFWSGAR [30] & 75.8 & 71.2 \\ \hline **Ours** & **82.1** & **72.8** \\ \hline \end{tabular} \end{table} Table 1: Comparisons with the State-of-the-Art GAR models and video backbones on the NBA dataset [68]. \begin{table} \begin{tabular}{l|c c c} \hline Method & Backbone & MCA & \begin{tabular}{c} Merged \\ MCA \\ \end{tabular} \\ \hline \multicolumn{3}{c}{**Fully supervised**} \\ \hline SSU [7] & Inception-v3 & 89.9 & - \\ PCTDM [66] & ResNet-18 & 90.3 & 94.3 \\ StagNet [46] & VGG-16 & 89.3 & - \\ ARG [64] & ResNet-18 & 91.1 & 95.1 \\ CRM [6] & I3D & 92.1 & - \\ HiGCIN [67] & ResNet-18 & 91.4 & - \\ AT [20] & ResNet-18 & 90.0 & 94.0 \\ SACRF [45] & ResNet-18 & 90.7 & 92.7 \\ DIN [70] & ResNet-18 & 93.1 & **95.6** \\ TCE+STBiP [69] & VGG-16 & **94.1** & - \\ GroupFormer [37] & Inception-v3 & **94.1** & - \\ \hline \multicolumn{3}{c}{**Weakly supervised**} \\ \hline PCTDM [66] & ResNet-18 & 80.5 & 90.0 \\ ARG [64] & ResNet-18 & 87.4 & 92.9 \\ AT [20] & ResNet-18 & 84.3 & 89.6 \\ SACRF [45] & ResNet-18 & 83.3 & 86.1 \\ DIN [70] & ResNet-18 & 86.5 & 93.1 \\ SAM [68] & ResNet-18 & 86.3 & 93.1 \\ DFWSGAR [30] & ResNet-18 & 90.5 & 94.4 \\ \hline **Ours** & ViT-Base & **92.9** & **95.6** \\ \hline \end{tabular} \end{table} Table 2: Comparison with the state-of-the-art methods on the Volleyball dataset. [28] ual elements: **a)** various combinations of local and global view correspondences; **b)** different field of view variations along the temporal and spatial dimensions; **c)** the choice of temporal sampling strategy; **d)** the use of spatial augmentations; and **e)** the inference approach. **View Correspondences**: We propose cross-view correspondences (VC) to learn correspondences between local and global views. To investigate the effect of predicting each type of view from the other, we conduct experiments presented in Table 3. Our results show that jointly predicting \(\mathbf{l_{t}}\to\mathbf{g_{t}}\) and \(\mathbf{l_{s}}\to\mathbf{g_{t}}\) view correspondences leads to optimal performance. However, predicting \(\mathbf{g_{t}}\to\mathbf{l_{t}}\) or \(\mathbf{l_{s}}\to\mathbf{l_{t}}\) views results in reduced performance, possibly because joint prediction emphasizes learning rich context, which is absent for individual cases. We also observe a consistent performance drop for \(\mathbf{l_{s}}\to\mathbf{l_{t}}\) correspondences (no overlap views), consistent with previous findings on the effectiveness of temporally closer positive views for contrastive self-supervised losses [19, 47]. **Spatial vs. Temporal Field of View**: We determine the optimal combination of spatio-temporal views in Table 3 by varying the field of view (crops) along both spatial and temporal dimensions (as described in Sec.3.1.2). To evaluate the effects of variations along these dimensions, we conduct experiments as presented in Table4. 
Specifically, we compare the performance of our approach with no variation along the spatial dimension (where all frames have a fixed spatial resolution of \(224\times 224\) with no spatial cropping) and with no variation along the temporal dimension (where all frames in our views are sampled from a fixed time-axis region of a video). Our findings show that temporal variations have a significant impact on NBA, while variations in the field of view along both spatial and temporal dimensions lead to the best performance (as shown in Table 4). **Temporal Sampling Strategy**: Our investigation examines the possibility of replacing the temporal sampling strategy for motion correspondences (MC) proposed in our study with alternate sampling methods. To evaluate the effectiveness of MC, we replace it with an alternative approach within SPARTAN. Specifically, we test the temporal interval sampling (TIS) strategy introduced in [47], which has achieved state-of-the-art performance in self-supervised contrastive video settings with CNN backbones. Our experiments incorporating TIS in SPARTAN (Table 5) demonstrate that our proposed MC sampling strategy offers superior performance compared to TIS. **Spatial Augmentations**: We then investigate the impact of standard spatial augmentations (SA) on video data by experimenting with different patch sizes. Previous studies have shown that varying patch sizes can enhance the performance of CNN-based video self-supervision approaches. In our study, we evaluate the effect of patch size on our approach and present the results in Table 6, indicating that a patch size of 16 yields the best improvements. Based on these \begin{table} \begin{tabular}{c|c|c} \hline \hline Multi-view & NBA & Volleyball \\ \hline \hline ✗ & 76.17 & 88.35 \\ ✓ & **81.20** & **90.80** \\ \hline \hline \end{tabular} \end{table} Table 6: **Spatial Augmentations (SA)**: Applying different patch sizes randomly over the spatial dimensions for different views leads to consistent improvements on both NBA and Volleyball datasets. \begin{table} \begin{tabular}{c|c|c|c} \hline \hline Spatial & Temporal & NBA & Volleyball \\ \hline ✓ & ✗ & 69.38 & 78.59 \\ ✗ & ✓ & 72.90 & 81.45 \\ ✓ & ✓ & **81.20** & **90.80** \\ \hline \hline \end{tabular} \end{table} Table 4: **Spatial vs Temporal variations.** The best results are achieved by utilizing cross-view correspondences with varying fields of view along both spatial and temporal dimensions. It is observed that temporal variations between views have a greater impact on performance compared to applying only spatial variation. \begin{table} \begin{tabular}{c|c|c|c|c|c} \hline \hline \(\mathbf{l_{t}}\to\mathbf{g_{t}}\) & \(\mathbf{l_{s}}\to\mathbf{g_{t}}\) & \(\mathbf{l_{s}}\to\mathbf{l_{t}}\) & \(\mathbf{g_{t}}\to\mathbf{l_{t}}\) & NBA & Volleyball \\ \hline ✓ & ✗ & ✗ & ✗ & 61.03 & 62.70 \\ ✗ & ✓ & ✗ & ✗ & 62.59 & 65.40 \\ ✓ & ✓ & ✗ & ✗ & **81.20** & **90.80** \\ ✓ & ✓ & ✓ & ✗ & 72.11 & 77.62 \\ ✓ & ✓ & ✗ & ✓ & 78.17 & 85.88 \\ ✗ & ✗ & ✓ & ✓ & 64.36 & 71.87 \\ \hline \hline \end{tabular} \end{table} Table 3: **View Correspondences (VC).** The most optimal combination for predicting view correspondences involves predicting local-to-global (temporal) and local-to-global (spatial) views, outperforming other combinations. \begin{table} \begin{tabular}{c|c|c} \hline \hline Method & NBA & Volleyball \\ \hline Ours + TIS [47] & 78.45 & 88.11 \\ Ours + MC & **81.20** & **90.80** \\ \hline \hline \end{tabular} \end{table} Table 5: **Temporal Sampling Strategy**. 
We evaluate the effectiveness of our proposed temporal sampling strategy, called ”_motion correspondences (MC)_” (Sec. 3.1.1), by comparing it with an alternate approach, the ”temporal interval sampler (TIS)” [47], used with CNNs under contrastive settings. findings, we incorporate a patch size of 16 in our SPARTAN training process. **Inference**: To assess the impact of our proposed inference method (Sec. 3.3), we analyze the results presented in Table 7. Our findings demonstrate that our approach yields greater improvements on the NBA [68] and Volleyball [28] datasets, which contain classes that can be more easily distinguished using motion information [24]. ### Qualitative Results We show the attention visualisations derived from the final Transformer encoder layer on the NBA dataset in Fig. 4. The results indicate that the model learnt to pay attention to essential concepts, such as the position of the players, and to follow the activity in a specific video clip. The \(t\)-SNE [57] visualisation results of our model and its modifications are shown in Fig. 5. Each model's final group representation on NBA is shown in two-dimensional space. The recommended modules help to clearly separate each class. ## 5 Conclusion Our work introduces SPARTAN, a self-supervised video transformer-based model. The approach involves generating multiple spatio-temporally varying views from a single video at different scales and frame rates. Two sets of correspondence learning tasks are then defined to capture the motion properties and cross-view relationships between the sampled clips. The self-supervised objective involves reconstructing one view from the other in the latent space of teacher and student networks. Moreover, our SPARTAN can model long-range spatio-temporal dependencies and perform dynamic inference within a single architecture. We evaluate SPARTAN on two group activity recognition benchmarks and find that it outperforms the current state-of-the-art models. **Limitations:** Our paper investigates the application of SPARTAN in the context of the RGB input modality. Currently, we do not utilize the additional supervision provided by alternate modalities in large-scale multimodal video datasets. However, in future work, we plan to explore ways in which we can modify SPARTAN to take advantage of multimodal data sources. **Acknowledgment** This work is supported by Arkansas Biosciences Institute (ABI) Grant, NSF WVAR-CRESH and NSF Data Science, Data Analytics that are Robust and Trusted (DART). Figure 4: Visualization of the Transformer attention maps for NBA dataset. (top) Original sequence from NBA dataset [68], (middle) Attention maps from DFWSGAR [30] and (bottom) Attention maps from our SPARTAN model. Figure 5: \(t\)-SNE [57] visualization of feature embedding learned by different variants of our SPARTAN model in the NBA dataset.
2304.07285
Ideals in the convolution algebra of periodic distributions
The ring of periodic distributions on ${\mathbb{R}}^{\tt d}$ with usual addition and with convolution is considered. Via Fourier series expansions, this ring is isomorphic to the ring ${\mathcal{S}}'({\mathbb{Z}}^{\tt d})$ of all maps $f:{\mathbb{Z}}^{\tt d}\rightarrow \mathbb{C}$ of at most polynomial growth (i.e., there exist a real $M>0$ and a nonnegative integer ${\tt m}$ such that for all $\mathbf{n}=({\tt n}_1,\cdots, {\tt n}_{\tt d})\in {\mathbb{Z}}^{\tt d}$, $ |f(\mathbf{n})|\leq M(1+|{\tt n}_1|+\cdots+|{\tt n}_{\tt d}|)^{\tt m}$), with pointwise operations. It is shown that finitely generated ideals in ${\mathcal{S}}'({\mathbb{Z}}^{\tt d})$ are principal, and ideal membership is characterised analytically. Calling an ideal in ${\mathcal{S}}'({\mathbb{Z}}^{\tt d})$ fixed if there is a common index $\mathbf{n} \in {\mathbb{Z}}^{\tt d}$ where each member vanishes, the fixed maximal ideals are described, and it is shown that not all maximal ideals are fixed. It is shown that finitely generated proper prime ideals in ${\mathcal{S}}'({\mathbb{Z}}^{\tt d})$ are fixed maximal ideals. The Krull dimension of ${\mathcal{S}}'({\mathbb{Z}}^{\tt d})$ is proved to be infinite, while the weak Krull dimension is shown to be equal to $1$.
Amol Sasane
2023-04-14T17:55:27Z
http://arxiv.org/abs/2304.07285v1
# Ideals in the convolution algebra of periodic distributions ###### Abstract. The ring of periodic distributions on \(\mathbb{R}^{4}\) with usual addition and with convolution is considered. Via Fourier series expansions, this ring is isomorphic to the ring \(\mathcal{S}^{\prime}(\mathbb{Z}^{4})\) of all maps \(f:\mathbb{Z}^{4}\to\mathbb{C}\) of at most polynomial growth (i.e., there exist a real \(M>0\) and a nonnegative integer \(\mathtt{m}\) such that for all \(\boldsymbol{n}=(\mathtt{n}_{1},\cdots,\mathtt{n}_{\mathtt{d}})\in\mathbb{Z}^{ \mathtt{d}}\), \(|f(\boldsymbol{n})|\leq M(1+|\mathtt{n}_{1}|+\cdots+|\mathtt{n}_{\mathtt{d}}|) ^{\mathtt{n}}\)), with pointwise operations. It is shown that finitely generated ideals in \(\mathcal{S}^{\prime}(\mathbb{Z}^{4})\) are principal, and ideal membership is characterised analytically. Calling an ideal in \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\) fixed if there is a common index \(\boldsymbol{n}\in\mathbb{Z}^{\mathtt{d}}\) where each member vanishes, the fixed maximal ideals are described, and it is shown that not all maximal ideals are fixed. It is shown that finitely generated proper prime ideals in \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\) are fixed maximal ideals. The Krull dimension of \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\) is proved to be infinite, while the weak Krull dimension is shown to be equal to \(1\). Key words and phrases:Krull dimension, prime ideals, maximal ideals, ring of periodic distributions, convolution, sequences of at most polynomial growth 2010 Mathematics Subject Classification: Primary 54C40; Secondary 13A15, 15A24, 13J99 ## 1. Introduction The aim of this article is to study ideals in a naturally arising ring in harmonic analysis and distribution theory, namely the ring \(\mathcal{D}^{\prime}_{\mathbf{V}}(\mathbb{R}^{\mathtt{d}})\) of periodic distributions with the usual addition \(+\) distributions, and with convolution \(*\) taken as multiplication. Via a Fourier series expansion, the ring \((\mathcal{D}^{\prime}_{\mathbf{V}}(\mathbb{R}^{\mathtt{d}}),+,*)\) is isomorphic to the ring \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\) consisting of all \(f:\mathbb{Z}^{\mathtt{d}}\to\mathbb{C}\) of at most polynomial growth, with pointwise operations, and we recall this below. ### The ring of periodic distributions For background on periodic distributions and its Fourier series theory, we refer the reader to [2, Chapter 16] and [7, pp.527-529]. Let \(\mathbb{N}=\{1,2,3,\cdots\}\) be the set of natural numbers and \(\mathbb{Z}\) be the set of integers. Consider the space \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathsf{d}})\) of all complex valued maps on \(\mathbb{Z}^{\mathsf{d}}\) of at most polynomial growth, that is, \[\mathcal{S}^{\prime}(\mathbb{Z}^{\mathsf{d}}):=\Big{\{}f:\mathbb{Z}^{\mathsf{ d}}\to\mathbb{C}\;\Big{|}\;\begin{array}{l}\exists\text{ a real }M>0\;\exists\,\mathfrak{m}\in\mathbb{N}\cup\{0\}\;\text{ such that }\\ \forall\boldsymbol{n}\in\mathbb{Z}^{\mathsf{d}},\;|f(\boldsymbol{n})|\leq M(1+ \boldsymbol{|n|})^{\mathfrak{m}}\end{array}\Big{\}},\] where \(\boldsymbol{|n|}:=|\mathbf{n}_{1}|+\cdots+|\mathbf{n}_{\mathsf{d}}|\) for all \(\boldsymbol{n}=(\mathbf{n}_{1},\cdots,\mathbf{n}_{\mathsf{d}})\in\mathbb{Z}^{ \mathsf{d}}\). Then \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathsf{d}})\) is a unital commutative ring with pointwise operations, and the multiplicative unit element \(1_{\mathbb{Z}^{\mathsf{d}}}\) is the constant function \(\mathbb{Z}^{\mathsf{d}}\ni\boldsymbol{n}\mapsto 1\). 
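The polynomial-growth condition defining \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathsf{d}})\) can be illustrated numerically on a finite window. The sketch below (with an arbitrary box radius) checks the defining inequality for a polynomially bounded map and for an exponentially growing one; it is only an illustration, since actual membership is a statement about all of \(\mathbb{Z}^{\mathsf{d}}\).

```python
import itertools

def bounded_by(f, M, m, d=2, radius=30):
    """Check |f(n)| <= M * (1 + |n|_1)^m on the finite box [-radius, radius]^d."""
    for n in itertools.product(range(-radius, radius + 1), repeat=d):
        norm1 = sum(abs(k) for k in n)
        if abs(f(n)) > M * (1 + norm1) ** m:
            return False
    return True

poly = lambda n: (n[0] ** 2) * n[1] + 5          # at most polynomial growth
expo = lambda n: 2 ** sum(abs(k) for k in n)     # exponential growth

print(bounded_by(poly, M=6, m=3))   # True: |n1^2 n2 + 5| <= 6 (1 + |n|_1)^3
print(bounded_by(expo, M=6, m=3))   # False already on this finite window
```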
The set \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathsf{d}})\) equipped with pointwise operations, is a commutative, unital ring. Moreover, \((\mathcal{S}^{\prime}(\mathbb{Z}^{\mathsf{d}}),+,\cdot)\) is isomorphic as a ring, to the ring \((\mathcal{D}^{\prime}_{\mathbf{V}}(\mathbb{R}^{\mathsf{d}}),+,*)\), where \(\mathcal{D}^{\prime}_{\mathbf{V}}(\mathbb{R}^{\mathsf{d}})\) is the set of all periodic distributions (with periods described by \(\mathbf{V}\), see the definition below), with the usual pointwise addition of distributions, and multiplication taken as convolution of distributions. Let \(\mathcal{D}(\mathbb{R}^{d})\) denote the space of compactly supported infinitely many times differentiable complex valued functions on \(\mathbb{R}^{\mathsf{d}}\), and \(\mathcal{D}^{\prime}(\mathbb{R}^{\mathsf{d}})\) the space of distributions on \(\mathbb{R}^{\mathsf{d}}\). For \(\boldsymbol{v}\in\mathbb{R}^{\mathsf{d}}\), the _translation operator_\(\mathbf{S}_{\boldsymbol{v}}:\mathcal{D}^{\prime}(\mathbb{R}^{\mathsf{d}})\to \mathcal{D}^{\prime}(\mathbb{R}^{\mathsf{d}})\), is defined by \(\langle\mathbf{S}_{\boldsymbol{v}}(T),\varphi\rangle=\langle T,\varphi( \cdot+\boldsymbol{v})\rangle\) for all \(\varphi\in\mathcal{D}(\mathbb{R}^{\mathsf{d}})\). A distribution \(T\in\mathcal{D}^{\prime}(\mathbb{R}^{\mathsf{d}})\) is called _periodic with a period_\(\boldsymbol{v}\in\mathbb{R}^{\mathsf{d}}\setminus\{\boldsymbol{0}\}\) if \(T=\mathbf{S}_{\boldsymbol{v}}(T)\). Let \(\mathbf{V}:=\{\boldsymbol{v}_{1},\cdots,\boldsymbol{v}_{\mathsf{d}}\}\) be a linearly independent set of \(\mathsf{d}\) vectors in \(\mathbb{R}^{\mathsf{d}}\). Let \(\mathcal{D}^{\prime}_{\mathbf{V}}(\mathbb{R}^{\mathsf{d}})\) denote the set of all distributions \(T\) that satisfy \(\mathbf{S}_{\boldsymbol{v}_{\mathsf{k}}}(T)=T\) for all \(\mathtt{k}\in\{1,\cdots,\mathsf{d}\}\). From [1, SS34], \(T\) is a tempered distribution, and from the above it follows by taking Fourier transforms that \((1-e^{2\pi i\boldsymbol{v}_{\mathsf{k}}\cdot\boldsymbol{y}})\widehat{T}=0\), for \(\mathtt{k}\in\{1,\cdots,\mathsf{d}\}\), \(\boldsymbol{y}\in\mathbb{R}^{d}\). Then \(\widehat{T}=\sum_{\boldsymbol{v}\in V^{-1}\mathbb{Z}^{\mathsf{d}}}\alpha_{ \boldsymbol{v}}(T)\delta_{\boldsymbol{v}}\), for some scalars \(\alpha_{\boldsymbol{v}}(T)\in\mathbb{C}\), and where \(V\) is the matrix with its rows equal to the transposes of the column vectors \(\boldsymbol{v}_{1},\cdots,\boldsymbol{v}_{\mathsf{d}}\): \(V^{\mathrm{t}}:=\left[\,\boldsymbol{v}_{1}\,\cdots\,\boldsymbol{v}_{\mathsf{ d}}\,\right],\) with \(V^{\mathrm{t}}\) denoting the transpose of the matrix \(V\). Also, in the above, \(\delta_{\boldsymbol{v}}\) denotes the usual Dirac measure with support in \(\boldsymbol{v}\), i.e., \(\langle\delta_{\boldsymbol{v}},\varphi\rangle=\varphi(\boldsymbol{v})\) for all \(\varphi\in\mathcal{D}(\mathbb{R}^{\mathsf{d}})\). Then the Fourier coefficients \(\alpha_{\boldsymbol{v}}(T)\) give rise to an element in \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathsf{d}})\), and vice versa, every element in \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathsf{d}})\) is the set of Fourier coefficients of some periodic distribution. In fact, the ring \((\mathcal{D}^{\prime}_{\mathbf{V}}(\mathbb{R}^{\mathsf{d}}),+,*)\) of periodic distributions on \(\mathbb{R}^{\mathsf{d}}\) is isomorphic to the ring \((\mathcal{S}^{\prime}(\mathbb{Z}^{\mathsf{d}}),+,\cdot)\). In [5], some algebraic-analytical properties of \((\mathcal{S}^{\prime}(\mathbb{Z}^{\mathsf{d}}),+,\cdot)\) were established; see also [4]. 
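The correspondence between convolution of periodic objects and pointwise multiplication of their Fourier coefficients, which underlies this isomorphism, can be checked numerically in the simplest discretised setting. The sketch below compares a directly computed circular convolution of two trigonometric polynomials with the product of their discrete Fourier coefficients; it is a finite-dimensional illustration only, not part of the paper's argument.

```python
import numpy as np

N = 64                                        # samples per period
x = 2 * np.pi * np.arange(N) / N
f = np.cos(3 * x) + 0.5 * np.sin(7 * x)       # two periodic "test" functions
g = np.sin(2 * x) + 0.25 * np.cos(5 * x)

# Periodic (circular) convolution computed directly from its definition,
# normalised as an average over one period.
conv = np.array([sum(f[j] * g[(k - j) % N] for j in range(N)) for k in range(N)]) / N

# Fourier coefficients of the convolution equal the pointwise product of the
# Fourier coefficients of the factors.
lhs = np.fft.fft(conv)
rhs = np.fft.fft(f) * np.fft.fft(g) / N
print(np.allclose(lhs, rhs))                  # True
```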
In this article, the structure of ideals in this ring is studied, akin to an analogous investigation in [8] for a ring of entire functions. ### Main results and organisation of the article * In SS2, we show that finitely generated ideals in \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathsf{d}})\) are principal, and ideal membership is characterised analytically. * In SS3, we describe fixed maximal ideals in \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathsf{d}})\), and it is shown that not all maximal ideals are fixed. * In SS4, we show that finitely generated proper prime ideals in \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathsf{d}})\) are fixed maximal ideals. Also, the Krull dimension of \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathsf{d}})\) is proved to be infinite, while the weak Krull dimension is shown to be equal to \(1\). ## 2. Finitely generated ideals **Proposition 2.1**.: \(g\) _is a divisor of \(f\) in \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathsf{d}})\) if and only if there exist a real number \(M>0\) and a nonnegative integer \(\mathtt{m}\) such that for all \(\boldsymbol{n}\in\mathbb{Z}^{\mathsf{d}}\), \(|f(\boldsymbol{n})|\leq M(1+\biguup{\boldsymbol{n}}\bigdown)^{\mathtt{n}}|g( \boldsymbol{n})|\)._ Proof.: ('If' part:) Define \(d:\mathbb{Z}^{\mathsf{d}}\to\mathbb{C}\) by \[d(\boldsymbol{n})=\left\{\begin{array}{cl}\frac{f(\boldsymbol{n})}{g( \boldsymbol{n})}&\text{if }g(\boldsymbol{n})\neq 0,\\ 0&\text{if }g(\boldsymbol{n})=0.\end{array}\right.\] Thus for \(g(\boldsymbol{n})\neq 0\), we have \(|d(\boldsymbol{n})|\leq M(1+\biguup{\boldsymbol{n}}\bigdown)^{\mathtt{n}}\), and this also holds trivially when \(g(\boldsymbol{n})=0\), since the left-hand side is \(0\). Thus \(d\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathsf{d}})\). Moreover, for \(g(\boldsymbol{n})\neq 0\), we have \(d(\boldsymbol{n})g(\boldsymbol{n})=f(\boldsymbol{n})\), and when \(g(\boldsymbol{n})=0\), the inequality \(|f(\boldsymbol{n})|\leq M(1+\biguup{\boldsymbol{n}}\bigdown)^{\mathtt{n}}|g( \boldsymbol{n})|\) yields \(f(\boldsymbol{n})=0\) too, showing that \(d(\boldsymbol{n})g(\boldsymbol{n})=d(\boldsymbol{n})0=0=f(\boldsymbol{n})\). Hence \(dg=f\), as wanted. ('Only if' part:) Suppose that \(d\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathsf{d}})\) is such that \(dg=f\). Since \(d\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathsf{d}})\), there exist \(M>0\) and a nonnegative integer \(\mathtt{m}\) such that \(|d(\boldsymbol{n})|\leq M(1+\biguup{\boldsymbol{n}}\bigdown)^{\mathtt{n}}\). So \(|f(\boldsymbol{n})|\!\leq\!|d(\boldsymbol{n})||g(\boldsymbol{n})|\!\leq\!M(1+ \biguup{\boldsymbol{n}}\bigdown)^{\mathtt{n}}|g(\boldsymbol{n})|\) for all \(\boldsymbol{n}\in\mathbb{Z}^{\mathsf{d}}\). In particular, \(f\) is invertible in \(\mathcal{S}^{\prime}(\mathbb{Z}^{d})\) if and only if there exists a real number \(\delta>0\) and a nonnegative integer \(\mathtt{m}\) such that for all \(\boldsymbol{n}\in\mathbb{Z}^{d}\), \(|f(\boldsymbol{n})|\geq\delta(1+|\boldsymbol{n}|)^{-\mathtt{m}}\). **Proposition 2.2**.: _Every finite number of elements \(f_{1},\cdots,f_{\mathbb{K}}\in\mathcal{S}^{\prime}(\mathbb{Z}^{d})\)\((\mathbb{K}\in\mathbb{N})\) have a greatest common divisor \(d\). The element \(d\) is given \((\)up to invertible elements\()\) by \(d(\boldsymbol{n})=\max\{|f_{1}(\boldsymbol{n})|,\cdots,|f_{\mathbb{K}}( \boldsymbol{n})|\}\)\((\boldsymbol{n}\in\mathbb{Z}^{d})\)._ Proof.: Let \(d(\boldsymbol{n})=\max\{|f_{1}(\boldsymbol{n})|,\cdots,|f_{\mathbb{K}}( \boldsymbol{n})|\}\) for all \(\boldsymbol{n}\in\mathbb{Z}^{d}\). 
Clearly \(d\in\mathcal{S}^{\prime}(\mathbb{Z}^{d})\). As \(|f_{\mathbb{k}}(\boldsymbol{n})|\leq|d(\boldsymbol{n})|\) for all \(\boldsymbol{n}\in\mathbb{Z}^{d}\) and all \(\mathbb{k}\in\{1,\cdots,\mathbb{K}\}\), Proposition 2.1 implies that \(d\) is a common divisor of \(f_{1},\cdots,f_{\mathbb{K}}\). If \(\widetilde{d}\in\mathcal{S}^{\prime}(\mathbb{Z}^{d})\) is a common divisor of \(f_{1},\cdots,f_{\mathbb{K}}\), then by Proposition 2.1 again, there exist real \(M_{\mathbb{k}}>0\) and positive integers \(\mathtt{m}_{\mathbb{k}}\), for each \(\mathbb{k}\in\{1,\cdots,\mathbb{K}\}\), such that \(|f_{\mathbb{k}}(\boldsymbol{n})|\leq M_{\mathbb{k}}(1+|\boldsymbol{n}|)^{ \mathtt{m}}|\widetilde{d}(\boldsymbol{n})|\) for all \(\boldsymbol{n}\in\mathbb{Z}^{d}\). Setting \(M:=\max\{M_{1},\cdots,M_{\mathbb{K}}\}\) and \(m:=\max\{\mathtt{m}_{1},\cdots,\mathtt{m}_{\mathbb{K}}\}\), we get \(|d(\boldsymbol{n})|\leq M(1+|\boldsymbol{n}|)^{\mathtt{m}}|\widetilde{d}( \boldsymbol{n})|\) for all \(\boldsymbol{n}\in\mathbb{Z}^{d}\). By Proposition 2.1, \(\widetilde{d}\) divides \(d\) in \(\mathcal{S}^{\prime}(\mathbb{Z}^{d})\). **Proposition 2.3**.: _Let \(\langle f_{1},\cdots,f_{\mathbb{K}}\rangle\) denote the ideal generated \(\mathbb{K}\in\mathbb{N}\) elements \(f_{1},\cdots,f_{\mathbb{K}}\in\mathcal{S}^{\prime}(\mathbb{Z}^{d})\). Then \(f\in\langle f_{1},\cdots,f_{\mathbb{K}}\rangle\) if and only if there exists an \(M>0\) and a nonnegative integer \(\mathtt{m}\) such that_ \[|f(\boldsymbol{n})|\leq M(1+|\boldsymbol{n}|)^{\mathtt{m}}\sum\limits_{ \mathbb{k}=1}^{\mathbb{K}}|f_{\mathbb{k}}(\boldsymbol{n})|\text{ for all }\boldsymbol{n}\in\mathbb{Z}^{d}.\] Proof.: ('If' part:) For \(\mathbb{k}\in\{1,\cdots,\mathbb{N}\}\), define \(g_{\mathbb{k}}:\mathbb{Z}^{d}\to\mathbb{C}\) by \[g_{\mathbb{k}}(\boldsymbol{n})=\left\{\begin{array}{cl}\frac{\overline{f_{ \mathbb{k}}(\boldsymbol{n})}}{\sum\limits_{j=1}^{\mathbb{K}}|f_{j}(\boldsymbol {n})|^{2}}f(\boldsymbol{n})&\text{if }\sum\limits_{j=1}^{\mathbb{K}}|f_{j}( \boldsymbol{n})|^{2}\neq 0,\\ 0&\text{if }\sum\limits_{j=1}^{\mathbb{K}}|f_{j}(\boldsymbol{n})|^{2}=0.\end{array}\right.\] If \(Q(\boldsymbol{n}):=\sum\limits_{j=1}^{\mathbb{K}}|f_{j}(\boldsymbol{n})|^{2}\neq 0\), then (by Cauchy-Schwarz in the last step), \[|g_{\mathbb{k}}(\boldsymbol{n})|=\frac{|\overline{f_{\mathbb{k}} (\boldsymbol{n})}|}{\sum\limits_{j=1}^{\mathbb{K}}|f_{j}(\boldsymbol{n})|^{2} }|f(\boldsymbol{n})|\leq\frac{\sum\limits_{j=1}^{\mathbb{K}}|f_{j}(\boldsymbol {n})|}{\sum\limits_{j=1}^{\mathbb{K}}|f_{j}(\boldsymbol{n})|^{2}}M(1+| \boldsymbol{n}|)^{\mathtt{m}}\sum\limits_{k=1}^{\mathbb{K}}|f_{\mathbb{k}}( \boldsymbol{n})|\\ \leq\frac{(\sum\limits_{j=1}^{\mathbb{K}}|f_{j}(\boldsymbol{n})|)^{2}}{ \sum\limits_{j=1}^{\mathbb{K}}|f_{j}(\boldsymbol{n})|^{2}}M(1+|\boldsymbol{n }|)^{\mathtt{m}}\leq KM(1+|\boldsymbol{n}|)^{\mathtt{m}}.\] So \(g_{1},\cdots,g_{\mathbb{K}}\in\mathcal{S}^{\prime}(\mathbb{Z}^{\rm d})\). We claim that \(f_{1}g_{1}+\cdots+f_{\mathbb{K}}g_{\mathbb{K}}=f\). The evaluation of the left-hand side at an \(\boldsymbol{n}\in\mathbb{Z}^{\rm d}\) such that \(Q(\boldsymbol{n})\neq 0\) is easily seen to be \(f(\boldsymbol{n})\) by the definition of \(g_{1},\cdots,g_{\mathbb{K}}\). On the other hand, if \(Q(\boldsymbol{n})=0\), then each \(f_{\mathbb{k}}(\boldsymbol{n})=0\), and by the given inequality in the statement of the proposition, so is \(f(\boldsymbol{n})=0\). 
Thus in this case the evaluations at \(\boldsymbol{n}\) of both sides of \(f_{1}g_{1}+\cdots+f_{\mathbb{K}}g_{\mathbb{K}}=f\) are zeroes. ('Only if' part:) If \(f\in\langle f_{1},\cdots,f_{\mathbb{K}}\rangle\), then there exist \(g_{1},\cdots,g_{\mathbb{K}}\in\mathcal{S}^{\prime}(\mathbb{Z}^{\rm d})\) such that \(f=f_{1}g_{1}+\cdots+f_{\mathbb{K}}g_{\mathbb{K}}\). Let \(M_{\mathbb{k}}>0\) and \(\mathfrak{m}_{\mathbb{k}}\in\mathbb{N}\cup\{0\}\), \(\mathbb{k}\in\{1,\cdots,\mathbb{K}\}\), be such that \(|g_{\mathbb{k}}(\boldsymbol{n})|\leq M_{\mathbb{k}}(1+\boldsymbol{n}|)^{ \mathfrak{m}_{\mathbb{k}}}\) (\(\boldsymbol{n}\in\mathbb{Z}^{\rm d}\)). Then with \(M:=\max\{M_{1},\cdots,M_{\mathbb{K}}\}\) and \(\mathfrak{m}:=\max\{\mathfrak{m}_{1},\cdots,\mathfrak{m}_{\mathbb{K}}\}\), we get \[|f(\boldsymbol{n})|\leq\sum_{\mathbb{k}=1}^{\mathbb{K}}|f_{\mathbb{k}}( \boldsymbol{n})||g_{\mathbb{k}}(\boldsymbol{n})|\leq M(1+\boldsymbol{n}|)^{ \mathfrak{m}}\!\!\sum_{\mathbb{k}=1}^{\mathbb{K}}\!\!|f_{\mathbb{k}}( \boldsymbol{n})|.\qed\] It follows from Propositions 2.2 and 2.3 that every finite generated ideal is principal. (Indeed, \(\langle f_{1},\cdots,f_{\mathbb{K}}\rangle=\langle d\rangle\): That \(f_{\mathbb{k}}\subset\langle d\rangle\) for each \(\mathbb{k}\) is obvious as \(d\) divides \(f_{\mathbb{k}}\), in turn showing \(\langle f_{1},\cdots,f_{\mathbb{K}}\rangle\subset\langle d\rangle\). For the reverse inclusion, \(|d(\boldsymbol{n})|=\max\{|f_{1}(\boldsymbol{n})|,\cdots,|f_{\mathbb{k}}( \boldsymbol{n})|\}\leq\sum_{\mathbb{k}=1}^{\mathbb{K}}|f_{\mathbb{k}}( \boldsymbol{n})|\)\((\boldsymbol{n}\in\mathbb{Z}^{\rm d})\), and so by Proposition 2.3, \(d\in\langle f_{1},\cdots,f_{\mathbb{K}}\rangle\). Thus we get \(\langle d\rangle\subset\langle f_{1},\cdots,f_{\mathbb{K}}\rangle\).) ## 3. Maximal ideals **Definition 3.1**.: An ideal \(\mathfrak{i}\) of \(\mathcal{S}^{\prime}(\mathbb{Z}^{\rm d})\) is _fixed_ if there exists an \(\boldsymbol{k}\in\mathbb{Z}^{\rm d}\) such that for all \(f\in\mathfrak{i}\), \(f(\boldsymbol{k})=0\). **Theorem 3.2**.: _For \(\boldsymbol{k}\in\mathbb{Z}^{\rm d}\), let \(\mathfrak{m}_{\boldsymbol{k}}:=\{f\in\mathcal{S}^{\prime}(\mathbb{Z}^{\rm d}): f(\boldsymbol{k})=0\}\). Then \(\mathfrak{m}_{\boldsymbol{k}}\) is a fixed maximal ideal of \(\mathcal{S}^{\prime}(\mathbb{Z}^{\rm d})\). Every fixed maximal ideal of \(\mathcal{S}^{\prime}(\mathbb{Z}^{\rm d})\) is equal to \(\mathfrak{m}_{\boldsymbol{k}}\) for some \(\boldsymbol{k}\in\mathbb{Z}^{\rm d}\)._ Proof.: The fixedness of \(\mathfrak{m}_{\boldsymbol{k}}\) is clear. We now show that maximality. As \(1_{\mathbb{Z}^{\rm d}}\in\mathcal{S}^{\prime}(\mathbb{Z}^{\rm d})\setminus \mathfrak{m}_{\boldsymbol{k}}\), \(\mathfrak{m}_{\boldsymbol{k}}\subsetneq\mathcal{S}^{\prime}(\mathbb{Z}^{\rm d})\). Let \(\mathfrak{i}\) be an ideal such that \(\mathfrak{m}_{\boldsymbol{k}}\subsetneq\mathfrak{i}\). Suppose that \(f\in\mathfrak{i}\setminus\mathfrak{m}_{\boldsymbol{k}}\). Then \(f(\boldsymbol{k})\neq 0\). Define \(g\in\mathcal{S}^{\prime}(\mathbb{Z}^{\rm d})\) by \(g=1_{\mathbb{Z}^{\rm d}}-\frac{f}{f(\boldsymbol{k})}\). As \(g(\boldsymbol{k})=0\), we have \(g\in\mathfrak{m}_{\boldsymbol{k}}\subset\mathfrak{i}\). Also, \(\frac{f}{f(\boldsymbol{k})}\in\mathfrak{i}\). Thus \(1_{\mathbb{Z}^{\rm d}}=g+\frac{f}{f(\boldsymbol{k})}\in\mathfrak{i}\), i.e., \(\mathfrak{i}=\mathcal{S}^{\prime}(\mathbb{Z}^{\rm d})\). Next, let \(\mathfrak{m}\) be a fixed maximal ideal of \(\mathcal{S}^{\prime}(\mathbb{Z}^{\rm d})\). 
Since \(\mathfrak{m}\) is fixed, there exists a \(\boldsymbol{k}\in\mathbb{Z}^{\rm d}\) such that \(\mathfrak{m}\subset\mathfrak{m}_{\boldsymbol{k}}\subsetneq\mathcal{S}^{\prime}( \mathbb{Z}^{\rm d})\). By the maximality of \(\mathfrak{m}\), we conclude that \(\mathfrak{m}=\mathfrak{m}_{\boldsymbol{k}}\) **Example 3.3** (Non-fixed maximal ideals).: Let \((\mathtt{k}_{\mathtt{j}})_{\mathtt{j}\in\mathbb{N}}\) be any subsequence of the sequence of natural numbers. Set \(\boldsymbol{k}_{\mathtt{j}}=(\mathtt{k}_{\mathtt{j}},\cdots,\mathtt{k}_{ \mathtt{j}})\in\mathbb{Z}^{\mathtt{d}}\). Define \(\mathtt{i}:=\{f\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}}):\lim_{ \mathtt{j}\to\infty}e^{\mathtt{k}_{\mathtt{j}}}f(\boldsymbol{k}_{\mathtt{j}}) =0\}.\) Then \(\mathtt{i}\) is an ideal of \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\). (It is clear that if \(f,g\in\mathtt{i}\), then \(f+g\in\mathtt{i}\). If \(f\in\mathtt{i}\) and \(g\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\), then there exist a real \(M>0\) and an \(\mathtt{m}\in\mathbb{N}\cup\{0\}\) such that \(|g(\boldsymbol{n})|\leq M(1+\boldsymbol{\left|\boldsymbol{n}\right|})^{ \mathtt{m}}\) for all \(\boldsymbol{n}\in\mathbb{Z}^{\mathtt{d}}\), and so \[|(fg)(\boldsymbol{k}_{\mathtt{j}})|\leq|f(\boldsymbol{k}_{\mathtt{j}})|M(1+ \mathtt{dk}_{\mathtt{j}})^{\mathtt{m}}=e^{\mathtt{k}_{\mathtt{j}}}|f( \boldsymbol{k}_{\mathtt{j}})|e^{-\mathtt{k}_{\mathtt{j}}}M(1+\mathtt{dk}_{ \mathtt{j}})^{\mathtt{m}}\stackrel{{\mathtt{j}\to\infty}}{{ \longrightarrow}}0,\] showing that \(fg\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\).) Moreover, \(\mathtt{i}\neq\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\) since \(1_{\mathbb{Z}^{\mathtt{d}}}\not\in\mathtt{i}\): \(e^{\mathtt{k}_{\mathtt{j}}}|1_{\mathbb{Z}^{\mathtt{d}}}(\boldsymbol{k}_{ \mathtt{j}})|=e^{\mathtt{k}_{\mathtt{j}}}1>1\) for all \(n\in\mathbb{N}\). Hence there exists a maximal ideal \(\mathfrak{m}\) in \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\) such that \(\mathtt{i}\subset\mathfrak{m}\). We note that for each \(\boldsymbol{k}\in\mathbb{Z}^{\mathtt{d}}\), \(\mathfrak{m}\neq\mathfrak{m}_{\boldsymbol{k}}\): Define \(1_{\{\boldsymbol{k}\}}:\mathbb{Z}^{\mathtt{d}}\to\mathbb{C}\) by \(1_{\{\boldsymbol{k}\}}(\boldsymbol{n})=0\) for all \(\boldsymbol{n}\neq\boldsymbol{k}\) and \(1_{\{\boldsymbol{k}\}}(\boldsymbol{k})=1\). Then \(1_{\{\boldsymbol{k}\}}\in\mathtt{i}\subset\mathfrak{m}\), but \(1_{\{\boldsymbol{k}\}}\not\in\mathfrak{m}_{\boldsymbol{k}}\) as \(1_{\{\boldsymbol{k}\}}(\boldsymbol{k})=1\neq 0\). \(\Diamond\) ## 4. Prime ideals ### Finitely generated proper prime ideals **Theorem 4.1**.: _Let \(\mathfrak{p}\) be a finitely generated proper prime ideal of \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\). Then there exists an \(\boldsymbol{n}_{*}\in\mathbb{Z}^{\mathtt{d}}\) such that_ \[\mathfrak{p}=\{f\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}}):f(\boldsymbol {n}_{*})=0\}.\] Proof.: We carry out the proof in several steps. **Step 1. \(\mathfrak{p}\) is principal. If \(\mathfrak{p}\!=\!\langle d\rangle\), then \(d\) has at most one zero.** As \(\mathfrak{p}\) is finitely generated, it is principal. Let \(d\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\) be such that \(\mathfrak{p}=\langle d\rangle\). Let \(\boldsymbol{m},\boldsymbol{n}\in\mathbb{Z}^{\mathtt{d}}\) be distinct and \(d(\boldsymbol{n})=0=d(\boldsymbol{m})\). Define \(a:\mathbb{Z}^{\mathtt{d}}\to\mathbb{C}\) by \(a(\boldsymbol{k})=1\) for all \(\boldsymbol{k}\neq\boldsymbol{n}\) and \(a(\boldsymbol{n})=0\). 
Then \(a\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\). Also, let \(b:\mathbb{Z}^{\mathtt{d}}\to\mathbb{C}\) be defined by \(b(\boldsymbol{k})=d(\boldsymbol{k})\) for all \(\boldsymbol{k}\not\in\{\boldsymbol{m},\boldsymbol{n}\}\), \(b(\boldsymbol{m})=0\) and \(b(\boldsymbol{n})=1\). Then clearly \(b\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\) too (since it matches with \(d\) everywhere except at the single index \(\boldsymbol{n}\)). Now \((ab)(\boldsymbol{k})=d(\boldsymbol{k})\) for all \(\boldsymbol{k}\in\mathbb{Z}^{\mathtt{d}}\): If \(\boldsymbol{k}\not\in\{\boldsymbol{m},\boldsymbol{n}\}\) this is clear from the definitions since the left-hand side is \(1\cdot d(\boldsymbol{k})\), and if \(\boldsymbol{k}=\boldsymbol{m}\) or \(\boldsymbol{n}\), then both sides are \(0\). So \(ab=d\in\langle d\rangle=\mathfrak{p}\). But \(a\not\in\langle d\rangle\) since otherwise \(a=d\tilde{a}\) for some \(\tilde{a}\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\) and then \(1=a(\boldsymbol{m})=d(\boldsymbol{m})\tilde{a}(\boldsymbol{m})=0\cdot\tilde{a}(\boldsymbol{m})=0\), a contradiction. Also, \(b\not\in\langle d\rangle\) since otherwise \(b=d\tilde{b}\) for some \(\tilde{b}\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\) and then \(1=b(\boldsymbol{n})=d(\boldsymbol{n})\tilde{b}(\boldsymbol{n})=0\cdot\tilde{b}(\boldsymbol{n})=0\), a contradiction. So neither \(a\) nor \(b\) belongs to \(\langle d\rangle=\mathfrak{p}\), contradicting the primality of \(\mathfrak{p}\).

**Step 2.** Let \(\mathfrak{p}=\langle d\rangle\) (as in Step 1). For each \(\boldsymbol{n}\in\mathbb{Z}^{\mathtt{d}}\), let \(d(\boldsymbol{n})=|d(\boldsymbol{n})|e^{i\theta(\boldsymbol{n})}\) for some \(\theta(\boldsymbol{n})\in(-\pi,\pi]\). Define \(h\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\) by \(h(\boldsymbol{n})=\sqrt{|d(\boldsymbol{n})|}e^{i\theta(\boldsymbol{n})/2}\) for all \(\boldsymbol{n}\in\mathbb{Z}^{\mathtt{d}}\). Then \(h^{2}=d\in\mathfrak{p}\), and as \(\mathfrak{p}\) is prime, \(h\in\mathfrak{p}\).

**Step 3. \(d\) has exactly one zero.** We will now show that there exists an \(\boldsymbol{n}_{*}\in\mathbb{Z}^{\mathtt{d}}\) such that \(d(\boldsymbol{n}_{*})=0\). Suppose this is not true. Then \(d(\boldsymbol{n})\neq 0\) for all \(\boldsymbol{n}\in\mathbb{Z}^{\mathtt{d}}\). If \(h\in\mathfrak{p}\) is as in Step 2, then there exists a \(k\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\) such that \(h=kd\), i.e., \(\sqrt{|d(\boldsymbol{n})|}e^{i\theta(\boldsymbol{n})/2}=|d(\boldsymbol{n})|e^{i\theta(\boldsymbol{n})}k(\boldsymbol{n})\), which yields \(1=d(\boldsymbol{n})(k(\boldsymbol{n}))^{2}\) (\(\boldsymbol{n}\in\mathbb{Z}^{\mathtt{d}}\)). Thus \(dk^{2}=1_{\mathbb{Z}^{\mathtt{d}}}\), showing that \(d\) is invertible in \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\), contradicting the properness of the ideal \(\mathfrak{p}\). Together with Step 1, it follows that \(d\) has exactly one zero, which we denote by \(\boldsymbol{n}_{*}\).

**Step 4. We now show that \(\mathfrak{p}=\{f\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}}):f(\boldsymbol{n}_{*})=0\}\).** That \(\mathfrak{p}\subset\{f\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}}):f(\boldsymbol{n}_{*})=0\}\) is clear. Let \(f\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\) be such that \(f(\boldsymbol{n}_{*})=0\).
Define \(g:\mathbb{Z}^{\mathfrak{d}}\to\mathbb{C}\) by \[g(\boldsymbol{n})=\left\{\begin{array}{cl}\frac{f(\boldsymbol{n})}{d( \boldsymbol{n})}&\mbox{if }\boldsymbol{n}\neq\boldsymbol{n}_{*},\\ 0&\mbox{if }\boldsymbol{n}=\boldsymbol{n}_{*}.\end{array}\right.\] Then \(f=dg\) (note that \(f(\boldsymbol{n})=d(\boldsymbol{n})g(\boldsymbol{n})\) for \(\boldsymbol{n}\neq\boldsymbol{n}_{*}\) follows from the definition of \(g\), and \(f(\boldsymbol{n}_{*})=d(\boldsymbol{n}_{*})g(\boldsymbol{n}_{*})\) too since both sides are \(0\)). As the \(h\) from Step 2 is in \(\mathfrak{p}\), there exists a \(k\in\mathcal{S}^{\prime}(Z^{\mathfrak{d}})\) such that \(h=kd\), and so for all \(\boldsymbol{n}\in\mathbb{Z}^{\mathfrak{d}}\), we get \(\sqrt{|d(\boldsymbol{n})|}e^{i\theta(\boldsymbol{n})/2}=|d(\boldsymbol{n})|e ^{i\theta(\boldsymbol{n})}k(\boldsymbol{n})\), giving \(1=|d(\boldsymbol{n})||k(\boldsymbol{n})|^{2}\). Hence for \(\boldsymbol{n}\neq\boldsymbol{n}_{*}\), \(\frac{1}{|d(\boldsymbol{n})|}=|k(\boldsymbol{n})|^{2}\leq M(1+|\boldsymbol{n} |)^{\mathfrak{m}}\) for some \(M>0\) and a nonnegative integer \(\mathfrak{m}\). This estimate shows that \(g\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathfrak{d}})\), and hence \(f=gd\in d\mathcal{S}^{\prime}(\mathbb{Z}^{\mathfrak{d}})=\langle d\rangle= \mathfrak{p}\). ### Krull dimension **Definition 4.2**.: The _Krull dimension_ of a commutative ring \(R\) is the supremum of the lengths of chains of distinct proper prime ideals of \(R\). Recall that the Hardy algebra \(H^{\infty}\) is the Banach algebra of bounded and holomorphic functions on the unit disc \(\mathbb{D}:=\{z\in\mathbb{C}:|z|<1\}\), with pointwise operations and the supremum norm \(\|\cdot\|_{\infty}\). In [9], von Renteln showed that the Krull dimension of \(H^{\infty}\) is infinite. We adapt the idea given in [9], to show that the Krull dimension of \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathfrak{d}})\) is infinite too. A key ingredient of the proof in [9] was the use of a canonical factorisation of \(H^{\infty}\) elements used to create ideals with zeroes at prescribed locations with prescribed multiplicities. Instead, we will look at the zero set in \(\mathbb{Z}^{d}\) for \(f\in\mathcal{S}^{\prime}(\mathbb{Z}^{d})\), and use the notion of 'zero-order' introduced below. If \(f\in\mathcal{S}^{\prime}(\mathbb{Z}^{d})\) and \(\boldsymbol{n}=(\mathtt{n}_{1},\cdots,\mathtt{n}_{d})\in\mathbb{Z}^{d}\) is such that \(f(\boldsymbol{n})=0\), then we define the _zero-order_\(m(f,\boldsymbol{n})\) by \[m(f,\boldsymbol{n})\!=\!\min_{1\leq\mathtt{k}\leq d}\max\Big{\{}\mathtt{i}\! \in\!\mathbb{N}:\begin{array}{l}f(\mathtt{n}_{1},\cdots,\mathtt{n}_{\mathtt{ k}-1},\mathtt{n}_{\mathtt{k}}\!+\!\mathtt{j},\mathtt{n}_{\mathtt{k}+1}, \cdots,\mathtt{n}_{d})=0\\ \text{whenever }0\leq\mathtt{j}\leq\mathtt{i}-1\end{array}\Big{\}}.\] If \(f(\mathtt{n}_{1},\cdots,\mathtt{n}_{\mathtt{k}-1},\mathtt{n}_{\mathtt{k}}\!+ \!\mathtt{j},\mathtt{n}_{\mathtt{k}+1},\cdots,\mathtt{n}_{d})=0\) for all \(\mathtt{j}\in\mathbb{N}\cup\{0\}\), and all \(\mathtt{k}\in\{1,\cdots,\mathtt{d}\}\), then we set \(m(f,\boldsymbol{n})=\infty\). If \(f(\boldsymbol{n})\neq 0\), then we set \(m(f,\boldsymbol{n})=0\). Analogous to the multiplicity of a zero of a (not identically vanishing) holomorphic function, the zero-order satisfies the following property. * If \(f,g\in\mathcal{S}^{\prime}(\mathbb{Z}^{d})\) and \(\boldsymbol{n}\in\mathbb{Z}^{d}\), then \(m(f+g,\boldsymbol{n})\geq\min\{m(f,\boldsymbol{n}),m(g,\boldsymbol{n})\}\). 
The multiplicity of a zero \(\zeta\) of the pointwise product of two holomorphic functions is the sum of the multiplicities of \(\zeta\) as a zero of each of the two holomorphic functions. For the zero-order, we have the following instead:

* If \(f,g\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\) and \(\boldsymbol{n}\in\mathbb{Z}^{\mathtt{d}}\), then \(m(fg,\boldsymbol{n})\geq\max\{m(f,\boldsymbol{n}),m(g,\boldsymbol{n})\}\).

We will use the following known result; see [3, Theorem, §0.16, p. 6].

**Proposition 4.3**.: _If \(\mathtt{i}\) is an ideal in a ring \(R\), \(M\subset R\) is a set that is closed under multiplication, and \(M\cap\mathtt{i}=\emptyset\), then there exists an ideal \(\mathfrak{p}\) such that \(\mathtt{i}\subset\mathfrak{p}\) and \(\mathfrak{p}\cap M=\emptyset\), with \(\mathfrak{p}\) maximal with respect to these properties. Moreover, such an ideal \(\mathfrak{p}\) is necessarily prime._

**Theorem 4.4**.: _The Krull dimension of \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\) is infinite._

Proof.: For \(\mathtt{i}\in\{1,\cdots,\mathtt{d}\}\), let \(\boldsymbol{e}_{\mathtt{i}}\in\mathbb{Z}^{\mathtt{d}}\) be the vector all of whose components are zeroes except for the \(\mathtt{i}^{\text{th}}\) one, which equals \(1\). For \(\mathtt{n}\in\mathbb{N}\), define \(f_{\mathtt{n}}\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\) by \[f_{\mathtt{n}}(2^{\mathtt{k}}\boldsymbol{e}_{1}+\mathtt{j}\boldsymbol{e}_{\mathtt{i}})=0\text{ if }\mathtt{k}\in\mathbb{N}\cup\{0\},\,1\leq\mathtt{i}\leq\mathtt{d},\,0\leq\mathtt{j}\leq\mathtt{k}^{\mathtt{n}+1},\] \[f_{\mathtt{n}}(\boldsymbol{m})=1\text{ if }\boldsymbol{m}\not\in\{2^{\mathtt{k}}\boldsymbol{e}_{1}+\mathtt{j}\boldsymbol{e}_{\mathtt{i}}:\mathtt{k}\in\mathbb{N}\cup\{0\},\,1\leq\mathtt{i}\leq\mathtt{d},\,0\leq\mathtt{j}\leq\mathtt{k}^{\mathtt{n}+1}\}.\] Note that \(m(f_{\mathtt{n}},2^{\mathtt{k}}\boldsymbol{e}_{1})\geq\mathtt{k}^{\mathtt{n}+1}\), but for each fixed \(\mathtt{n}\in\mathbb{N}\), there exists a \(\mathtt{K}_{\mathtt{n}}\in\mathbb{N}\cup\{0\}\) such that the gap between the indices, \[2^{\mathtt{k}+1}-2^{\mathtt{k}}=2^{\mathtt{k}}>\mathtt{k}^{\mathtt{n}+1}\text{ for all }\mathtt{k}>\mathtt{K}_{\mathtt{n}},\] and so \(m(f_{\mathtt{n}},2^{\mathtt{k}}\boldsymbol{e}_{1})=\mathtt{k}^{\mathtt{n}+1}\) for all \(\mathtt{k}>\mathtt{K}_{\mathtt{n}}\). Hence \[\lim_{\mathtt{k}\to\infty}\frac{m(f_{\mathtt{n}},2^{\mathtt{k}}\boldsymbol{e}_{1})}{\mathtt{k}^{\mathtt{n}}}=\infty\quad\text{ and }\quad\lim_{\mathtt{k}\to\infty}\frac{m(f_{\mathtt{n}},2^{\mathtt{k}}\boldsymbol{e}_{1})}{\mathtt{k}^{\mathtt{n}+1}}=1<\infty. \tag{1}\] Let \[\mathtt{i}_{*}:=\{f\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}}):\exists\,\mathtt{k}_{0}(f)\in\mathbb{N}\cup\{0\}\text{ such that }\forall\,\mathtt{k}>\mathtt{k}_{0}(f),\;f(2^{\mathtt{k}}\boldsymbol{e}_{1})=0\}.\] The set \(\mathtt{i}_{*}\) is nonempty since \(0\in\mathtt{i}_{*}\). Clearly \(\mathtt{i}_{*}\) is closed under addition, and \(fg\in\mathtt{i}_{*}\) whenever \(f\in\mathtt{i}_{*}\) and \(g\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\). So \(\mathtt{i}_{*}\) is an ideal of \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\). For \(\mathtt{n}\in\mathbb{N}\), define \[\mathtt{i}_{\mathtt{n}} = \Big\{f\in\mathtt{i}_{*}:\lim_{\mathtt{k}\to\infty}\frac{m(f,2^{\mathtt{k}}\boldsymbol{e}_{1})}{\mathtt{k}^{\mathtt{n}}}=\infty\Big\},\] \[M_{\mathtt{n}} = \Big\{f\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}}):\sup_{\mathtt{k}\in\mathbb{N}}\frac{m(f,2^{\mathtt{k}}\boldsymbol{e}_{1})}{\mathtt{k}^{\mathtt{n}}}<\infty\Big\}.\] Clearly \(f_{\mathtt{n}}\in\mathtt{i}_{\mathtt{n}}\), and so \(\mathtt{i}_{\mathtt{n}}\) is not empty.
Using (P1), we see that if \(f,g\in{\tt i}_{\tt n}\), then \(f+g\in{\tt i}_{\tt n}\). If \(g\in\mathcal{S}^{\prime}(\mathbb{Z}^{\tt d})\) and \(f\in{\tt i}_{\tt n}\), then (P2) implies that \(fg\in{\tt i}_{\tt n}\). Hence \({\tt i}_{\tt n}\) is an ideal of \(\mathcal{S}^{\prime}(\mathbb{Z}^{\tt d})\). The identity element \(1_{\mathbb{Z}^{\tt d}}\in M_{\tt n}\) for all \({\tt n}\in\mathbb{N}\). If \(f,g\in M_{\tt n}\), then it follows from (P2) that \(fg\in M_{\tt n}\). Thus \(M_{\tt n}\) is a nonempty multiplicatively closed subset of \(\mathcal{S}^{\prime}(\mathbb{Z}^{\tt d})\). It is easy to check that for all \({\tt n}\in\mathbb{N}\), \({\tt i}_{{\tt n}+1}\subset{\tt i}_{\tt n}\) and \(M_{\tt n}\subset M_{{\tt n}+1}\). We now prove that the inclusions are strict for each \({\tt n}\in\mathbb{N}\). From (1), it follows that \(f_{\tt n}\in{\tt i}_{\tt n}\) but \(f_{\tt n}\not\in{\tt i}_{{\tt n}+1}\). Also \(f_{\tt n}\in M_{{\tt n}+1}\) and \(f_{\tt n}\not\in M_{\tt n}\). Next we show that \({\tt i}_{\tt n}\cap M_{\tt n}=\emptyset\). Indeed, if \(f\in{\tt i}_{\tt n}\cap M_{\tt n}\), then \[\infty=\lim_{{\tt k}\to\infty}\frac{m(f,2^{\tt k}\boldsymbol{e}_{1})}{{\tt k }^{{\tt n}}}=\limsup_{{\tt k}\to\infty}\frac{m(f,2^{\tt k}\boldsymbol{e}_{1})} {{\tt k}^{{\tt n}}}\leq\sup_{{\tt k}\in\mathbb{N}}\frac{m(f,2^{\tt k} \boldsymbol{e}_{1})}{{\tt k}^{{\tt n}}}<\infty,\] a contradiction. But \({\tt i}_{\tt n}\cap M_{{\tt n}+1}\neq\emptyset\), since \(f_{\tt n}\in{\tt i}_{\tt n}\) and \(f_{\tt n}\in M_{{\tt n}+1}\). We will now show that the Krull dimension of \(\mathcal{S}^{\prime}(\mathbb{Z}^{\tt d})\) is infinite by showing that for all \({\tt N}\in\mathbb{N}\), we can construct a chain of strictly decreasing prime ideals \(\mathfrak{p}_{{\tt N}+1}\subsetneq\mathfrak{p}_{{\tt N}}\subset\cdots\subsetneq \mathfrak{p}_{2}\subsetneq\mathfrak{p}_{1}\) in \(\mathcal{S}^{\prime}(\mathbb{Z}^{\tt d})\). Fix an \(\mathtt{N}\in\mathbb{N}\). Applying Proposition 4.3 by taking \(\mathfrak{i}=\mathfrak{i}_{\mathtt{N}+1}\) and \(M=M_{\mathtt{N}+1}\), we obtain the existence of a prime ideal \(\mathfrak{p}=\mathfrak{p}_{\mathtt{N}+1}\) in \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\), which satisfies \(\mathfrak{i}_{\mathtt{N}+1}\subset\mathfrak{p}_{\mathtt{N}+1}\) and \(\mathfrak{p}_{\mathtt{N}+1}\cap M_{\mathtt{N}+1}=\emptyset\). We claim the ideal \(\mathfrak{i}_{\mathtt{N}}+\mathfrak{p}_{\mathtt{N}+1}\) of \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\) satisfies \((\mathfrak{i}_{\mathtt{N}}+\mathfrak{p}_{\mathtt{N}+1})\cap M_{\mathtt{N}}=\emptyset\). Let \(h=f+g\in\mathfrak{i}_{\mathtt{N}}+\mathfrak{p}_{\mathtt{N}+1}\), where \(f\in\mathfrak{i}_{\mathtt{N}}\) and \(g\in\mathfrak{p}_{\mathtt{N}+1}\). Since \(g\in\mathfrak{p}_{\mathtt{N}+1}\), by the construction of \(\mathfrak{p}_{\mathtt{N}+1}\) it follows that \(g\not\in M_{\mathtt{N}+1}\). But \(M_{\mathtt{N}}\subset M_{\mathtt{N}+1}\), and so \(g\not\in M_{\mathtt{N}}\) as well. 
Thus there exists a subsequence \((\mathfrak{k}_{\mathfrak{j}})_{\mathfrak{j}\in\mathbb{N}}\) of \((\mathfrak{k})_{\mathfrak{k}\in\mathbb{N}}\) such that \[\lim_{\mathfrak{j}\to\infty}\frac{m(g,2^{\mathfrak{k}_{j}}\boldsymbol{e}_{1}) }{\mathfrak{k}_{\mathfrak{j}}^{\mathtt{N}}}=\infty.\] From (P1), we obtain \[\frac{m(h,2^{\mathfrak{k}_{j}}\boldsymbol{e}_{1})}{\mathfrak{k}_{\mathfrak{j} }^{\mathtt{N}}}\geq\min\Big{\{}\frac{m(f,2^{\mathfrak{k}_{j}}\boldsymbol{e}_{ 1})}{\mathfrak{k}_{\mathfrak{j}}^{\mathtt{N}}},\frac{m(g,2^{\mathfrak{k}_{j}} \boldsymbol{e}_{1})}{\mathfrak{k}_{\mathfrak{j}}^{\mathtt{N}}}\Big{\}}.\] As \(f\in\mathfrak{i}_{\mathtt{N}}\), it follows that \[\sup_{\mathfrak{j}\in\mathbb{N}}\frac{m(h,2^{\mathfrak{k}_{j}}\boldsymbol{e}_ {1})}{\mathfrak{k}_{\mathfrak{j}}^{\mathtt{N}}}\geq\min\Big{\{}\limsup_{ \mathfrak{j}\to\infty}\frac{m(f,2^{\mathfrak{k}_{j}}\boldsymbol{e}_{1})}{ \mathfrak{k}_{\mathfrak{j}}^{\mathtt{N}}},\limsup_{\mathfrak{j}\to\infty} \frac{m(g,2^{\mathfrak{k}_{j}}\boldsymbol{e}_{1})}{\mathfrak{k}_{\mathfrak{j} }^{\mathtt{N}}}\Big{\}}\geq\infty.\] Thus \(h\not\in M_{\mathtt{N}}\). Consequently, \((\mathfrak{i}_{\mathtt{N}}+\mathfrak{p}_{\mathtt{N}+1})\cap M_{\mathtt{N}}=\emptyset\). Clearly \(\mathfrak{i}_{\mathtt{N}}\subset\mathfrak{i}_{\mathtt{N}}+\mathfrak{p}_{ \mathtt{N}+1}\). Applying Proposition 4.3 again, now taking \(\mathfrak{i}=\mathfrak{i}_{\mathtt{N}}+\mathfrak{p}_{\mathtt{N}+1}\) and \(M=M_{\mathtt{N}}\), we obtain the existence of a prime ideal \(\mathfrak{p}=\mathfrak{p}_{\mathtt{N}}\) in \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\) such that \(\mathfrak{i}_{\mathtt{N}}+\mathfrak{p}_{\mathtt{N}+1}\subset\mathfrak{p}_{ \mathtt{N}}\) and \(\mathfrak{p}_{\mathtt{N}}\cap M_{\mathtt{N}}=\emptyset\). Thus \(\mathfrak{p}_{\mathtt{N}+1}\subset\mathfrak{i}_{\mathtt{N}}+\mathfrak{p}_{ \mathtt{N}+1}\subset\mathfrak{p}_{\mathtt{N}}\). The first inclusion is strict as \(f_{\mathtt{N}}\in\mathfrak{i}_{\mathtt{N}}\subset\mathfrak{i}_{\mathtt{N} }+\mathfrak{p}_{\mathtt{N}+1}\). But \(f_{\mathtt{N}}\not\in\mathfrak{p}_{\mathtt{N}+1}\) (since \(f_{\mathtt{N}}\in M_{\mathtt{N}+1}\) and \(\mathfrak{p}_{\mathtt{N}+1}\cap M_{\mathtt{N}+1}=\emptyset\) by the construction of \(\mathfrak{p}_{\mathtt{N}+1}\)). Thus \(\mathfrak{p}_{\mathtt{N}+1}\subsetneq\mathfrak{p}_{\mathtt{N}}\). Now consider the ideal \(\mathfrak{i}:=\mathfrak{i}_{\mathtt{N}-1}+\mathfrak{p}_{\mathtt{N}}\supset \mathfrak{i}_{\mathtt{N}-1}\) of \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\) and the multiplicatively closed set \(M:=M_{\mathtt{N}-1}\) of \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathtt{d}})\). Similar to the argument given above, we show below that \(\mathfrak{i}\cap M=(\mathfrak{i}_{\mathtt{N}-1}+\mathfrak{p}_{\mathtt{N}}) \cap M_{\mathtt{N}-1}=\emptyset\). Let \(h=f+g\in\mathfrak{i}_{\mathtt{N}-1}+\mathfrak{p}_{\mathtt{N}}\), where \(f\in\mathfrak{i}_{\mathtt{N}-1}\) and \(g\in\mathfrak{p}_{\mathtt{N}}\). Since \(g\in\mathfrak{p}_{\mathtt{N}}\), by the construction of \(\mathfrak{p}_{\mathtt{N}}\), \(g\not\in M_{\mathtt{N}}\supset M_{\mathtt{N}-1}\), and so \(g\not\in M_{\mathtt{N}-1}\). 
Thus there exists a subsequence \((\mathfrak{k}_{\mathfrak{j}})_{\mathfrak{j}\in\mathbb{N}}\) of \((\mathfrak{k})_{\mathfrak{k}\in\mathbb{N}}\) such that \[\lim_{\mathfrak{j}\to\infty}\frac{m(g,2^{\mathfrak{k}_{j}}\boldsymbol{e}_{1}) }{\mathfrak{k}_{\mathfrak{j}}^{\mathtt{N}-1}}=\infty.\] As \(f\in\mathfrak{i}_{\mathbb{N}-1}\), \[\sup_{\mathfrak{j}\in\mathbb{N}}\frac{m(h,2^{\mathfrak{k}_{j}}\boldsymbol{e}_{ 1})}{\mathfrak{k}_{j}^{\mathbb{N}-1}}\geq\min\Big{\{}\limsup_{\mathfrak{j} \to\infty}\frac{m(f,2^{\mathfrak{k}_{j}}\boldsymbol{e}_{1})}{\mathfrak{k}_{j}^ {\mathbb{N}-1}},\limsup_{\mathfrak{j}\to\infty}\frac{m(g,2^{\mathfrak{k}_{j}} \boldsymbol{e}_{1})}{\mathfrak{k}_{j}^{\mathbb{N}-1}}\Big{\}}\geq\infty.\] Thus \(h\not\in M_{\mathbb{N}-1}\). So \((\mathfrak{i}_{\mathbb{N}-1}+\mathfrak{p}_{\mathbb{N}})\cap M_{\mathbb{N}-1}=\emptyset\). By Proposition 4.3, taking \(\mathfrak{i}=\mathfrak{i}_{\mathbb{N}-1}+\mathfrak{p}_{\mathbb{N}}\supset \mathfrak{i}_{\mathbb{N}-1}\) and \(M=M_{\mathbb{N}-1}\), there exists a prime ideal \(\mathfrak{p}=\mathfrak{p}_{\mathbb{N}-1}\) in \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathfrak{d}})\) such that \(\mathfrak{i}_{\mathbb{N}-1}+\mathfrak{p}_{\mathbb{N}}\subset\mathfrak{p}_{ \mathbb{N}-1}\) and \(\mathfrak{p}_{\mathbb{N}-1}\cap M_{N-1}=\emptyset\). Thus \(\mathfrak{p}_{\mathbb{N}}\subset\mathfrak{i}_{\mathbb{N}-1}+\mathfrak{p}_{ \mathbb{N}}\subset\mathfrak{p}_{\mathbb{N}-1}\), and again the first inclusion is strict (because \(f_{\mathbb{N}-1}\in\mathfrak{i}_{\mathbb{N}-1}\subset\mathfrak{i}_{\mathbb{N} -1}+\mathfrak{p}_{\mathbb{N}}\), \(f_{\mathbb{N}-1}\in M_{\mathbb{N}}\) and \(M_{\mathbb{N}}\cap\mathfrak{p}_{\mathbb{N}}=\emptyset\)). Proceeding in this manner, we obtain the chain of distinct prime ideals \(\mathfrak{p}_{\mathbb{N}+1}\subsetneq\mathfrak{p}_{\mathbb{N}}\subsetneq \mathfrak{p}_{\mathbb{N}-1}\subsetneq\cdots\subsetneq\mathfrak{p}_{1}\). in \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathfrak{d}})\). As \(\mathbb{N}\in\mathbb{N}\) was arbitrary, it follows that the Krull dimension of \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathfrak{d}})\) is infinite. ### Weak Krull dimension Recall the following definition from [6]: **Definition 4.5**.: The _weak Krull dimension_ of a commutative ring \(R\) is the supremum of the lengths of chains of distinct proper finitely generated prime ideals of \(R\). **Theorem 4.6**.: _The weak Krull dimension of \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathfrak{d}})\) is \(1\)._ Proof.: Let \(\mathfrak{p}_{1}\) and \(\mathfrak{p}_{2}\) be finitely generated proper prime ideals in \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathfrak{d}})\) such that \(\mathfrak{p}_{1}\subset\mathfrak{p}_{2}\). For each \(\mathfrak{i}\in\{1,2\}\), by Proposition 2.2, there exists an \(\boldsymbol{n}_{\mathfrak{i}}\in\mathbb{Z}^{\mathfrak{d}}\) such that \(\mathfrak{p}_{\mathfrak{i}}=\{f\in\mathcal{S}^{\prime}(\mathbb{Z}^{\mathfrak{d }}):f(\boldsymbol{n}_{\mathfrak{i}})=0\}\). But as \(\mathfrak{p}_{1}\subset\mathfrak{p}_{2}\), it follows that \(\boldsymbol{n}_{1}=\boldsymbol{n}_{2}\) (by considering the function which is zero at all \(\boldsymbol{n}\in\mathbb{Z}^{\mathfrak{d}}\setminus\{\boldsymbol{n}_{2}\}\) and equal to \(1\) at \(\boldsymbol{n}_{2}\)), and so \(\mathfrak{p}_{1}=\mathfrak{p}_{2}\). So the weak Krull dimension of \(\mathcal{S}^{\prime}(\mathbb{Z}^{\mathfrak{d}})\) is \(1\).
2309.01608
Supervised dimensionality reduction for multiple imputation by chained equations
Multivariate imputation by chained equations (MICE) is one of the most popular approaches to address missing values in a data set. This approach requires specifying a univariate imputation model for every variable under imputation. The specification of which predictors should be included in these univariate imputation models can be a daunting task. Principal component analysis (PCA) can simplify this process by replacing all of the potential imputation model predictors with a few components summarizing their variance. In this article, we extend the use of PCA with MICE to include a supervised aspect whereby information from the variables under imputation is incorporated into the principal component estimation. We conducted an extensive simulation study to assess the statistical properties of MICE with different versions of supervised dimensionality reduction and we compared them with the use of classical unsupervised PCA as a simpler dimensionality reduction technique.
Edoardo Costantini, Kyle M. Lang, Klaas Sijtsma
2023-09-04T13:46:41Z
http://arxiv.org/abs/2309.01608v2
# Supervised dimensionality reduction for multiple imputation by chained equations ###### Abstract Multivariate imputation by chained equations (MICE) is one of the most popular approaches to address missing values in a data set. This approach requires specifying a univariate imputation model for every variable under imputation. The specification of which predictors should be included in these univariate imputation models can be a daunting task. Principal component analysis (PCA) can simplify this process by replacing all of the potential imputation model predictors with a few components summarizing their variance. In this article, we extend the use of PCA with MICE to include a supervised aspect whereby information from the variables under imputation is incorporated into the principal component estimation. We conducted an extensive simulation study to assess the statistical properties of MICE with different versions of supervised dimensionality reduction and we compared them with the use of classical unsupervised PCA as a simpler dimensionality reduction technique. Corresponding author's email address: [email protected]; ## 1 Introduction Multiple Imputation (MI) is a state-of-the-art missing data treatment in today's data analysis (Schafer and Graham, 2002; Van Buuren, 2018, p. 30). Multiple imputations is often implemented through the multivariate imputation by chained equations approach (MICE, Van Buuren and Oudshoorn, 2000), which can accommodate a wide range of data measurement levels. This flexibility comes from the possibility of modeling the multivariate joint density of the variables with missing values through a collection of conditional densities for every variable under imputation. MICE requires specifying a different univariate imputation model for each variable under imputation, which entails deciding on the imputation model form and which predictors to use. The first decision is usually guided by the measurement level of the variables under imputation. For example, continuous variables can be imputed using a linear regression model, while binary variables can be imputed using logistic regression. The second decision concerns which and how many predictors to include in the imputation models and therefore it is more difficult. The general recommendation has been to follow an inclusive strategy (Collins, Schafer, & Kam, 2001), meaning that as many predictors as possible should be included in the imputation models. Using as much information as possible from the data leads to multiple imputations that have minimal bias and maximal efficiency (Collins et al., 2001; Meng, 1994). Furthermore, including more predictors in the imputation models makes the missing at random assumption (MAR) more plausible (Collins et al., 2001, p. 339). Finally, if the imputation model omits variables that are part of the analysis model fitted to the data after imputation, the parameter estimates might be biased (Enders, 2010, p. 229) and estimated confidence intervals might be too wide (Little & Rubin, 2002, p. 218). As a result, including more predictors in the imputation models increases the range of analysis models that can be estimated with a given set of imputations (Meng, 1994). Despite its advantages, the inclusive strategy easily results in singularity issues (Hastie, Tibshirani, & Friedman, 2009, p. 46) when estimating imputation models. Consequently, researchers performing MI often face difficult choices on how many and which variables to use as predictors. 
High-dimensional prediction models offer an opportunity to specify the imputation models automatically and to include more predictors than traditionally possible. For example, ridge regression (Hoerl & Kennard, 1970) can estimate regression models with hundreds of predictors; lasso regression (Tibshirani, 1996) can perform data-driven variable selection; decision trees (e.g., Breiman, 2001) can consider hundreds of variables for their splitting rules; and principal component analysis (PCA, Jolliffe, 2002) can summarize a large set of predictors using a few independent linear combinations. All of these modeling strategies have been implemented in combination with MICE (Burgette & Reiter, 2010; Deng, Chang, Ido, & Long, 2016; Doove, Van Buuren, & Dusseldorp, 2014; Howard, Rhemtulla, & Little, 2015; Shah, Bartlett, Carpenter, Nicholas, & Hemingway, 2014; Zhao & Long, 2016). Costantini, Lang, Reeskens, and Sijtsma (2023) compared their performance in terms of estimation bias and confidence interval coverage when applied to data with missing values. They found that using PCA to create summaries of the many possible imputation model predictors performs particularly well. In a follow-up study, Costantini, Lang, Sijtsma, and Reeskens (2023) explored different ways of using PCA with the MICE algorithm and found that updating the principal components (PCs) for the imputation of every variable at every iteration provided the lowest bias and the lowest deviation from nominal confidence interval coverage. However, these results relied heavily on the number of components computed. With their simulation study, Costantini, Lang, Sijtsma, and Reeskens (2023) showed that to achieve small bias and satisfactory coverage, a researcher imputing the data using PCA to aid imputation model specification should retain at least as many PCs as the number of latent variables in the data generating model. This is undesirable as researchers usually do not know the true number of latent variables. PCA is an unsupervised dimensionality reduction technique that summarizes the variability of a set of \(P\) variables \(\{\mathbf{x}_{1},\ldots,\mathbf{x}_{p}\}\) measured on \(N\) observations with a set of PCs, with \(Q<P\). The PCs can be used in any regression model as a replacement for the original predictors, an approach known as Principal Component Regression (PCR; Jolliffe, 2002, pp. 168-173). PCR addresses possible multicollinearity issues afflicting the model. However, the PCs obtained by PCR cannot take into account variables that are not part of the set \(\{\mathbf{x}_{1},\ldots,\mathbf{x}_{p}\}\), a feature that can result in PCs that are unrelated or only weakly related to the dependent variable, which by definition is not included in the set of predictors. Contrary to PCA, supervised dimensionality reduction (SDR) techniques use the outcome variable to guide the computation so that the resulting PCs are both good representations of the predictor variables and strongly associated with the dependent variable (e.g., Bair, Hastie, Paul, & Tibshirani, 2006; De Jong & Kiers, 1992; Wold, 1975). Using SDR within MICE might relax the need to know the number of latent variables in the data-generating model described by Costantini, Lang, Sijtsma, and Reeskens (2023) for PCA. The purpose of this study is to evaluate how SDR techniques can improve upon unsupervised PCR as a univariate imputation model in MICE. In this study, we considered two questions. 
First, what are the statistical properties (bias, coverage, confidence interval width) of parameters estimated from data treated with the MICE algorithm using different versions of SDR as the univariate imputation models? Second, can using SDR in MICE relax the PCA requirement of using at least as many PCs as the number of latent variables in the data-generating model? We used a Monte Carlo simulation study to explore the performance of different versions of SDR with the MICE algorithm. The article is structured as follows. In Section 2, we describe the MICE algorithm, unsupervised and supervised dimensionality reduction, different versions of SDR, and we propose uses of SDR as a univariate imputation method for MICE. In Section 3, we describe the Monte Carlo simulation study. Next, we discuss the main findings (Section 4), our ideas for future research directions (Section 5), and we provide concluding remarks (Section 6). ## 2 Imputation methods and algorithms We use the following notation. Indices and scalars are denoted by lowercase and uppercase letters. For example, \(i\) is an index enumerating iterations out of \(I\) total iterations (\(i\in\{1,\ldots,I\}\)). Vectors are written in bold lowercase while matrices are denoted by bold uppercase letters. The superscript \({}^{\prime}\) defines the transpose of a matrix. We use the subscript \(obs\) and \(mis\) to refer to the observed and missing elements in a vector or matrix. ### Multivariate imputation by chained equations Consider data set \(\mathbf{Z}\) with \(N\) rows and \(P\) columns \(\mathbf{z}_{1},\ldots,\mathbf{z}_{P}\). We assume that \(\mathbf{Z}\) is a random draw from a multivariate distribution \(f(\mathbf{Z}|\theta)\), where \(\theta\) is a vector of unknown parameters that completely specifies its multivariate distribution. Let the first \(J\) columns of \(\mathbf{Z}\) have missing values. MICE is an iterative algorithm for imputing multivariate missing data on a variable-by-variable basis. It obtains multiple imputations for the missing values by drawing from the variable-specific conditional distributions of the form: \[f(\mathbf{z}_{j}|\mathbf{Z}_{-j},\theta_{j}), \tag{1}\] where \(\mathbf{z}_{j}\) is a partially observed variable, \(\mathbf{Z}_{-j}\) is the collection of variables in \(\mathbf{Z}\) excluding \(\mathbf{z}_{j}\), and \(\theta_{j}\) is a vector of model parameters. The parameters \(\theta_{j}\) are specific to the respective conditional distributions and might not determine the unique _true_ joint distribution \(f(\mathbf{Z}|\theta)\). We refer to the conditional distributions in Equation 1 as (univariate) imputation models. The MICE algorithm starts by replacing the missing values in each \(\mathbf{z}_{j}\) with initial guesses (e.g., random draws from the observed values). At iteration \(i\), the MICE algorithm imputes successively variables \(\mathbf{z}_{1}\) to \(\mathbf{z}_{J}\) by taking draws from the following distributions: \[\theta_{j}^{(i)} \sim f(\theta_{j}|\mathbf{Z}_{j,\text{obs}},\mathbf{Z}_{-j}^{(i)}), \tag{2}\] \[\mathbf{z}_{j,\text{mis}}^{(i)} \sim f(\mathbf{z}_{j,\text{mis}}|\mathbf{Z}_{-j}^{(i)},\theta_{j}^ {(i)}) \tag{3}\] Equation 2 is the fully conditional posterior distribution defined by the product of an uninformative prior distribution for \(\theta_{j}\) and the likelihood of observing \(\mathbf{z}_{j,\text{obs}}\) under the imputation model for \(\mathbf{z}_{j}\). Equation 3 is the posterior predictive distribution from which updates of the imputations are drawn. 
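To make these two sampling steps concrete, the following is a minimal Python sketch of one chained-equations pass, assuming the simplest case in which every univariate imputation model is a normal linear regression and parameter uncertainty is propagated by refitting on a bootstrap resample. It illustrates the scheme in Equations 2 and 3 only; the function names (`impute_normal_boot`, `chained_imputation`), the fixed number of iterations, and the toy data are ours and not part of any package or of the method evaluated in this article.

```python
import numpy as np

rng = np.random.default_rng(0)

def impute_normal_boot(y_obs, X_obs, X_mis, rng):
    """One draw from an approximate posterior predictive of a normal linear
    imputation model, using a bootstrap refit to reflect parameter uncertainty."""
    n = len(y_obs)
    idx = rng.integers(0, n, n)                          # bootstrap resample
    Xb = np.column_stack([np.ones(n), X_obs[idx]])
    yb = y_obs[idx]
    beta, *_ = np.linalg.lstsq(Xb, yb, rcond=None)       # draw of theta_j
    sigma2 = np.sum((yb - Xb @ beta) ** 2) / max(n - Xb.shape[1], 1)
    Xm = np.column_stack([np.ones(len(X_mis)), X_mis])
    return Xm @ beta + rng.normal(0.0, np.sqrt(sigma2), len(X_mis))

def chained_imputation(Z, mis_mask, n_iter=10, rng=rng):
    """Variable-by-variable imputation: each pass updates every incomplete
    column given the most recent values of all other columns."""
    Z = Z.copy()
    incomplete = np.where(mis_mask.any(axis=0))[0]
    for j in incomplete:                                 # crude starting values
        Z[mis_mask[:, j], j] = np.nanmean(Z[~mis_mask[:, j], j])
    for _ in range(n_iter):
        for j in incomplete:
            m = mis_mask[:, j]
            others = np.delete(np.arange(Z.shape[1]), j)
            Z[m, j] = impute_normal_boot(Z[~m, j], Z[~m][:, others],
                                         Z[m][:, others], rng)
    return Z

# toy demo: 200 x 4 correlated data with roughly 20% values missing completely at random
Z_full = rng.multivariate_normal(np.zeros(4), 0.5 + 0.5 * np.eye(4), size=200)
mask = rng.random(Z_full.shape) < 0.2
Z_imp = chained_imputation(np.where(mask, np.nan, Z_full), mask)
```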
In both equations, \(\mathbf{Z}_{-j}^{(i)}\) is \((\mathbf{z}_{1}^{(i)},\ldots,\mathbf{z}_{j-1}^{(i)},\mathbf{z}_{j+1}^{(i-1)},\ldots,\mathbf{z}_{J}^{(i-1)},\mathbf{z}_{J+1},\ldots,\mathbf{z}_{P})\), meaning that at all times the most recently imputed values of all variables are used to impute other variables. After repeating the sampling steps described by Equation 2 and 3 for every variable under imputation, the algorithm moves to the next iteration and repeats the same sampling steps for all variables under imputation. The convergence of the algorithm is usually assessed by plotting the trends of the average imputations across iterations for different starting values. After convergence, the imputations are assumed to be samples from the target multivariate distribution. With this process, one can generate as many imputed data sets as desired. Finally, the analysis model used to answer a substantive researcher question is estimated on each imputed data set, and the parameter estimates are pooled using Rubin's rules (Rubin, 1987). For small values of \(P\), the researcher imputing the data can use all of the columns in \(\mathbf{Z}_{-j}\) as predictors in the univariate imputation model for \(\mathbf{z}_{j}\). As \(P\) grows larger, the imputer needs to decide which predictors to include and which to leave out, a task that can require a considerable amount of expertise in both statistical modeling techniques and the field of the substantive research question. By summarizing the information in all of the possible predictors with a few linear combinations of the columns of \(\mathbf{Z}_{-j}\), PCA and other dimensionality reduction techniques provide an accessible, data-driven way of specifying the imputation models. ### Principal component analysis PCA is a dimensionality reduction technique that finds a low-dimensional representation of the variables contained in an \(N\times P\) data matrix \(\mathbf{X}\) with minimal loss of information. It does so by finding a \(P\times Q\) matrix of weights \(\mathbf{W}\) that defines \(Q\) independent linear combinations of the columns of \(\mathbf{X}^{1}\) with maximum variance (with \(Q\leq P\)). These weights project the columns of \(\mathbf{X}\) onto a lower-dimensional subspace to produce the \(N\times Q\) matrix \(\mathbf{T}=\mathbf{X}\mathbf{W}\) that summarizes the information in \(\mathbf{X}\). The \(Q\) columns of \(\mathbf{T}\) are called the principal components (PCs) of \(\mathbf{X}\). The first PC of \(\mathbf{X}\) is the linear combination of the columns of \(\mathbf{X}\) with the largest variance: \[\mathbf{t}_{1}=\mathbf{x}_{1}w_{11}+\mathbf{x}_{2}w_{12}+\cdots+\mathbf{x}_{P}w _{1P}=\mathbf{X}\mathbf{w}_{1}, \tag{4}\] with \(\mathbf{w}_{1}\) being the \(P\times 1\) vector of weights that comprises the first column of \(\mathbf{W}\). The second principal component (\(\mathbf{t}_{2}\)) is defined by finding the vector of weights \(\mathbf{w}_{2}\) giving the linear combination of \(\mathbf{x}_{1},\ldots,\mathbf{x}_{P}\) with maximal variance out of all the linear combinations that are uncorrelated with \(\mathbf{t}_{1}\). Every subsequent column of \(\mathbf{T}\) can be understood in the same way: for example, \(\mathbf{t}_{3}\) is the linear combinations of \(\mathbf{x}_{1},\ldots,\mathbf{x}_{P}\) that has maximal variance out of all the linear combinations that are uncorrelated with \(\mathbf{t}_{1}\) and \(\mathbf{t}_{2}\). 
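As a small numerical illustration of this construction, the weight matrix \(\mathbf{W}\) can be obtained from the singular value decomposition of the column-standardized data; the resulting scores are mutually uncorrelated and their variances decrease from one component to the next. The toy example below is ours and only meant to make the definitions tangible.

```python
import numpy as np

rng = np.random.default_rng(1)
# toy data: six correlated variables, then column-wise standardization
X = rng.multivariate_normal(np.zeros(6), 0.5 + 0.5 * np.eye(6), size=500)
X = (X - X.mean(axis=0)) / X.std(axis=0)

# X = U S V'; the PCA weight matrix is W = V and the scores are T = X W = U S
U, S, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt.T
T = X @ W

# covariance of the scores: diagonal, with variances S**2 / (n - 1) in decreasing order
print(np.round(np.cov(T, rowvar=False), 3))
```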
As a result, all PCs are uncorrelated by definition and every subsequent PC has a lower variance than the preceding one. We can also think of PCA as the process of projecting the original data from a set of oblique vectors in a \(P\)-dimensional space to a set of orthogonal vectors in a Q-dimensional subspace. The weight vectors \(\mathbf{w}_{1},\ldots,\mathbf{w}_{Q}\) define the directions in which the \(N\) observations in \(\mathbf{x}_{1},\ldots,\mathbf{x}_{P}\) are projected. The projected values are the principal component scores \(\mathbf{T}\). Estimating PCA is the task of minimizing the criterion: \[\begin{split}(\mathbf{W},\mathbf{P})&=\operatorname {argmin}_{\mathbf{W},\mathbf{P}}\ \|\mathbf{X}-\mathbf{X}\mathbf{W}\mathbf{P}^{\prime}\|^{2}\\ &=\operatorname{argmin}_{\mathbf{W},\mathbf{P}}\ \|\mathbf{X}-\mathbf{T}\mathbf{P}^{\prime}\|^{2}\end{split} \tag{5}\] subject to the constraint \(\mathbf{P}^{\prime}\mathbf{P}=\mathbf{I}\), where \(\mathbf{P}^{\prime}\) provides the weights for estimating \(\mathbf{X}\) from \(\mathbf{T}\). In other words, PCA finds the matrices \(\mathbf{W}\) and \(\mathbf{P}\) that minimize the reconstruction error \(\mathbf{E}_{\mathbf{X}}=\mathbf{X}-\mathbf{X}\mathbf{W}\mathbf{P}^{\prime}\). ### Principal component regression PCR replaces the \(P\) predictors of a regression model with \(Q\) PCs extracted from those predictors. Given an outcome variable \(\mathbf{y}\) and a set of \(P\) predictors \(\mathbf{X}\), consider a standard regression model: \[\mathbf{y}=\mathbf{X}\beta+\epsilon, \tag{6}\] where \(\beta\) is a \(P\times 1\) vector of regression coefficients, and \(\epsilon\) is a \(N\times 1\) vector of independent normally distributed errors. With PCR we use \(Q\) PCs of \(\mathbf{X}\) in its place so that Equation 6 can be rewritten as: \[\mathbf{y}=\mathbf{T}\gamma+\epsilon, \tag{7}\] where \(\gamma\) is a \(Q\times 1\) vector of regression coefficients. The lower dimensionality of \(\mathbf{T}\) compared to \(\mathbf{X}\) and the independence of its columns allow Equation 7 to address the computational limitations of Equation 6 when \(P\) is large or when the variables in \(\mathbf{X}\) are highly correlated. The optimal number of components \(Q\) is never certain. A crucial feature of PCA is that the first PC explains the maximum amount of variance in all \(P\) predictors, the second PC explains the maximum variance of all \(P\) residuals, and so on. This means that the explained variance decreases as fast as the data allows as more PCs are retained. As a result, \(Q\) is usually taken to be much smaller than \(P\); if \(Q=P\), the variance in the predictors would be redistributed across \(P\) new predictors and this would defy the goal of PCA to reduce the number \(P\) considerably while retaining as much variance as possible. In practice, when estimating PCR, researchers rely on cross-validation (e.g., Vervloet, Van Deun, Van den Noortgate, & Ceulemans, 2016) to guide this decision. ### Multiple imputation with principal component regression Costantini, Lang, Sijtsma, and Reeskens (2023) found that the best way to incorporate PCR into MICE is to extract PCs at every iteration. When imputing \(\mathbf{z}_{j}\) in the \(i\)th iteration of MICE, the PCs can be estimated from \(\mathbf{Z}_{-j}^{(i)}\) and used as predictors in the univariate imputation model. 
Each univariate imputation model can then be defined as: \[f(\mathbf{z}_{j}|\mathbf{T}_{-j}^{(i)},\theta_{j}), \tag{8}\] where \(\mathbf{T}_{-j}^{(i)}\) is the matrix storing the PC scores estimated on \(\mathbf{Z}_{-j}^{(i)}\). The steps described in Algorithm 1 are followed to impute \(\mathbf{z}_{j}\) with PCR at every iteration. We refer to this use of PCR within MICE as MI-PCR. This MI-PCR incorporates uncertainty around the imputation model parameters using bootstrapping following the same principle as the 'imputation under the normal linear model with bootstrap' algorithm described by Van Buuren (2018, p. 69). MI-PCR allows the researcher imputating the data to include all predictors in the imputation model for every variable, bypassing the difficult model selection step, while preserving the advantages of an inclusive strategy. However, the performance of MI-PCR is highly sensitive to the number of PCs computed. Not using enough PCs to adequately represent the latent structure (i.e., using fewer PCs than the true number of latent variables) will produce poor imputations (Costantini, Lang, Sijtsma, & Reeskens, 2023). Furthermore, there is no guarantee the number of PCs that optimally represent \(\mathbf{Z}_{-j}^{(i)}\) will be good imputation model predictors as the PCs retained might be summarizing information that is unrelated to the variable under imputation \(\mathbf{z}_{j}\). Finally, performing PCA on large datasets involves demanding matrix operations. MI-PCR requires repeating these intensive manipulations for every variable under imputation and every iteration of the MICE algorithm. ### Multiple imputation with supervised dimensionality reduction SDR techniques represent an alternative to PCA that could obviate some of the limitations of MI-PCR outlined above. In particular, the _supervision_ of SDR methods should help computing PCs that are better predictors of the variables under imputation than the ones produced by PCA, and, as a corollary, it could also allow to retain fewer PCs than the number of latent variables in the data-generating model. In what follows we describe three alternative approaches to finding linear combinations of the predictors that do a good job of both summarizing the predictors and predicting the outcome variable. For each approach, we first describe how it works, and then we describe its implementation as a univariate imputation method in the MICE algorithm. #### 2.5.1 Supervised principal component regression Bair et al. (2006) proposed computing the PCs only on the subset of variables that are associated with the dependent variable. Their approach is straightforward: ``` 1:For a given \(\mathbf{z}_{j}\) variable under imputation, draw a bootstrap version \(\mathbf{z}_{j,\text{obs}}^{*}\) with replacement from the observed cases \(\mathbf{z}_{j,\text{obs}}\), and store as \(\mathbf{Z}_{-j,\text{obs}}^{*}\) the corresponding rows of \(\mathbf{Z}_{-j}^{(i)}\). 2:Center and scale \(\mathbf{Z}_{-j,\text{obs}}^{*}\) and store the result as \(\tilde{\mathbf{Z}}_{-j,\text{obs}}^{*}\) 3:Center and scale \(\mathbf{Z}_{-j,\text{mis}}^{(i)}\) based on the means and standard deviations of \(\mathbf{Z}_{-j,\text{obs}}^{*}\) and store the result as \(\tilde{\mathbf{Z}}_{-j,\text{mis}}\) 4:Center \(\mathbf{z}_{j,\text{obs}}^{*}\) on its mean value \(\bar{\mathbf{z}}_{j,\text{obs}}^{*}\) and store it in \(\tilde{\mathbf{z}}_{j,\text{obs}}^{*}\). 
5:Estimate \(\mathbf{W}\) and \(\mathbf{P}\) by the eigendecomposition of the cross-product matrix of \(\tilde{\mathbf{Z}}_{-j,\text{obs}}^{*}\) 6:Compute the first \(Q\) PCs as \(\mathbf{T}_{-j,\text{obs}}^{(i)}=\tilde{\mathbf{Z}}_{-j,\text{obs}}^{*} \mathbf{W}\) 7:Regress the mean-centered \(\hat{\mathbf{z}}_{j,\text{obs}}^{*}\) on \(\mathbf{T}_{-j,\text{obs}}^{(i)}\) and store the regression coefficients \(\beta\) 8:Estimate the residual error variance \(\sigma^{2}\) as the ratio between the residual sum of square (\(RSS\)) and the degrees of freedom (\(df\)): \[\sigma^{2} =RSS/df\] \[=\frac{\Sigma\left(\hat{\mathbf{z}}_{j,\text{obs}}^{*}-\mathbf{T} _{-j,\text{obs}}^{(i)}\beta\right)^{2}}{N-Q}\] 9:Obtain the predicted values for \(\mathbf{z}_{j,miss}\) by \[\hat{z}_{j,miss}=\hat{\mathbf{Z}}_{-j,\text{mis}}\mathbf{W}\beta\] 10:Obtain imputations by adding normally distributed errors scaled by \(\sigma^{2}\) to these predictions and by adding \(\bar{\mathbf{z}}_{j,\text{obs}}^{*}\) to center them appropriately. ``` **Algorithm 1** Imputation under the PCR model with bootstrap 1. Regress \(\mathbf{y}\) onto each column of \(\mathbf{X}\) via \(P\) separate simple linear regressions. Because the data are standardized, the regression coefficients of these simple linear regression are equivalent to correlation coefficients. The strength of the association is what matters in the predictive task, so we consider only the absolute value of the correlation and refer to it as \(\hat{\rho}\). 2. Define the subset \(\mathbf{X}_{s}\in\mathbf{X}\) by discarding all variables whose correlation \(\hat{\rho}\) is less than a selected threshold \(\rho_{s}\). 3. Use \(\mathbf{X}_{s}\) to estimate the PCs. 4. Use these PCs as independent variables in the PCR model. A key aspect of the method is that both the number of PCs and the threshold value \(\rho_{s}\) can be determined by cross-validation. We refer to this approach as supervised principal component regression (SPCR). In SPCR, the component weights are estimated by minimizing the same criterion as in Equation 5, but only a subset of relevant variables from \(\mathbf{X}\) is used for the computation. By doing so, SPCR effectively sets to 0 component weights for variables that are not relevant predictors of \(\mathbf{y}\). As a result, SPCR produces PCs that are better predictors of \(\mathbf{y}\) and improves the predictive performance of PCR. We refer to the approach of excluding variables that are uninteresting for the prediction of the dependent variable as _discrete_ supervision. A similar approach, known as Sparse PCA (Zou, Hastie, & Tibshirani, 2006), reduces the number of variables explicitly used in the PC computation by combining a lasso penalty with the PCA optimization criterion. Similarly to SPCR, Sparse PCA sets certain loadings to 0, but it does so to increase the interpretability of the resulting PCs, not to improve their predictive performance. This key difference makes SPCR a more suitable tool than Sparse PCA for aiding automatic imputation model specification. Therefore, we considered SPCR and not Sparse PCA as a means to reduce the dimensionality of the imputation models. In the context of imputation, SPCR can be used as a univariate imputation model in a similar way to PCR. 
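As an illustration of this screening-then-summarizing idea, the sketch below implements the four steps in plain numpy for a fixed threshold and a fixed number of components; in the method of Bair et al. (2006), and in MI-SPCR, both quantities would instead be chosen by cross-validation. The function name and default values are ours, not part of any package.

```python
import numpy as np

def spcr_fit_predict(X_obs, y_obs, X_new, threshold=0.1, n_comp=2):
    """Supervised PCR sketch: screen predictors on |corr(x_p, y)|,
    extract PCs from the retained columns, regress y on those PCs."""
    mu, sd = X_obs.mean(axis=0), X_obs.std(axis=0)
    Xc = (X_obs - mu) / sd
    yc = y_obs - y_obs.mean()
    rho = np.abs(Xc.T @ yc) / (len(yc) * y_obs.std())    # |correlation| with y
    keep = rho > threshold                               # discrete supervision
    # PCA (via SVD) on the screened, standardized predictors
    U, S, Vt = np.linalg.svd(Xc[:, keep], full_matrices=False)
    W = Vt[:n_comp].T
    T = Xc[:, keep] @ W
    gamma, *_ = np.linalg.lstsq(T, yc, rcond=None)       # PCR coefficients
    # apply the same screening, centering, and weights to new rows
    Xn = (X_new - mu) / sd
    return y_obs.mean() + Xn[:, keep] @ W @ gamma
```

In the imputation setting, \(\mathbf{z}_{j,\text{obs}}^{*}\) plays the role of \(y\) and the remaining columns of the data play the role of \(\mathbf{X}\), as in the algorithm that follows.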
For each partially observed \(\mathbf{z}_{j}\), with \(j\in\{1,\ldots,J\}\), the imputation model can be defined as: \[f(\mathbf{z}_{j}|\mathbf{T}_{s}^{(i)},\theta_{j}), \tag{9}\] where \(\mathbf{T}_{s}^{(i)}\) is the matrix of PCs computed on \(\mathbf{Z}_{s}^{(i)}\), the subset of variables with \(\hat{\rho}>\rho_{s}\), at the \(i\)th iteration of the MICE algorithm. The steps described in Algorithm 2 are followed to impute \(\mathbf{z}_{j}\) at every iteration. We refer to this use of SPCR within MICE as MI-SPCR. In our implementation of MI-SPCR, _K_-fold cross-validation (Hastie et al., 2009, pp. 241-245) is used to select \(\rho_{s}\) from a user-defined vector of possible values. For every threshold value in the interval \([0,1]\), all predictors of \(\mathbf{z}_{j,\text{obs}}^{*}\) in \(\mathbf{Z}_{-j,\text{obs}}^{*}\) with a correlation larger than the threshold form an active set of predictors. Then, \(Q\) PCs are extracted from each active set and used to predict \(\mathbf{z}_{j,\text{obs}}^{*}\) in a _K_-fold cross-validation procedure. The active set giving the lowest cross-validated prediction error is kept. As with MI-PCR, the number of components \(Q\) is considered fixed, but it can be selected by the same cross-validation procedure. Note that for a given number of components, only certain threshold values are allowed. We can compute \(Q\) components only if the data have at least \(Q\) columns. Therefore, the more components we want to estimate, the less restrictive \(\rho_{s}\) can be. If we ask for as many components as there are columns in the data, then \(\rho_{s}\) must be small enough to keep all columns of the data, making MI-SPCR equivalent to MI-PCR.

#### 2.5.2 Principal covariates regression

Principal covariates regression (PCovR, De Jong & Kiers, 1992) is an SDR approach that modifies the optimization criterion behind PCA to include information from the outcome variable in the optimization problem. PCovR looks for a low-dimensional representation of \(\mathbf{X}\) that accounts for the maximum amount of variation in both \(\mathbf{X}\) and \(\mathbf{y}\). To understand how PCovR differs from PCR consider the following decomposition of the data: \[\mathbf{X} =\mathbf{T}\mathbf{P}_{\mathbf{X}}^{\prime}+\mathbf{E}_{\mathbf{X}} \tag{10}\] \[\mathbf{y} =\mathbf{T}\mathbf{P}_{\mathbf{y}}^{\prime}+\mathbf{e}_{\mathbf{y}} \tag{11}\] \[\mathbf{T} =\mathbf{X}\mathbf{W} \tag{12}\] where \(\mathbf{T}\) and \(\mathbf{W}\) are defined as in Equation 5, \(\mathbf{P}_{\mathbf{X}}\) is \(\mathbf{P}\) from Equation 5, and \(\mathbf{P}_{\mathbf{y}}\) is the \(Q\times 1\) vector of weights relating \(\mathbf{y}\) to the component scores in \(\mathbf{T}\). \(\mathbf{E}_{\mathbf{X}}\) and \(\mathbf{e}_{\mathbf{y}}\) are reconstruction errors. They represent the information lost by using \(\mathbf{T}\) as a summary of \(\mathbf{X}\) and the errors in the linear regression model, respectively. PCovR can be formulated as the task of minimizing ``` 1:For a given \(\mathbf{z}_{j}\) variable under imputation, draw a bootstrap version \(\mathbf{z}_{j,\text{obs}}^{*}\) with replacement from the observed cases \(\mathbf{z}_{j,\text{obs}}\), and store as \(\mathbf{Z}_{-j,\text{obs}}^{*}\) the corresponding rows of \(\mathbf{Z}_{-j}^{(i)}\). 2:Compute the absolute correlation between \(\mathbf{z}_{j,\text{obs}}^{*}\) and every potential predictor in \(\mathbf{Z}_{-j,\text{obs}}^{*}\). 3:For every \(\rho_{h}\), create a set of predictors with absolute correlation higher than \(\rho_{h}\).
4:Use \(K\)-fold cross-validation to select the value of \(\rho_{h}\) and the associated set of predictors that return the PCR model with the smallest prediction error. Define \(\rho_{s}\) as the selected value. 5:Drop from \(\mathbf{Z}_{-j,\text{obs}}^{*}\) and \(\mathbf{Z}_{-j,\text{mis}}^{(i)}\) all variables with an absolute correlation smaller than \(\rho_{s}\) and create \(\mathbf{Z}_{s,obs}^{*}\) and \(\mathbf{Z}_{s,mis}\). 6:Center and scale \(\mathbf{Z}_{s,obs}^{*}\) and store the result as \(\tilde{\mathbf{Z}}_{s,obs}^{*}\) 7:Center and scale \(\mathbf{Z}_{s,mis}\) based on based on the means and standard deviations of \(\mathbf{Z}_{s,obs}^{*}\) and store the result as \(\tilde{\mathbf{Z}}_{s,mis}\) 8:Center \(\mathbf{z}_{j,\text{obs}}^{*}\) on its mean value \(\bar{\mathbf{z}}_{j,\text{obs}}^{*}\) and store it in \(\bar{\mathbf{z}}_{j,\text{obs}}^{*}\). 9:Estimate \(\mathbf{W}\) and \(\mathbf{P}\) by the eigendecomposition of the cross-product matrix of \(\tilde{\mathbf{Z}}_{s,obs}^{*}\) 10:Compute the first \(Q\) PCs as \(\mathbf{T}_{s,obs}^{(i)}=\tilde{\mathbf{Z}}_{s,obs}^{*}\mathbf{W}\) 11:Regress the mean centered \(\tilde{\mathbf{z}}_{j,\text{obs}}^{*}\) on \(\mathbf{T}_{s,obs}^{(i)}\) and store the regression coefficients \(\beta\). 12:Estimate the residual error variance \(\sigma^{2}\) as the ratio between the residual sum of square (\(RSS\)) and the degrees of freedom (\(df\)): \[\sigma^{2} =RSS/df\] \[=\frac{\Sigma\left(\bar{\mathbf{z}}_{j,\text{obs}}^{*}-\mathbf{T} _{s,obs}^{(i)}\beta\right)^{2}}{N-Q}\] 13:Obtain the predicted values for \(\mathbf{z}_{j,miss}\) by \[\hat{z}_{j,miss}=\tilde{\mathbf{Z}}_{s,mis}\mathbf{W}\beta\] 14:Obtain imputations by adding normally distributed errors scaled by \(\sigma^{2}\) to these predictions and by adding \(\bar{\mathbf{z}}_{j,\text{obs}}^{*}\) to center them appropriately. ``` **Algorithm 2** Imputation under the SPCR model with bootstrap a weighted combination of both \(\mathbf{E_{X}}\) and \(\mathbf{e_{y}}\): \[(\mathbf{W},\mathbf{P_{X}},\mathbf{P_{y}})=\operatorname*{argmin}_{\mathbf{W}, \mathbf{P_{X}},\mathbf{P_{y}}}\alpha\left\|(\mathbf{X}-\mathbf{X}\mathbf{W} \mathbf{P_{X}^{\prime}})\right\|^{2}+(1-\alpha)\left\|(y-\mathbf{X}\mathbf{W} \mathbf{P_{y}^{\prime}})\right\|^{2} \tag{13}\] subject to the constraint \(\mathbf{W^{\prime}X^{\prime}XW}=\mathbf{T^{\prime}T}=\mathbf{I}\). The \(\alpha\) parameter defines which reconstruction error is being prioritized. When \(\alpha=1\), the emphasis is exclusively placed on reconstructing \(\mathbf{X}\), casting PCR as a special case of PCovR. When \(\alpha=0.5\), the importance of \(\mathbf{X}\) and \(\mathbf{y}\) is equally weighted, a case that resembles partial least square regression (PLSR), which we discuss in subsection 2.5.3. In practice, the value of \(\alpha\) can be found by cross-validation or according to a sequential procedure based on maximum likelihood principles (Vervloet, Van Deun, Van den Noortgate, & Ceulemans, 2013). In particular, \[\alpha_{ML}=\frac{\|\mathbf{X}\|^{2}}{\|\mathbf{X}\|^{2}+\|y\|^{2}\frac{\hat{ \sigma}_{\mathbf{E_{X}}}^{2}}{\hat{\sigma}_{\mathbf{E_{y}}}^{2}}} \tag{14}\] where \(\hat{\sigma}_{\mathbf{E_{X}}}^{2}\) can be obtained as the unexplained variance by components computed according to classical PCA on \(\mathbf{X}\) and \(\hat{\sigma}_{ey}^{2}\) can be estimated as the unexplained variance by the linear model regressing \(\mathbf{y}\) on \(\mathbf{X}\). 
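For intuition, the following sketch solves the unnormalized PCovR problem of Equation 13 through an eigendecomposition and computes a maximum-likelihood-style weight in the spirit of Equation 14, reading both error variances as unexplained-variance proportions. This is our illustration under those assumptions, not the estimation routine of De Jong and Kiers (1992) or of any PCovR software.

```python
import numpy as np

def pcovr(X, y, n_comp, alpha=None):
    """Principal covariates regression sketch. The scores T are the leading
    eigenvectors of alpha*XX' + (1-alpha)*yhat yhat', which maximizes the
    weighted fit to X and to y over T = XW with T'T = I."""
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ beta_ols                                    # OLS fit of y on X
    if alpha is None:                                       # Equation 14 style weight
        _, S, _ = np.linalg.svd(X, full_matrices=False)
        err_x = 1.0 - np.sum(S[:n_comp] ** 2) / np.sum(S ** 2)
        err_y = np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
        alpha = np.sum(X ** 2) / (np.sum(X ** 2)
                                  + np.sum(y ** 2) * err_x / max(err_y, 1e-12))
    G = alpha * (X @ X.T) + (1.0 - alpha) * np.outer(y_hat, y_hat)
    vals, vecs = np.linalg.eigh(G)
    T = vecs[:, np.argsort(vals)[::-1][:n_comp]]            # orthonormal scores
    W, *_ = np.linalg.lstsq(X, T, rcond=None)               # weights: T = X W
    P_x, P_y = X.T @ T, y @ T                               # loadings for X and y
    return W, P_x, P_y, alpha
```

With `alpha` fixed at 1, the scores reduce to ordinary principal components, mirroring the special case noted above.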
Note that, for the same data set, the more components are retained, the smaller \(\hat{\sigma}_{\mathbf{E_{X}}}\) is, the higher \(\alpha_{ML}\) is, and the closer PCovR becomes to PCR. Retaining the same number of components as the number of variables in \(\mathbf{X}\) results in \(\hat{\sigma}_{\mathbf{E_{X}}}=0\) and \(\alpha=1\), which casts PCR as a special case of PCovR. Compared to PCR, PCovR allows estimating PCs that not only represent well the predictor variables but also predict well the dependent variable. Compared to SPCR, the PCs computed with PCovR are always linear combinations of _all_ variables in \(\mathbf{X}\). PCovR can downweigh irrelevant variables for the prediction of \(\mathbf{y}\), but it will never exclude them entirely. We refer to the PCovR approach to supervision as _continuous_, as opposed to the _discrete_ supervision of SPCR. When applied to the MICE algorithm, we can use PCovR as a univariate imputation model in a similar way to how we can use PCR and SPCR. At every iteration of the MICE algorithm, the steps described in Algorithm 3 are followed to impute \(\mathbf{z}_{j}\). We refer to this use of PCovR within MICE as MI-PCovR. #### 2.5.3 Partial least square regression Partial least square regression (PLS, Wold, 1975) is a dimensionality reduction technique that seeks linear combinations (or PLS components) that account for a large proportion of the variance in the predictors and correlate strongly with the dependent variable. Like PCR, PLSR finds independent linear combinations of the predictors in \(\mathbf{X}\) that summarize the data well and uses these linear combinations to predict the dependent variable. Like PCovR, the weights defining the linear combinations of the predictors are computed using all the predictor variables and the outcome, resulting in a _continuous_ supervision: irrelevant variables will be downweighted in the linear combinations, but they will not be completely ignored. Unlike PCR, SPCR, and PCovR, PLSR computes one linear combination at a time and stops at the required number of PLS components \(Q\). PLSR estimates \(\mathbf{t}_{1}\), the first PLS component, by: 1. Computing the vector of weights \(\mathbf{w}_{1}\) with elements \(w_{1,j}=\mathbf{x}_{j}^{\prime}\mathbf{y}\) for \(j\in\{1,\ldots,P\}\), the inner products between each predictor \(\mathbf{x}_{j}\) and the dependent variable \(\mathbf{y}\). 2. Deriving the constructed variable \(\mathbf{t}_{1}=\sum_{j=1}^{P}\mathbf{x}_{j}w_{1,j}=\mathbf{X}\mathbf{w}_{1}\). 3. Orthogonalizing \(\mathbf{x}_{1}\) to \(\mathbf{x}_{P}\) with respect to \(\mathbf{t}_{1}\). The second linear combination (\(\mathbf{t}_{2}=\mathbf{X}\mathbf{w}_{2}\)) is then derived by repeating the same procedure but replacing each \(\mathbf{x}_{j}\) with their versions orthogonalized with respect to \(\mathbf{t}_{1}\). In PLSR, the \(q\)th weight vector (\(\mathbf{w}_{q}\)) maximizes the following optimization criterion (Frank & Friedman, 1993; Stone & Brooks, 1990): \[\underset{\mathbf{w}_{q}}{\text{argmax}}\text{ Corr}^{2}(\mathbf{y},\mathbf{X} \mathbf{w}_{q})\text{Var}(\mathbf{X}\mathbf{w}_{q}) \tag{15}\] where \(\text{Corr}^{2}(.)\) is the squared correlation of the vectors between brackets. As with all other methods, the linear combinations derived by the PLS algorithm are constrained to be mutually orthogonal. As with SPCR and PCovR, at every iteration of the MICE algorithm, we can use PLSR to obtain imputations. 
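The componentwise construction just described can be sketched as follows, assuming centered and standardized inputs; the weight vectors are additionally normalized for numerical stability, which the verbal description above omits. A complete implementation is available, for example, as `PLSRegression` in scikit-learn's `cross_decomposition` module; the function below is only meant to mirror the three steps.

```python
import numpy as np

def pls_components(X, y, n_comp):
    """Extract PLS components: weights from X'y, scores t = X w,
    then orthogonalize (deflate) X with respect to t, and repeat."""
    Xk = X.copy()
    scores, weights = [], []
    for _ in range(n_comp):
        w = Xk.T @ y                               # step 1: inner products with y
        w = w / np.linalg.norm(w)                  # normalization added for stability
        t = Xk @ w                                 # step 2: constructed variable
        Xk = Xk - np.outer(t, t @ Xk) / (t @ t)    # step 3: orthogonalize X w.r.t. t
        scores.append(t)
        weights.append(w)
    return np.column_stack(scores), np.column_stack(weights)
```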
Algorithm 4 describes the univariate imputation method based on PLSR2 that we used to impute \(\mathbf{z}_{j}\). We refer to this use of PLSR within MICE as MI-PLSR. In Table 1, we summarize the differences between the univariate imputation methods used by MI-PCR and the SDR-based approaches we described. ``` 1:For a given \(\mathbf{z}_{j}\) variable under imputation, draw a bootstrap version \(\mathbf{z}_{j,\text{obs}}^{*}\) with replacement from the observed cases \(\mathbf{z}_{j,\text{obs}}\), and store as \(\mathbf{Z}_{-j,\text{obs}}^{*}\) the corresponding values on \(\mathbf{Z}_{-j}^{(i)}\). 2:Estimate PLSR with \(Q\) components by regressing \(\mathbf{z}_{j,\text{obs}}^{*}\) onto \(\mathbf{Z}_{-j,\text{obs}}^{*}\). 3:Estimate the residual error variance \(\sigma^{2}\) as the ratio between the residual sum of square (\(RSS\)) and the degrees of freedom (\(df\)). 4:Obtain the predicted values for \(\mathbf{z}_{j,miss}\) based on the trained PLSR model. 5:Obtain imputations by adding noise scaled by \(\sigma^{2}\) to these predictions. ``` **Algorithm 4** Imputation under the PLSR model with bootstrap ## 3 Simulation study We investigated the relative performance of unsupervised PCR and the supervised alternatives described above with a Monte Carlo simulation study. In particular, we investigated the estimation bias, confidence interval width, and confidence interval coverage of several analysis model parameters obtained after imputation. We varied the dimensionality of the latent structure in the data-generating model, the proportion of missing values, the missing data mechanism, and the number of components used as predictors in the imputation models. The proportion of missing values and the missing data mechanism both influence the statistical properties of any missing data treatment, while the number of latent variables in the data-generating model and the number of components used in the imputation model affect the extent to which supervision can improve upon the limitations of MI-PCR. In table 2, we summarize the experimental factors we varied and the levels we used with each factor. 
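As a sketch of how the evaluation criteria just mentioned (bias, confidence interval width, and coverage) can be computed for a single scalar parameter, the functions below first apply Rubin's pooling rules to the completed-data estimates of one replication and then summarize the pooled results across replications. The function names are illustrative and do not refer to existing software.

```python
import numpy as np
from scipy import stats

def pool_rubin(estimates, variances, level=0.95):
    """Pool completed-data estimates and their variances with Rubin's rules."""
    m = len(estimates)
    qbar = np.mean(estimates)                     # pooled point estimate
    ubar = np.mean(variances)                     # within-imputation variance
    b = np.var(estimates, ddof=1)                 # between-imputation variance
    t_var = ubar + (1 + 1 / m) * b                # total variance
    df = (m - 1) * (1 + ubar / ((1 + 1 / m) * b)) ** 2 if b > 0 else 1e9
    half = stats.t.ppf(1 - (1 - level) / 2, df) * np.sqrt(t_var)
    return qbar, (qbar - half, qbar + half)

def bias_coverage_width(pooled, truth):
    """pooled: list of (estimate, (lower, upper)) tuples, one per replication."""
    est = np.array([q for q, _ in pooled])
    cover = np.array([lo <= truth <= hi for _, (lo, hi) in pooled])
    width = np.array([hi - lo for _, (lo, hi) in pooled])
    return est.mean() - truth, cover.mean(), width.mean()
```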
### Procedure The simulation study involved four steps: \begin{table} \begin{tabular}{c c c c c} \hline \hline Method & Supervision & Optimization & Estimated & Tuning \\ & type & criterion & parameters & parameters \\ \hline MI-PCR & none & \(\underset{\mathbf{W},\mathbf{P}}{\text{argmin}}\;\|\widehat{\mathbf{Z}}_{-j,\text{obs}}^{*}-\widehat{\mathbf{Z}}_{-j,\text{obs}}^{*}\mathbf{W}\mathbf{P}^{\prime}\|^{2}\) & \(\mathbf{W},\mathbf{P}\) & - \\ MI-SPCR & discrete & \(\underset{\mathbf{W},\mathbf{P}}{\text{argmin}}\;\|\widehat{\mathbf{Z}}_{-j,\text{obs}}^{*}-\widehat{\mathbf{Z}}_{-j,\text{obs}}^{*}\mathbf{W}\mathbf{P}^{\prime}\|^{2}\) & \(\mathbf{W},\mathbf{P}\) & \(\rho_{s}\) \\ MI-PCovR & continuous & \(\underset{\mathbf{W},\mathbf{P}_{\mathbf{X}},\mathbf{P}_{\mathbf{Y}}}{\text{argmin}}\;\alpha\|\widehat{\mathbf{Z}}_{-j,\text{obs}}^{*}-\widehat{\mathbf{Z}}_{-j,\text{obs}}^{*}\mathbf{W}\mathbf{P}^{\prime}_{\mathbf{X}}\|^{2}+(1-\alpha)\|\mathbf{z}_{j,\text{obs}}^{*}-\widehat{\mathbf{Z}}_{-j,\text{obs}}^{*}\mathbf{W}\mathbf{P}^{\prime}_{\mathbf{Y}}\|^{2}\) & \(\mathbf{W},\mathbf{P}_{\mathbf{X}},\mathbf{P}_{\mathbf{Y}}\) & \(\alpha\) \\ MI-PLSR & continuous & \(\underset{\mathbf{w}_{q}}{\text{argmax}}\;\text{Corr}^{2}(\mathbf{z}_{j,\text{obs}}^{*},\widehat{\mathbf{Z}}_{-j,\text{obs}}^{*}\mathbf{w}_{q})\text{Var}(\widehat{\mathbf{Z}}_{-j,\text{obs}}^{*}\mathbf{w}_{q})\) & \(\mathbf{w}_{1},\ldots,\mathbf{w}_{Q}\) & - \\ \hline \hline \end{tabular} \end{table} Table 1: _Summary of the univariate imputation methods based on unsupervised (MI-PCR) and supervised (MI-SPCR, MI-PCovR, MI-PLSR) dimensionality reduction._ 1. Data generation: We generated \(R=240\) data sets from a confirmatory factor analysis model, following the procedure described in Section 3.1.1. 2. Missing data generation: We generated missing values on three target items in each data set, following the procedure described in Section 3.1.2. 3. Imputation: We generated \(d=5\) multiply imputed versions of each generated data set using different imputation methods, as described in Section 3.1.3. 4. Analysis: We estimated the means, variances, covariances, and correlations of the three items with missing values on the \(d\) imputed data sets, and we pooled the estimates according to Rubin's rules (Rubin, 1986, p. 76). We then assessed each imputation method by computing the bias of different parameter estimates, and their confidence interval widths and coverages, as described in Section 3.1.4. #### 3.1.1 Data generation For each of the \(R\) replications, we generated a \(1000\times P\) data matrix \(\mathbf{Z}\). The sample size should be large enough to generate data sets that have statistical properties similar to large social science data sets. Each data set was generated based on the following model: \[\mathbf{Z}=\mathbf{F}\mathbf{\Lambda}^{\prime}+\mathbf{E}, \tag{16}\] where \(\mathbf{F}\) is a \(1000\times L\) matrix of latent variable scores, \(L\) is the number of latent variables, \(\mathbf{\Lambda}\) is a \(P\times L\) matrix of factor loadings, with 3 items measuring each latent variable, and \(\mathbf{E}\) is a \(1000\times P\) matrix of measurement errors, where \(P=3*L\). 
The dimensionality of the data resembles that of the many large social surveys that use short scales to measure respondents' attitudes such as _political engagement_ and _anti-immigrant attitudes_. For example, consider the European Values Study (EVS, 2020), which measures a variety of attitudes with 3, 4, or 5 items. The factor loading matrix \(\mathbf{\Lambda}\) in Equation 16 described a simple structure (Bollen, 1989, p. 234) where each item loads on exactly one of the \(L\) latent variables. In real data applications, this factor structure is uncommon but not implausible. For example, a relatively clear simple structure can be found when analyzing personality inventories (e.g., NEO-PI-R, Costa Jr, McCrae, & Dye, 1991) with the _Neuroticism-Extroversion-Openness_ three-factor model (McCrae & Costa Jr, 1983). A simple structure is also often assumed when performing exploratory factor analysis because it provides the most parsimonious explanation (Costa & McCrae, 2008, pp. 183-184). \begin{table} \begin{tabular}{c c l} \hline \hline Experimental factor & Label & Levels \\ \hline Number of latent variables & \(L\) & 2, 10, 50 \\ Missing data mechanism used & \(mech\) & MCAR, MAR \\ Proportion of missing values & \(pm\) & 0.1, 0.25, 0.50 \\ Missing data treatment & \(method\) & MI-PCR, MI-SPCR, MI-PCovR, MI-PLSR, MI-QP, MI-AM, MI-ALL, CC, FO \\ Number of components & \(nc\) & 0, 1 to 12, 20, 29, 30, 40, 48, 49, 50, 51, 52, 60, 149 \\ \hline \hline \end{tabular} \end{table} Table 2: _Summary of experimental factors for the simulation study._ We sampled \(\mathbf{F}\) from a multivariate normal distribution with mean \(\mathbf{0}\) and covariance matrix \(\mathbf{\Psi}\). The correlation between the first and second latent variables was fixed at \(0.8\), while the correlation between all other latent variables and the first two was fixed at \(0.1\). Together with factor loadings fixed at \(\lambda=0.85\), these choices resulted in correlations of approximately \(0.72\), \(0.58\), and \(0.07\) between items measuring the same latent variable, items measuring the first and second latent variable, and items measuring the first latent variable and the others, respectively. These values represent plausible, but reasonably high, item-scale associations, and they should mitigate the impact of measurement error on our findings without resorting to implausibly precise data. The matrix of measurement errors \(\mathbf{E}\) was sampled from a multivariate normal distribution with mean vector \(\mathbf{0}\) and covariance matrix \(\mathbf{\Theta}\). The off-diagonal elements of \(\mathbf{\Theta}\) were set to \(0\) to reflect uncorrelated errors, while the diagonal elements were specified as \(1-\lambda^{2}\) to give the simulated items unit variances. After sampling, the columns of \(\mathbf{Z}\) were rescaled to have approximately a mean of \(5\) and a variance of \(6.5\), which are common values for Likert items in social surveys measured on a \(10\)-point scale (for example in EVS, 2020). In this data-generating procedure, we considered the number of latent variables used (\(L\)) as an experimental factor with levels \(2,10,50\), resulting in data sets containing \(6\), \(30\), and \(150\) total items. Costantini, Lang, Sijtsma, and Reeskens (2023) showed that the number of components used in MI-PCR needs to be at least as high as the number of latent variables in the data-generating model. So, we expected MI-PCR to require at least \(L\) components to achieve satisfactory performance. 
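The data-generating model described above can be summarized in a short sketch (Python/numpy, illustrative only; the correlations among the latent variables beyond the first two are assumed here to equal 0.1, and the final rescaling is approximate):

```python
import numpy as np

def generate_data(L, n=1000, lam=0.85, rng=None):
    """Generate one data set Z = F Lambda' + E as in Equation 16 (sketch).

    Three items per latent variable, simple structure, loadings of 0.85,
    correlation of 0.8 between the first two latent variables and 0.1
    between the remaining pairs (an assumption of this sketch), and an
    approximate rescaling to a mean of 5 and a variance of 6.5.
    """
    rng = rng or np.random.default_rng()
    P = 3 * L
    Psi = np.full((L, L), 0.1)
    Psi[0, 1] = Psi[1, 0] = 0.8
    np.fill_diagonal(Psi, 1.0)
    F = rng.multivariate_normal(np.zeros(L), Psi, size=n)          # n x L
    Lam = np.kron(np.eye(L), np.full((3, 1), lam))                 # P x L
    E = rng.normal(scale=np.sqrt(1 - lam**2), size=(n, P))         # errors
    Z = F @ Lam.T + E
    return Z * np.sqrt(6.5) + 5
```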
One of this study's main objectives was to understand how well supervision can overcome this limitation of MI-PCR. We generated data according to a confirmatory factor analysis model instead of generating the data directly based on true principal components to avoid the misleading results that can occur when using a single model for both data generation and imputation (see Oberman & Vink, 2023, p. 4). #### 3.1.2 Missing data generation. We generated missing data on the three items measuring the first latent variable (\(\mathbf{z}_{1}\), \(\mathbf{z}_{2}\), \(\mathbf{z}_{3}\)). The proportion of missing values per variable (\(pm\)) was defined as an experimental factor taking three levels \(pm\in\{0.1,0.25,0.5\}\). The missing data mechanism (\(mech\)) was a factor with two levels: * Missing completely at random (MCAR): To test how the methods performed in the simplest possible missing data mechanism, we generated missing values on each item based on a missing data indicator (\(\delta\)) sampled from a binomial distribution with success probability \(pm\). If \(\delta=1\), the item score was set to missing. If \(\delta=0\), the item score was set to observed. As a result, every variable had a proportion of missing values approximately equal to \(pm\). * Missing at random (MAR): To test how the methods performed in a more realistic situation, we generated missing values based on a MAR mechanism with the three items measuring the second latent variable (\(\mathbf{z}_{4}\), \(\mathbf{z}_{5}\), \(\mathbf{z}_{6}\)) used as predictors of missingness. We sampled \(\delta\) from Bernoulli distributions with probabilities defined based on the following logit model: \[logit(\delta=1)=\beta_{0}+\mathbf{Z}_{(4,5,6)}\beta,\] (17) where \(\beta_{0}\) is an intercept parameter, and \(\beta\) is a vector of slope parameters. All slopes in \(\beta\) were fixed to 1, while the value of \(\beta_{0}\) was chosen with an optimization algorithm that minimized the difference between the actual and desired proportion of missing values on the variable. The pseudo R-squared for the logistic regression of the missing value indicator on the predictors of missingness was approximately 14%. The AUC for the logistic regression was approximately 0.74. To create realistic missing data patterns, the location of missing data was fixed to right for \(\mathbf{z}_{1}\), left for \(\mathbf{z}_{2}\), and tails for \(\mathbf{z}_{3}\). #### 3.1.3 Imputation. We imputed the missing values using the four dimension reduction-based methods described above (MI-PCR, MI-SPCR, MI-PCovR, MI-PLSR) as well as three traditional approaches: * MI with all the available variables used as predictors in the imputation models (MI-ALL), which represents the most naive way to define the imputation model. * MI with a correlation-based threshold strategy to select the subset of important predictors (MI-QP). As a pragmatic point of comparison, this method used the _quickpred_ function from the R package mice (Van Buuren & Groothuis-Oudshoorn, 2011) to select the predictors for the univariate imputation models via the correlation-based threshold strategy described by Van Buuren, Boshuizen, and Knook (1999, pp. 687-688). To implement this approach, we selected only those predictors that correlated with the imputation targets (or their associated missingness indicators) higher than 0.1. * MI with the analysis model variables used as sole predictors in the imputation models (MI-AM). 
This method produces the simplest possible congenial imputation model and is interesting due to its popularity in the social scientific literature (Costantini, Lang, Reeskens, & Sijtsma, 2023). As reference points, we also treated the missing values with complete case analysis (CC) and estimated the analysis model from the original, fully observed data. Every imputation-based method used simple random draws from the observed data as starting values and was run to obtain 5 imputed data sets. Convergence was achieved after 20 iterations for all methods3. All our dimensionality reduction-based algorithms require the user to define \(Q\), the number of components to retain. In practice, researchers will choose a single value of \(Q\) with a cross-validation procedure, but in this study, we are interested in evaluating the performance of the four dimension-reduction imputation approaches while varying the number of retained components. Therefore, we defined \(Q\) as one of our main experimental factors (the number of components, \(nc\)) taking values \(\{1,\ldots,12,20,29,30,40,48,49,50,51,52,60,149\}\). These values were chosen to both cover the range of possible choices (i.e., \(1,\ldots,[P-1]\)) and to provide more granularity around the true number of latent variables (i.e., 2, 10, 50). Footnote 3: Convergence plots are reported in the interactive results dashboard that we developed to accompany this article. See Section 3.2 for more details. Finally, for MI-SPCR, we used the cross-validation procedure described in Section 2.5.1 to select \(\rho_{s}\) from the vector of values \(\{0.05,0.1,0.15,\ldots,0.95\}\). For a given number of components, some threshold values can exclude enough predictors to preclude computing the required number of components. To avoid this possibility, the cross-validation algorithm only considered values of \(\rho_{s}\) that retained enough variables to compute the required number of components. The weighting parameter (\(\alpha\)) for MI-PCovR was selected using the sequential MLE-based estimation procedure described in Vervloet et al. (2013). The degrees of freedom for the PLSR imputation model were computed based on the naive approach described by Kramer and Sugiyama (2011). #### 3.1.4 Analysis and comparison criteria For a given parameter \(\phi\) (e.g., the mean of \(\mathbf{z}_{1}\), the correlation between \(\mathbf{z}_{1}\) and \(\mathbf{z}_{2}\)), we used the absolute percent relative bias (PRB) to quantify the estimation bias introduced by the imputation procedure: \[\text{PRB}=\left|\frac{\bar{\phi}-\phi}{\phi}\right|\times 100 \tag{18}\] where \(\phi\) is the true value of the focal parameter, defined as \(\sum_{r=1}^{R}\hat{\phi}_{r}^{full}/R\), with \(\hat{\phi}_{r}^{full}\) being the parameter estimate for the \(r\)th repetition computed on the fully observed data. The averaged focal parameter estimate under a given missing data treatment was computed as \(\bar{\phi}=\sum_{r=1}^{R}\hat{\phi}_{r}/R\), with \(\hat{\phi}_{r}\) being the estimate obtained from the treated incomplete data in the \(r\)th replication. Following Muthen, Kaplan, and Hollis (1987), we considered \(\text{PRB}>10\) as indicative of problematic estimation bias. To measure the statistical efficiency of the imputation methods, we computed the average width of the confidence intervals (CIW). 
\[\text{CIW}=\frac{\sum_{r=1}^{R}(\widehat{\text{CI}}_{r}^{upper}-\widehat{ \text{CI}}_{r}^{lower})}{R}, \tag{19}\] with \(\widehat{\text{CI}}_{r}^{upper}\) and \(\widehat{\text{CI}}_{r}^{lower}\) being the upper and lower bounds of the estimated confidence interval for the \(r\)th replication. Narrower CIWs indicate higher efficiency. However, narrower CIWs are not preferred if they come at the expense of good confidence interval coverage (CIC) of the parameter values. CIC is the proportion of confidence intervals that contain the true value of the parameter, across the \(R\) data samples: \[\text{CIC}=\frac{\sum_{r=1}^{R}I(\phi\in\widehat{\text{CI}}_{r})}{R}, \tag{20}\] where \(\widehat{\text{CI}}_{r}\) is the confidence interval of the parameter estimate \(\hat{\phi}_{r}\) in the \(r\)th replication, and \(I(.)\) is the indicator function that returns 1 if the argument is true and 0 otherwise. CIC depends on both the bias and the variability of the CIW for a parameter estimate. In particular, for a given level of bias, a narrower CIW leads to lower CIC, and, for a given CIW, a larger bias leads to lower CIC. An imputation method with good coverage should result in CICs greater than or equal to the nominal rate. For 95% CIs, CIC below 0.9 is usually considered problematic (e.g., Van Buuren, 2018, p. 52; Collins et al., 2001, p. 340) as it implies inflated Type I error rates. High CIC (e.g., 0.99) implies inflated Type II error rates. ### Results We report only the results for the correlation between \(\mathbf{z}_{1}\) and \(\mathbf{z}_{2}\) for the conditions with _mech_ = MAR and \(pm=0.5\) because the type of parameter and the different levels of these two factors did not impact the relative performances of the imputation methods. We focused on the correlation between two items with missing values because this parameter differentiated the performances of the methods the most. The full set of results is available via the interactive results dashboard that we developed to accompany this article (Costantini, 2023b). The dashboard can be downloaded and installed as an R package, and it can be used as an R Shiny app. In Figures 1, 2, and 3, we report the PRB, CIW, and CIC for the correlation coefficient between \(\mathbf{z}_{1}\) and \(\mathbf{z}_{2}\) for different numbers of latent variables in the data-generating model (\(L\)) and numbers of components retained by the methods (\(nc\)). Across all values of \(L\), MI-PCR resulted in a smaller bias and coverage closer to nominal the more components were retained. However, MI-PCR required the number of components to be greater or equal to the number of latent variables used in the data-generating model to return acceptable bias and coverages. In particular, for \(L=2\) and \(L=10\), MI-PCR resulted in acceptable bias (\(\text{PRB}<10\)) and close to nominal coverage (\(\text{CIC}>0.9\)) only when using \(nc\geq 2\) and \(nc\geq 10\), respectively. Contrary to expectation, this trend did not persist for all higher values of \(L\) and \(nc\). For \(L=50\), MI-PCR resulted in high bias (\(\text{PRB}>20\)) even for \(nc=50\). Furthermore, for \(L=2\) and \(L=10\), MI-PCR resulted in large deviations from nominal coverage (\(\text{CIC}<0.9\)) for \(nc=5\) and for \(nc\in\{11,12\}\), respectively. Compared to MI-PCR, MI-SPCR performed much better, especially when using just a few components. MI-SPCR resulted in the lowest bias, smallest confidence interval width, and closest to nominal coverage when using between 2 and 5 components. 
For all values of \(L\), using 2 components instead of 1 led to a large reduction in PRB and an improvement in CIC. Using 6 or more components had only a minor negative impact on the performance of the method in the condition with \(L=10\), resulting in a negligible increase in bias. However, for \(L=50\), using 6 or more components did lead to high bias and low coverage. Finally, the maximum number of components led to algorithmic failures, so there were no results to report for \(nc=29\) and \(nc=149\) in the \(L=10\) and \(L=50\) conditions, respectively. MI-PCovR resulted in acceptable bias for all values of \(L\), for all \(nc\) values reported. However, its bias performance was less stable than that of MI-SPCR across the values of \(L\) and \(nc\). For \(L=10\), MI-PCovR led to smaller bias when using \(nc=2\) instead of \(nc=1\), but the bias increased when using \(nc\in\{3,\ldots,9\}\), only to decrease again for \(nc\geq 10\). In the \(L=50\) condition, MI-PCovR resulted in decreasing bias for the range \(nc\in\{1,\ldots,9\}\), and the lowest bias was achieved with \(nc=L\), just after a small increase. Furthermore, the CIC resulted in larger deviations from nominal coverage than MI-SPCR, resulting in acceptable CIC only with a few of the many \(nc\) values considered. Compared to MI-SPCR and MI-PCovR, the reduction in bias obtained by MI-PLSR for higher values of \(nc\) was more gradual. For \(L=10\) and \(50\), using 2 components instead of 1 led to a reduction in bias, but 3 components were necessary to achieve \(\text{PRB}<10\), while both MI-SPCR and MI-PCovR needed only 2 to achieve the same result. However, the PRB remained small for \(nc\in\{6,7,8,9,10\}\), even when that of MI-SPCR and MI-PCovR increased. Despite this good bias performance, the CIW and CIC of MI-PLSR fluctuated between acceptable and not, with only a few values of \(nc\) resulting in close-to-nominal coverage. To put these results into perspective, we reported the same performance metrics for three traditional MI approaches and complete case analysis in Figure 4. MI-QP and MI-ALL resulted in acceptable bias (\(\text{PRB}<10\)) for all values of \(L\), although larger values of \(L\) did result in increased bias and decreasing coverage for both methods, but especially for MI-ALL. MI-AM and CC did not result in a higher bias or lower coverages for larger values of \(L\), but both returned relatively high bias and low coverage across all conditions. Figure 1: The PRB for the estimated correlation coefficient between the first two items imputed is reported (Y-axis) as a function of the number of components (\(nc\)) used by the PCA-based imputation methods (X-axis). The plot is divided into a grid where the rows distinguish the results obtained after imputing the data with the four PCA-based methods and the columns distinguish the number of latent variables used to generate the data (\(L\)). All results plotted in this figure were obtained on data generated with _mech_ = MAR and \(pm=0.5\). ## 4 Discussion ### Supervised dimensionality reduction Our simulation study outlined some clear advantages of using supervised dimensionality reduction techniques over standard PCA with MICE. We found that MI-PCR requires the use of at least as many components as the number of latent variables in the data-generating model, which is in line with the results presented by Costantini, Lang, Sijtsma, and Reeskens (2023). The simulation study presented here also showed that meeting this requirement is not sufficient to obtain good imputations for a large number of latent variables. We found that the performance of MI-PCR does not only depend on knowing the number of latent variables in the data-generating model, but also on the number itself. 
On the contrary, the SDR-based methods retaining just a few components resulted in small bias and good confidence interval coverage, _independent_ of the number of latent variables in the data-generating model. Furthermore, for any given number of components except the maximum, the SDR-based methods resulted in smaller bias, narrower confidence intervals, and closer to nominal coverage than MI-PCR. Considering these results, SDR-based MICE seems to be more appropriate than PCR-based MICE for automatic imputation model specification. Among the SDR-based methods, MI-SPCR had the best statistical properties. MI-SPCR returned smaller bias and better coverage for a wider range of retained components compared to MI-PCovR. MI-SPCR also achieved a smaller bias than MI-PLSR when retaining fewer components, and it resulted in consistently closer-to-nominal coverages. Based on our results it seems that, at least in the context of the imputation of data with a latent structure, the _discrete_ type of supervision employed by MI-SPCR should be preferred to the _continuous_ supervision employed by MI-PCovR and MI-PLSR. Figure 2: The CIW for the estimated correlation coefficient between the first two items imputed is reported (Y-axis) as a function of the number of components (\(nc\)) used by the PCA-based imputation methods (X-axis). The plot is divided into a grid where the rows distinguish the results obtained after imputing the data with the four PCA-based methods and the columns distinguish the number of latent variables used to generate the data (\(L\)). All results plotted in this figure were obtained on data generated with _mech_ = MAR and \(pm=0.5\). The light gray color indicates the parameter estimate for which the CIW is reported had both acceptable bias (\(\text{PRB}<10\)) and coverage (\(\text{CIC}>0.9\)). The dark gray color indicates the parameter estimate for which the CIW is reported had large bias (\(\text{PRB}>10\)) or low coverage (\(\text{CIC}<0.9\)), or both. The black horizontal lines represent the average CIW obtained on the original fully observed data. Figure 3: The CIC for the estimated correlation coefficient between the first two items imputed is reported (Y-axis) as a function of the number of components (_nc_) used by the PCA-based imputation methods (X-axis). The plot is divided into a grid where the rows distinguish the results obtained after imputing the data with the four PCA-based methods and the columns distinguish the number of latent variables used to generate the data (\(L\)). All results plotted in this figure were obtained on data generated with _mech_ = MAR and \(pm=0.5\). For CIC values below 0.8, we reported the precise value within the corresponding bar. ### Supervision and the number of principal components Based on our simulation study results, irrespective of which type of supervised dimensionality reduction is used, the implementation of SDR-based methods in MICE should aim for computing a small number of components. 
Despite this general trend, MI-SPCR and MI-PCovR showed different performances in relation to the different numbers of components retained. As described in the results section, the bias obtained by MI-SPCR was the smallest when using between 2 and 5 components, independently of the number of latent variables in the data-generating model, and it led to algorithmic failures for large numbers of components. Figure 4: PRB, CIC, and CIW for the estimated correlation coefficient between the first two items imputed are reported for the traditional missing data handling methods considered. The black horizontal lines represent the average CIW obtained for the parameter of interest when analyzing the original fully observed data. For CIC values below 0.8, we reported the precise value within the corresponding bar. These results can be explained by considering how the number of PCs retained influences MI-SPCR. In MI-SPCR, supervision is introduced by pre-screening the columns of the set of possible predictors to exclude any predictors that are not correlated strongly enough with the variable under imputation. By reducing the number of columns in the predictor set, this supervision reduces the maximum number of components that can be estimated. The inverse constraint also holds. Fixing the number of components puts an upper bound on the number of variables that can be excluded during the screening process. So, for example, when using 50 components, MI-SPCR must retain at least 50 predictors during the screening step, regardless of how weakly some of these variables may associate with any given variable under imputation. Retaining more components forces the threshold values to be smaller and results in keeping more predictors that are less strongly related to the dependent variable, and, by doing so, it limits the advantage of using supervision in the PCA. In conclusion, considering our results and the relationship between the number of components and supervision, we recommend retaining between 2 and 5 components when using MI-SPCR. In the results section, we noted that in the condition with 10 latent variables, MI-PCovR led to smaller bias when using 2 components instead of 1, but its bias increased when retaining 3 to 9 components, only to decrease again when retaining 10 or more components. A similar but less extreme trend was also detected in the condition with 50 latent variables, a result which we explore further in the appendix. This fluctuating bias performance can be explained by considering the relationship between the number of components retained and the way we computed \(\alpha\) in our simulation study. As described in Equation 14, for the same data set, \(\alpha_{ML}\) is larger when the variance left unexplained by the retained components is smaller. The more components we retained in our simulation study, the closer the value of \(\alpha\) was to 1, and the closer MI-PCovR became to MI-PCR. As a result, MI-PCovR resulted in smaller bias than MI-PCR when retaining the first components, as supervision helped to compute leading components that were important predictors for the imputation task. However, as more components were retained, the effect of supervision started to diminish, which drove the bias closer to that of MI-PCR. After this initial increase, the bias achieved by MI-PCovR dropped when retaining as many components as the number of latent variables, mirroring the drop in bias presented by MI-PCR for the same number of components. 
As a result, the best performances for MI-PCovR could be achieved by retaining the first components or by retaining a number of components just above the number of latent variables. Because of this fluctuating performance, the optimal range of components to consider when using MI-PCovR is not as clear as for MI-SPCR. ## 5 Limitations and future directions An important aspect to consider in deciding which version of supervised dimensionality reduction to use with MICE is how flexible these approaches are to deviations from normality of the data. This topic was not covered by our simulation study. MI-SPCR can easily be adapted to impute binary and categorical variables, and the only complication would be in defining a suitable threshold parameter. One option would be to estimate the associations via fit measures derived from simple (multinomial) logistic regression models. Much research has been dedicated to extending PLSR to categorical outcomes (e.g., Chung & Keles, 2010; Ding & Gentleman, 2005), and these approaches could be used in a similar way to the standard PLS implementation we used in this study. The development of PCovR for classification tasks has not received much attention (Park, Ceulemans, & Van Deun, n.d., is the only example of which we are aware), but the same approaches used to fit PLS in the generalized linear framework should also apply to PCovR. However, the maximum likelihood estimation of the \(\alpha\) parameter can only be done for continuous dependent variables. To impute categorical variables, PCovR would require cross-validation to estimate the value of \(\alpha\), adding to the computational intensity of the procedure. The set of predictors used to compute the components can also include categorical variables. There are different ways of accommodating these categorical variables when estimating the components, including the naive application of traditional PCA (Filmer & Pritchett, 2001) and the PCAMIX algorithm (Chavent, Kuentz-Simonet, Labenne, & Saracco, 2014; Chavent, Kuentz-Simonet, & Saracco, 2012; Kiers, 1991) specifically designed for this purpose. Which of these approaches is more appropriate for the predictive task involved in MICE is yet to be tested. Finally, all of the PCA-based methods considered here (both supervised and unsupervised) entail a high computational load. For every variable and every iteration of the MICE algorithm, complex matrix operations need to be performed to estimate the components. When cross-validating the tuning parameters, the computational load of the supervised approaches described in this article increases further. Future research should explore possible computational shortcuts to perform supervised dimensionality reduction faster (e.g., Abraham & Inouye, 2014; Halko, Martinsson, & Tropp, 2011). ## 6 Conclusions Based on the simulation study presented here, it can be concluded that adding a supervision element to the classical use of PCR as a univariate imputation method can significantly improve the performance of MI-PCR, especially when the data contain hundreds of variables. Although there is room to assess the performance of these imputation methods in more complex data scenarios, MI-SPCR was particularly effective for the imputation of missing values and seems to be preferable to MI-PCovR and MI-PLSR. ## 7 Code availability The R code used to perform the simulation study is available on Zenodo (Costantini, 2023a). Please read the README.md files for instructions on how to replicate the results. 
The article is also accompanied by an interactive results dashboard packaged as an R Shiny app (Costantini, 2023b). We encourage the interested reader to use this tool while reading the results and discussion sections. A user manual is included as a README file in the folder accessible through the DOI provided in the citation. The software can be downloaded and installed as an R package.
2310.14400
A Pytorch Reproduction of Masked Generative Image Transformer
In this technical report, we present a reproduction of MaskGIT: Masked Generative Image Transformer, using PyTorch. The approach involves leveraging a masked bidirectional transformer architecture, enabling image generation with only few steps (8~16 steps) for 512 x 512 resolution images, i.e., ~64x faster than an auto-regressive approach. Through rigorous experimentation and optimization, we achieved results that closely align with the findings presented in the original paper. We match the reported FID of 7.32 with our replication and obtain 7.59 with similar hyperparameters on ImageNet at resolution 512 x 512. Moreover, we improve over the official implementation with some minor hyperparameter tweaking, achieving FID of 7.26. At the lower resolution of 256 x 256 pixels, our reimplementation scores 6.80, in comparison to the original paper's 6.18. To promote further research on Masked Generative Models and facilitate their reproducibility, we released our code and pre-trained weights openly at https://github.com/valeoai/MaskGIT-pytorch/
Victor Besnier, Mickael Chen
2023-10-22T20:21:11Z
http://arxiv.org/abs/2310.14400v1
# A Pytorch Reproduction of Masked Generative Image Transformer ###### Abstract In this technical report, we present a reproduction of MaskGIT: Masked Generative Image Transformer [3], using PyTorch [11]. The approach involves leveraging a masked bidirectional transformer architecture, enabling image generation with only few steps (\(8\sim 16\) steps) for \(512\times 512\) resolution images, i.e., \(\sim\)64x faster than an auto-regressive approach. Through rigorous experimentation and optimization, we achieved results that closely align with the findings presented in the original paper. We match the reported FID of 7.32 with our replication and obtain 7.59 with similar hyperparameters on ImageNet at resolution \(512\times 512\). Moreover, we improve over the official implementation with some minor hyperparameter tweaking, achieving FID of **7.26**. At the lower resolution of \(256\times 256\) pixels, our reimplementation scores **6.80**, in comparison to the original paper's 6.18. To promote further research on Masked Generative Models and facilitate their reproducibility, we released our code and pre-trained weights openly at [https://github.com/valeoai/MaskGIT-pytorch/](https://github.com/valeoai/MaskGIT-pytorch/). ## 1 Masked Generative Models In recent years, advancements in image synthesis have exhibited significant progress. Notably, diffusion models [5] have gained considerable traction, surpassing GANs in popularity due to their ease of training and their capacity to capture a more diversified distribution. This paper, however, directs its focus towards another type of generative method known as Masked Generative Models (MGM) [2; 3; 10; 13; 15; 16]. The fundamental concept involves harnessing the capabilities of a VQGAN [6] to discretize and reduce an image into visual tokens. Once this discretization is achieved, a bidirectional transformer [4] is trained to unmask a randomly selected set of visual tokens. During inference, the transformer progressively unmasks a fully masked image. This process makes it possible to unmask tokens in parallel. More information can be found in [3]. Figure 1: Examples generated on ImageNet at \(512\times 512\) demonstrate the effectiveness of our reproduction. Hyperparameters for this set of examples are Gumbel temperature set to 7, Softmax temperature set to 1.3, CFG set to 9, scheduler set to arccos, and scheduler step set to 32. Conceptually, MGMs have strong connections to several popular and successful deep learning methods. They are very close to Masked Auto-Encoders [7], a strong family of models for self-supervised learning, with which they share similar transformer architectures and the masked reconstruction training objective, except in a token space. MGMs are also connected to auto-regressive models [12; 1], a very strong paradigm for data generation, as both perform generation by iteratively predicting tokens, but only MGMs are able to parallelize the process. Finally, MGMs can be linked to discrete reverse diffusion, when the forward noise process is replaced by a discrete patch dropping scheme. The sampling process then progressively retrieves patches over multiple iterations. As MGMs are in very close proximity to these tried and tested methods, we argue that they are a very promising research avenue that deserves more attention than it currently receives. This idea is further supported by the fact that a Masked Generative Model, MAGVIT-v2 [16], reached state-of-the-art results for both text to image and text to video generation. 
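To give a concrete, if simplified, picture of the masked-token objective underlying these models, the following PyTorch sketch masks a random subset of visual tokens and applies a cross-entropy loss on the masked positions; the transformer interface and the uniform masking-ratio draw are assumptions of this illustration rather than the exact MaskGIT recipe:

```python
import torch
import torch.nn.functional as F

def masked_training_step(tokens, class_token, transformer, mask_id):
    """Illustrative MGM training step: hide a random subset of the VQGAN
    token grid and train a bidirectional transformer to recover it.

    tokens: (B, N) discrete visual token ids from a frozen VQGAN encoder
    class_token: (B, 1) conditioning token ids
    transformer: module mapping (B, N + 1) token ids to (B, N, V) logits
    """
    B, N = tokens.shape
    # Draw a masking ratio per image (uniform here for simplicity; MaskGIT
    # samples the ratio from a schedule such as arccos).
    ratio = torch.rand(B, 1)
    mask = torch.rand(B, N) < ratio                 # True where a token is hidden
    inputs = tokens.masked_fill(mask, mask_id)      # replace hidden tokens by [MASK]
    logits = transformer(torch.cat([class_token, inputs], dim=1))
    # Cross-entropy only on the masked positions
    return F.cross_entropy(logits[mask], tokens[mask])
```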
However, despite their promise, it has to be noted that very few public releases of MGMs have been made available so far. At the time of writing, most prominent papers for image generation do not come with a full release of code and pretrained weights, if any at all. For instance, the founding paper, MaskGIT [3], comes with a pretrained model and an inference script1, but no training nor evaluation scripts. A text-to-image model, MUSE [2], has no public code release at all. MAGVIT [15], introduced for video generation, can also be used for images and does provide code, but no pretrained weights. For video prediction, TECO [14] has both code and pretrained weights, but nothing trained on image datasets, and Phenaki [13] has no official code release at all. Moreover, when existing, all these codebases are JAX implementations, and very few resources for Masked Generative Models exist in Pytorch. Still, ongoing reproduction efforts of MUSE2 and Phenaki3 in Pytorch should be noted, but they have yet to publicize results or pretrained models. To the best of our knowledge, the only complete image Masked Generative Model released in Pytorch is MAGE [10], which proposed an unconditional generative model, whereas MaskGIT accepts a class token that allows for conditioning. Footnote 1: [https://github.com/google-research/maskgit/](https://github.com/google-research/maskgit/) Footnote 2: [https://github.com/lucidrains/muse-maskgit-pytorch](https://github.com/lucidrains/muse-maskgit-pytorch) Footnote 3: [https://github.com/lucidrains/phenaki-pytorch](https://github.com/lucidrains/phenaki-pytorch) In this context, this technical report sets out to reproduce the findings of the original MaskGIT paper [3], as their training code is not available and because no convincing reproduction has been publicly released so far. With this report, we provide all the code necessary for replicating the results, including training code and our pretrained models. This work aims at helping researchers to explore the topic of Masked Generative Modeling. We organize this report as follows: in Section 2, we succinctly describe the model, training and sampling techniques, and discuss differences between our reproduction and the details in the paper or official implementation. In Section 3, we present both quantitative and qualitative results obtained from our networks and, in particular, we ablate the different sampling hyperparameters. 
The model is trained for token unmasking, using cross-entropy loss with label smoothing of 0.1. The optimizer employed is AdamW with a learning rate of \(1e^{-4}\), betas=(0.9, 0.96), and a weight decay of \(1e^{-5}\). We utilize an arccos scheduler for masking during training, regardless of the image resolution. Additionally, we drop 10% of the conditions for classifier-free guidance. We train our methods on both ImageNet \(256\times 256\) and \(512\times 512\). This well-known classification dataset consists of 1.2M images categorized into 1000 classes. For data augmentation, we simply use random cropping and horizontal flipping. During the training of the masked transformer, a batch size of 512 was employed over 300 epochs, utilizing 8 GPUs (768 GPU hours on Nvidia A100) for 755,200 iterations. Subsequently, we fine-tuned the same model for \(\sim\)750,000 additional iterations on ImageNet \(512\times 512\) with a batch size of 128 and 384 GPU hours on Nvidia A100. In total, this project, including training, testing and debugging, consumed \(\sim\)3,500 GPU hours on Nvidia A100. While a lot of information to reproduce the results was available in the main paper, some points remain uncertain. In particular, we discuss here the main differences between the paper, the official github implementation, and our own design choices: * For the architecture of the unmasking transformer, we try to stay as close as possible to the JAX implementation. But it is still unclear why the class embeddings and the visual embeddings are shared in a single layer. This oddity results in implementing the token classification head as a dot product between the outputs of the transformer and each of the embeddings (classes and visual tokens), keeping only the similarity with visual embeddings to compute the cross-entropy. The remaining 1001 values that correspond to the similarity with the class embeddings are dropped in the inference code, and we don't know if they were actually used for training in the official release. * For sampling, the Gumbel noise injected in the confidence is not mentioned in the paper but taken from the official implementation. * In addition, and not documented in the official releases, we used classifier-free guidance, and chose a learning rate of \(1e^{-4}\). * We also use a different number of inference steps (15 steps) to achieve higher performance when compared to the original paper (12 steps). ## 3 Performance on ImageNet ### Quantitative Results Using the hyperparameters for sampling presented in Table 2, we report our quantitative results in Table 3, as well as those from the original paper. Overall, our reproduction shows very similar results for both ImageNet \(256\times 256\) and \(512\times 512\). Slight differences are that our reproduction emphasizes better quality (Inception Score, Precision) and matches the diversity (Frechet Inception Distance, Recall). While only 8 steps are sufficient to generate good images in the lower resolution, one must increase the number of decoding steps up to 15 to achieve the best performance for ImageNet \(512\times 512\). Moreover, a higher Gumbel noise is required to maintain diversity at higher resolutions (see also Figure 1(b)). Finally, it is noteworthy that the best performance is achieved using the arccos scheduler, but results vary according to the number of steps, more information in Table 4. In Figure 2, we present an ablation study on various hyperparameters related to the sampling process. 
For each curve or table representing an ablation study, all hyperparameters except the one being tested are held constant, as outlined below: the scheduler is set to arccos, the number of steps is set to 12, the softmax temperature is set to 1.0, the cfg weight is set to 3.0, and the Gumbel temperature is set to 7. \begin{table} \begin{tabular}{c c c c c c} \hline **Hidden Dimension** & **Codebook Size** & **Depth** & **Attention Heads** & **MLP Size** & **Dropout** \\ \hline 768 & 1024 & 24 & 16 & 3072 & 0.1 \\ \hline \end{tabular} \end{table} Table 1: Transformer architecture Figure 1(a) shows that the Inception Score (IS) exhibits a consistent increase throughout the training process, while the Frechet Inception Distance (FID) demonstrates a corresponding decrease. This curve suggests that prolonging the training could potentially enhance the performance significantly, particularly for ImageNet \(256\times 256\). Furthermore, we find that injecting noise in the confidence score substantially improves the diversity of generated samples. Indeed, as can be seen in Figure 1(b), introducing Gumbel noise in the confidence score, with a linear decay over the sampling, significantly improves the FID from \(66.7\) to \(7.7\). This trick, taken from the official code, was not discussed in the original paper. Additionally, we study the influence of the number of steps on the quality of the samples. As expected, we show in Figure 1(c) that as the number of steps increases, the sampling quality improves. However, it is notable that the performance saturates beyond a certain number of steps, reaching the best FID at 15 steps. Also, in Figure 1(d), we perform experiments on the classifier-free guidance (cfg) to strike a balance between fidelity, represented by FID, and quality, as indicated by IS. An optimal trade-off seems to manifest at approximately cfg = 3, highlighting the importance of appropriate configuration for achieving the desired results with our model. Finally, in Table 4, we conduct a comparative analysis of various schedulers at \(512\times 512\) pixel resolution. Despite the model being trained with an arccos scheduler, the results indicate that other schedulers at inference could have some advantages. Indeed, when using 12 steps, the square scheduler significantly improves FID while maintaining a reasonably high Inception Score. Conversely, a root scheduler yields superior quality, albeit at the cost of a higher FID. Unfortunately, none of these schedulers scale to a higher number of steps, and the best results are obtained with the arccos scheduler with 15 steps or more. Nevertheless, these results imply that the choice of scheduler should depend on the number of sampling steps and the compute budget. 
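To make the ablated sampling knobs concrete (masking scheduler, softmax temperature, Gumbel temperature with linear decay, and cfg weight), the following simplified PyTorch sketch shows one decoding iteration; it follows the general MaskGIT scheme, but it is only illustrative and the released code should be consulted for the exact implementation:

```python
import math
import torch
import torch.nn.functional as F

def decode_step(logits_c, logits_u, token_ids, mask, step, total_steps,
                cfg_w=3.0, sm_temp=1.0, gumbel_temp=4.5, sched="arccos"):
    """One simplified decoding iteration.

    logits_c / logits_u: (B, N, V) logits with and without the class condition;
    token_ids: (B, N) current token ids; mask: (B, N) bool, True if still masked.
    """
    # Classifier-free guidance and temperature-scaled sampling
    logits = (1 + cfg_w) * logits_c - cfg_w * logits_u
    probs = F.softmax(logits / sm_temp, dim=-1)
    sampled = torch.distributions.Categorical(probs=probs).sample()        # (B, N)
    conf = torch.gather(probs, -1, sampled.unsqueeze(-1)).squeeze(-1)      # (B, N)
    # Gumbel noise on the confidences, linearly decayed over the iterations
    u = torch.rand_like(conf).clamp_(1e-9, 1 - 1e-9)
    gumbel = -torch.log(-torch.log(u))
    decay = 1.0 - step / total_steps
    conf = torch.log(conf.clamp_min(1e-9)) + gumbel_temp * decay * gumbel
    conf = torch.where(mask, conf, torch.full_like(conf, float("inf")))    # keep revealed tokens
    # Scheduler: fraction of tokens that remain masked after this step
    r = (step + 1) / total_steps
    frac = math.acos(r) / (math.pi / 2) if sched == "arccos" else 1.0 - r
    n_reveal = mask.shape[1] - int(frac * mask.shape[1])
    # Reveal the most confident positions among the currently masked ones
    order = conf.argsort(dim=-1, descending=True)
    reveal = torch.zeros_like(mask)
    reveal.scatter_(1, order[:, :n_reveal], True)
    reveal &= mask
    token_ids = torch.where(reveal, sampled, token_ids)
    return token_ids, mask & ~reveal
```

In practice, the transformer is run twice per iteration, with and without the class token, to obtain the two sets of logits needed for classifier-free guidance.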
\begin{table} \begin{tabular}{l c c c c c c} \hline \hline **Scheduler** & **FID** & **IS** & **Precision** & **Recall** & **Density** & **Coverage** \\ \hline root & 8.21 & **252.96** & **0.8619** & 0.4644 & **1.3699** & **0.8666** \\ linear & 7.80 & 238.21 & 0.8547 & 0.4682 & 1.3505 & 0.8633 \\ square & **7.50** & 224.07 & 0.8355 & **0.4904** & 1.2887 & 0.8524 \\ cosine & 7.59 & 229.42 & 0.8423 & 0.4832 & 1.3052 & 0.8546 \\ arccos & 7.65 & 219.32 & 0.8345 & 0.4826 & 1.2794 & 0.8437 \\ \hline \hline \end{tabular} \end{table} Table 4: Scheduler configurations and metrics on ImageNet \(512\times 512\) \begin{table} \begin{tabular}{l c c c c} \hline \hline **Metric** & **Ours** & **MaskGIT**[3] & **Ours** & **MaskGIT**[3] \\ & (\(256\times 256\)) & (\(256\times 256\)) & (\(512\times 512\)) & (\(512\times 512\)) \\ \hline FID & 6.80 & **6.18** & **7.26** & 7.32 \\ IS & **214.0** & 182.1 & **223.0** & 156.0 \\ Precision & **0.82** & 0.80 & **0.85** & 0.78 \\ Recall & **0.51** & **0.51** & 0.49 & **0.50** \\ Density & 1.25 & - & 1.33 & - \\ Coverage & 0.84 & - & 0.86 & - \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison between this work (Ours) and the official paper on ImageNet 256 and 512. ### Qualitative Results Here, we analyze our visual outcomes, showcased in Figure 1, focusing on \(512\times 512\) resolution samples. These particular examples represent the visually best among the 10 random samples for each class. Each image requires 0.9365 seconds and 32 steps to be generated. In Figure 3, we compare our samples to those showcased in the official paper. Our results, while seemingly displaying slightly less diversity, highlight higher quality, and are not cherry-picked. Moreover, those images take only 0.4406 seconds and 15 steps to be generated at \(512\times 512\) resolution. And only 0.036 seconds is required to generate a \(256\times 256\) pixels image with 8 forward steps. All results were obtained with Nvidia A100 GPUs. It is important to notice that it is \(\sim\) 64x faster than auto-regressive methods. In Figure 4, we display the intermediate predictions of our models during the sampling process. Remarkably, the model produces reasonably good images even at an early stage of sampling. Additionally, Figure 5 presents our inpainting results for a rooster (**ImageNet 007**) and a zebra (**ImageNet 340**) integrated into a Cityscapes image. We demonstrate that our adaptation of MaskGIT successfully achieves filling ImageNet classes within a road scene, showcasing the potential of our approach in this domain. Figure 2: **Hyperparameters search: ablation on crucial parameters to control sampling quality and diversity: number of training epochs, Gumbel noise, number of steps and the cfg weight.** Figure 3: **Diversity comparison** between the official paper (top row) and our reproduction (bottom row). Without cherry picking on our side, our methods exhibit a little bit less diversity but a higher quality. Figure 4: **Intermediate images** generated at \(256\times 256\) resolution. The first row showcases the binary mask, the second row exhibits the dog generated in association with the binary mask, and the last row presents another example of a sailboat. Figure 5: **Image inpainting**: A rooster (**ImageNet 007**) and a zebra (**ImageNet 340**), generated and inpainted in a Cityscapes image. ## 4 Conclusion We released a Pytorch [11] reproduction of MaskGIT [3], the founding work for Masked Generative models. A possible next direction is to extend the repository and add models for different modalities, 
like text-to-image (e.g. MUSE [2]) or text-to-video (e.g. MAGVIT [15]) generation. It can also incorporate further Masked Generation improvements such as Token Critic [9] or Frequency Adaptive Sampling [8]. It is our hope that this work can help other researchers to further explore the capabilities of Masked Generative Modeling. ## Acknowledgments This research received the support of EXA4MIND, a European Union's Horizon Europe Research and Innovation program under grant agreement N°101092944. Views and opinions expressed are however those of the author(s) only and do not necessarily reflect those of the European Union or the European Commission. Neither the European Union nor the granting authority can be held responsible for them.
2302.14611
TransAdapt: A Transformative Framework for Online Test Time Adaptive Semantic Segmentation
Test-time adaptive (TTA) semantic segmentation adapts a source pre-trained image semantic segmentation model to unlabeled batches of target domain test images, different from real-world, where samples arrive one-by-one in an online fashion. To tackle online settings, we propose TransAdapt, a framework that uses transformer and input transformations to improve segmentation performance. Specifically, we pre-train a transformer-based module on a segmentation network that transforms unsupervised segmentation output to a more reliable supervised output, without requiring test-time online training. To also facilitate test-time adaptation, we propose an unsupervised loss based on the transformed input that enforces the model to be invariant and equivariant to photometric and geometric perturbations, respectively. Overall, our framework produces higher quality segmentation masks with up to 17.6% and 2.8% mIOU improvement over no-adaptation and competitive baselines, respectively.
Debasmit Das, Shubhankar Borse, Hyojin Park, Kambiz Azarian, Hong Cai, Risheek Garrepalli, Fatih Porikli
2023-02-24T01:45:29Z
http://arxiv.org/abs/2302.14611v1
# TransAdapt: A Transformative Framework for Online Test Time Adaptive Semantic Segmentation ###### Abstract Test-time adaptive (TTA) semantic segmentation adapts a source pre-trained image semantic segmentation model to unlabeled batches of target domain test images, different from real-world, where samples arrive one-by-one in an online fashion. To tackle online settings, we propose TransAdapt, a framework that uses transformer and input transformations to improve segmentation performance. Specifically, we pre-train a transformer-based module on a segmentation network that transforms unsupervised segmentation output to a more reliable supervised output, without requiring test-time online training. To also facilitate test-time adaptation, we propose an unsupervised loss based on the transformed input that enforces the model to be invariant and equivariant to photometric and geometric perturbations, respectively. Overall, our framework produces higher quality segmentation masks with up to 17.6% and 2.8% mIOU improvement over no-adaptation and competitive baselines, respectively. Debasmit Das, Shubhankar Borse, Hyojin Park, Kambiz Azarian, Hong Cai, Risheek Garrepalli, Fatih Porikli Qualcomm AI Research*{debadas, sborse, hyojinp, kambiza, hongcai, rgarrepa, fporikli}@qti.qualcomm.com Test Time Adaptation, Online Learning, Transformer, Consistency, Semantic Segmentation ## 1 Introduction Deep learning systems produce highly accurate predictions when tested on data similar to the training data. However, when there is a distribution shift between training and test data, the performance of deep learning systems can be significantly impacted. In particular, previous work in semantic segmentation, which is a key computer vision task for various applications like self-driving and AR/VR, has often seen such performance degradation caused by domain gaps. More specifically, researchers and practitioners often utilize synthetic data [1, 2] to train semantic segmentation models, since obtaining ground-truth annotations on real images is very costly. However, such trained models usually perform poorly on real images due to the drastic visual difference between synthetic and real data. In order to reduce the gap, researchers have proposed various domain adaptation approaches that include self-training with pseudo-labels [3, 4, 5, 6, 7, 8, 9], adversarial feature alignment [10, 11, 12], input style transfer [13, 14, 15, 16, 17, 18, 19], or conditioning of segmentation outputs [20, 9, 21]. Under the assumption that a large number of unlabeled images are available from the target/test domain, one can finetune the pretrained semantic segmentation model on both the unlabeled test-domain data (via an unsupervised loss) as well as the labeled source-domain data. This produces a domain-invariant model, which can produce more accurate predictions on the target domain as compared to the pretrained model. There exist source-free domain adaptation methods [22, 23, 24, 25, 26] that assume the absence of source domain data during adaptation. However, these methods adapt on large batches of target data, which improves their results on common evaluation benchmarks but overestimates the segmentation performance achievable in practice. Specifically, in many real-world deployments, prior access to a set of unlabeled target-domain data will not be available for performing offline model updates. On the contrary, the target-domain data samples often arrive on-the-fly. 
Most existing domain adaptation methods can not be used in this online setting, since gradient updates based on single images will be noisy and degrade the model's stability and accuracy. In this paper, we formally introduce online test-time adaptive semantic segmentation, where we adapt a pre-trained model on online test image sequences without accessing source domain data. On this task, we construct three cross-dataset benchmarks by evaluating existing domain adaptation methods to establish several baselines. In addition to establishing baselines, we propose a novel framework, namely _TransAdapt_, for conducting online test-time adaptation of semantic segmentation models. In TransAdapt, we first pretrain the semantic segmentation model with both supervised and unsupervised loss functions, where the final supervised segmentation predictions are mapped from the unsupervised predictions. Specifically, given a segmentation prediction head that is trained with an unsupervised loss, we leverage a transformer module to convert the unsupervised predictions to the final predictions via a linear mapping. During test, only the unsupervised head receives training signals to update the model without incurring costly updates on the transformer. The transformer module leverages global context within features to generate the unsupervised-to-supervised mapping and also empirically produces better recognition performance compared to non-transformer variants. During online test-time adaptation, we propose to use transformation consistency (TC) as the unsupervised loss for updating the model. By utilizing TC, we avoid relying on noisy and inaccurate pseudo-labels of target domain images. Specifically, we consider two types of transformation: (a) photometric transformations which enforce model invariance to non-essential visual appearance changes and (b) geometric transformations which enforce the model's prediction to be equivariant to the input, w.r.t. certain geometric transformations. Using our transformer based pre-training and/or transformation consistency based adaptation, we produced improved segmentation on the three cross-dataset benchmarks. Our main contributions are summarized as follows: * We propose a plug-and-play transformer module on top of a segmentation network that produces supervised segmentation outputs from unsupervised ones, thus allowing generalization without requiring adaptation. * We devise a test-time adaptation loss using photometric and geometric transformation consistency. * Finally, we evaluate the effectiveness of our framework against existing but compatible unsupervised and adaptive baselines and establish three cross-dataset benchmarks to facilitate future research. ## 2 Proposed Framework ### Task Description We consider the availability of labeled source domain data \(\mathcal{X}^{src}=\{(x_{i}^{src},y_{i}^{src})\}_{i=1}^{N_{src}}\) with input images \(x_{i}^{src}\in\mathbb{R}^{H\times W\times 3}\) and their segmentation maps \(y_{i}^{src}\in\mathbb{R}^{H\times W\times L}\). Here, \(H\) and \(W\) are the height and width of the input image and \(L\) is the number of class labels. This labeled source domain data is used to pre-train a model. For online test-time adaptation, we need to adapt the model to a sequence of unlabeled images \(\mathcal{X}^{tgt}=\{(x_{i}^{tgt})\}_{i=1}^{N_{tgt}}\) from the target domain with the same set of classes as the source domain. It is to be noted that the sequence of images are not necessarily adjacent frames of a video. 
Also, the model can be decomposed into a feature extractor \(F\), a prediction head \(H_{p}\) and optionally our proposed transformer module \(T_{f}\). ### Pre-training with Transformer Module We use the transformer module to find the relationship between supervised and unsupervised semantic segmentation maps as described in Fig. 1. Specifically, consider an input image \(x_{i}\in\mathbb{R}^{H\times W\times 3}\), which produces a feature map \(f_{i}\in\mathbb{R}^{H^{\prime}\times W^{\prime}\times C}\) such that \(f_{i}=F(x_{i})\). This feature map when passed through a prediction head, produces output logits \(o_{i}\in\mathbb{R}^{H\times W\times L}\) such that \(o_{i}=H_{p}(f_{i})\). A softmax operation is applied on these logits to obtain a segmentation probability map \(p_{i}\in\mathbb{R}^{H\times W\times L}\). For end-to-end training of \(F\) and \(H_{p}\), we can use cross entropy loss \(\mathcal{L}_{XEnt}(p_{i},y_{i})\) between the predicted probability maps \(p_{i}\) and ground truth segmentation labels \(y_{i}\). In our proposed framework, we aim to learn the relationship between supervised and unsupervised predictions which would facilitate test-time adaptation from unlabeled image sequences. Here, we use the output from the prediction head as unsupervised logit \(o_{i}^{u}\). The feature map \(f_{i}\) is then used as conditioning input for a transformer decoder module to construct the keys and the values. The transformer decoder uses learnable queries \(q_{i}^{s}\in\mathbb{R}^{L\times C}\) which are \(C\) dimensional vector representations of \(L\) categories to be identified for the supervised output. To generate keys and values, the patches are obtained from the feature map \(f_{i}\) which are then flattened to produce \(n\) tokens \(t_{i}\in\mathbb{R}^{n\times C}\). These tokens are then fed into the multi-head attention stage of the transformer decoder followed by a feed forward network. To understand the multi-head attention scheme, we first mention the single-head attention mechanism which is as follows: \(q=q^{s}W^{q},\ k_{i}=t_{i}W^{k},\ v_{i}=t_{i}W^{v}\) where \(W^{q},W^{k},W^{v}\in\mathbb{R}^{C\times C}\) are weight matrices to produce linear representations of the raw tokens. These processed tokens are then used to produce the attention operation \(\text{Att}(\cdot)\) such that \(\text{Att}(q,t_{i},t_{i})=\text{Softmax}(qk_{i}^{T})v_{i}\). For the multi-head attention operation \(\text{MHAit}(\cdot)\) having \(M\) heads, \(q^{s}\) and \(t_{i}\) are split into \(M\) parts \(q_{1}^{s},\dots,q_{M}^{s}\) and \(t_{i,1},\dots,t_{i,M}\), where dimension of each split is \(C^{\prime}=C/M\). Attention operation is applied over all such splits to produce \[\widehat{q}_{i}^{s}=[\text{Att}_{1}(q_{1}^{s},t_{i,1},t_{i,1});\dots;\text{Att }_{M}(q_{M}^{s},t_{i,M},t_{i,M})] \tag{1}\] \[\text{MHAit}\left(q,t_{i},t_{i}\right)=\text{LN}\left(q^{s}+\text{DO}\left( \widehat{q}_{i}^{s}W\right)\right) \tag{2}\] where \(\text{LN}\) is layer normalization [27], \(\text{DO}\) is the dropout operation [28] and \(W\in\mathbb{R}^{C\times C}\) is a linear mapping. The output of the multi-head attention mechanism is passed through a two layer feed-forward network where each layer consists of a linear mapping followed by dropout, residual connection and layer normalization similar to Eq. 2. Alternating multi-head attention and feed-forward networks can produce multiple layers of the transformer decoder. 
The output of the transformer decoder will produce a representation \(q_{i}^{o}\in\mathbb{R}^{L\times C}\) for the transformer decoder input \(q^{s}\). \(q_{i}^{o}\) thus consists of \(C\)-dimensional vector representation of each of the \(L\) classes conditioned on the feature map of the input image. This representation needs to be mapped to a \(L\)-dimensional space for it to produce a \(L\times L\) weight matrix that relates supervised and unsupervised logits. Hence, we apply the following operations \[W_{u}^{s}=\text{Softmax}\left(q_{i}^{o}U\right)\quad,\quad o_{i}^{s}=o_{i}^{u}W _{u}^{sT}. \tag{3}\] Here, \(U\in\mathbb{R}^{C\times L}\) is a projection layer, \(W_{u}^{s}\) is the transfer matrix and \(o_{i}^{s}\) are the supervised output logits. Softmax is then applied to these logits to obtain the segmentation probability map \(p_{i}^{s}\in\mathbb{R}^{H\times W\times L}\). Figure 1: The feature extractor, prediction head, transformer module and other learnable parameters are pre-trained using a combination of supervised and unsupervised losses. During test time, the unsupervised loss is used to conduct adaptation. Once the adaptation is done, we use the output from the prediction head and multiply it with the transfer matrix to produce the supervised output to for inference. For end-to-end training of the whole model, we can use cross entropy loss \(\mathcal{L}_{XEnt}(p_{i}^{s},y_{i})\) between the predicted probability maps \(p_{i}^{s}\) and ground truth segmentation labels \(y_{i}\). For training the model using unsupervised logits \(o_{i}^{u}\), we can use an unsupervised loss \(\mathcal{L}_{USup}(o_{i}^{u})\). This unsupervised loss can possibly be one of the losses used for test-time adaptation such as min-entropy [22], max-squares [29] or our proposed transformation consistency loss. We can thus train the whole network using the total loss \[\mathcal{L}_{Tot}(x_{i},y_{i})=\mathcal{L}_{XEnt}(p_{i}^{s},y_{i})+\lambda \mathcal{L}_{USup}(o_{i}^{u}) \tag{4}\] By minimizing this training loss with the source domain data, we can learn the mapping between unsupervised and supervised segmentation predictions. During online test-time adaptation over a sample \(x\), the transformer module is kept frozen and we use the output \(o^{u}\) of the unsupervised head for obtaining the unsupervised loss \(\mathcal{L}_{TTA}(o^{u})\) to be used for updating the model parameters. After adaptation is complete, we use the supervised head outputs \(o^{s}\) for evaluation purposes. In the next sub-section, we explain our proposed transformation consistency loss as an unsupervised loss for test-time adaptation. ### Adaptation with Transformation Consistency To resolve erroneous updates due to noisy pseudo-labels of single images, we apply transformation consistency as a loss for online test-time adaptation using invariance and equivariance property of different transformation types. We use two transformation types - photometric (grayscale, jitter, blur) and geometric (cropping, rotations, shuffling), for invariance and equivariance respectively. Specifically, let's consider a test image \(x\) and a sampled photometric transformation \(\mathcal{A}_{p}(\cdot)\). When this transformation is applied on a test image, it will produce a transformed image \(\tilde{x}=\mathcal{A}_{p}(x)\). For both the original input image \(x\) and its transformation \(\tilde{x}\), we produce unsupervised output logits \(o^{u}=H_{p}(F(x))\) and \(\tilde{o}^{u}=H_{p}(F(\tilde{x}))\) respectively. 
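As a concrete illustration of the mapping in Eqs. (1)-(3), a minimal PyTorch sketch is given below. It is not the released implementation: the module name `TransferMatrixHead`, the argument names, and the use of PyTorch's built-in `TransformerDecoderLayer` (standing in for the decoder spelled out in Eqs. (1)-(2)) are assumptions made for illustration, and a channel-first logit layout is assumed.

```python
# Illustrative sketch of the unsupervised-to-supervised mapping (Eqs. (1)-(3)).
# Names (TransferMatrixHead, embed_dim, ...) are assumptions, not the authors' code.
import torch
import torch.nn as nn

class TransferMatrixHead(nn.Module):
    """Maps unsupervised logits o_u to supervised logits o_s through an
    image-conditioned L x L transfer matrix produced by a transformer decoder."""
    def __init__(self, num_classes: int, embed_dim: int, num_heads: int = 4):
        super().__init__()
        # Learnable class queries q^s: one C-dimensional vector per class.
        self.queries = nn.Parameter(torch.randn(num_classes, embed_dim))
        # 1-layer decoder: queries attend to flattened feature tokens (no pos. enc.).
        layer = nn.TransformerDecoderLayer(d_model=embed_dim, nhead=num_heads,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=1)
        # Projection U in R^{C x L}.
        self.proj = nn.Linear(embed_dim, num_classes, bias=False)

    def forward(self, feat, o_u):
        # feat: (B, C, H', W') backbone feature map; o_u: (B, L, H, W) unsup. logits.
        B, C, Hp, Wp = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)          # (B, H'*W', C) tokens t_i
        q = self.queries.unsqueeze(0).expand(B, -1, -1)   # (B, L, C) queries q^s
        q_o = self.decoder(tgt=q, memory=tokens)          # (B, L, C) decoder output
        W_us = torch.softmax(self.proj(q_o), dim=-1)      # (B, L, L) transfer matrix
        # o^s = o^u W_us^T, applied per pixel: each supervised class score is a
        # convex combination of unsupervised class scores.
        o_s = torch.einsum('blhw,bkl->bkhw', o_u, W_us)
        return o_s, W_us
```

Consistent with the description above, this module would stay frozen during test-time adaptation, with gradients flowing only into the backbone and the unsupervised head.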
To minimize the difference between \(o^{u}\) and \(\tilde{o}^{u}\), we can use a discrepancy loss term \(\mathcal{L}_{p}(o^{u},\tilde{o}^{u})\). Possible choices of discrepancy are the \(L1\) or \(L2\) distances. Next, let's consider a sampled geometric transformation \(\mathcal{A}_{g}(\cdot)\). When the geometric transformation is applied on the test image, it produces a transformed image \(\hat{x}=\mathcal{A}_{g}(x)\). For both the original input image \(x\) and its transformation \(\hat{x}\), we produce unsupervised output logits \(o^{u}=H_{p}(F(x))\) and \(\hat{o}^{u}=H_{p}(F(\hat{x}))\) respectively. To enforce equivariance, we minimize the difference between \(\hat{o}^{u}\) and the transformed logits \(\mathcal{A}_{g}(o^{u})\) by using a discrepancy loss term \(\mathcal{L}_{g}(\mathcal{A}_{g}(o^{u}),\hat{o}^{u})\). The discrepancy is the same as the one used for the photometric transformation consistency loss. For adapting the model on the test sample \(x\), we use both photometric and geometric transformation consistency losses as follows \[\mathcal{L}_{TTA}(x)=\mathcal{L}_{p}(o^{u},\tilde{o}^{u})+\mathcal{L}_{g}( \mathcal{A}_{g}(o^{u}),\hat{o}^{u}) \tag{5}\] Once the model is adapted using \(\mathcal{L}_{TTA}(x)\), we infer the output predictions with the supervised head using Eq. 3. We reiterate that during test-time adaptation, the back-propagated gradients through the unsupervised head do not affect the transformer module, and hence it remains frozen throughout. When the transformer module is not used, the model has a single head: \(o^{u}\) and subsequently \(\mathcal{L}_{TTA}(x)\) are computed through that single head for adaptation, and inference is carried out through the same head. We summarize our pre-training and adaptation steps in Algorithm 1. ``` Given: Source dataset \(\mathcal{X}^{src}=\{(x_{i}^{src},y_{i}^{src})\}_{i=1}^{N_{src}}\) & Target dataset sequence \(\mathcal{X}^{tgt}=\{(x_{i}^{tgt})\}_{i=1}^{N_{tgt}}\) Step 1: Pre-train model on \(\mathcal{X}^{src}\) For each sample \((x_{i}^{src},y_{i}^{src})\) from sampled batch of \(\mathcal{X}^{src}\) Gradient update of Eq. 4 w.r.t. \(F\), \(H_{p}\), \(T_{f}\), \(U\), \(q^{s}\), \(W^{q,v,k}\) Step 2: Adaptation and Evaluation on \(\mathcal{X}^{tgt}\) For each sample \(x_{i}^{tgt}\) from \(\mathcal{X}^{tgt}\) Gradient update of Eq. 5 w.r.t. \(F\) and \(H_{p}\) Predict segmentation map of \(x_{i}^{tgt}\) using Eq. 3 ``` **Algorithm 1** TransAdapt framework ## 3 Experimental Results ### Experiment Details We evaluate our framework using three cross-dataset settings as in [29]: GTA5 (Synthetic) [1]\(\rightarrow\) Cityscapes (Real) [30], SYNTHIA (Synthetic) [2]\(\rightarrow\) Cityscapes, and Cityscapes \(\rightarrow\) Cross-City (Real) [5], using online adaptation on the test set instead of offline adaptation. For evaluation, we use mean Intersection-over-Union (mIoU). For the segmentation model, we use DeepLab-V2 ResNet-101 [29] trained on each of the source datasets. For our proposed transformer module, we use a 1-layer decoder without positional encoding, and the output of block 3 of ResNet is used as the input of the transformer module. This transformer module is trained together with the segmentation network using the loss defined in Eq. 4. Unless explicitly mentioned, we set \(\lambda=0.1\) and use max squares [29] as the unsupervised loss. For the transformation consistency loss in Eq. 5, the \(L2\) distance is used as the default metric for both \(\mathcal{L}_{p}(\cdot)\) and \(\mathcal{L}_{g}(\cdot)\).
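A hedged sketch of the transformation consistency loss of Eq. (5) and the per-sample update of Algorithm 1 (Step 2) is given below. It assumes a PyTorch model exposing `backbone` (\(F\)) and `unsup_head` (\(H_{p}\)) attributes; grayscale and a 90-degree rotation stand in for the full sets of photometric and geometric transformations listed above, and the \(L2\) discrepancy is implemented as a mean-squared error.

```python
# Illustrative sketch of Eq. (5) and Algorithm 1, Step 2; attribute names
# (model.backbone, model.unsup_head) are assumptions, not the released API.
import torch
import torch.nn.functional as F_nn

def photometric(x):
    # Grayscale: average the RGB channels and replicate back to 3 channels.
    return x.mean(dim=1, keepdim=True).repeat(1, 3, 1, 1)

def geometric(t):
    # 90-degree rotation, applied identically to images and to logit maps.
    return torch.rot90(t, k=1, dims=(-2, -1))

def tta_loss(unsup_logits_fn, x):
    """Eq. (5): photometric invariance + geometric equivariance, L2 discrepancy."""
    o_u = unsup_logits_fn(x)                       # H_p(F(x))
    o_photo = unsup_logits_fn(photometric(x))      # H_p(F(A_p(x)))
    o_geo = unsup_logits_fn(geometric(x))          # H_p(F(A_g(x)))
    loss_p = F_nn.mse_loss(o_photo, o_u)           # invariance term L_p
    loss_g = F_nn.mse_loss(o_geo, geometric(o_u))  # equivariance term L_g
    return loss_p + loss_g

def adapt_one_sample(model, x, optimizer):
    """Algorithm 1, Step 2: one gradient update per incoming test image."""
    model.train()
    loss = tta_loss(lambda img: model.unsup_head(model.backbone(img)), x)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In line with the adaptation setting described here, the optimizer passed to `adapt_one_sample` would be built over the batch-norm parameters only.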
For the adaptation, we use SGD with a learning rate of \(1e-4\) and update only the batch-norm parameters, for a single iteration per sample, as more updates cause performance degradation. ### Comparison Studies We compare against some popular domain adaptive methods [22, 29, 31, 32] and unsupervised segmentation methods [33, 34]. We also compare against recently proposed online adaptive methods: AuxAdapt [35], Batch Norm update [36], and Style Transfer variants [37], of which the latter two have been used in the OASIS [38] benchmark. We further evaluate against two proposed baselines: (a) Selective Cross Entropy: we apply the cross-entropy loss only on those pixels whose confidence score is greater than 0.8. (b) Special Cross Entropy: we use the original test image and its photometrically transformed version and apply a weighted cross-entropy loss, where the weight depends on the agreement of the predictions from both images. Table 1 shows that for all cross-dataset setups, our proposed transformation consistency method mostly achieves better performance compared to other methods. Interestingly, using the transformer block, we observe mIoU improvement even without adaptation. When we apply our consistency method to the model with the transformer block, there is further mIoU improvement. Figure 2 visualizes the predicted segmentation masks. Our proposed transformer module improves performance in the no-adaptation case, and our proposed transformation consistency method is better than other adaptation techniques. In Fig. 2, we also highlight that the unsupervised segmentation map produces more errors compared to the supervised segmentation map. Furthermore, the presence of non-zero values on the off-diagonal elements of the transfer matrix \(W_{u}^{s}\) suggests that each supervised prediction is obtained through a combination of different unsupervised predicted categories. ### Ablation Studies and Analyses In Fig. 3 (a), we report results when alternative choices of update and inference heads are used in test-time adaptation. Here, XY denotes that X is the update pass and Y is the inference pass. For example, the default setting US means that the update is done using the unsupervised (U) head and the supervised (S) head is used for predicting results. Results show that this default setting is optimal and surpasses the other configurations. Fig. 3 (b) shows that the proposed method achieves an overall performance improvement. As observed from the blue curve (No Adaptation), mIoU increases slightly until 80 samples and then decreases. This implies that performance with no adaptation varies across a sequence of samples. However, the curves of our proposed methods exhibit better performance than the No Adaptation curve for all sequences. We observe significant gains using both the transformation consistency loss and the transformer block.
2301.12730
General Covariance Data Augmentation for Neural PDE Solvers
The growing body of research shows how to replace classical partial differential equation (PDE) integrators with neural networks. The popular strategy is to generate the input-output pairs with a PDE solver, train the neural network in the regression setting, and use the trained model as a cheap surrogate for the solver. The bottleneck in this scheme is the number of expensive queries of a PDE solver needed to generate the dataset. To alleviate the problem, we propose a computationally cheap augmentation strategy based on general covariance and simple random coordinate transformations. Our approach relies on the fact that physical laws are independent of the coordinate choice, so a change of the coordinate system preserves the type of a parametric PDE and only changes the PDE's data (e.g., initial conditions, diffusion coefficient). For the neural networks and partial differential equations we tried, the proposed augmentation improves test error by 23% on average. The worst observed result is a 17% increase in test error for a multilayer perceptron, and the best case is an 80% decrease for a dilated residual network.
Vladimir Fanaskov, Tianchi Yu, Alexander Rudikov, Ivan Oseledets
2023-01-30T08:55:29Z
http://arxiv.org/abs/2301.12730v2
# General Covariance Data Augmentation for Neural PDE Solvers ###### Abstract The growing body of research shows how to replace classical partial differential equation (PDE) integrators with neural networks. The popular strategy is to generate the input-output pairs with a PDE solver, train the neural network in the regression setting, and use the trained model as a cheap surrogate for the solver. The bottleneck in this scheme is the number of expensive queries of a PDE solver needed to generate the dataset. To alleviate the problem, we propose a computationally cheap augmentation strategy based on general covariance and simple random coordinate transformations. Our approach relies on the fact that physical laws are independent of the coordinate choice, so the change in the coordinate system preserves the type of a parametric PDE and only changes PDE's data (e.g., initial conditions, diffusion coefficient). For tried neural networks and partial differential equations, proposed augmentation improves test error by 23% on average. The worst observed result is a 17% increase in test error for multilayer perceptron, and the best case is a 80% decrease for dilated residual network. Machine Learning, 1. [noitemsep,topsep=0pt] 2. Easily extendable, architecture-agnostic augmentation procedure based on general covariance. 3. Cheap algebraic random grids in \(R^{n}\) based on cumulative distribution function and transfinite interpolation. 4. Comprehensive set of experiments showing that augmentation helps to improve test error for different architectures and parametric families of PDEs. Code and datasets are available on [https://github.com/VLSF/augmentation](https://github.com/VLSF/augmentation), [https://disk.yandex.ru/d/ArC6jT3TZcKncw](https://disk.yandex.ru/d/ArC6jT3TZcKncw). ## 2 Basic augmentation example Before dwelling upon technical details, we provide a simple example of our approach for a parametric boundary-value problem \[\begin{split}&\frac{d}{dx}\left(a(x)\frac{d}{dx}u(x)\right)=f(x), \\ & x\in[0,1],\,u(0)=u(1)=0.\end{split} \tag{1}\] Suppose that \(a\) and \(f\) are chosen reasonably, so the unique solution exists.1 The usual way to approximate this solution is to use finite-element discretization Footnote 1: The formal statement on existence is available in (Evans, 2010), but it is largely irrelevant to our discussion. \[u(x)=\sum_{i=1}^{N}\phi_{i}(x)u_{i}, \tag{2}\] where \(\phi_{i}(x)\) are piecewise linear functions such that \(\phi_{i}(x_{j})=\delta_{ij}\) for \(x_{j}=j/(N+1),\,j=1,\ldots,N\), i.e., the hat functions. After that, the differential equation (1) in a weak form is equivalent to \(N\times N\) linear system, and the solution is straightforward. The obtained solution is known not only on the uniform grid \(\mathcal{G}=\{j/(N+1),\,j=0,\ldots,N+1\}\) but everywhere in the domain thanks to the closed-form representation (2). Using described procedure one can generate a dataset with features \(F_{i}=(a_{i}(\mathcal{G}),f_{i}(\mathcal{G}))\) and targets \(T_{i}=(u_{i}(\mathcal{G}))\), \(i=1,\ldots,N_{\text{samples}}\). As a rule, features, \(a_{i}\) and \(f_{i}\) in our case, are samples from some distribution (Kovachki et al., 2021) or typical inputs needed for a particular application, e.g., (Pathak et al., 2022). Our main observation is that when the PDE is known, it is possible to extract more information from each obtained solution using coordinate transformation. 
Suppose \(y(\xi)\) is analytic strictly monotonic function from \([0,1]\) to \([0,1]\) such that \(y(0)=0\), \(y(1)=1\). We use \(x\equiv y(\xi)\) as coordinate transformation and rewrite (1) in coordinates \(\xi\) as follows \[\begin{split}&\frac{d\xi}{dy}\frac{d}{d\xi}\left(a(y(\xi))\frac{d \xi}{dy}\frac{d}{d\xi}u(y(\xi))\right)=f(y(\xi)),\\ &\xi\in[0,1],\,u(y(0))=a,\,u(y(1))=b.\end{split} \tag{3}\] As we see, transformed equation (3) has the same parametric form as the original one (1). As a consequence, if a triple of functions \(a(x),u(x),f(x)\) solve (1), the triple of modified functions \(a(y(x))\frac{dx}{dy},u(y(x)),f(y(x))\frac{dy}{dx}\) also solve the same equation (1), where we rename variable \(\xi\) in (3) to \(x\). So we can generate novel solutions from the old ones using smooth coordinate transformations and interpolation. To complete a description of the augmentation, we need to explain how to generate smooth coordinate transformations. Since any strictly monotonic positive function that maps \([0,1]\) constitutes a valid coordinate transformation, we propose to use cumulative distribution functions with strictly positive probability density. It is easy to come up with many parametric families of probability densities. For example, we can use trigonometric series and define \[\begin{split}& p(x)=1+\sum_{k=1}^{N}\frac{(c_{k}\cos(2\pi kx)+d_{k} \sin(2\pi kx))}{c_{0}},\\ & c_{0}=\sum_{k=1}^{N}(|c_{k}|+|d_{k}|)+\beta,\,\beta>0.\end{split} \tag{4}\] After integration, we obtain a cumulative distribution function that serves as a coordinate transformation \[y(x)=x+\sum_{k=1}^{N}\frac{(c_{k}\sin(2\pi kx)+d_{k}(1-\cos(2\pi kx)))}{2\pi kc _{0}}. \tag{5}\] The whole augmentation procedure for elliptic equation (1) can be compactly written as \[\begin{split}& a(x),u(x),f(x)\longrightarrow a(y(x))/\frac{dy}{ dx},u(y(x)),f(y(x))\frac{dy}{dx}.\end{split} \tag{6}\] Figure 1 illustrates the proposed approach for elliptic equation (1) and a particular set of transformations (5). To summarize, our augmentation approach consists of three stages: 1. [noitemsep,topsep=0pt] 2. Generate a sufficiently smooth coordinate transformation \(y(\xi)\). 3. Interpolate features and targets on discrete grid \(y(\xi_{j})\), \(\xi_{j}=j/(N+1),\,j=0,\ldots,N+1\). 4. Adjust interpolated features and targets according to the transformations law for PDE evaluated in new coordinates \(y(\xi)\). This procedure can be applied for as many coordinate transformations \(y(\xi)\) as needed and requires only cheap interpolation, so the overall cost is \(O(N)\) for each sample, where \(N\) is the number of grid points. In the Section 3, we show how to generalize results illustrated here for other partial differential equations and higher dimensions. ## 3 Augmentation by General Covariance In Section 2, we explained that two principal components of the proposed augmentation approach are grid generation and transformation law for PDE in question. Here we show how to extend results from Section 2 to a more general setting. Everywhere in this section, Einstein's summation notation is used, e.g., \(a_{\alpha}b^{\alpha}\equiv\sum_{\alpha}a_{\alpha}b^{\alpha}\). ### How to construct coordinate transformations in the general case We define coordinate transformations in \(D\geq 1\) as one-to-one analytic mapping \[\mathbf{x}(\mathbf{\xi}):[0,1]^{D}\longrightarrow[0,1]^{D} \tag{7}\] In Section 2, we outlined a particular scheme to construct families of coordinate transformations in \(D=1\). The general algorithm is as follows 1. 
Select a family of basis functions \(\phi_{j}(\xi)\) defined on \([0,1]\) that are easy to integrate (e.g., the indefinite integral is known). 2. Find suitable shift and scale for a series \(s\left(\sum_{j}\phi_{j}(\xi)c_{j}\right)+c_{0}\) to be a valid probability density function \(p(\xi)\) for all \(c_{j}\). 3. Use cumulative distribution function (indefinite integral of \(p(\xi)\)) as a coordinate transformation. When \(D=1\) mapping is available, it is possible to lift it to \(D>1\) by transfinite interpolation (Gordon & Hall, 1973). For example, for \(D=2\) the transformation becomes \[\begin{split} x^{1}(\xi^{1},\xi^{2})&=y_{1}(\xi^{1 })(1-\xi^{2})+y_{2}(\xi^{1})\xi^{2},\\ x^{2}(\xi^{1},\xi^{2})&=y_{3}(\xi^{2})(1-\xi^{1})+ y_{4}(\xi^{2})\xi^{1},\end{split} \tag{8}\] where \(y_{i}\), \(i=1,\ldots,4\) are \(D=1\) mappings, e.g., given in (5). The extension of (8) to higher dimensions is straightforward. Note that (8) has a "low-rank" structure that decreases the diversity of possible grids. The issue can be alleviated with Hermite transfinite interpolation or with more general blending functions (Liseikin, 2017). Other more computationally involved remedies are variational and elliptic (Laplace-Beltrami) grid generators (Steinberg & Roache, 1986), (Spekreijse, 1995). ### PDEs under coordinate transformations Here we remind how the most widely used differential operators change under the transformation (7). The results we present in this section are standard (Liseikin, 2017), (Eglit et al., 1996), (Simmonds, 1994). For convenience, the proofs are also available in Appendix A. For convenience, we define the Jacobi matrix, and its determinant \[\mathcal{J}_{i\alpha}\equiv\frac{\partial x^{i}}{\partial\xi^{\alpha}},\,J \equiv\det\mathcal{J}. \tag{9}\] Note that for the mapping (5) determinant \(J\) vanishes nowhere in the domain due to the strict monotonicity of the mapping. Figure 1: Example of data augmentation for elliptic equation (1). The first column on the left contains features and target that solves (1). All other columns are obtained from the first one with transformation (6). Coordinate transformations are generated according to (5) with parameters \(N=3\), \(\beta=1\) and coefficients \(c_{k},d_{k}\), \(k=1,2,3\) sampled from standard normal distribution. It is not hard to show that for arbitrary space-dependent fields \(c^{j}\), \(\phi\), \(a^{kj},k,j\in{1,\ldots,D}\), the following transformation laws hold \[\begin{split}& c^{j}\frac{\partial\phi}{\partial x^{j}}=c^{j}\frac{ \partial\xi^{\alpha}}{\partial x^{j}}\frac{\partial\phi}{\partial\xi^{\alpha}},\\ & a^{kj}\frac{\partial^{2}\phi}{\partial x^{j}\partial x^{k}}=a^{kj }\frac{\partial\xi^{\beta}}{\partial x^{j}}\frac{\partial\xi^{\gamma}}{ \partial x^{k}}\frac{\partial^{2}\phi}{\partial\xi^{\beta}\partial\xi^{\gamma} }+a^{kj}\frac{\partial^{2}\xi^{\gamma}}{\partial x^{k}\partial x^{j}}\frac{ \partial\phi}{\partial\xi^{\gamma}},\\ &\frac{\partial}{\partial x^{\alpha}}\left(c^{\alpha}\phi\right) =\frac{1}{J}\frac{\partial}{\partial\xi^{k}}\left(Jc^{\alpha}\frac{\partial \xi^{k}}{\partial x^{\alpha}}\phi\right),\\ &\frac{\partial}{\partial x^{k}}\left(a^{kj}\frac{\partial\phi} {\partial x^{j}}\right)=\frac{1}{J}\frac{\partial}{\partial\xi^{k}}\left(J \left(a^{\alpha j}\frac{\partial\xi^{k}}{\partial x^{\alpha}}\frac{\partial \xi^{\beta}}{\partial x^{j}}\right)\frac{\partial\phi}{\partial\xi^{\beta}} \right).\end{split} \tag{10}\] Results (10) allow deriving transformation laws for many practically-relevant PDEs. 
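For illustration, a minimal NumPy sketch of the \(D=1\) construction (Eqs. (4)-(5)) and of the elliptic augmentation map (Eq. (6)) is shown below; the function names are illustrative, and the \(D>1\) case would lift these one-dimensional maps via transfinite interpolation as in Eq. (8).

```python
# Minimal NumPy sketch (not the authors' released code) of the 1D random
# coordinate transformation (Eqs. (4)-(5)) and the elliptic augmentation (Eq. (6)).
import numpy as np

def random_map_1d(n_modes=3, beta=1.0, rng=np.random.default_rng(0)):
    """Return y(x) and dy/dx built from a random trigonometric density."""
    c = rng.standard_normal(n_modes)
    d = rng.standard_normal(n_modes)
    c0 = np.abs(c).sum() + np.abs(d).sum() + beta
    k = np.arange(1, n_modes + 1)

    def y(x):
        # Cumulative distribution function of p(x), Eq. (5).
        x = np.asarray(x)[..., None]
        series = c * np.sin(2 * np.pi * k * x) + d * (1 - np.cos(2 * np.pi * k * x))
        return x[..., 0] + (series / (2 * np.pi * k * c0)).sum(-1)

    def dy_dx(x):
        # The density p(x) itself, Eq. (4); strictly positive, so y is monotonic.
        x = np.asarray(x)[..., None]
        series = c * np.cos(2 * np.pi * k * x) + d * np.sin(2 * np.pi * k * x)
        return 1.0 + (series / c0).sum(-1)

    return y, dy_dx

def augment_elliptic_1d(a, u, f, grid, y, dy_dx):
    """Eq. (6): map a(x), u(x), f(x) sampled on `grid` to a new training sample."""
    yx, jac = y(grid), dy_dx(grid)
    interp = lambda vals: np.interp(yx, grid, vals)   # evaluate fields at y(x)
    return interp(a) / jac, interp(u), interp(f) * jac
```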
We are interested in the following ones: 1. Stationary diffusion equation \[\begin{split}&\frac{\partial}{\partial x^{k}}\left(a^{kj}(\mathbf{x}) \frac{\partial}{\partial x^{j}}u(\mathbf{x})\right)=f(\mathbf{x})\\ &\mathbf{x}\in\Gamma\equiv[0,1]^{D},\ u(\mathbf{x})\big{|}_{x\in\partial \Gamma}=0,\end{split}\] (11) where \(\partial\Gamma\) is a boundary of the unit hypercube \(\Gamma\), and \(\mathbf{x}\in\mathbb{R}^{D}\). 2. Convection-diffusion equation \[\begin{split}&\frac{\partial}{\partial t}\phi(\mathbf{x},t)+\frac{ \partial}{\partial x^{i}}\left(v^{i}(\mathbf{x})\phi(\mathbf{x},t)\right)=\\ &\frac{\partial}{\partial x^{k}}\left(a^{kj}(\mathbf{x})\frac{ \partial}{\partial x^{j}}\phi(\mathbf{x},t)\right),\\ &\mathbf{x}\in\Gamma\equiv[0,1]^{D},\left.\phi(\mathbf{x},t)\right|_{x\in \partial\Gamma}=0,\,\phi(\mathbf{x},0)=f(\mathbf{x}).\end{split}\] (12) 3. Two-way wave equation \[\begin{split}&\frac{\partial^{2}\rho(\mathbf{x},t)}{\partial t^{2}}+v^ {i}(\mathbf{x})\frac{\partial\rho(\mathbf{x},t)}{\partial x^{i}}=& e^{ij}(\mathbf{x})\frac{\partial^{2}\rho(\mathbf{x},t)}{ \partial x^{i}\partial x^{j}}\\ &+e(\mathbf{x})\rho(\mathbf{x},t),\\ &\mathbf{x}\in\Gamma\equiv[0,1]^{D},\left.\rho(\mathbf{x},t\ )\right|_{x\in\partial\Gamma}=0,\,\rho(\mathbf{x},0)=f(\mathbf{x}).\end{split}\] (13) For all of the equations above, the transformed form easily follows from (10). However, for wave and convection-diffusion equations, additional steps are required to ensure that the transformed equation has the same parametric form as the original one. Table 1 contains the results for selected PDEs. Results summarized in Table 1 along with the coordinate transformations described in Section 3.1 are sufficient to perform general covariance augmentation for equations (11), (12), (13). In Section 4, we show the empirical performance of the proposed augmentation scheme. ## 4 Experiments Here we present an empirical evaluation of augmentation by general covariance. For that purpose, we design several experiments in \(D=1\) and \(D=2\). We start with a description of the shared experiments' setup. ### Setup #### 4.1.1 Neural networks As a rule, neural PDE solvers are either Neural Operators or classical architectures used for image processing.2 Since our approach is architecture-agnostic, we include results for both types of neural networks. Footnote 2: There are also hybrid methods (e.g., (Bar-Sinai et al., 2019)), but we do not consider them here. On the side of neural operators, we include original versions of DeepONet (Lu et al., 2021) and FNO (Li et al., 2020), implemented in [https://github.com/lu-group/deeponet-fno](https://github.com/lu-group/deeponet-fno) and [https://github.com/neural-operator/fourier_neural_operator](https://github.com/neural-operator/fourier_neural_operator), respectively. Besides, for the \(D=1\) case, we also implemented an FNO-like operator dubbed rFNO with FFT replaced by pure real transform based on a complete trigonometric family and trapezoidal rule. Classical machine-learning architectures include DiResNet (Yu and Koltun, 2015), (Stachenfeld et al., 2021), U-Net (Ronneberger et al., 2015), and MLP (Haykin, 1994). A detailed description of neural networks is available in Appendix B. #### 4.1.2 Partial differential equations and datasets We evaluate augmentation on stationary diffusion (11), convection-diffusion (12), and wave (13) equations. 
To produce PDE data, we sampled all needed data from random trigonometric series \[f(x)=\sum_{k=0}^{N-1}\left(c_{k}\cos(2\pi kx)+s_{k}\sin(2\pi kx)\right), \tag{14}\] with \(c_{k}\) sampled from the standard normal distribution and scaled/shifted appropriately to ensure needed boundary conditions or make \(f(x)\) uniformly positive. For \(D=2\), the procedure is the same, but a direct product of one-dimensional bases is used. Afterward, equations with randomly generated data are discretized either with finite-difference or finite-element methods. For \(D=1\), we generated one dataset per equation and an additional dataset for the wave equation. Results for the extra wave dataset are available in Appendix C. For \(D=2\), we produced two datasets per equation that differ by complexity (more rough targets or more diverse feature-target pairs). Also, for the purposes explained later, we use two distinct elliptic datasets in \(D=2\). In the first one, named "Elliptic alpha", diffusion coefficients \(a^{i\beta}\) form a symmetric positive definite matrix for each point of the domain. In the second one, named "Elliptic beta", the matrix \(a^{i\beta}\) is the identity matrix multiplied by a single uniformly positive diffusion coefficient. More details on the dataset generation process are available in Appendix C. The datasets themselves can be downloaded from [https://disk.yandex.ru/d/ArC6jT3T2cKncw](https://disk.yandex.ru/d/ArC6jT3T2cKncw). #### 4.1.3 Coordinate transformations To generate coordinate transformation for \(D=1\), we use a cumulative distribution function constructed from unnormalize probability density \[p(x)=\beta+\sum_{i=1}^{N}\left(c_{i}\cos(2n\pi x+p_{i})\right), \tag{15}\] where \(c_{i}\) and \(p_{i}\) are samples from the standard normal distribution, and \(N=5\), \(\beta=1\). For \(D=2\) we use four \(D=1\) coordinate transformations constructed with (5), where coefficients \(i=1,\ldots,5\) are from standards normal distribution, \(c_{0}=1\), \(N=6\), and \(\beta=10^{-5}\). To obtain \(D=2\) mappings from the four unidimensional, we apply transformation using transfinite interpolation (8). #### 4.1.4 Metrics As a main measure of performance, we use average relative \(L_{2}\) test error \[E_{\text{test}}=\frac{1}{N}\sum_{i=1}^{N}\frac{\left\|\mathcal{N}(f_{i})-t_{i} \right\|_{2}}{\left\|t_{i}\right\|_{2}}, \tag{16}\] where \(\mathcal{N}\) is a neural network, \(f_{i}\) and \(t_{i}\) are features and targets from the test set. To evaluate the impact of augmentation, we consider the relative gain \[g=\left(1-\frac{E_{\text{test}}^{\text{aug}}}{E_{\text{test}}}\right)\times 1 00\%, \tag{17}\] where \(E_{\text{test}}^{\text{aug}}\), and \(E_{\text{test}}>0\) are relative errors of neural network trained with and without augmentation respectively. Since \(E_{\text{test}}^{\text{aug}}=(1-g)E_{\text{test}}\), the larger \(g\) the better. ### Sensitivity to grid distortions Before training of augmented dataset, it is instructive to evaluate the network trained without augmentation on the augmented train set. This way, we can estimate the degree of equivariance current neural networks have. 
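A short sketch of the evaluation metrics in Eqs. (16)-(17), assuming a callable `predict` that wraps the trained network, is as follows.

```python
# Illustrative sketch of Eqs. (16)-(17); `predict` is an assumed wrapper.
import numpy as np

def relative_l2_error(predict, features, targets):
    # Eq. (16): average relative L2 test error over the test set.
    errs = [np.linalg.norm(predict(f) - t) / np.linalg.norm(t)
            for f, t in zip(features, targets)]
    return float(np.mean(errs))

def relative_gain(err_plain, err_aug):
    # Eq. (17): g = (1 - E_aug / E) * 100%; positive gain means augmentation helped.
    return (1.0 - err_aug / err_plain) * 100.0
```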
More specifically, for this experiment, we take a dataset for stationary-diffusion equation (11) (Elliptic alpha), and \begin{table} \begin{tabular}{l l c} \hline \hline Equation & Fields & Transformed fields \\ \hline \multirow{4}{*}{Stationary diffusion (11)} & \(u(\mathbf{x})\) & \(u(\mathbf{x}(\mathbf{\xi}))\) \\ & \(a^{k\beta}(\mathbf{x})\) & \(Ja^{\alpha j}(\mathbf{x}(\mathbf{\xi}))\frac{\partial\xi^{k}}{\partial x^{\alpha}} \frac{\partial\xi^{\beta}}{\partial x^{\beta}}\) \\ & \(f(\mathbf{x})\) & \(Jf(\mathbf{x}(\mathbf{\xi}))\) \\ \cline{2-3} & \(\phi(\mathbf{x},t)\) & \(J\phi(\mathbf{x}(\mathbf{\xi}),t)\) \\ \multirow{2}{*}{Convection-diffusion (12)} & \(a^{k\beta}(\mathbf{x})\) & \(a^{\alpha j}(\mathbf{x}(\mathbf{\xi}))\frac{\partial\xi^{k}}{\partial x^{\alpha}} \frac{\partial\xi^{\beta}}{\partial x^{\beta}}\) \\ & \(v^{i}(\mathbf{x})\) & \(v^{k}(\mathbf{x}(\mathbf{\xi}))\frac{\partial\xi^{i}}{\partial x^{k}}+a^{\alpha j}(\bm {x}(\mathbf{\xi}))\frac{\partial\xi^{k}}{\partial x^{\alpha}}\frac{\partial\xi^{ \beta}}{\partial x^{\beta}}\frac{\partial\xi^{\gamma}}{\partial x^{\beta}} \frac{\partial^{2}x^{\rho}}{\partial\xi^{\gamma}\partial\xi^{\delta}}\) \\ & \(f(\mathbf{x})\) & \(Jf(\mathbf{x}(\mathbf{\xi}))\) \\ \cline{2-3} & \(\rho(\mathbf{x},t)\) & \(\rho(\mathbf{x}(\mathbf{\xi}),t)\) \\ \multirow{2}{*}{Wave (13)} & \(c^{\gamma\beta}(\mathbf{x})\) & \(c^{kj}(\mathbf{x}(\mathbf{\xi}))\frac{\partial\xi^{\gamma}}{\partial x^{\delta}} \frac{\partial\xi^{\beta}}{\partial x^{\beta}}\) \\ & \(v^{\alpha}(\mathbf{x})\) & \(v^{i}(\mathbf{x}(\mathbf{\xi}))\frac{\partial\xi^{\alpha}}{\partial x^{i}}-c^{kj}(\bm {x}(\mathbf{\xi}))\frac{\partial^{2}\xi^{\alpha}}{\partial x^{\pm}\partial x^{ \beta}}\) \\ & \(f(\mathbf{x})\) & \(f(\mathbf{x}(\mathbf{\xi}))\) \\ \multirow{2}{*}{\(e(\mathbf{x})\)} & \(e(\mathbf{x}(\mathbf{\xi}))\) & \(e(\mathbf{x}(\mathbf{\xi}))\) \\ \hline \hline \end{tabular} \end{table} Table 1: Transformation of PDEs parameters under the change of coordinates \(\mathbf{x}\to\mathbf{x}(\mathbf{\xi})\). train DilResNet and FNO. After that, we generate a set of augmented datasets using increasingly distorted grids and evaluate neural networks on them. Results are reported in Figure 2. As we can see, if distortion is comparable with \(10^{-2}\), which is a spacing of the original grid, the neural network can handle the modified dataset quite well. However, the further distortion increase leads to substantial performance deterioration. So for distortion of about \(10\) original grid spacing, the trained network is unusable. Networks trained with augmentation retain good relative error even on distorted grids. Interestingly, FNO can handle distortion slightly better -- four-fold increase against seven-fold for DilResNet. Qualitatively, FNO starts with \(4\%\) and ends with \(18\%\), whereas DilResNet starts with \(10\%\) and ends with \(70\%\) We stress that for equations other than elliptic the equivariance is not observed (see results in Table 6). Besides, for the elliptic equation, equivariance is confirmed only for the specific distortions of the grid. It is not obvious that the same result holds for other grid transformations, so we do not claim that we achieved general covariance. ### Statistical study for \(D=1\) problems Given that neural networks fail to produce correct predictions for augmented dataset, the next natural step is to introduce augmented samples on the training stage. Here we describe relevant experiments for \(D=1\) datasets. 
In this section we use \(m\) to denote augmentation factor. If original train set consists of \(N_{\text{train}}\) points, after augmentation with factor \(m\) the modified train set has \((1+m)N_{\text{train}}\) points. For each equation we consider the following parameters: \(N_{\text{train}}=500,1000,1500,2000\); \(N_{\text{test}}=1000\); augmentation factor \(m=1,2,3,4\). For each set of parameters we perform five runs with different seeds controlling network initialization and random grids generated for the augmentation. In Appendix D one can find relative test errors averaged with respect to these five runs. Here we present only aggregated results. First, Table 3 contain results averaged with respect to sizes of train set and augmentation factors. We can see that gain is positive on average for all networks and equations. Note, however, that we observe negative gain, i.e., the augmentation fail to improve test error. This occurres mainly for weak models such as DeepONet and MLP that are then fail to achieve reasonable test error. Second, Table 2 contain gains averaged over equations and networks. Generally, we observe that augmentation is more helpful for larger datasets. Our best current explanation is that our augmentation procedure is poorly-calibrated, i.e., the augmented data is completely out of distribution. If this is the case, the increase of the train set may improve the overlap between the train data distribution and augmented data distribution. ### Augmentation for \(D=2\) problems We report results for a single run for two-dimensional problems in Table 4. One can see that, augmentation reduces relative test error for all cases when the network can generalize. It also helps \begin{table} \begin{tabular}{c c c c c} \hline \hline \(m\backslash N_{\text{train}}\) & \(500\) & \(1000\) & \(1500\) & \(2000\) \\ \hline \(1\) & \(11\%\) & \(18\%\) & \(21\%\) & \(21\%\) \\ \(2\) & \(16\%\) & \(25\%\) & \(27\%\) & \(31\%\) \\ \(3\) & \(15\%\) & \(23\%\) & \(28\%\) & \(28\%\) \\ \(4\) & \(19\%\) & \(25\%\) & \(28\%\) & \(30\%\) \\ \hline \hline \end{tabular} \end{table} Table 2: Relative gain for averaging with respect to equations and networks. Augmentation factor \(m\) means that additional \(mN_{\text{train}}\) augmented samples are appended to the training dataset. \begin{table} \begin{tabular}{c c c c c c} \hline \hline & DeepONet & FNO & DilResNet & rFNO & MLP \\ \cline{2-6} elliptic & \(16\%\) & \(22\%\) & \(28\%\) & \(19\%\) & \(8\%\) \\ conv-diff & \(32\%\) & \(36\%\) & \(22\%\) & \(24\%\) & \(28\%\) \\ wave & \(14\%\) & \(36\%\) & \(22\%\) & \(18\%\) & \(16\%\) \\ \hline \hline \end{tabular} \end{table} Table 3: Relative gain for averaging wrt different training scenarios. Figure 2: Sensitivity to grid distortion for DilResNet and FNO with and without augmentation. The distortion here refers to the maximal difference between the unperturbed \(\mathbf{x}\) and perturbed \(\mathbf{x}(\mathbf{\xi})\) grids averaged over \(1000\) grids used to augment dataset. DeepONet to reach test errors smaller than one for the Elliptic alpha and Elliptic beta datasets. The most intriguing part are the results for the stationary diffusion equation. As we explained before, in \(D=2\) we consider two datasets for the elliptic equation: Elliptic alpha and Elliptic beta. Elliptic alpha has a complete set of distinct diffusion coefficients that form a symmetric positive definite matrix. 
Contrary to that, Elliptic beta has a single positive diffusion coefficient, so the diffusion matrix \(a^{ij}\) in (11) is proportional to the diagonal matrix. On the other hand, after the augmentation, i.e., for the transformed equation, the Elliptic beta train set has nonzero off-diagonal contributions to the diffusion matrix (see Table 1). At the same time, we still have a diffusion matrix proportional to the diagonal for the test set. Despite this discrepancy, augmentation still improves the test error. This result suggests that it can be beneficial to embed a given family of equations into a larger parametric family and perform augmentation for that extended set. ## 5 Related research We know of two articles directly related to the augmentation techniques for neural PDE solvers (Brandstetter et al., 2022), (Li et al., 2022). The article (Li et al., 2022) is secondary since it is only a minor extension of (Brandstetter et al., 2022). We do not discuss it further. In (Brandstetter et al., 2022), authors consider Lie point symmetries. Namely, to perform augmentation, they use smooth transformations that preserve a solution set of PDE (map a given solution to the other one) and form a group with a structure of the continuous manifold (Lie group). As the authors explain, Lie point symmetries, in a certain sense, provide an exhaustive set of possible transformations suitable for augmentation. Given that, it is appropriate to highlight what distinguishes our research from (Brandstetter et al., 2022). In (Brandstetter et al., 2022) symmetries of a _fixed PDE_ are used. In place of that, we consider mappings that leave us within _a particular family of PDEs_. Such transformations are more abundant, easier to find, and more suitable for physical systems with spatiotemporal dependencies, e.g., Maxwell equations in macroscopic media \begin{table} \begin{tabular}{l l c c c c c c} \hline \hline & & \multicolumn{4}{c}{simple datasets} & \multicolumn{4}{c}{complex datasets} \\ \cline{3-8} Equation & Model & \(\times\) & \(\surd\) & \(g\) & \(\times\) & \(\surd\) & \(g\) \\ \hline \multirow{4}{*}{Convection-diffusion} & FNO & \(0.067\) & \(0.048\) & \(28\%\) & \(0.510\) & \(0.418\) & \(18\%\) \\ & DeepONet & \(0.675\) & \(0.567\) & \(16\%\) & — & — & \\ & DilResNet & \(0.023\) & \(0.010\) & \(56\%\) & \(0.312\) & \(0.225\) & \(28\%\) \\ & MLP & \(0.094\) & \(0.050\) & \(49\%\) & \(0.566\) & \(0.496\) & \(12\%\) \\ & U-Net & \(0.069\) & \(0.031\) & \(55\%\) & \(0.419\) & \(0.364\) & \(13\%\) \\ \cline{2-8} & FNO & \(0.066\) & \(0.036\) & \(46\%\) & \(0.306\) & \(0.207\) & \(32\%\) \\ & DeepONet & — & \(0.826\) & & — & — & \\ & DilResNet & \(0.105\) & \(0.021\) & \(80\%\) & \(0.160\) & \(0.133\) & \(17\%\) \\ & MLP & \(0.088\) & \(0.053\) & \(40\%\) & \(0.322\) & \(0.253\) & \(21\%\) \\ & U-Net & \(0.093\) & \(0.070\) & \(25\%\) & \(0.386\) & \(0.194\) & \(50\%\) \\ \cline{2-8} & FNO & \(0.034\) & \(0.021\) & \(38\%\) & \(0.181\) & \(0.126\) & \(30\%\) \\ & DeepONet & — & \(0.832\) & & — & \(0.946\) & \\ & DilResNet & \(0.099\) & \(0.022\) & \(78\%\) & \(0.089\) & \(0.062\) & \(30\%\) \\ & MLP & \(0.069\) & \(0.035\) & \(50\%\) & \(0.238\) & \(0.138\) & \(42\%\) \\ & U-Net & \(0.070\) & \(0.067\) & \(4\%\) & \(0.170\) & \(0.143\) & \(16\%\) \\ \cline{2-8} & FNO & \(0.200\) & \(0.159\) & \(21\%\) & — & — & \\ & DeepONet & — & — & — & — & — & \\ & DilResNet & \(0.053\) & \(0.048\) & \(9\%\) & — & — & \\ & MLP & \(0.313\) & \(0.295\) & \(6\%\) & — & — & \\ & U-Net & — & — & — & — & — & \\ \hline \hline 
\end{tabular} \end{table} Table 4: Relative test errors and gain for \(D=2\) datasets. Symbols \(\surd\) and \(\times\) mark results with and without augmentation respectively. We put — when network fails to reach test error below \(1.0\). (Jackson, 1975), wave propagation in non-homogeneous media (Brekhovskikh, 1980), and fluid flow through porous media (Alt & Di Benedetto, 1985), e.t.c. In particular, the local distortion of coordinates is a safe choice for a large set of PDEs because of the general covariance principle (Post, 1997). Among other contributions indirectly related to our approach, we can mention articles on deep learning for PDE that deal with complex geometries (Gao et al., 2021), (Li et al., 2022). The end of this line of research is to generalize physics-informed neural networks and neural operators from rectangular to more general domains. These works also contain particular equations in a transformed form, but for an entirely different reason. Similar coordinate transformations are abundant in classical scientific computing (Liseikin, 2017), (Knupp & Steinberg, 1994). On the one hand, the grid defines the geometry of the domain. On the other hand, the grid is refined (h-refinement (Li & Bettess, 1997), (Baker, 1997)) or transported (r-refinement (Baker, 1997)) to improve accuracy (e.g., equidistribution principle (Chen, 1994)). One can implement both approaches using partial differential equations (elliptic and hyperbolic equations) or analytic mappings (algebraic methods (Smith, 1982), (Gordon & Hall, 1973)). In the present research, we gravitate toward the latter because it is computationally cheap, and derivatives are readily available. From the broader perspective, the entire subfield of geometric machine learning (Bronstein et al., 2021) deals with related issues. Typically, the quest is to design a neural network for which the invariance or equivariance holds for a chosen set of transformations. The most relevant works of this sort are (Cheng et al., 2019), (Weiler et al., 2021), (Wang et al., 2020). In the first two articles, the authors develop gauge-invariant convolutions (general covariance), but PDEs are not in question, and the generalization to neural operators is not yet available. In the third article, the authors design neural networks that respect selected symmetries of the Navier-Stokes equation. ## 6 Discussion and further research We demonstrated how to construct augmentation based on general covariance. The proposed augmentation systematically improves test error for all considered architectures. Besides that, it is architecture-agnostic and generalizes well on other equations. We can see the following directions for improvement 1. More complex structured grids It is straightforward to extend our approach to situations where there is an intermediate mapping from the physical domain to the computational domain as considered in (Gao et al., 2021), (Li et al., 2022). 2. Adaptive augmentation In the present research, we generate random grids to perform augmentation. It should be more advantageous to actively select grids based on the neural network performance. 3. Unstructured grids Our approach, as described here, is, in principle, applicable to the unstructured grids. The main problem is that it is not obvious how to construct a mapping and generate a deformed grid such that it is still acceptable from the computational perspective. 4. 
Covariant neural operators It would be interesting to adapt or generalize results from (Weiler et al., 2021), (Wang et al., 2020) to construct covariant neural operators. Less ambitious, it is possible to improve the training protocols for neural networks using the Jacobi matrix and determinant as input features. This way, one escapes the need to introduce higher derivatives in transformed equations Table 1. 5. Transformations between parametric families A very interesting feature that we observe is that augmentation still helps even when it is performed for a larger parametric family of equations than needed. One possible extension is to dispense with coordinate covariance and consider more general transformations that map one family of PDEs to another family. 6. Time-dependent coordinate transformations We only consider the spatial transformations. It is possible to use time-dependent transformations. For example, one can construct a grid, that is deformed at \(t=0\) and approaches its non-deformed state when \(t\) increases.
2305.11916
F-PABEE: Flexible-patience-based Early Exiting for Single-label and Multi-label text Classification Tasks
Computational complexity and overthinking problems have become the bottlenecks for pre-training language models (PLMs) with millions or even trillions of parameters. A Flexible-Patience-Based Early Exiting method (F-PABEE) has been proposed to alleviate the problems mentioned above for single-label classification (SLC) and multi-label classification (MLC) tasks. F-PABEE makes predictions at the classifier and will exit early if predicted distributions of cross-layer are consecutively similar. It is more flexible than the previous state-of-the-art (SOTA) early exiting method PABEE because it can simultaneously adjust the similarity score thresholds and the patience parameters. Extensive experiments show that: (1) F-PABEE makes a better speedup-accuracy balance than existing early exiting strategies on both SLC and MLC tasks. (2) F-PABEE achieves faster inference and better performances on different PLMs such as BERT and ALBERT. (3) F-PABEE-JSKD performs best for F-PABEE with different similarity measures.
Xiangxiang Gao, Wei Zhu, Jiasheng Gao, Congrui Yin
2023-05-21T12:17:27Z
http://arxiv.org/abs/2305.11916v1
F-Pabee: Flexible-Patience-Based Early Exiting for Single-Label and Multi-Label Text Classification Tasks ###### Abstract Computational complexity and overthinking problems have become the bottlenecks for pre-training language models (PLMs) with millions or even trillions of parameters. A Flexible-Patience-Based Early Exiting method (F-PABEE) has been proposed to alleviate the problems mentioned above for single-label classification (SLC) and multi-label classification (MLC) tasks. F-PABEE makes predictions at the classifier and will exit early if predicted distributions of cross-layer are consecutively similar. It is more flexible than the previous state-of-the-art (SOTA) early exiting method PABEE because it can simultaneously adjust the similarity score thresholds and the patience parameters. Extensive experiments show that: (1) F-PABEE makes a better speedup-accuracy balance than existing early exiting strategies on both SLC and MLC tasks. (2) F-PABEE achieves faster inference and better performances on different PLMs such as BERT and ALBERT. (3) F-PABEE-JSKD performs best for F-PABEE with different similarity measures. Xiangxiang Gao\({}^{1}\), Wei Zhu\({}^{2}\)1, Jiasheng Gao\({}^{3}\), Congrui Yin\({}^{4}\)\({}^{1}\) Shanghai Jiaotong University, China \({}^{2}\) East China Normal University, China \({}^{3}\) Shenzhen University, China \({}^{4}\) Nanchang University, China Footnote 1: Wei Zhu contributes equally with Xiangxiang Gao, and he is the corresponding author. Email: [email protected]. F-PABEE, PABEE, Early Exiting, Multi-label Classification, Single-label Classification ## 1 Introduction Fine-tuning PLMs has become the de-facto paradigm in natural language processing [1], due to the amazing performance gains on a wide range of natural language processing tasks [2, 3, 4, 5, 6]. Despite SOTA performances, BERT [7] and its variants [8, 9, 10, 11] still face significant application challenges: cumbersome computation and overthinking problems due to huge parameters and deep models. Early exiting attracts much attention as an input-adaptive method to speed up inference [12]. Early exiting installs a classifier at each transformer layer to evaluate the predictions and will exit when meeting the criterion. Three different early exiting strategies exist: (1) The confidence-based strategy evaluates the predictions based on specific confidence measurements. (2) The learned-based strategy learns a criterion for early exiting. (3) The patience-based strategy exits when consecutive classifiers make the exact predictions. Among them, the patience-based strategy PABEE [13] achieves SOTA results. We raise two issues for the current SOTA strategy: (1) PABEE faces a limitation for application: it can not flexibly adjust the speedup ratio on a given task and fixed patience parameter, mainly caused by a strict cross-layer comparison strategy. Thus, we wonder whether we can combine PABEE with a softer cross-layer comparison strategy. (2) Current early exiting strategies mainly focus on SLC tasks, while the MLC tasks are neglected. So can they speed up MLC tasks? Therefore, we propose a Flexible-Patience-Based Early Exiting method (F-PABEE) to address the above issues. F-PABEE makes predictions at each classifier and will exit early if the current layer and the last few layers have similar (similarity score less than a threshold) predicted distributions. 
F-PABEE can be seen as a natural extension of PABEE and is more flexible since it can achieve better speed-accuracy trade-offs by adjusting the similarity score thresholds and patience parameters. It can also extend to MLC tasks effortlessly. Our contributions are summarized as follows: (1) We propose F-PABEE, a novel and effective inference mechanism that is flexible in adjusting the speedup ratios of PLMs. (2) The results show that our method can accelerate inference effectively while maintaining good performances across different SLC and MLC tasks. (3) We are the first to investigate the early exiting of MLC tasks, and F-PABEE is suitable for this type of task. ## 2 Related Works ### Static inference approach The static inference approach compresses the heavy model into a smaller one, including pruning, knowledge distillation, quantization, and weight sharing [14, 15, 16]. For example, HeadPrune [17] ranks the attention heads and prunes them to reduce inference latency. PKD [18] investigates the best practices of distilling knowledge from BERT into smaller sized models. I-BERT [19] performs an end-to-end BERT inference without any floating point calculation. ALBERT [8] shares the cross-layer parameters. [20], [21] and [22] distills knowledge from the larger BERT teacher model for improving the performances of student networks which are learned with neural architecture search. Note that the static models are still in the form of deep neural networks with multiple stacked layers. The computational path is invariable for all examples in the inference process, which is not flexible. ### Dynamic early exiting Orthogonal to the static inference approach, early exiting dynamically adjusts hyper-parameters in response to changes in request traffic. It does not need to make significant changes to the original model structure or weight bits, nor does it need to train different teacher-student learning networks, which saves computing resources [23]. There are mainly three groups of dynamic early exiting strategies. The first type is confidence-based early exiting [24]. For example, BranchyNet [25], FastBERT [26], and DeeBERT [27] calculate the entropy of the prediction probability distribution to estimate the confidence of classifiers to enable dynamic early exiting. Shallow-deep [28] and Right-Tool [29] leverage the maximum of the predicted distribution as the exiting signal. The second type is the learned-based exiting, such as BERxiT [30] and CAT [31]. They learn a criterion for early exiting. The third type is patience-based early exiting, such as PABEE [13], which stops inference and exits early if the classifiers' predictions remain unchanged for pre-defined times. Among them, patience-based PABEE achieves SOTA performance. However, PABEE suffers from too strict cross-layer comparison, and the applications on MLC tasks are neglected. There are also literature focusing on improving the training of multi-exit BERT, like LeeBERT [32] and GAML-BERT [33]. F-PABEE is a more flexible extension to PABEE, which can simultaneously adjust the confidence thresholds and patience parameters to meet different requirements. In addition, it outperforms other existing early exiting strategies on both SLC and MLC tasks. ### Training of multi-exit backbones The literature on early exiting focuses more on the design of early exiting strategies, thus neglect the advances of multi-exit backbones' training methods. LeeBERT [32] employs an adaptive learning method for training multiple exits. 
GAML-BERT [33] enhances the training of multi-exit backbones by a mutual learning approach. ## 3 Flexible Patience-based Early exiting ### Inference procedure for SLC and MLC tasks The inference procedure of F-PABEE is shown in Fig 1(b), which is an improved version of PABEE (Fig 1(a)), where \(L_{i}\) is the transformer block of the model, \(n\) is the number of transformer layers, \(C_{i}\) is the inserted classifier layer, \(s\) is the cross-layer similarity score, \(thre\) is the similarity score threshold, \(P_{0}\) is the pre-defined patience value in the model. The input sentences are first embedded as the vector: \[h_{0}=\text{Embedding}(x). \tag{1}\] The vector is then passed through transformer layers (\(L_{1}...L_{n}\)) to extract features and compute its hidden state \(h\). After which, we use internal classifiers (\(C_{1}...C_{n}\)), which are connected to each transformer layer to predict probability \(p\): \[p_{i}=C_{i}(h_{i})=C_{i}(L_{i}(h_{i-1})). \tag{2}\] We denote the similarity score between the prediction results of layer \(i-1\) and \(i\) as \(s(p_{i-1},p_{i})\) (\(s(p_{i-1},p_{i})\in\mathbf{R}\)). Figure 1: Inference procedure of PABEE and F-PABEE, \(C_{i}\) is the classifier, \(thre\) is threshold, \(P_{0}\) is pre-defined patience. The smaller the value of \(s(p_{i-1},p_{i})\), the prediction distributions are more consistent with each other. The premise of the model's early exit is that the comparison scores between successive layers are relatively small; The similarity threshold \(thre\) is a hyper-parameter. We use \(pat_{i}\) to store the times that the cross-layer comparison scores are consecutively less than the threshold \(thre\) when the model reaches current layer \(i\): \[pat_{i}=\left\{\begin{array}{rcl}pat_{i-1}+1&s(p_{i-1},p_{i})<thre\\ 0&s(p_{i-1},p_{i})>=thre\end{array}\right\} \tag{3}\] If \(s(p_{i-1},p_{i})\) is less than the similarity score threshold \(thre\), then increase the patience counter by 1. Otherwise, reset the patience counter to 0. This process is repeated until \(pat\) reaches the pre-defined patience value \(P_{0}\). The model dynamically stops inference and exits early. However, if this condition is never met, the model uses the final classifier layer to make predictions. This way, the model can stop inference early without going through all layers. ### Similarity measures for SLC and MLC tasks Under the framework of F-PABEE, we can adopt different similarity measures for predicted probability distributions. This work uses the knowledge distillation objectives as the similarity measures [34]. 
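As an illustration of the exit rule in Eqs. (1)-(3), a minimal sketch of the F-PABEE inference loop is given below. It is illustrative only: the per-layer blocks, classifiers, and the similarity callable are assumed interfaces, and the concrete similarity measures plugged into `similarity` are those defined next.

```python
# Illustrative sketch of the F-PABEE exit rule (Eqs. (1)-(3)); interface names
# (layers, classifiers, h0) are assumptions, not the released implementation.
import torch

@torch.no_grad()
def f_pabee_forward(layers, classifiers, similarity, h0, thre, patience):
    """layers/classifiers: per-layer transformer blocks L_i and exits C_i;
    similarity: callable s(p_prev, p_cur) returning a scalar; h0: embedded input."""
    h, prev_p, counter = h0, None, 0
    for block, clf in zip(layers, classifiers):
        h = block(h)                          # h_i = L_i(h_{i-1})
        p = torch.softmax(clf(h), dim=-1)     # p_i = C_i(h_i)
        if prev_p is not None:
            # Eq. (3): increment patience while consecutive scores stay below thre.
            counter = counter + 1 if similarity(prev_p, p) < thre else 0
        if counter >= patience:
            return p                          # early exit at the current layer
        prev_p = p
    return p                                  # fall back to the final classifier
```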
When the model reaches the current layer \(l\), for SLC tasks we consider a series of similarity measures for F-PABEE, denoted as follows: F-PABEE-KD: It adopts the knowledge distillation objective from probability mass distribution \(p^{l-1}\) to \(p^{l}\): \[s(p^{l-1},p^{l})=-\sum_{j=1}^{k}p_{j}^{l-1}log(p_{j}^{l}); \tag{4}\] F-PABEE-ReKD: It adopts the knowledge distillation objective in the reverse direction, from probability mass distribution \(p^{l}\) to \(p^{l-1}\): \[s(p^{l},p^{l-1})=-\sum_{j=1}^{k}p_{j}^{l}log(p_{j}^{l-1}); \tag{5}\] F-PABEE-SymKD: It adopts a symmetrical knowledge distillation objective: \[SymKD=s(p^{l-1},p^{l})+s(p^{l},p^{l-1}); \tag{6}\] F-PABEE-JSKD: It adopts another symmetrical distillation objective, similar to the Jensen-Shannon divergence: \[JSKD=\frac{1}{2}s(p^{l-1},\frac{p^{l-1}+p^{l}}{2})+\frac{1}{2}s(p^{l},\frac{p^{l-1}+p^{l}}{2}) \tag{7}\] For MLC tasks, we transform the problem into multiple binary classification problems and sum the similarity scores over all categories: F-PABEE-KD: \[s(p^{l-1},p^{l})=-\sum_{j=1}^{k}\sum_{i=1}^{2}p_{ji}^{l-1}log(p_{ji}^{l}); \tag{8}\] F-PABEE-ReKD: \[s(p^{l},p^{l-1})=-\sum_{j=1}^{k}\sum_{i=1}^{2}p_{ji}^{l}log(p_{ji}^{l-1}); \tag{9}\] The formulations of F-PABEE-SymKD and F-PABEE-JSKD for MLC tasks are analogous to those for SLC tasks.

### Training procedure

F-PABEE is trained on both SLC and MLC tasks, but with different activation and loss functions: for SLC tasks we use the softmax activation and the cross-entropy loss, whereas for MLC tasks we use the sigmoid activation and the binary cross-entropy loss. We then optimize the model parameters by minimizing the overall loss function \(L\), which is the weighted average of the loss terms from all classifiers: \[L=\sum_{j=1}^{n}jL_{j}/\sum_{j=1}^{n}j \tag{10}\]

\begin{table} \begin{tabular}{c|cc cc cc cc cc cc cc} \hline \hline & \multicolumn{2}{c}{CoLA} & \multicolumn{2}{c}{MNLI} & \multicolumn{2}{c}{MRPC} & \multicolumn{2}{c}{QNLI} & \multicolumn{2}{c}{QQP} & \multicolumn{2}{c}{RTE} & \multicolumn{2}{c}{SST-2} \\ & score & speedup & score & speedup & score & speedup & score & speedup & score & speedup & score & speedup & score & speedup \\ \hline BERT base & 54.2 & 0\% & 83.1 & 0\% & 86.8 & 0\% & 89.8 & 0\% & 89.2 & 0\% & 69.1 & 0\% & 91.3 & 0\% \\ \hline Fixed-Exit-3L & 0.0 & 75\% & 70.0 & 75\% & 75.8 & 75\% & 77.4 & 75\% & 81.8 & 75\% & 54.7 & 75\% & 81.0 & 75\% \\ Fixed-Exit-6L & 0.0 & 50\% & 79.6 & 50\% & 84.7 & 50\% & 85.3 & 50\% & 89.3 & 50\% & 68.1 & 50\% & 88.6 & 50\% \\ \hline BranchyNet & 0.0 & 74\% & 63.8 & 76\% & 75.7 & 76\% & 74.2 & 80\% & 71.6 & 80\% & 54.7 & 76\% & 79.9 & 76\% \\ & 0.0 & 51\% & 78.3 & 53\% & 83.0 & 52\% & 87.1 & 47\% & 89.3 & 50\% & 67.4 & 47\% & 88.3 & 49\% \\ \hline Shallow-Deep & 0.0 & 75\% & 64.1 & 77\% & 75.6 & 76\% & 74.3 & 78\% & 71.4 & 79\% & 54.7 & 76\% & 79.5 & 77\% \\ & 0.0 & 52\% & 78.2 & 51\% & 82.8 & 51\% & 87.2 & 49\% & 89.6 & 51\% & 67.2 & 48\% & 88.4 & 48\% \\ \hline BERxiT & 0.0 & 76\% & 63.5 & 76\% & 75.6 & 76\% & 73.3 & 78\% & 68.2 & 80\% & 55.3 & 77\% & 79.5 & 76\% \\ & 12.3 & 52\% & 78.4 & 51\% & 82.9 & 51\% & 87.0 & 48\% & 89.1 & 49\% & 67.3 & 47\% & 88.3 & 49\% \\ \hline PABEE & 0.0 & 75\% & 63.9 & 77\% & 75.8 & 75\% & 73.6 & 81\% & 68.6 & 82\% & 55.8 & 75\% & 79.9 & 77\% \\ & 0.0 & 50\% & 78.9 & 52\% & 83.1 & 53\% & 87.2 & 46\% & 89.6 & 49\% & 67.7 & 46\% & 88.7 & 48\% \\ \hline **F-PABEE** & 0.0 & 75\% & **66.9** & 72\% & 
**81.5** & 77\% & **76.2** & 75\% & **79.6** & 82\% & **56.0** & 76\% & **80.5** & 76\% \\ & **13.6** & 52\% & **83.9** & 53\% & **87.3** & 53\% & **88.6** & 54\% & **90.8** & 49\% & **68.1** & 47\% & **92.3** & 48\% \\ \hline \hline \end{tabular} \end{table} Table 1: Experimental results of different early exiting methods with BERT backbone on the GLUE benchmark.

## 4 Experiments

### Tasks and Baselines

We evaluate F-PABEE on the GLUE benchmark [35] for SLC tasks and on four datasets for MLC tasks: MixSNIPS [36], MixATIS [37], AAPD [38], and Stackoverflow [39]. We compare F-PABEE with three groups of baselines: (1) BERT base; (2) static exiting; (3) dynamic exiting methods, including BranchyNet [40], Shallow-Deep [28], BERxiT [30], and PABEE. Taking the FLOPs of inference with the whole BERT model as the base, the speedup ratio is defined as the average ratio of FLOPs saved due to early exiting.

### Experimental setting

In the training process, we perform a grid search over the batch size in {16, 32, 128} and the learning rate in {1e-5, 2e-5, 3e-5, 5e-5} with an AdamW optimizer [41]. The batch size in the inference process is 1. We implement F-PABEE on top of HuggingFace Transformers [42]. All experiments are conducted on two Nvidia TITAN X 24GB GPUs.

### Overall comparisons

In Table 1, we compare F-PABEE with other early exiting strategies. We adjust the hyper-parameters of F-PABEE and the other baselines to ensure speedups similar to those of PABEE. The results show that F-PABEE balances speedup and performance better than the baselines, especially at large speedup ratios. Moreover, we draw the score-speedup curves for BERxiT, PABEE, and F-PABEE; they show that F-PABEE outperforms the baseline models on both SLC (Fig 2) and MLC tasks (Fig 3). Furthermore, the distribution of executed layers (Fig 4) indicates that F-PABEE can choose faster off-ramps and achieve a better trade-off between accuracy and efficiency by flexibly adjusting the similarity score thresholds and patience parameters.

Figure 2: Speed-accuracy curves of F-PABEE, PABEE and BERxiT on SLC tasks with BERT backbone.

Figure 3: Speed-accuracy curves of F-PABEE, PABEE and BERxiT on MLC tasks with BERT backbone.

Figure 4: The distribution of executed layers of MRPC and MixSNIPS on average at different speeds (50%, 75%).

### Ablation studies

**Ablation on different PLMs** F-PABEE is flexible and can work well with other pre-trained models, such as ALBERT. Therefore, to show the acceleration ability of F-PABEE with different backbones, we compare F-PABEE to other early exiting strategies with ALBERT base as the backbone. The results in Fig 5 show that F-PABEE outperforms the other early exiting strategies by large margins on both SLC and MLC tasks, indicating that F-PABEE can accelerate the inference process for numerous PLMs.

Figure 5: Speed-accuracy curves of F-PABEE, PABEE and BERxiT on SLC and MLC tasks with ALBERT backbone.

**Comparisons between different similarity measures** We consider F-PABEE with different similarity measures, denoted F-PABEE-KD, F-PABEE-ReKD, F-PABEE-SymKD, and F-PABEE-JSKD; the results are presented in Fig 6. F-PABEE-JSKD performs the best on both SLC and MLC tasks. We attribute this to the fact that F-PABEE-JSKD is symmetric, so its similarity discrimination is more accurate than that of the asymmetric measures. Therefore, it is better at determining which samples should exit at shallow layers and which should go through deeper layers. 
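For concreteness, the measures compared in this ablation (Eqs. 4-9) can be written compactly as follows. This is a sketch only: the probability vectors are assumed to be valid distributions (per-label positive probabilities in the MLC case), and the small `eps` smoothing is an implementation detail not taken from the paper.

```python
import numpy as np

def kd(p_prev, p_cur, eps=1e-12):
    # Eq. (4): cross-entropy-style score from p^{l-1} to p^{l}.
    return -np.sum(p_prev * np.log(p_cur + eps))

def rekd(p_prev, p_cur, eps=1e-12):
    # Eq. (5): the same score in the reverse direction.
    return kd(p_cur, p_prev, eps)

def symkd(p_prev, p_cur):
    # Eq. (6): symmetric combination of both directions.
    return kd(p_prev, p_cur) + rekd(p_prev, p_cur)

def jskd(p_prev, p_cur):
    # Eq. (7): Jensen-Shannon-style score against the mixture distribution.
    m = 0.5 * (p_prev + p_cur)
    return 0.5 * kd(p_prev, m) + 0.5 * kd(p_cur, m)

def kd_mlc(p_prev, p_cur, eps=1e-12):
    # Eq. (8): MLC variant; each label j is treated as a binary problem
    # (p_j, 1 - p_j) and the per-label scores are summed.
    stack = lambda p: np.stack([p, 1.0 - p], axis=-1)
    return -np.sum(stack(p_prev) * np.log(stack(p_cur) + eps))
```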
## 5 Conclusions

We proposed F-PABEE, a novel and efficient early exiting method that combines PABEE with a softer cross-layer comparison strategy. F-PABEE is more flexible than PABEE, since it can achieve different speed-performance trade-offs by adjusting the similarity score thresholds and patience parameters. In addition, we investigated the acceleration ability of F-PABEE with different backbones and compared the performance of F-PABEE under different similarity measures. Extensive experiments on SLC and MLC tasks demonstrate that: (1) F-PABEE performs better than previous SOTA adaptive early exiting strategies on both SLC and MLC tasks; as far as we know, we are the first to investigate early exiting methods for MLC tasks. (2) F-PABEE performs well on different PLMs such as BERT and ALBERT. (3) Ablation studies show that F-PABEE-JSKD performs best among the similarity measures considered for F-PABEE.
2303.13257
Equational Theorem Proving for Clauses over Strings
Although reasoning about equations over strings has been extensively studied for several decades, little research has been done on equational reasoning for general clauses over strings. This paper introduces a new superposition calculus with strings and presents an equational theorem proving framework for clauses over strings. It provides a saturation procedure for clauses over strings and shows that the proposed superposition calculus with contraction rules is refutationally complete. This paper also presents a new decision procedure for word problems over strings w.r.t. a set of conditional equations R over strings if R can be finitely saturated under the proposed inference system.
Dohan Kim
2023-03-23T13:38:51Z
http://arxiv.org/abs/2303.13257v1
# Equational Theorem Proving for Clauses over Strings

###### Abstract

Although reasoning about equations over strings has been extensively studied for several decades, little research has been done on equational reasoning for general clauses over strings. This paper introduces a new superposition calculus with strings and presents an equational theorem proving framework for clauses over strings. It provides a saturation procedure for clauses over strings and shows that the proposed superposition calculus with contraction rules is refutationally complete. This paper also presents a new decision procedure for word problems over strings w.r.t. a set of conditional equations \(R\) over strings if \(R\) can be finitely saturated under the proposed inference system.

## 1 Introduction

Strings are fundamental objects in mathematics and many fields of science, including computer science and biology. Reasoning about equations over strings has been widely studied in the context of string rewriting systems, formal language theory, word problems in semigroups, monoids and groups [8, 14], etc. Roughly speaking, reasoning about equations over strings replaces equals by equals w.r.t. a given reduction ordering \(\succ\). For example, if we have two equations over strings \(u_{1}u_{2}u_{3}\approx s\) and \(u_{2}\approx t\) with \(u_{1}u_{2}u_{3}\succ s\) and \(u_{2}\succ t\), where \(u_{2}\) is not the empty string, then we may infer the equation \(u_{1}tu_{3}\approx s\) by replacing \(u_{2}\) in \(u_{1}u_{2}u_{3}\approx s\) with \(t\). Meanwhile, if we have two equations over strings \(u_{1}u_{2}\approx s\) and \(u_{2}u_{3}\approx t\) with \(u_{1}u_{2}\succ s\) and \(u_{2}u_{3}\succ t\), where \(u_{2}\) is not the empty string, then we should also be able to infer the equation \(u_{1}t\approx su_{3}\). This can be done by concatenating \(u_{3}\) to both sides of \(u_{1}u_{2}\approx s\) (i.e., \(u_{1}u_{2}u_{3}\approx su_{3}\)) and then replacing \(u_{2}u_{3}\) in \(u_{1}u_{2}u_{3}\approx su_{3}\) with \(t\). Here, the _monotonicity property_ of equations over strings is assumed, i.e., \(s\approx t\) implies \(usv\approx utv\) for strings \(s\), \(t\), \(u\), and \(v\).1 Footnote 1: Note that it suffices to assume the right monotonicity property of equations over strings, i.e., \(s\approx t\) implies \(su\approx tu\) for strings \(s\), \(t\), and \(u\), when finding overlaps between equations over strings under the monotonicity assumption. This reasoning about equations over strings is the basic ingredient for _completion_ [8, 16] of string rewriting systems. A completion procedure [8, 16] attempts to construct a finite convergent string rewriting system, and a finite convergent string rewriting system provides a decision procedure for its corresponding equational theory. Unlike reasoning about equations over strings, equational reasoning on general clauses over strings has not been well studied, even though clauses are the essential building blocks of logical statements. This paper proposes a superposition calculus and an equational theorem proving procedure for clauses over strings. The results presented here generalize the results about completion of equations over strings [8, 16]. Throughout this paper, the monotonicity property of equations over strings is assumed and is reflected in the proposed inference rules. 
The _cancellation property_ of equations over strings is not assumed, i.e., we do not assume that \(su\approx tu\) implies \(s\approx t\) for strings \(s\), \(t\), and a nonempty string \(u\) (cf. _non-cancellative_ [8] algebraic structures). Now, the proposed superposition inference rule is given roughly as follows:

**Superposition:**: \(\frac{C\lor u_{1}u_{2}\approx s\qquad D\lor u_{2}u_{3}\approx t}{C\lor D\lor u_{1}t\approx su_{3}}\) if \(u_{2}\) is not the empty string, and \(u_{1}u_{2}\succ s\) and \(u_{2}u_{3}\succ t\).

Intuitively speaking, using the monotonicity property, \(C\lor u_{1}u_{2}u_{3}\approx su_{3}\) can be obtained from the left premise \(C\lor u_{1}u_{2}\approx s\). Then the above inference by Superposition can be viewed as an application of a conditional rewrite rule \(D\lor u_{2}u_{3}\approx t\) to \(C\lor u_{1}u_{2}u_{3}\approx su_{3}\), where \(u_{2}u_{3}\) in \(C\lor u_{1}u_{2}u_{3}\approx su_{3}\) is now replaced by \(t\), and \(D\) is appended to the conclusion. (Here, \(D\) can be viewed as consisting of the positive and negative conditions.) Note that both \(u_{1}\) and \(u_{3}\) can be the empty string in the Superposition inference rule. These steps are combined into a single Superposition inference step. For example, suppose that we have three clauses 1: \(ab\approx d\), 2: \(bc\approx e\), and 3: \(ae\not\approx dc\). We use the Superposition inference rule with 1 and 2, and obtain 4: \(ae\approx dc\), from which we derive a contradiction with 3. The details of the inference rules in the proposed inference system are discussed in Section 3. The proposed superposition calculus is based on simple string matching methods and the efficient length-lexicographic ordering instead of equational unification and more complex orderings, such as the lexicographic path ordering (LPO) [14] and the Knuth-Bendix ordering (KBO) [3]. This paper shows that a clause over strings can be translated into a clause over first-order terms, which allows one to use the existing notion of redundancy in the literature [4, 23] for clauses over strings. Based on the notion of redundancy, one may delete redundant clauses using the contraction rules (i.e., Simplification, Subsumption, and Tautology) during an equational theorem proving derivation in order to reduce the search space for a refutation. The _model construction technique_ [4, 23] is adapted for the refutational completeness of the proposed superposition calculus. This paper also uses a Herbrand interpretation by translating clauses over strings into clauses over first-order terms, where each nonground first-order clause represents all its ground instances. Note that this translation is not needed for the proposed inference system itself. Finally, the proposed equational theorem proving framework with clauses over strings allows one to provide a new decision procedure for word problems over strings w.r.t. a conditional equational theory \(R\) if \(R\) can be finitely saturated under the proposed inference system.

## 2 Preliminaries

It is assumed that the reader has some familiarity with equational theorem proving [4, 23] and string rewriting systems [9, 17, 18]. The notions of conditional equations and Horn clauses are discussed in [13]. An _alphabet_ \(\Sigma\) is a finite set of symbols (or letters). The set of all strings of symbols over \(\Sigma\) is denoted \(\Sigma^{*}\), with the empty string \(\lambda\). 
If \(s\in\Sigma^{*}\), then the _length_ of \(s\), denoted \(|s|\), is defined as follows: \(|\lambda|:=0\), \(|a|:=1\) for each \(a\in\Sigma\), and \(|sa|:=|s|+1\) for \(s\in\Sigma^{*}\) and \(a\in\Sigma\). A _multiset_ is an unordered collection with possible duplicate elements. We denote by \(M(x)\) the number of occurrences of an object \(x\) in a multiset \(M\). An _equation_ is an expression \(s\approx t\), where \(s\) and \(t\) are strings, i.e., \(s,t\in\Sigma^{*}\). A _literal_ is either a positive equation \(L\), called a _positive literal_, or a negative equation \(\neg L\), called a _negative literal_. We also write a negative literal \(\neg(s\approx t)\) as \(s\not\approx t\). We identify a positive literal \(s\approx t\) with the multiset \(\{\{s\},\{t\}\}\) and a negative literal \(s\not\approx t\) with the multiset \(\{\{s,t\}\}\). A _clause_ (over \(\Sigma^{*}\)) is a finite multiset of literals, written as a disjunction of literals \(\neg A_{1}\vee\dots\vee\neg A_{m}\lor B_{1}\vee\dots\lor B_{n}\) or as an implication \(\Gamma\rightarrow\Delta\), where \(\Gamma=A_{1}\wedge\cdots\wedge A_{m}\) and \(\Delta=B_{1}\vee\cdots\lor B_{n}\). We say that \(\Gamma\) is the _antecedent_ and \(\Delta\) is the _succedent_ of clause \(\Gamma\to\Delta\). A _Horn clause_ is a clause with at most one positive literal. The _empty clause_, denoted \(\Box\), is the clause containing no literals. A _conditional equation_ is a clause of the form \((s_{1}\approx t_{1}\wedge\cdots\wedge s_{n}\approx t_{n})\to l\approx r\). If \(n=0\), a conditional equation is simply an equation. A conditional equation is naturally represented by a Horn clause. A _conditional equational theory_ is a set of conditional equations. Any ordering \(\succ_{S}\) on a set \(S\) can be extended to an ordering \(\succ_{S}^{mul}\) on finite multisets over \(S\) as follows: \(M\succ_{S}^{mul}N\) if (i) \(M\neq N\) and (ii) whenever \(N(x)>M(x)\) then \(M(y)>N(y)\) for some \(y\) such that \(y\succ_{S}x\). Given a multiset \(M\) and an ordering \(\succ\) on \(M\), we say that \(x\) is _maximal_ (resp. _strictly maximal_) in \(M\) if there is no \(y\in M\) (resp. \(y\in M\setminus\{x\}\)) with \(y\succ x\) (resp. \(y\succ x\) or \(x=y\)). An ordering \(>\) on \(\Sigma^{*}\) is _terminating_ if there is no infinite chain of strings \(s>s_{1}>s_{2}>\cdots\) for any \(s\in\Sigma^{*}\). An ordering \(>\) on \(\Sigma^{*}\) is _admissible_ if \(u>v\) implies \(xuy>xvy\) for all \(u,v,x,y\in\Sigma^{*}\). An ordering \(>\) on \(\Sigma^{*}\) is a _reduction ordering_ if it is terminating and admissible. The _lexicographic ordering_ \(\succ_{lex}\) induced by a total precedence ordering \(\succ_{prec}\) on \(\Sigma\) ranks strings of the same length in \(\Sigma^{*}\) by comparing the letters in the first index position where the two strings differ using \(\succ_{prec}\). For example, if \(a=a_{1}a_{2}\cdots a_{k}\) and \(b=b_{1}b_{2}\cdots b_{k}\), and the first index position where \(a\) and \(b\) differ is \(i\), then \(a\succ_{lex}b\) if and only if \(a_{i}\succ_{prec}b_{i}\). The _length-lexicographic ordering_ \(\succ\) on \(\Sigma^{*}\) is defined as follows: \(s\succ t\) if and only if \(|s|>|t|\), or they have the same length and \(s\succ_{lex}t\) for \(s,t\in\Sigma^{*}\). If \(\Sigma\) and \(\succ_{prec}\) are fixed, then it is easy to see that we can determine whether \(s\succ t\) for two (finite) input strings \(s\in\Sigma^{*}\) and \(t\in\Sigma^{*}\) in \(O(n)\) time, where \(n=|s|+|t|\). 
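A minimal sketch of this comparison (illustrative Python; the precedence map is an assumed encoding of \(\succ_{prec}\)):

```python
def shortlex_greater(s, t, precedence):
    """Length-lexicographic (shortlex) comparison: s > t iff |s| > |t|, or the
    lengths are equal and s is lexicographically greater under the precedence.
    precedence maps each letter to a rank (larger rank = greater letter);
    the scan is a single pass, i.e. O(|s| + |t|) time."""
    if len(s) != len(t):
        return len(s) > len(t)
    for a, b in zip(s, t):
        if a != b:
            return precedence[a] > precedence[b]
    return False  # s == t

# With precedence a > b > c > d, as in the examples of Section 3:
prec = {"a": 4, "b": 3, "c": 2, "d": 1}
assert shortlex_greater("ac", "ad", prec)   # same length, decided at the first difference
assert shortlex_greater("ad", "c", prec)    # the longer string is greater
assert not shortlex_greater("c", "ad", prec)
```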
The length-lexicographic ordering \(\succ\) on \(\Sigma^{*}\) is a reduction ordering. We also write \(\succ\) for a multiset extension of \(\succ\) if it is clear from context. We say that \(\approx\) has the _monotonicity property_ over \(\Sigma^{*}\) if \(s\approx t\) implies \(usv\approx utv\) for all \(s,t,u,v\in\Sigma^{*}\). Throughout this paper, it is assumed that \(\approx\) has the monotonicity property over \(\Sigma^{*}\).

## 3 Superposition with Strings

### Inference Rules

The following inference rules for clauses over strings are parameterized by a selection function \(\mathcal{S}\) and the length-lexicographic ordering \(\succ\), where \(\mathcal{S}\) arbitrarily selects exactly one negative literal for each clause containing at least one negative literal (see Section 3.6 in [23] or Section 6 in [6]). In this strategy, inferences involving a clause with a selected literal are performed before inferences among clauses without selected literals in a theorem proving process. The intuition behind the (eager) selection of negative literals is that, roughly speaking, one may first prove the whole antecedent of each clause from other clauses. Then clauses with no selected literals are involved in the main deduction process. This strategy is particularly useful when we consider Horn completion in Section 6 and a decision procedure for the word problems associated with it. In the following, the symbol \(\bowtie\) is used to denote either \(\approx\) or \(\not\approx\).

**Superposition:**: \(\frac{C\lor u_{1}u_{2}\approx s\qquad D\lor u_{2}u_{3}\approx t}{C\lor D\lor u_{1}t\approx su_{3}}\) if (i) \(u_{2}\) is not \(\lambda\), (ii) \(C\) contains no selected literal, (iii) \(D\) contains no selected literal, (iv) \(u_{1}u_{2}\succ s\), and (v) \(u_{2}u_{3}\succ t\).2

**Rewrite:**\(\dfrac{C\lor u_{1}u_{2}u_{3}\bowtie s\qquad D\lor u_{2}\approx t}{C\lor D\lor u_{1}tu_{3}\bowtie s}\) if (i) \(u_{1}u_{2}u_{3}\bowtie s\) is selected for the left premise whenever \(\bowtie\) is \(\not\approx\), (ii) \(C\) contains no selected literal whenever \(\bowtie\) is \(\approx\), (iii) \(D\) contains no selected literal, and (iv) \(u_{2}\succ t\).3 Footnote 3: Note that \(u_{2}\succ t\) implies that \(u_{2}\) cannot be the empty string \(\lambda\).

**Equality Resolution:**\(\dfrac{C\lor s\not\approx s}{C}\) if \(s\not\approx s\) is selected for the premise.

The following Paramodulation and Factoring inference rules are used for non-Horn clauses containing positive literals only (cf. _Equality Factoring_ [3, 22] and the _Merging Paramodulation_ rule [3]).

**Paramodulation:**\(\dfrac{C\lor s\approx u_{1}u_{2}\qquad D\lor u_{2}u_{3}\approx t}{C\lor D\lor su_{3}\approx u_{1}t}\) if (i) \(u_{2}\) is not \(\lambda\), (ii) \(C\) contains no selected literal, (iii) \(C\) contains a positive literal, (iv) \(D\) contains no selected literal, (v) \(s\succ u_{1}u_{2}\), and (vi) \(u_{2}u_{3}\succ t\).

**Factoring:**\(\dfrac{C\lor s\approx t\lor su\approx tu}{C\lor su\approx tu}\) if \(C\) contains no selected literal.

In the proposed inference system, finding whether a string \(s\) occurs within a string \(t\) can be done in time linear in the sizes of \(s\) and \(t\) by using existing string matching algorithms such as the Knuth-Morris-Pratt (KMP) algorithm [9]. For example, the KMP algorithm can be used for finding \(u_{2}\) in \(u_{1}u_{2}u_{3}\) in the Rewrite rule and for finding \(u_{2}\) in \(u_{1}u_{2}\) in the Superposition and Paramodulation rules. 
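As a toy illustration (not the paper's implementation) of the string operations behind these rules, the following sketch enumerates Superposition conclusions between two positive unit equations by overlapping a suffix of one left-hand side with a prefix of the other, and performs a single Rewrite-style replacement; the ordering and selection side conditions are omitted.

```python
def superpose(eq1, eq2):
    """All conclusions u1*r2 ~ r1*u3 obtained by overlapping a non-empty
    suffix u2 of l1 with a prefix of l2, where eq1 = (l1, r1), eq2 = (l2, r2)
    and l1 = u1*u2, l2 = u2*u3 (side conditions of the calculus omitted)."""
    (l1, r1), (l2, r2) = eq1, eq2
    out = []
    for k in range(1, min(len(l1), len(l2)) + 1):
        u2 = l1[len(l1) - k:]
        if l2.startswith(u2):
            u1, u3 = l1[:len(l1) - k], l2[k:]
            out.append((u1 + r2, r1 + u3))
    return out

def rewrite_once(s, rule):
    """One Rewrite-style step: replace the leftmost occurrence of l by r in s
    (plain substring search; KMP would serve the same purpose)."""
    l, r = rule
    i = s.find(l)
    return s if i < 0 else s[:i] + r + s[i + len(l):]

# From the introduction: ab ~ d and bc ~ e superpose to ae ~ dc.
assert ("ae", "dc") in superpose(("ab", "d"), ("bc", "e"))
# A peak aab rewrites to cb with aa -> c and to ad with ab -> d.
assert rewrite_once("aab", ("aa", "c")) == "cb"
assert rewrite_once("aab", ("ab", "d")) == "ad"
```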
In the remainder of this paper, we denote by \(\mathfrak{S}\) the inference system consisting of the Superposition, Rewrite, Equality Resolution, Paramodulation, and Factoring rules, and we denote by \(S\) a set of clauses over strings. Also, by the _contraction rules_ we mean the following inference rules: Simplification, Subsumption, and Tautology.

**Simplification:**\(\dfrac{S\cup\{C\lor l_{1}ll_{2}\bowtie v,\ l\approx r\}}{S\cup\{C\lor l_{1}rl_{2}\bowtie v,\ l\approx r\}}\) if (i) \(l_{1}ll_{2}\bowtie v\) is selected for \(C\lor l_{1}ll_{2}\bowtie v\) whenever \(\bowtie\) is \(\not\approx\), (ii) \(l_{1}\) is not \(\lambda\), and (iii) \(l\succ r\).

In the following inference rule, we say that a clause \(C\) _subsumes_ a clause \(C^{\prime}\) if \(C\) is contained in \(C^{\prime}\), where \(C\) and \(C^{\prime}\) are viewed as finite multisets.

**Subsumption:**\(\dfrac{S\cup\{C,C^{\prime}\}}{S\cup\{C\}}\) if \(C\subseteq C^{\prime}\).

**Tautology:**\(\dfrac{S\cup\{C\lor s\approx s\}}{S}\)

**Example 1.** Let \(a\succ b\succ c\succ d\succ e\) and consider the following inconsistent set of clauses 1: \(ad\approx b\lor ad\approx c\), 2: \(b\approx c\), 3: \(ad\approx e\), and 4: \(c\not\approx e\). Now, we show how the empty clause is derived: 5: \(ad\approx c\lor ad\approx c\) (Paramodulation of 1 with 2) 6: \(ad\approx c\) (Factoring of 5) 7: \(c\approx e\) (Rewrite of 6 with 3) 8: \(e\not\approx e\) (\(c\not\approx e\) is selected for 4. Rewrite of 4 with 7) 9: \(\square\) (\(e\not\approx e\) is selected for 8. Equality Resolution on 8) Note that there is no inference with the selected literal in 4 from the initial set of clauses 1, 2, 3, and 4. We produced clauses 5, 6, and 7 without using a selected literal. Once we have clause 7, there is an inference with the selected literal in 4.

**Example 2**.: Let \(a\succ b\succ c\succ d\) and consider the following inconsistent set of clauses 1: \(aa\approx a\lor bd\not\approx a\), 2: \(cd\approx b\), 3: \(ad\approx c\), 4: \(bd\approx a\), and 5: \(dab\not\approx db\). Now, we show how the empty clause is derived: 6: \(aa\approx a\lor a\not\approx a\) (\(bd\not\approx a\) is selected for 1. Rewrite of 1 with 4) 7: \(aa\approx a\) (\(a\not\approx a\) is selected for 6. Equality Resolution on 6) 8: \(ac\approx ad\) (Superposition of 7 with 3) 9: \(ab\approx add\) (Superposition of 8 with 2) 10: \(ab\approx cd\) (Rewrite of 9 with 3) 11: \(dcd\not\approx db\) (\(dab\not\approx db\) is selected for 5. Rewrite of 5 with 10) 12: \(db\not\approx db\) (\(dcd\not\approx db\) is selected for 11. Rewrite of 11 with 2) 13: \(\square\) (\(db\not\approx db\) is selected for 12. Equality Resolution on 12)

### Lifting Properties

Recall that \(\Sigma^{*}\) is the set of all strings over \(\Sigma\) with the empty string \(\lambda\). We let \(T(\Sigma\cup\{\bot\})\) be the set of all first-order ground terms over \(\Sigma\cup\{\bot\}\), where each letter from \(\Sigma\) is interpreted as a unary function symbol and \(\bot\) is the only constant symbol. (The constant symbol \(\bot\) does not have a special meaning (e.g., "false") in this paper.) We remove parentheses for notational convenience for each term in \(T(\Sigma\cup\{\bot\})\). Since \(\bot\) is the only constant symbol, we see that \(\bot\) occurs only once, at the end of each term in \(T(\Sigma\cup\{\bot\})\). We may view each term in \(T(\Sigma\cup\{\bot\})\) as a string ending with \(\bot\). 
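The correspondence between the flattened string notation and the nested unary-term notation can be made concrete with a tiny sketch (illustrative only; the textual term representation is an assumption for display purposes):

```python
def to_term(s, end="x"):
    """Render a string over Sigma as a nested unary term ending in `end`,
    e.g. "aa" with end "x" gives "a(a(x))" and "abc" with end "⊥" gives
    "a(b(c(⊥)))"; the empty string yields just `end`."""
    out = end
    for letter in reversed(s):
        out = f"{letter}({out})"
    return out

assert to_term("aa") == "a(a(x))"               # the rule aa -> c becomes a(a(x)) -> c(x)
assert to_term("abc", end="⊥") == "a(b(c(⊥)))"  # a g-term, written abc⊥ once parentheses are dropped
```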
Now, the definitions used in Section 2 can be carried over to the case when \(\Sigma^{*}\) is replaced by \(T(\Sigma\cup\{\bot\})\). In the remainder of this paper, we use the string notation for terms in \(T(\Sigma\cup\{\bot\})\) unless otherwise stated. Let \(s\approx t\) be an equation over \(\Sigma^{*}\). Then we can associate \(s\approx t\) with the equation \(s(x)\approx t(x)\), where \(s(x)\approx t(x)\) represents the set of all its ground instances over \(T(\Sigma\cup\{\bot\})\). (Here, \(\lambda(x)\) and \(\lambda\bot\) correspond to \(x\) and \(\bot\), respectively.) First, \(s\approx t\) over \(\Sigma^{*}\) corresponds to \(s\bot\approx t\bot\) over \(T(\Sigma\cup\{\bot\})\). Now, using the monotonicity property, if we concatenate string \(u\) to both sides of \(s\approx t\) over \(\Sigma^{*}\), then we have \(su\approx tu\), which corresponds to \(su\bot\approx tu\bot\). There is a similar approach in string rewriting systems. If \(S\) is a string rewriting system over \(\Sigma^{*}\), then it is known that we can associate term rewriting system \(R_{S}\) with \(S\) in such a way that \(R_{S}:=\{l(x)\to r(x)\,|\,l\to r\in S\}\)[9], where \(x\) is a variable and each letter from \(\Sigma\) is interpreted as a unary function symbol. We may rename variables (by standardizing variables apart) whenever necessary. This approach is particularly useful when we consider critical pairs between the rules in a string rewriting system. For example, if there are two rules \(aa\to c\) and \(ab\to d\) in \(S\), then we have \(cb\gets aab\to ad\), where \(<cb,ad>\) (or \(<\!\!ad,cb\!\!>\)) is a _critical pair_ formed from these two rules. This critical pair can also be found if we associate \(aa\to c\in S\) with \(a(a(x))\to c(x)\in R_{S}\) and \(ab\to d\in S\) with \(a(b(x))\to d(x)\in R_{S}\). First, we rename the rule \(a(b(x))\to d(x)\in R_{S}\) into \(a(b(y))\to d(y)\). Then by mapping \(x\) to \(b(z)\) and \(y\) to \(z\), we have \(c(b(z))\gets a(a(b(z)))\to a(d(z))\), where \(<\!\!c(b(z)),a(d(z))\!\!>\) is a critical pair formed from these two rules. This critical pair can be associated with the critical pair \(<\!\!cb,ad\!\!>\) formed from \(aa\to c\) in \(S\) and \(ab\to d\) in \(S\). However, if \(s\not\approx t\) is a negative literal over strings, then we cannot simply associate \(s\not\approx t\) with the negative literal \(s(x)\not\approx t(x)\) over first-order terms. Suppose to the contrary that we associate \(s\not\approx t\) with \(s(x)\not\approx t(x)\). Then \(s\not\approx t\) implies \(su\not\approx tu\) for a nonempty string \(u\) because we can substitute \(u(y)\) for \(x\) in \(s(x)\not\approx t(x)\), and \(su\not\approx tu\) can also be associated with \(s(u(y))\not\approx t(u(y))\). Using the contrapositive argument, this means that \(su\approx tu\) implies \(s\approx t\) for the nonempty string \(u\). Recall that we do not assume the cancellation property of equations over strings in this paper.4 Instead, we simply associate \(s\not\approx t\) with \(s\bot\not\approx t\bot\). The following lemma is based on the above observations. We denote by \(T(\Sigma\cup\{\bot\},X)\) the set of first-order terms built on \(\Sigma\cup\{\bot\}\) and a denumerable set of variables \(X\), where each symbol from \(\Sigma\) is interpreted as a unary function symbol and \(\bot\) is the only constant symbol. 
Footnote 4: One may assume the cancellation property and associate \(s\not\approx t\) over strings with \(s(x)\not\approx t(x)\) over first-order terms, which is beyond the scope of this paper. **Lemma 1**.: _Let \(C:=s_{1}\approx t_{1}\vee\cdots\lor s_{m}\approx t_{m}\lor u_{1}\not\approx v _{1}\vee\cdots\lor u_{n}\not\approx v_{n}\) be a clause over \(\Sigma^{*}\) and \(P\) be the set of all clauses that follow from \(C\) using the monotonicity property. Let \(Q\) be the set of all ground instances of the clause \(s_{1}(x_{1})\approx t_{1}(x_{1})\vee\cdots\lor s_{m}(x_{m})\approx t_{m}(x_{m} )\lor u_{1}\bot\not\approx v_{1}\bot\vee\cdots\lor u_{n}\bot\not\approx v_{n}\bot\) over \(T(\Sigma\cup\{\bot\},X)\), where \(x_{1},\ldots,x_{m}\) are distinct variables in \(X\) and each letter from \(\Sigma\) is interpreted as a unary function symbol. Then there is a one-to-one correspondence between \(P\) and \(Q\)._ Proof.: For each element \(D\) of \(P\), \(D\) has the form \(D:=s_{1}w_{1}\approx t_{1}w_{1}\vee\cdots\lor s_{m}w_{m}\approx t_{m}w_{m}\lor u _{1}\not\approx v_{1}\vee\cdots\lor u_{n}\not\approx v_{n}\) for some \(w_{1},\ldots,w_{m}\in\Sigma^{*}\). (If \(w_{i}=\lambda\) for all \(1\leq i\leq m\), then \(D\) is simply \(C\).) Now, we map each element \(D\) of \(P\) to \(D^{\prime}\) in \(Q\), where \(D^{\prime}:=s_{1}w_{1}\bot\approx t_{1}w_{1}\bot\vee\cdots\lor s_{m}w_{m}\bot \approx t_{m}w_{m}\bot\lor u_{1}\not\approx v_{1}\bot\vee\cdots\lor u_{n}\bot \not\approx v_{n}\bot\). Since \(\bot\) is the only constant symbol in \(\Sigma\cup\{\bot\}\), it is easy to see that this mapping is well-defined and bijective. **Definition 2**.: (i) We say that every term in \(T(\Sigma\cup\{\bot\})\) is a \(g\)_-term_. (Recall that we remove parentheses for notational convenience.) (ii) Let \(s\approx t\) (resp. \(s\to t\)) be an equation (resp. a rule) over \(\Sigma^{*}\). We say that \(su\bot\approx tu\bot\) (resp. \(su\bot\to tu\bot\)) for some string \(u\) is a \(g\)_-equation_ (resp. a \(g\)_-rule_) of \(s\approx t\) (resp. \(s\to t\)). (iii) Let \(s\not\approx t\) be a negative literal over \(\Sigma^{*}\). We say that \(s\bot\not\approx t\bot\) is a (negative) \(g\)_-literal_ of \(s\not\approx t\). (iv) Let \(C:=s_{1}\approx t_{1}\vee\cdots\lor s_{m}\approx t_{m}\lor u_{1}\not\approx v_{ 1}\vee\cdots\lor u_{n}\not\approx v_{n}\) be a clause over \(\Sigma^{*}\). We say that \(s_{1}w_{1}\bot\approx t_{1}w_{1}\bot\vee\cdots\lor s_{m}w_{m}\bot\approx t_{m} w_{m}\bot\lor u_{1}\bot\not\approx v_{1}\bot\vee\cdots\lor u_{n}\bot\not \approx v_{n}\bot\) for some strings \(w_{1},\ldots,w_{m}\) is a \(g\)_-clause_ of clause \(C\). Here, each \(w_{k}\bot\in T(\Sigma\cup\{\bot\})\) for nonempty string \(w_{k}\) in the \(g\)-clause is said to be a _substitution part_ of \(C\). (v) Let \(\pi\) be an inference (w.r.t. \(\mathfrak{S}\)) with premises \(C_{1},\ldots,C_{k}\) and conclusion \(D\). Then a \(g\)_-instance_ of \(\pi\) is an inference (w.r.t. \(\mathfrak{S}\)) with premises \(C^{\prime}_{1},\ldots,C^{\prime}_{k}\) and conclusion \(D^{\prime}\), where \(C^{\prime}_{1},\ldots,C^{\prime}_{k}\) and \(D^{\prime}\) are \(g\)-clauses of \(C_{1},\ldots,C_{k}\) and \(D\), respectively. Since each term in \(T(\Sigma\cup\{\bot\})\) is viewed as a string, we may consider inferences between \(g\)-clauses using \(\mathfrak{S}\). Note that concatenating a (nonempty) string at the end of a \(g\)-term is not allowed for any \(g\)-term over \(T(\Sigma\cup\{\bot\})\). 
For example, \(abc\bot d\) is not a \(g\)-term, and \(a\bot\not\approx b\bot\lor abc\bot d\approx def\bot d\) is not a \(g\)-clause. We emphasize that we are only concerned with inferences between (legitimate) \(g\)-clauses here. We may also use the length-lexicographic ordering \(\succ_{g}\) on \(g\)-terms. Given a total precedence ordering on \(\Sigma\cup\{\bot\}\) for which \(\bot\) is minimal, it can be easily verified that \(\succ_{g}\) is a total reduction ordering on \(T(\Sigma\cup\{\bot\})\). We simply denote the multiset extension \(\succ_{g}^{mul}\) of \(\succ_{g}\) as \(\succ_{g}\) for notational convenience. Similarly, we ambiguously denote all orderings on \(g\)-terms, \(g\)-equations, and \(g\)-clauses over \(T(\Sigma\cup\{\bot\})\) by \(\succ_{g}\). Now, we consider the lifting of inferences of \(\mathfrak{S}\) between \(g\)-clauses over \(T(\Sigma\cup\{\bot\})\) to inferences of \(\mathfrak{S}\) between clauses over \(\Sigma^{*}\). Let \(C_{1},\ldots,C_{n}\) be clauses over \(\Sigma^{*}\) and let \[\frac{C_{1}^{\prime}\ldots C_{n}^{\prime}}{C^{\prime}}\] be an inference between their \(g\)-clauses, where \(C_{i}^{\prime}\) is a \(g\)-clause of \(C_{i}\) for all \(1\leq i\leq n\). We say that this inference between \(g\)-clauses can be _lifted_ if there is an inference \[\frac{C_{1}\ldots C_{n}}{C}\] such that \(C^{\prime}\) is a \(g\)-clause of \(C\). In what follows, we assume that a \(g\)-literal \(L_{i}^{\prime}\) in \(C_{i}^{\prime}\) is selected in the same way as \(L_{i}\) in \(C_{i}\), where \(L_{i}\) is a negative literal in \(C_{i}\) and \(L_{i}^{\prime}\) is a \(g\)-literal of \(L_{i}\). Lifting of an inference between \(g\)-clauses is possible if it does not correspond to a \(g\)-instance of an inference (w.r.t. \(\mathfrak{S}\)) into a substitution part of a clause, which is not necessary (see [4, 22]). Suppose that there is an inference between \(g\)-clauses \(C_{1}^{\prime}\ldots C_{n}^{\prime}\) with conclusion \(C^{\prime}\) and there is also an inference between clauses \(C_{1}\ldots C_{n}\) over \(\Sigma^{*}\) with conclusion \(C\), where \(C_{i}^{\prime}\) is a \(g\)-clause of \(C_{i}\) for all \(1\leq i\leq n\). Then, the inference between \(g\)-clauses \(C_{1}^{\prime}\ldots C_{n}^{\prime}\) over \(T(\Sigma\cup\{\bot\})\) can be lifted to the inference between clauses \(C_{1}\ldots C_{n}\) over \(\Sigma^{*}\) in such a way that \(C^{\prime}\) is a \(g\)-clause of \(C\). This can be easily verified for each inference rule in \(\mathfrak{S}\).

**Example 3**.: Consider the following Superposition inference with \(g\)-clauses: \[\frac{ad\bot\approx cd\bot\lor aabb\bot\approx cbb\bot\qquad abb\bot\approx db\bot}{ad\bot\approx cd\bot\lor adb\bot\approx cbb\bot}\] where \(ad\bot\approx cd\bot\lor aabb\bot\approx cbb\bot\) (resp. \(abb\bot\approx db\bot\)) is a \(g\)-clause of \(a\approx c\lor aa\approx c\) (resp. \(ab\approx d\)) and \(aabb\bot\succ_{g}cbb\bot\) (resp. \(abb\bot\succ_{g}db\bot\)). This Superposition inference between \(g\)-clauses can be lifted to the following Superposition inference between clauses over \(\Sigma^{*}\): \[\frac{a\approx c\lor aa\approx c\qquad ab\approx d}{a\approx c\lor ad\approx cb}\] where \(aa\succ c\) and \(ab\succ d\). We see that conclusion \(ad\bot\approx cd\bot\lor adb\bot\approx cbb\bot\) of the Superposition inference between the above \(g\)-clauses is a \(g\)-clause of conclusion \(a\approx c\lor ad\approx cb\) of this inference. 
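A small sketch of Definition 2(iv), i.e., how a \(g\)-clause is obtained from a clause over \(\Sigma^{*}\) (the tuple-based clause representation and the "|" symbol standing for \(\bot\) are assumptions made for illustration):

```python
BOT = "|"  # stands in for the constant ⊥ that ends every g-term

def g_clause(clause, subs):
    """Definition 2(iv), sketched: `clause` is a list of (lhs, rhs, positive?)
    literals over Sigma*, and `subs` supplies one substitution string w_i per
    positive literal. Positive literals become s w ⊥ ≈ t w ⊥; negative literals
    simply become s ⊥ ≉ t ⊥."""
    out, ws = [], iter(subs)
    for s, t, positive in clause:
        w = next(ws) if positive else ""
        out.append((s + w + BOT, t + w + BOT, positive))
    return out

# Example 3: the g-clause ad⊥ ≈ cd⊥ ∨ aabb⊥ ≈ cbb⊥ of a ≈ c ∨ aa ≈ c.
assert g_clause([("a", "c", True), ("aa", "c", True)], ["d", "bb"]) == \
       [("ad|", "cd|", True), ("aabb|", "cbb|", True)]
```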
**Example 4**.: Consider the following Rewrite inference with \(g\)-clauses: \[\frac{a\bot\not\approx d\bot\lor aabb\bot\not\approx cd\bot\qquad abb\bot\approx cb\bot}{a\bot\not\approx d\bot\lor acb\bot\not\approx cd\bot}\] where \(aabb\bot\not\approx cd\bot\) is selected and \(a\bot\not\approx d\bot\lor aabb\bot\not\approx cd\bot\) (resp. \(abb\bot\approx cb\bot\)) is a \(g\)-clause of \(a\not\approx d\lor aabb\not\approx cd\) (resp. \(ab\approx c\)) with \(abb\bot\succ_{g}cb\bot\). This Rewrite inference between \(g\)-clauses can be lifted to the following Rewrite inference between clauses over \(\Sigma^{*}\): \[\frac{a\not\approx d\lor aabb\not\approx cd\qquad ab\approx c}{a\not\approx d\lor acb\not\approx cd}\] where \(aabb\not\approx cd\) is selected and \(ab\succ c\). We see that conclusion \(a\bot\not\approx d\bot\lor acb\bot\not\approx cd\bot\) of the Rewrite inference between the above \(g\)-clauses is a \(g\)-clause of conclusion \(a\not\approx d\lor acb\not\approx cd\) of this inference.

## 4 Redundancy and Contraction Techniques

By Lemma 1 and Definition 2, we may translate a clause \(C:=s_{1}\approx t_{1}\vee\cdots\lor s_{m}\approx t_{m}\lor u_{1}\not\approx v_{1}\vee\cdots\lor u_{n}\not\approx v_{n}\) over \(\Sigma^{*}\) together with all its implied clauses under the monotonicity property into the clause \(s_{1}(x_{1})\approx t_{1}(x_{1})\vee\cdots\lor s_{m}(x_{m})\approx t_{m}(x_{m})\lor u_{1}\bot\not\approx v_{1}\bot\vee\cdots\lor u_{n}\bot\not\approx v_{n}\bot\) over \(T(\Sigma\cup\{\bot\},X)\) with all its ground instances, where \(x_{1},\ldots,x_{m}\) are distinct variables in \(X\), each symbol from \(\Sigma\) is interpreted as a unary function symbol, and \(\bot\) is the only constant symbol. This allows us to adapt the existing notion of redundancy found in the literature [3, 22].

**Definition 3**.: (i) Let \(R\) be a set of \(g\)-equations or \(g\)-rules. Then the congruence \(\leftrightarrow_{R}^{*}\) defines an equality _Herbrand Interpretation_ \(I\), where the domain of \(I\) is \(T(\Sigma\cup\{\bot\})\). Each unary function symbol \(s\in\Sigma\) is interpreted as the unary function \(s_{I}\), where \(s_{I}(u\bot)\) is the \(g\)-term \(su\bot\). (The constant symbol \(\bot\) is simply interpreted as the constant \(\bot\).) The only predicate \(\approx\) is interpreted by \(s\bot\approx t\bot\) if \(s\bot\leftrightarrow_{R}^{*}t\bot\). We denote by \(R^{*}\) the interpretation \(I\) defined by \(R\) in this way. \(I\) _satisfies_ (is a _model_ of) a \(g\)-clause \(\Gamma\to\Delta\), denoted by \(I\models\Gamma\to\Delta\), if \(I\not\supseteq\Gamma\) or \(I\cap\Delta\neq\emptyset\). In this case, we say that \(\Gamma\to\Delta\) is _true_ in \(I\). We say that \(I\) _satisfies_ a clause \(C\) over \(\Sigma^{*}\) if \(I\) satisfies all \(g\)-clauses of \(C\). We say that \(I\) _satisfies_ a set of clauses \(S\) over \(\Sigma^{*}\), denoted by \(I\models S\), if \(I\) satisfies every clause in \(S\). (ii) A \(g\)-clause \(C\) _follows_ from a set of \(g\)-clauses \(\{C_{1},\ldots,C_{n}\}\), denoted by \(\{C_{1},\ldots,C_{n}\}\models C\), if \(C\) is true in every model of \(\{C_{1},\ldots,C_{n}\}\).

**Definition 4**.: Let \(S\) be a set of clauses over \(\Sigma^{*}\). (i) A \(g\)-clause \(C\) is _redundant_ w.r.t. \(S\) if there exist \(g\)-clauses \(C^{\prime}_{1},\ldots,C^{\prime}_{k}\) of clauses \(C_{1},\ldots,C_{k}\) in \(S\) such that \(\{C^{\prime}_{1},\ldots,C^{\prime}_{k}\}\models C\) and \(C\succ_{g}C^{\prime}_{i}\) for all \(1\leq i\leq k\). A clause in \(S\) is _redundant_ w.r.t. \(S\) if all its \(g\)-clauses are redundant w.r.t. \(S\). 
(ii) An inference \(\pi\) with conclusion \(D\) is _redundant_ w.r.t. \(S\) if for every \(g\)-instance of \(\pi\) with maximal premise \(C^{\prime}\) (w.r.t. \(\succ_{g}\)) and conclusion \(D^{\prime}\), there exist \(g\)-clauses \(C^{\prime}_{1},\ldots,C^{\prime}_{k}\) of clauses \(C_{1},\ldots,C_{k}\) in \(S\) such that \(\{C^{\prime}_{1},\ldots,C^{\prime}_{k}\}\models D^{\prime}\) and \(C^{\prime}\succ_{g}C^{\prime}_{i}\) for all \(1\leq i\leq k\), where \(D^{\prime}\) is a \(g\)-clause of \(D\).

**Lemma 5**.: _If an equation \(l\approx r\) simplifies a clause \(C\lor l_{1}ll_{2}\bowtie v\) into \(C\lor l_{1}rl_{2}\bowtie v\) using the Simplification rule, then \(C\lor l_{1}ll_{2}\bowtie v\) is redundant w.r.t. \(\{C\lor l_{1}rl_{2}\bowtie v,l\approx r\}\)._

Proof.: Suppose that \(l\approx r\) simplifies \(D:=C\lor l_{1}ll_{2}\not\approx v\) into \(C\lor l_{1}rl_{2}\not\approx v\), where \(l_{1}ll_{2}\not\approx v\) is selected for \(D\). Then, every \(g\)-clause \(D^{\prime}\) of \(D\) has the form \(D^{\prime}:=C^{\prime}\lor l_{1}ll_{2}\bot\not\approx v\bot\), where \(C^{\prime}\) is a \(g\)-clause of \(C\). Now, we may infer that \(\{D^{\prime\prime},ll_{2}\bot\approx rl_{2}\bot\}\models D^{\prime}\), where \(D^{\prime\prime}:=C^{\prime}\lor l_{1}rl_{2}\bot\not\approx v\bot\) is a \(g\)-clause of \(C\lor l_{1}rl_{2}\not\approx v\) and \(ll_{2}\bot\approx rl_{2}\bot\) is a \(g\)-equation of \(l\approx r\). We also have \(D^{\prime}\succ_{g}D^{\prime\prime}\) and \(D^{\prime}\succ_{g}ll_{2}\bot\approx rl_{2}\bot\), and thus the conclusion follows. Otherwise, suppose that \(l\approx r\) simplifies \(D:=C\lor l_{1}ll_{2}\approx v\) into \(C\lor l_{1}rl_{2}\approx v\). Then every \(g\)-clause \(D^{\prime}\) of \(D\) has the form \(D^{\prime}:=C^{\prime}\lor l_{1}ll_{2}w\bot\approx vw\bot\) for some \(w\in\Sigma^{*}\), where \(C^{\prime}\) is a \(g\)-clause of \(C\). Now, we have \(\{D^{\prime\prime},ll_{2}w\bot\approx rl_{2}w\bot\}\models D^{\prime}\), where \(D^{\prime\prime}:=C^{\prime}\lor l_{1}rl_{2}w\bot\approx vw\bot\) is a \(g\)-clause of \(C\lor l_{1}rl_{2}\approx v\) for some \(w\in\Sigma^{*}\) and \(ll_{2}w\bot\approx rl_{2}w\bot\) is a \(g\)-equation of \(l\approx r\). We also have \(D^{\prime}\succ_{g}D^{\prime\prime}\) and \(D^{\prime}\succ_{g}ll_{2}w\bot\approx rl_{2}w\bot\) because \(l_{1}\) is not \(\lambda\) in the condition of the rule (i.e., \(l_{1}ll_{2}w\bot\succ_{g}ll_{2}w\bot\)), and thus the conclusion follows.

We see that if \(C\) subsumes \(C^{\prime}\) with \(C\) and \(C^{\prime}\) containing the same number of literals, then they are the same when viewed as finite multisets, so we can remove \(C^{\prime}\). Therefore, we exclude this case in the following lemma.

**Lemma 6**.: _If a clause \(C\) subsumes a clause \(D\) and \(C\) contains fewer literals than \(D\), then \(D\) is redundant w.r.t. \(\{C\}\)._

Proof.: Suppose that \(C\) subsumes \(D\) and \(C\) contains fewer literals than \(D\). Then \(D\) can be written as \(C\lor B\) for some nonempty clause \(B\). Now, for every \(g\)-clause \(D^{\prime}:=C^{\prime}\lor B^{\prime}\) of \(D\), we have \(\{C^{\prime}\}\models D^{\prime}\) with \(D^{\prime}\succ_{g}C^{\prime}\), where \(C^{\prime}\) and \(B^{\prime}\) are \(g\)-clauses of \(C\) and \(B\), respectively. Thus, \(D\) is redundant w.r.t. \(\{C\}\). 
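The multiset containment behind the Subsumption rule and Lemma 6 amounts to a simple counting check; a sketch (the tuple encoding of literals is an assumption made for illustration):

```python
from collections import Counter

def subsumes(C, D):
    """C subsumes D iff every literal of C occurs in D at least as often,
    with clauses viewed as multisets of literals."""
    need, have = Counter(C), Counter(D)
    return all(have[lit] >= n for lit, n in need.items())

# A clause subsumes any clause that extends it by further literals ...
assert subsumes([("b", "c", True)], [("b", "c", True), ("ad", "e", True)])
# ... but not one that contains fewer copies of a repeated literal.
assert not subsumes([("b", "c", True), ("b", "c", True)], [("b", "c", True)])
```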
**Lemma 7**.: _A tautology \(C\lor s\approx s\) is redundant._

Proof.: It is easy to see that for every \(g\)-clause \(C^{\prime}\lor su\bot\approx su\bot\) of \(C\lor s\approx s\), we have \(\models C^{\prime}\lor su\bot\approx su\bot\), where \(u\in\Sigma^{*}\) and \(C^{\prime}\) is a \(g\)-clause of \(C\). Thus, \(C\lor s\approx s\) is redundant.

## 5 Refutational Completeness

In this section, we adapt the model construction and equational theorem proving techniques used in [3, 18, 22] and show that \(\mathfrak{S}\) with the contraction rules is refutationally complete.

**Definition 8**.: A \(g\)-equation \(s\bot\approx t\bot\) is _reductive_ for a \(g\)-clause \(C:=D\lor s\bot\approx t\bot\) if \(s\bot\approx t\bot\) is strictly maximal (w.r.t. \(\succ_{g}\)) in \(C\) with \(s\bot\succ_{g}t\bot\).

**Definition 9**.: (Model Construction) Let \(S\) be a set of clauses over \(\Sigma^{*}\). We use induction on \(\succ_{g}\) to define the sets \(R_{C},E_{C}\), and \(I_{C}\) for all \(g\)-clauses \(C\) of clauses in \(S\). Let \(C\) be such a \(g\)-clause of a clause in \(S\) and suppose that \(E_{C^{\prime}}\) has been defined for all \(g\)-clauses \(C^{\prime}\) of clauses in \(S\) for which \(C\succ_{g}C^{\prime}\). Then we define \(R_{C}=\bigcup_{C\succ_{g}C^{\prime}}E_{C^{\prime}}\). We also denote by \(I_{C}\) the equality interpretation \(R_{C}^{*}\), which denotes the least congruence containing \(R_{C}\). Now, let \(C:=D\lor s\bot\approx t\bot\) such that \(C\) is not a \(g\)-clause of a clause with a selected literal in \(S\). Then \(C\) produces \(E_{C}=\{s\bot\to t\bot\}\) if the following conditions are met: (1) \(I_{C}\not\models C\), (2) \(I_{C}\not\models t\bot\approx t^{\prime}\bot\) for every \(s\bot\approx t^{\prime}\bot\) in \(D\), (3) \(s\bot\approx t\bot\) is reductive for \(C\), and (4) \(s\bot\) is irreducible by \(R_{C}\). We say that \(C\) is _productive_ and produces \(E_{C}\) if it satisfies all of the above conditions. Otherwise, \(E_{C}=\emptyset\). Finally, we define \(I_{S}\) as the equality interpretation \(R_{S}^{*}\), where \(R_{S}=\bigcup_{C}E_{C}\) is the set of all \(g\)-rules produced by \(g\)-clauses of clauses in \(S\).

**Lemma 10**.: _(i) \(R_{S}\) has the Church-Rosser property. (ii) \(R_{S}\) is terminating. (iii) For \(g\)-terms \(u\bot\) and \(v\bot\), \(I_{S}\models u\bot\approx v\bot\) if and only if \(u\bot\downarrow_{R_{S}}v\bot\). (iv) If \(I_{S}\models s\approx t\), then \(I_{S}\models usv\approx utv\) for nonempty strings \(s,t,u,v\in\Sigma^{*}\)._

Proof.: (i) \(R_{S}\) is left-reduced because there are no overlaps among the left-hand sides of rewrite rules in \(R_{S}\), and thus \(R_{S}\) has the Church-Rosser property. (ii) For each rewrite rule \(l\bot\to r\bot\) in \(R_{S}\), we have \(l\bot\succ_{g}r\bot\), and thus \(R_{S}\) is terminating. (iii) Since \(R_{S}\) has the Church-Rosser property and is terminating by (i) and (ii), respectively, \(R_{S}\) is convergent. Thus, \(I_{S}\models u\bot\approx v\bot\) if and only if \(u\bot\downarrow_{R_{S}}v\bot\) for \(g\)-terms \(u\bot\) and \(v\bot\). (iv) Suppose that \(I_{S}\models s\approx t\) for nonempty strings \(s\) and \(t\). Then, we have \(I_{S}\models svw\bot\approx tvw\bot\) for all strings \(v\) and \(w\) by Definition 3(i). Similarly, since \(I_{S}\) is an equality Herbrand interpretation, we also have \(I_{S}\models usvw\bot\approx utvw\bot\) for all strings \(u\), which means that \(I_{S}\models usv\approx utv\) by Definition 3(i). 
Lemma 10(iv) says that the monotonicity assumption used in this paper holds w.r.t. a model constructed by Definition 9. **Definition 11**.: Let \(S\) be a set of clauses over \(\Sigma^{*}\). We say that \(S\) is _saturated_ under \(\mathfrak{S}\) if every inference by \(\mathfrak{S}\) with premises in \(S\) is redundant w.r.t. \(S\). **Definition 12**.: Let \(C:=s_{1}\approx t_{1}\vee\cdots\lor s_{m}\approx t_{m}\lor u_{1}\not\approx v _{1}\vee\cdots\lor u_{n}\not\approx v_{n}\) be a clause over \(\Sigma^{*}\), and \(C^{\prime}=s_{1}w_{1}\bot\approx t_{1}w_{1}\bot\vee\cdots\lor s_{m}w_{m}\bot \approx t_{m}w_{m}\bot\lor u_{1}\bot\not\approx v_{1}\bot\vee\cdots\lor u_{ n}\bot\not\approx v_{n}\bot\) for some strings \(w_{1},\ldots,w_{m}\) be a \(g\)-clause of \(C\). We say that \(C^{\prime}\) is a _reduced \(g\)-clause_ of \(C\) w.r.t. a rewrite system \(R\) if every \(w_{i}\bot\), \(1\leq i\leq m\), is not reducible by \(R\). In the proof of the following lemma, we write \(s[t]_{suf}\) to indicate that \(t\) occurs in \(s\) as a suffix and (ambiguously) denote by \(s[u]_{suf}\) the result of replacing the occurrence of \(t\) (as a suffix of \(s\)) by \(u\). **Lemma 13**.: _Let \(S\) be saturated under \(\mathfrak{S}\) not containing the empty clause and \(C\) be a \(g\)-clause of a clause in \(S\). Then \(C\) is true in \(I_{S}\). More specifically, (i) If \(C\) is redundant w.r.t. \(S\), then it is true in \(I_{S}\)._ _(ii) If \(C\) is not a reduced \(g\)-clause of a clause in \(S\) w.r.t. \(R_{S}\), then it is true in \(I_{S}\)._ _(iii) If \(C:=C^{\prime}\lor s_{\perp}\approx t_{\perp}\) produces the rule \(s_{\perp}\to t_{\perp}\), then \(C^{\prime}\) is false and \(C\) is true in \(I_{S}\)._ _(iv) If \(C\) is a \(g\)-clause of a clause in \(S\) with a selected literal, then it is true in \(I_{S}\)._ _(v) If \(C\) is non-productive, then it is true in \(I_{S}\)._ Proof.: We use induction on \(\succ_{g}\) and assume that (i)-(v) hold for every \(g\)-clause \(D\) of a clause in \(S\) with \(C\succ_{g}D\). (i) Suppose that \(C\) is redundant w.r.t. \(S\). Then there exist \(g\)-clauses \(C^{\prime}_{1},\ldots,C^{\prime}_{k}\) of clauses \(C_{1},\ldots,C_{k}\) in \(S\), such that \(\{C^{\prime}_{1},\ldots,C^{\prime}_{k}\}\models C\) and \(C\succ_{g}C^{\prime}_{i}\) for all \(1\leq i\leq k\). By the induction hypothesis, each \(C^{\prime}_{i}\), \(1\leq i\leq k\), is true in \(I_{S}\). Thus, \(C\) is true in \(I_{S}\). (ii) Suppose that \(C\) is a \(g\)-clause of a clause \(B:=s_{1}\approx t_{1}\vee\cdots\lor s_{m}\approx t_{m}\lor u_{1}\not\approx v _{1}\vee\cdots\lor u_{n}\not\approx v_{n}\) in \(S\) but is not a reduced \(g\)-clause w.r.t. \(R_{S}\). Then \(C\) is of the form \(C:=s_{1}w_{1}.\bot\approx t_{1}w_{1}.\bot\vee\cdots\lor s_{m}w_{m}.\bot\approx t _{m}w_{m}.\bot\lor u_{1}.\not\approx v_{1}.\bot\vee\cdots\lor u_{n}.\bot \not\approx v_{n}.\bot\) for \(w_{1},\ldots,w_{m}\in\Sigma^{*}\) and some \(w_{k}.\bot\) is reducible by \(R_{S}\). Now, consider \(C^{\prime}=s_{1}w^{\prime}_{1}.\bot\approx t_{1}w^{\prime}_{1}.\bot\vee\cdots \lor s_{m}w^{\prime}_{m}.\bot\approx t_{m}w^{\prime}_{m}.\bot\lor u_{1}.\not \approx v_{1}.\bot\vee\cdots\lor u_{n}.\bot\not\approx v_{n}.\bot\), where \(w^{\prime}_{i}.\bot\) is the normal form of \(w_{i}.\bot\) w.r.t. \(R_{S}\) for each \(1\leq i\leq m\). Then \(C^{\prime}\) is a reduced \(g\)-clause of \(B\) w.r.t. \(R_{S}\), and is true in \(I_{S}\) by the induction hypothesis. 
Since each \(w_{i}\bot\approx w^{\prime}_{i}\bot\), \(1\leq i\leq m\), is true in \(I_{S}\) by Lemma 10(iii), we may infer that \(C\) is true in \(I_{S}\). In the remainder of the proof of this lemma, we assume that \(C\) is neither redundant w.r.t. \(S\) nor a reducible \(g\)-clause (w.r.t. \(R_{S}\)) of some clause in \(S\). (Otherwise, we are done by (i) or (ii).) (iii) Suppose that \(C:=C^{\prime}\lor s\bot\approx t\bot\) produces the rule \(s\bot\to t\bot\). Since \(s\bot\to t\bot\in E_{C}\subset R_{S}\), we see that \(C\) is true in \(I_{S}\). We show that \(C^{\prime}\) is false in \(I_{S}\). Let \(C^{\prime}:=\Gamma\to\Delta\). Then \(I_{C}\not\models C^{\prime}\) by Definition 9, which implies that \(I_{C}\cap\Delta=\emptyset\), \(I_{C}\supseteq\Gamma\), and thus \(I_{S}\supseteq\Gamma\). It remains to show that \(I_{S}\cap\Delta=\emptyset\). Suppose to the contrary that \(\Delta\) contains an equation \(s^{\prime}\bot\approx t^{\prime}\bot\) which is true in \(I_{S}\). Since \(I_{C}\cap\Delta=\emptyset\), we must have \(s^{\prime}\bot\approx t^{\prime}\bot\in I_{S}\setminus I_{C}\), which is only possible if \(s\bot=s^{\prime}\bot\) and \(I_{C}\models t\bot\approx t^{\prime}\bot\), contradicting condition (2) in Definition 9. (iv) Suppose that \(C\) is of the form \(C:=B^{\prime}\lor s\bot\not\approx t\bot\), where \(s\bot\not\approx t\bot\) is a \(g\)-literal of a selected literal in a clause in \(S\) and \(B^{\prime}\) is a \(g\)-clause of \(B\). (iv.1) If \(s\bot=t\bot\), then \(B^{\prime}\) is an equality resolvent of \(C\) and the Equality Resolution inferences can be lifted. By saturation of \(S\) under \(\mathfrak{S}\) and the induction hypothesis, \(B^{\prime}\) is true in \(I_{S}\). Thus, \(C\) is true in \(I_{S}\). (iv.2) If \(s\bot\neq t\bot\), then suppose to the contrary that \(C\) is false in \(I_{S}\). Then we have \(I_{S}\models s\bot\approx t\bot\), which implies that \(s\bot\) or \(t\bot\) is reducible by \(R_{S}\) by Lemma 10(iii). Without loss of generality, we assume that \(s\bot\) is reducible by \(R_{S}\) with some rule \(lu\bot\to ru\bot\) for some \(u\in\Sigma^{*}\) produced by a productive \(g\)-clause \(D^{\prime}\lor lu\bot\approx ru\bot\) of a clause \(D\lor l\approx r\in S\). This means that \(s\bot\) has a suffix \(lu\bot\). Now, consider the following inference by Rewrite: \[\frac{B\lor s[lu]_{suf}\not\approx t\qquad D\lor l\approx r}{B\lor D\lor s[ru]_{suf}\not\approx t}\] where \(s[lu]_{suf}\not\approx t\) is selected for the left premise. The conclusion of the above inference has a \(g\)-clause \(C^{\prime}:=B^{\prime}\lor D^{\prime}\lor s\bot[ru\bot]_{suf}\not\approx t\bot\). By saturation of \(S\) under \(\mathfrak{S}\) and the induction hypothesis, \(C^{\prime}\) must be true in \(I_{S}\). Moreover, we see that \(s\bot[ru\bot]_{suf}\not\approx t\bot\) is false in \(I_{S}\) by Lemma 10(iii), and \(D^{\prime}\) is false in \(I_{S}\) by (iii). This means that \(B^{\prime}\) is true in \(I_{S}\), and thus \(C\) (i.e., \(C=B^{\prime}\lor s\bot\not\approx t\bot\)) is true in \(I_{S}\), which is the required contradiction. (v) If \(C\) is non-productive, then we assume that \(C\) is not a \(g\)-clause of a clause with a selected literal. Otherwise, the proof is done by (iv). This means that \(C\) is of the form \(C:=B^{\prime}\lor su\bot\approx tu\bot\), where \(su\bot\approx tu\bot\) is maximal in \(C\) and \(B^{\prime}\) contains no selected literal. If \(su\bot=tu\bot\), then we are done. 
Therefore, without loss of generality, we assume that \(su.\bot\succ_{g}tu.\bot\). As \(C\) is non-productive, it means that (at least) one of the conditions in Definition 9 does not hold. If condition (1) does not hold, then \(I_{C}\models C\), so we have \(I_{S}\models C\), i.e., \(C\) is true in \(I_{S}\). If condition (1) holds but condition (2) does not hold, then \(C\) is of the form \(C:=B_{1}^{\prime}\lor su\bot\approx tu\bot\lor svw\bot\approx t^{\prime}vw\bot\), where \(su=svw\) (i.e., \(u=vw\)) and \(I_{C}\models tu\bot\approx t^{\prime}vw\bot\). Suppose first that \(tu\bot=t^{\prime}vw\bot\). Then we have \(t=t^{\prime}\) since \(u=vw\). Now, consider the following inference by Factoring: \[\frac{B_{1}\lor s\approx t\lor sv\approx tv}{B_{1}\lor sv\approx tv}\] The conclusion of the above inference has a \(g\)-clause \(C^{\prime}:=B_{1}^{\prime}\lor svw\bot\approx tvw\bot\), i.e., \(C^{\prime}:=B_{1}^{\prime}\lor su\bot\approx tu\bot\) since \(u=vw\). By saturation of \(S\) under \(\mathfrak{S}\) and the induction hypothesis, \(C^{\prime}\) is true in \(I_{S}\), and thus \(C\) is true in \(I_{S}\). Otherwise, suppose that \(tu\bot\neq t^{\prime}vw\bot\). Then we have \(tu\bot\downarrow_{R_{C}}t^{\prime}vw\bot\) by Lemma 10(iii) and \(tu\bot\succ_{g}t^{\prime}vw\bot\) because \(su\bot\approx tu\bot\) is maximal in \(C\). This means that \(tu\bot\) is reducible by \(R_{C}\) by some rule \(l\tau\bot\to r\tau\bot\) produced by a productive \(g\)-clause \(D^{\prime}\lor l\tau\bot\approx r\tau\bot\) of a clause \(D\lor l\approx r\in S\). Now, we need to consider two cases: (v.1) If \(t\) has the form \(t:=u_{1}u_{2}\) and \(l\) has the form \(l:=u_{2}u_{3}\), then consider the following inference by Paramodulation: \[\frac{B\lor s\approx u_{1}u_{2}\qquad D\lor u_{2}u_{3}\approx r}{B\lor D\lor su _{3}\approx u_{1}r}\] The conclusion of the above inference has a \(g\)-clause \(C^{\prime}:=B^{\prime}\lor D^{\prime}\lor su_{3}\tau\bot\approx u_{1}r\tau\bot\) with \(u=u_{3}\tau\). By saturation of \(S\) under \(\mathfrak{S}\) and the induction hypothesis, \(C^{\prime}\) is true in \(I_{S}\). Since \(D^{\prime}\) is false in \(I_{S}\) by (iii), either \(B^{\prime}\) or \(su_{3}\tau\bot\approx u_{1}r\tau\bot\) is true in \(I_{S}\). If \(B^{\prime}\) is true in \(I_{S}\), so is \(C\). If \(su_{3}\tau\bot\approx u_{1}r\tau\bot\) is true in \(I_{S}\), then \(su\bot\approx tu\bot\) is also true in \(I_{S}\) by Lemma 10(iii), where \(t=u_{1}u_{2}\) and \(u=u_{3}\tau\). Thus, \(C\) is true in \(I_{S}\). (v.2) If \(t\) has the form \(t:=u_{1}u_{2}u_{3}\) and \(l\) has the form \(l:=u_{2}\), then consider the following inference by Rewrite: \[\frac{B\lor s\approx u_{1}u_{2}u_{3}\qquad D\lor u_{2}\approx r}{B\lor D\lor s \approx u_{1}ru_{3}}\] The conclusion of the above inference has a \(g\)-clause \(C^{\prime\prime}:=B^{\prime}\lor D^{\prime}\lor su\bot\approx u_{1}ru_{3}u\bot\) with \(\tau=u_{3}u\). By saturation of \(S\) under \(\mathfrak{S}\) and the induction hypothesis, \(C^{\prime\prime}\) is true in \(I_{S}\). Since \(D^{\prime}\) is false in \(I_{S}\) by (iii), either \(B^{\prime}\) or \(su_{\bot}\approx u_{1}ru_{3}u\bot\) is true in \(I_{S}\). Similarly to case (v.1), if \(B^{\prime}\) is true in \(I_{S}\), so is \(C\). If \(su_{\bot}\approx u_{1}ru_{3}u\bot\) is true in \(I_{S}\), then \(su_{\bot}\approx tu\bot\) is also true in \(I_{S}\) by Lemma 10(iii), where \(t=u_{1}u_{2}u_{3}\). Thus, \(C\) is true in \(I_{S}\). 
If conditions (1) and (2) hold but condition (3) does not hold, then \(su_{\bot}\approx tu\bot\) is only maximal but is not strictly maximal, so we are in the previous case. (Since \(\succ_{g}\) is total on \(g\)-clauses, condition (2) does not hold.) If conditions (1)-(3) hold but condition (4) does not hold, then \(su_{\bot}\) is reducible by \(R_{C}\) by some rule \(l\tau\bot\to r\tau\bot\) produced by a productive \(g\) clause \(D^{\prime}\lor l\tau\bot\approx r\tau\bot\) of a clause \(D\lor l\approx r\in S\). Again, we need to consider two cases: (v.1') If \(s\) has the form \(s:=u_{1}u_{2}\) and \(l\) has the form \(l:=u_{2}u_{3}\), then consider the following inference by Superposition: \[\frac{B\lor u_{1}u_{2}\approx t\qquad D\lor u_{2}u_{3}\approx r}{B\lor D\lor u_{ 1}r\approx tu_{3}}\] The conclusion of the above inference has a \(g\)-clause \(C^{\prime}:=B^{\prime}\lor D^{\prime}\lor u_{1}r\tau\bot\approx tu_{3}\tau\bot\) with \(u=u_{3}\tau\). By saturation of \(S\) under \(\mathfrak{S}\) and the induction hypothesis, \(C^{\prime}\) is true in \(I_{S}\). Since \(D^{\prime}\) is false in \(I_{S}\) by (iii), either \(B^{\prime}\) or \(u_{1}r\tau\bot\approx tu_{3}\tau\bot\) is true in \(I_{S}\). If \(B^{\prime}\) is true in \(I_{S}\), so is \(C\). If \(u_{1}r\tau\bot\approx tu_{3}\tau\bot\) is true in \(I_{S}\), then \(su\bot\approx tu\bot\) is also true in \(I_{S}\) by Lemma 10(iii), where \(s=u_{1}u_{2}\) and \(u=u_{3}\tau\). Thus, \(C\) is true in \(I_{S}\). (v.2') If \(s\) has the form \(s:=u_{1}u_{2}u_{3}\) and \(l\) has the form \(l:=u_{2}\), then consider the following inference by Rewrite: \[\frac{B\lor u_{1}u_{2}u_{3}\approx t\qquad D\lor u_{2}\approx r}{B\lor D\lor u _{1}ru_{3}\approx t}\] The conclusion of the above inference has a \(g\)-clause \(C^{\prime\prime}:=B^{\prime}\lor D^{\prime}\lor u_{1}ru_{3}u\bot\approx tu\bot\) with \(\tau=u_{3}u\). By saturation of \(S\) under \(\mathfrak{S}\) and the induction hypothesis, \(C^{\prime\prime}\) is true in \(I_{S}\). Since \(D^{\prime}\) is false in \(I_{S}\) by (iii), either \(B^{\prime}\) or \(u_{1}ru_{3}u\bot\approx tu\bot\) is true in \(I_{S}\). Similarly to case (v.1'), If \(B^{\prime}\) is true in \(I_{S}\), so is \(C\). If \(u_{1}ru_{3}u\bot\approx tu\bot\) is true in \(I_{S}\), then \(su\bot\approx tu\bot\) is also true in \(I_{S}\) by Lemma 10(iii), where \(s=u_{1}u_{2}u_{3}\). Thus, \(C\) is true in \(I_{S}\). **Definition 14**.: (i) A _theorem proving derivation_ is a sequence of sets of clauses \(S_{0}=S,S_{1},\ldots\) over \(\Sigma^{*}\) such that: (i.1) Deduction: \(S_{i}=S_{i-1}\cup\{C\}\) if \(C\) can be deduced from premises in \(S_{i-1}\) by applying an inference rule in \(\mathfrak{S}\). (i.2) Deletion: \(S_{i}=S_{i-1}\setminus\{D\}\) if \(D\) is redundant w.r.t. \(S_{i-1}\).5 Footnote 5: Here, an inference by Simplification combines the Deduction step for \(C\lor l_{1}r_{2}\bowtie v\) and the Deletion step for \(C\lor l_{1}l_{2}\bowtie v\) (see the Simplification rule). (ii) The set \(S_{\infty}:=\bigcup_{i}(\bigcap_{j\geq i}S_{j})\) is the _limit_ of the theorem proving derivation. We see that the soundness of a theorem proving derivation w.r.t. the proposed inference system is straightforward, i.e., \(S_{i}\models S_{i+1}\) for all \(i\geq 0\). **Definition 15**.: A theorem proving derivation \(S_{0},S_{1},S_{2},\ldots\) is _fair_ w.r.t. the inference system \(\mathfrak{S}\) if every inference by \(\mathfrak{S}\) with premises in \(S_{\infty}\) is redundant w.r.t. \(\bigcup_{j}S_{j}\). 
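To relate Definitions 14 and 15 to an implementation, the following is a schematic given-clause loop (a sketch only, not part of the calculus): unprocessed clauses are picked in FIFO order, which keeps the resulting derivation fair in the sense of Definition 15, redundant clauses are removed as Deletion steps, and conclusions of inferences by \(\mathfrak{S}\) are added as Deduction steps. The functions `infer_with` and `is_redundant` are placeholders for the inference rules of \(\mathfrak{S}\) and the redundancy criterion, respectively.

```python
def saturate(initial_clauses, infer_with, is_redundant, max_steps=100_000):
    """Schematic given-clause saturation loop (Deduction/Deletion steps).

    infer_with(given, processed) should return all conclusions of inferences
    whose premises are `given` together with clauses in `processed`;
    is_redundant(clause, others) implements the redundancy test.
    Returns the saturated set, or None if the empty clause is derived.
    """
    processed = []
    unprocessed = list(initial_clauses)          # FIFO queue keeps the derivation fair
    for _ in range(max_steps):
        if not unprocessed:
            return processed                     # no inferences left: saturated set
        given = unprocessed.pop(0)
        if is_redundant(given, processed + unprocessed):
            continue                             # Deletion step
        if len(given) == 0:
            return None                          # empty clause: input unsatisfiable
        processed.append(given)
        unprocessed.extend(infer_with(given, processed))   # Deduction steps
    raise RuntimeError("step limit exceeded without saturating")
```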
**Lemma 16**.: _Let \(S\) and \(S^{\prime}\) be sets of clauses over \(\Sigma^{*}\). (i) If \(S\subseteq S^{\prime}\), then any clause which is redundant w.r.t. \(S\) is also redundant w.r.t. \(S^{\prime}\). (ii) If \(S\subseteq S^{\prime}\) and all clauses in \(S^{\prime}\setminus S\) are redundant w.r.t. \(S^{\prime}\), then any clause or inference which is redundant w.r.t. \(S^{\prime}\) is also redundant w.r.t. \(S\)._ Proof.: The proof of part (i) is obvious. For part (ii), suppose that a clause \(C\) is redundant w.r.t. \(S^{\prime}\) and let \(C^{\prime}\) be a \(g\)-clause of it. Then there exists a minimal set \(N:=\{C^{\prime}_{1},\ldots,C^{\prime}_{n}\}\) (w.r.t. \(\succ_{g}\)) of \(g\)-clauses of clauses in \(S^{\prime}\) such that \(N\models C^{\prime}\) and \(C^{\prime}\succ_{g}C^{\prime}_{i}\) for all \(1\leq i\leq n\). We claim that all \(C^{\prime}_{i}\) in \(N\) are not redundant w.r.t. \(S^{\prime}\), which shows that \(C^{\prime}\) is redundant w.r.t. \(S\). Suppose to the contrary that some \(C^{\prime}_{j}\) is redundant w.r.t. \(S^{\prime}\). Then there exist a set \(N^{\prime}:=\{D^{\prime}_{1},\ldots,D^{\prime}_{m}\}\) of \(g\)-clauses of clauses in \(S^{\prime}\) such that \(N^{\prime}\models C^{\prime}_{j}\) and \(C^{\prime}_{j}\succ_{g}D^{\prime}_{i}\) for all \(1\leq i\leq m\). This means that we have \(\{C^{\prime}_{1},\ldots,C^{\prime}_{j-1},D^{\prime}_{1},\ldots,D^{\prime}_{m},C ^{\prime}_{j+1},\ldots,C^{\prime}_{n}\}\models C^{\prime}\), which contradicts our minimal choice of the set \(N=\{C^{\prime}_{1},\ldots,C^{\prime}_{n}\}\). Next, suppose an inference \(\pi\) with conclusion \(D\) is redundant w.r.t. \(S^{\prime}\) and let \(\pi^{\prime}\) be a \(g\)-instance of it such that \(B\) is the maximal premise and \(D^{\prime}\) is the conclusion of \(\pi^{\prime}\) (i.e., a \(g\)-clause of \(D\)). Then there exists a minimal set \(P:=\{D^{\prime}_{1},\ldots,D^{\prime}_{n}\}\) (w.r.t. \(\succ_{g}\)) of \(g\)-clauses of clauses in \(S^{\prime}\) such that \(P\models D^{\prime}\) and \(B\succ_{g}D^{\prime}_{i}\) for all \(1\leq i\leq n\). As above, we may infer that all \(D^{\prime}_{i}\) in \(P\) are not redundant w.r.t. \(S^{\prime}\), and thus \(\pi^{\prime}\) is redundant w.r.t. \(S\). **Lemma 17**.: _Let \(S_{0},S_{1},\ldots\) be a fair theorem proving derivation w.r.t. \(\mathfrak{S}\). Then \(S_{\infty}\) is saturated under \(\mathfrak{S}\)._ Proof.: If \(S_{\infty}\) contains the empty clause, then it is obvious that \(S_{\infty}\) is saturated under \(\mathfrak{S}\). Therefore, we assume that the empty clause is not in \(S_{\infty}\). If a clause \(C\) is deleted in a theorem proving derivation, then \(C\) is redundant w.r.t. some \(S_{j}\). By Lemma 16(i), it is also redundant w.r.t. \(\bigcup_{j}S_{j}\). Similarly, every clause in \(\bigcup_{j}S_{j}\setminus S_{\infty}\) is redundant w.r.t. \(\bigcup_{j}S_{j}\). By fairness, every inference \(\pi\) by \(\mathfrak{S}\) with premises in \(S_{\infty}\) is redundant w.r.t. \(\bigcup_{j}S_{j}\). Using Lemma 16(ii) and the above, \(\pi\) is also redundant w.r.t. \(S_{\infty}\), which means that \(S_{\infty}\) is saturated under \(\mathfrak{S}\). **Theorem 18**.: _Let \(S_{0},S_{1},\ldots\) be a fair theorem proving derivation w.r.t. \(\mathfrak{S}\). If \(S_{\infty}\) does not contain the empty clause, then \(I_{S_{\infty}}\models S_{0}\) (i.e., \(S_{0}\) is satisfiable.)_ Proof.: Suppose that \(S_{0},S_{1},\ldots\) is a fair theorem proving derivation w.r.t. 
\(\mathfrak{S}\) and that its limit \(S_{\infty}\) does not contain the empty clause. Then \(S_{\infty}\) is saturated under \(\mathfrak{S}\) by Lemma 17. Let \(C^{\prime}\) be a \(g\)-clause of a clause \(C\) in \(S_{0}\). If \(C\in S_{\infty}\), then \(C^{\prime}\) is true in \(I_{S_{\infty}}\) by Lemma 13. Otherwise, if \(C\notin S_{\infty}\), then \(C\) is redundant w.r.t. some \(S_{j}\). It follows that \(C\) redundant w.r.t. \(\bigcup_{j}S_{j}\) by Lemma 16(i), and thus redundant w.r.t. \(S_{\infty}\) by Lemma 16(ii). This means that there exist \(g\)-clauses \(C^{\prime}_{1},\ldots,C^{\prime}_{k}\) of clauses \(C_{1},\ldots,C_{k}\) in \(S_{\infty}\) such that \(\{C^{\prime}_{1},\ldots,C^{\prime}_{k}\}\models C^{\prime}\) and \(C^{\prime}\succ_{g}C^{\prime}_{i}\) for all \(1\leq i\leq k\). Since each \(C^{\prime}_{i}\), \(1\leq i\leq k\), is true in \(I_{S_{\infty}}\) by Lemma 13, \(C^{\prime}\) is also true in \(I_{S_{\infty}}\), and thus the conclusion follows. The following theorem states that \(\mathfrak{S}\) with the contraction rules is refutationally complete for clauses over \(\Sigma^{*}\). **Theorem 19**.: _Let \(S_{0},S_{1},\ldots\) be a fair theorem proving derivation w.r.t. \(\mathfrak{S}\). Then \(S_{0}\) is unsatisfiable if and only if the empty clause is in some \(S_{j}\)._ Proof.: Suppose that \(S_{0},S_{1},\ldots\) be a fair theorem proving derivation w.r.t. \(\mathfrak{S}\). By the soundness of the derivation, if the empty clause is in some \(S_{j}\), then \(S_{0}\) is unsatisfiable. Otherwise, if the empty clause is not in \(S_{k}\) for all \(k\), then \(S_{\infty}\) does not contain the empty clause by the soundness of the derivation. Applying Theorem 18, we conclude that \(S_{0}\) is satisfiable. ## 6 Conditional Completion In this section, we present a saturation procedure under \(\mathfrak{S}\) for a set of conditional equations over \(\Sigma^{*}\), where a conditional equation is naturally written as an equational Horn clause. A saturation procedure under \(\mathfrak{S}\) can be viewed as _conditional completion_[13] for a set of conditional equations over \(\Sigma^{*}\). If a set of conditional equations over \(\Sigma^{*}\) is simply a set of equations over \(\Sigma^{*}\), then the proposed saturation procedure (w.r.t. \(\succ\)) corresponds to a completion procedure for a string rewriting system. Conditional string rewriting systems were considered in [12] in the context of embedding a finitely generated monoid with decidable word problem into a monoid presented by a finite convergent conditional presentation. It neither discusses a conditional completion (or a saturation) procedure, nor considers the word problems for conditional equations over \(\Sigma^{*}\) in general. First, it is easy to see that a set of equations over \(\Sigma^{*}\) is consistent. Similarly, a set of conditional equations \(R\) over \(\Sigma^{*}\) is consistent because each conditional equation has always a positive literal and we cannot derive the empty clause from \(R\) using a saturation procedure under \(\mathfrak{S}\) that is refutationally complete (cf. Section 9 in [14]). Since we only consider Horn clauses in this section, we neither need to consider the Factoring rule nor the Paramodulation rule in \(\mathfrak{S}\). In the remainder of this section, by a conditional equational theory \(R\), we mean a set of conditional equations \(R\) over \(\Sigma^{*}\). 
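When the saturated set happens to be unconditional, deciding a word problem amounts to rewriting both sides to normal form with the oriented rules and comparing the results. The sketch below illustrates this idea on a small system over \(\{a,b,c\}\) with \(a\succ b\succ c\) (the same oriented rules that will come out of Example 5 below); it illustrates only the unconditional case, not the full conditional procedure developed in this section.

```python
RULES = [("bb", ""), ("a", "b"), ("bc", "cb")]   # oriented left-to-right; lhs > rhs in the length-lex order

def normal_form(word, rules=RULES):
    """Rewrite `word` with the given string rewriting rules until no rule applies."""
    reduced = True
    while reduced:
        reduced = False
        for lhs, rhs in rules:
            i = word.find(lhs)
            if i >= 0:
                word = word[:i] + rhs + word[i + len(lhs):]
                reduced = True
                break
    return word

def same_class(s, t, rules=RULES):
    """Decide s =? t modulo the (convergent) rules by comparing normal forms."""
    return normal_form(s, rules) == normal_form(t, rules)

print(normal_form("acbcba"), normal_form("bccaba"))   # both reduce to "cc"
print(same_class("acbcba", "bccaba"))                  # True
```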
**Definition 20**.: Given a conditional equational theory \(R\) and two finite words \(s,t\in\Sigma^{*}\), a _word problem_ w.r.t. \(R\) is of the form \(\phi:=s\approx^{?}t\). The _goal_ of this word problem is \(s\not\approx t\). We say that a word problem \(s\approx^{?}t\) w.r.t. \(R\) is _decidable_ if there is a decision procedure for determining whether \(s\approx t\) is entailed by \(R\) (i.e., \(R\models s\approx t\)) or not (i.e., \(R\not\models s\approx t\)). Given a conditional equational theory \(R\), let \(G:=s\not\approx t\) be the goal of a word problem \(s\approx^{?}t\) w.r.t. \(R\). (Note that \(G\) does not have any positive literal.) Then we see that \(R\models s\approx t\) if and only if \(R\cup\{G\}\) is inconsistent. This allows one to decide a word problem w.r.t. \(R\) using the equational theorem proving procedure discussed in Section 5. **Lemma 21**.: _Let \(R\) be a conditional equational theory finitely saturated under \(\mathfrak{S}\). Then Rewrite together with Equality Resolution is terminating and refutationally complete for \(R\cup\{G\}\), where \(G\) is the goal of a word problem w.r.t. \(R\)._ Proof.: Since \(R\) is already saturated under \(\mathfrak{S}\), inferences among Horn clauses in \(R\) are redundant and remain redundant in \(R\cup\{G\}\) for a theorem proving derivation starting with \(R\cup\{G\}\). (Here, \(\{G\}\) can be viewed as a _set of support_ [3] for a refutation of \(R\cup\{G\}\).) Now, observe that \(G\) is a negative literal, so it should be selected. The only inference rules in \(\mathfrak{S}\) involving a selected literal are the Rewrite and Equality Resolution rules. Furthermore, the derived literals from \(G\) w.r.t. Rewrite will also be selected eventually. Therefore, it suffices to consider positive literals as the right premise (because they contain no selected literal), and \(G\) and its derived literals w.r.t. Rewrite as the left premise for the Rewrite rule. Observe also that if \(G^{\prime}\) is an immediate derived literal from \(G\) w.r.t. Rewrite, then we see that \(G\succ G^{\prime}\). If \(G\) or its derived literal from \(G\) w.r.t. Rewrite becomes of the form \(u\not\approx u\) for some \(u\in\Sigma^{*}\), then it will also be selected and an Equality Resolution inference yields the empty clause. Since \(\succ\) is well-founded and there are only finitely many positive literals in \(R\), we may infer that the Rewrite and Equality Resolution inference steps on \(G\) and its derived literals are terminating. (The number of positive literals in \(R\) remains the same during a theorem proving derivation starting with \(R\cup\{G\}\) using our selection strategy.) Finally, since \(\mathfrak{S}\) is refutationally complete by Theorem 19, Rewrite together with Equality Resolution is also refutationally complete for \(R\cup\{G\}\). Given a finitely saturated conditional equational theory \(R\) under \(\mathfrak{S}\), we provide a decision procedure for the word problems w.r.t. \(R\) in the following theorem. **Theorem 22**.: _Let \(R\) be a conditional equational theory finitely saturated under \(\mathfrak{S}\). Then the word problems w.r.t. \(R\) are decidable by Rewrite together with Equality Resolution._ Proof.: Let \(\phi:=s\approx^{?}t\) be a word problem w.r.t. \(R\) and \(G\) be the goal of \(\phi\). We know that by Lemma 21, Rewrite together with Equality Resolution is terminating and refutationally complete for \(R\cup\{G\}\). 
Let \(R_{0}:=R\cup\{G\},R_{1},\ldots,R_{n}\) be a fair theorem proving derivation w.r.t. Rewrite together with Equality Resolution such that \(R_{n}\) is the limit of this derivation. If \(R_{n}\) contains the empty clause, then \(R_{n}\) is inconsistent, and thus \(R_{0}\) is inconsistent, i.e., \(\{s\not\approx t\}\cup R\) is inconsistent by the soundness of the derivation. Since \(R\) is consistent and \(\{s\not\approx t\}\cup R\) is saturated under \(\mathfrak{S}\), we may infer that \(R\models s\approx t\). Otherwise, if \(R_{n}\) does not contain the empty clause, then \(R_{n}\) is consistent, and thus \(R_{0}\) is consistent by Theorem 19, i.e., \(\{s\not\approx t\}\cup R\) is consistent. Since \(R\) is consistent and \(\{s\not\approx t\}\cup R\) is saturated under \(\mathfrak{S}\), we may infer that \(R\not\models s\approx t\). The following corollary is a consequence of Theorem 22 and the following observation. Let \(R=R_{0},R_{1},\ldots,R_{n}\) be a finite fair theorem proving derivation w.r.t. \(\mathfrak{S}\) for an initial conditional equational theory \(R\) with the limit \(\bar{R}:=R_{n}\). Then \(R\cup\{G\}\) is inconsistent if and only if \(\bar{R}\cup\{G\}\) is inconsistent by the soundness of the derivation and Theorem 19. **Corollary 23**.: _Let \(R=R_{0},R_{1},\ldots\) be a fair theorem proving derivation w.r.t. \(\mathfrak{S}\) for a conditional equational theory \(R\). If \(R\) can be finitely saturated under \(\mathfrak{S}\), then the word problems w.r.t. \(R\) are decidable._ **Example 5**.: Let \(a\succ b\succ c\) and \(R\) be a conditional equational theory consisting of the following rules 1: \(aa\approx\lambda\), 2: \(bb\approx\lambda\), 3: \(ab\approx\lambda\), 4: \(ab\not\approx ba\lor ac\approx ca\), and 5: \(ab\not\approx ba\lor ac\not\approx ca\lor bc\approx cb\). We first saturate \(R\) under \(\mathfrak{S}\): 6: \(\lambda\not\approx ba\lor ac\approx ca\) (\(ab\not\approx ba\) is selected for 4. Rewrite of 4 with 3) 7: \(\lambda\not\approx ba\lor ac\not\approx ca\lor bc\approx cb\) (\(ab\not\approx ba\) is selected for 5. Rewrite of 5 with 3) 8: \(a\approx b\) (Superposition of 1 with 3) 9: \(\lambda\not\approx bb\lor ac\approx ca\) (\(\lambda\not\approx ba\) is selected for 6. Rewrite of 6 with 8) 10: \(\lambda\not\approx\lambda\lor ac\approx ca\) (\(\lambda\not\approx bb\) is selected for 9. Rewrite of 9 with 2) 11: \(ac\approx ca\) (\(\lambda\not\approx\lambda\) is selected for 10. Equality Resolution on 10) 12: \(\lambda\not\approx bb\lor ac\not\approx ca\lor bc\approx cb\) (\(\lambda\not\approx ba\) is selected for 7. Rewrite of 7 with 8) 13: \(\lambda\not\approx\lambda\lor ac\not\approx ca\lor bc\approx cb\) (\(\lambda\not\approx bb\) is selected for 12. Rewrite of 12 with 2) 14: \(ac\not\approx ca\lor bc\approx cb\) ( \(\lambda\not\approx\lambda\) is selected for 13. Equality Resolution on 13) 15: \(ca\not\approx ca\lor bc\approx cb\) ( \(ac\not\approx ca\) is selected for 14. Rewrite of 14 with 11) 16: \(bc\approx cb\) (\(ca\not\approx ca\) is selected for 15. Equality Resolution on 15) ... After some simplification steps, we have a saturated set \(\bar{R}\) for \(R\) under \(\mathfrak{S}\) using our selection strategy (i.e., the selection of negative literals). We may infer that the positive literals in \(\bar{R}\) are as follows. \(1^{\prime}:bb\approx\lambda\), \(2^{\prime}:a\approx b\), and \(3^{\prime}:bc\approx cb\). Note that only the positive literals in \(\bar{R}\) are now needed to solve a word problem w.r.t. 
\(R\) because of our selection strategy. Now, consider the word problem \(\phi:=acbcba\approx^{?}bccaba\) w.r.t. \(R\), where the goal of \(\phi\) is \(G:=acbcba\not\approx bccaba\). We only need the Rewrite and Equality Resolution steps on \(G\) and its derived literals from \(G\) using \(1^{\prime}\), \(2^{\prime}\), and \(3^{\prime}\). Note that all the following literals are selected except the empty clause. \(4^{\prime}\): \(bcbcbb\not\approx bccbbb\) (Rewrite steps of \(G\) and its derived literals from \(G\) using \(2^{\prime}\)). \(5^{\prime}\): \(bcbc\not\approx bccb\) (Rewrite steps of \(4^{\prime}\) and its derived literals from \(4^{\prime}\) using \(1^{\prime}\)). \(6^{\prime}\): \(ccbb\not\approx ccbb\) (Rewrite steps of \(5^{\prime}\) and its derived literals from \(5^{\prime}\) using \(3^{\prime}\)). \(7^{\prime}\): \(\square\) (Equality Resolution on \(6^{\prime}\)) Since \(\bar{R}\cup\{G\}\) is inconsistent, we see that \(R\cup\{G\}\) is inconsistent by the soundness of the derivation, where \(R\) and \(\bar{R}\) are consistent. Therefore, we may infer that \(R\models acbcba\approx bccaba\). ## 7 Related Work Equational reasoning on strings has been studied extensively in the context of string rewriting systems and Thue systems [9] and their related algebraic structures. The monotonicity assumption used in this paper is found in string rewriting systems and Thue systems in the form of a congruence relation (see [9, 18]). See also [9, 11, 12, 24] for the completion of algebraic structures and decidability results using string rewriting systems, in particular the _cross-sections_ for finitely presented monoids discussed by Otto et al. [24]. However, those systems are not concerned with equational theorem proving for general clauses over strings. If the monotonicity assumption is discarded, then equational theorem proving for clauses over strings can be handled by traditional superposition calculi or SMT with the theory of equality with uninterpreted functions (EUF) and their variants [7], using a simple translation into first-order ground terms. Also, efficient SMT solvers for various string constraints have been discussed in the literature (e.g., [21]). Meanwhile, equational theorem proving modulo associativity was studied in [27]. (See also [20] for equational theorem proving with _sequence variables_ and fixed or variadic arity symbols.) This approach is not tailored towards (ground) strings, so it needs an additional encoding for each string; it is thus likely less efficient, and it does not provide a decision procedure similar to the one discussed in Section 6. The proposed calculus is the first sound and refutationally complete equational theorem proving calculus for general clauses over strings under the monotonicity assumption. One may attempt to use the existing superposition calculi for clauses over strings with the proposed translation scheme discussed in Section 3.2, which translates clauses over strings into clauses over first-order terms. However, this does not work because of the Equality Factoring rule [4, 23] or the Merging Paramodulation rule [4], which are essential for first-order superposition theorem proving calculi in general. For example, consider a clause \(a\approx b\lor a\approx c\) with \(a\succ b\succ c\), which is translated into a first-order clause \(a(x)\approx b(x)\lor a(y)\approx c(y)\). 
The Equality Factoring rule yields \(b(z)\not\approx c(z)\lor a(z)\approx c(z)\) from \(a(x)\approx b(x)\lor a(y)\approx c(y)\), which cannot be translated back into a clause over strings (see Lemma 1). Similarly, a first-order clause produced by Merging Paramodulation may not be translated back into a clause over strings. If one is only concerned with refutational completeness, then the existing superposition calculi6 can be adapted by using the proposed translation scheme. In this case, however, a saturated set may not be translated back into clauses over strings, which is an obvious drawback for its applications (see _programs_ in [4]). Footnote 6: The reader is also encouraged to see _AVATAR modulo theories_ [25], which is based on the concept of splitting. ## 8 Conclusion This paper has presented a new refutationally complete superposition calculus with strings and provided a framework for equational theorem proving for clauses over strings. The results presented in this paper generalize the results about completion of string rewriting systems and equational theorem proving using equations over strings. The proposed superposition calculus is based on simple string matching methods and an efficient length-lexicographic ordering that allows one to compare two finite strings in linear time for a fixed signature with its precedence. The proposed approach translates a clause over strings into its first-order representation by taking the monotonicity property of equations over strings into account. The existing notion of redundancy and the model construction techniques of the equational theorem proving framework have then been adapted to clauses over strings. This paper has also provided a decision procedure for word problems over strings w.r.t. a set of conditional equations \(R\) over strings if \(R\) can be finitely saturated under the Superposition, Rewrite and Equality Resolution rules. (The complexity of this decision procedure is not analyzed in this paper and is left as future work.) Since strings are fundamental objects in mathematics, logic, and computer science including formal language theory, developing applications based on the proposed superposition calculus with strings may be a promising future research direction. Also, the results in this paper may have potential applications in verification systems and solving satisfiability problems [2]. In addition, it would be an interesting future research direction to extend our superposition calculus with strings to superposition calculi using built-in equational theories, such as commutativity, _idempotency_ [9], _nilpotency_ [16], and their various combinations. For example, research on superposition theorem proving for _commutative monoids_ [26] is one such direction.
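As an aside on the length-lexicographic ordering mentioned above, the comparison it requires is a single left-to-right pass, as in the following sketch; the precedence is given as a string whose earlier symbols are larger, and all names are illustrative.

```python
def llex_greater(s, t, precedence="abc"):
    """Return True iff s is strictly greater than t in the length-lexicographic
    ordering induced by `precedence` (symbols listed from largest to smallest)."""
    if len(s) != len(t):
        return len(s) > len(t)                       # longer words are larger
    rank = {c: i for i, c in enumerate(precedence)}  # smaller index = larger symbol
    for x, y in zip(s, t):
        if x != y:
            return rank[x] < rank[y]
    return False                                     # the words are equal

assert llex_greater("aa", "b")     # decided by length first
assert llex_greater("bc", "cb")    # same length, decided by the first difference (b > c)
assert not llex_greater("cb", "bc")
```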
2301.12056
Variational Latent Branching Model for Off-Policy Evaluation
Model-based methods have recently shown great potential for off-policy evaluation (OPE); offline trajectories induced by behavioral policies are fitted to transitions of Markov decision processes (MDPs), which are used to rollout simulated trajectories and estimate the performance of policies. Model-based OPE methods face two key challenges. First, as offline trajectories are usually fixed, they tend to cover limited state and action space. Second, the performance of model-based methods can be sensitive to the initialization of their parameters. In this work, we propose the variational latent branching model (VLBM) to learn the transition function of MDPs by formulating the environmental dynamics as a compact latent space, from which the next states and rewards are then sampled. Specifically, VLBM leverages and extends the variational inference framework with the recurrent state alignment (RSA), which is designed to capture as much information underlying the limited training data, by smoothing out the information flow between the variational (encoding) and generative (decoding) part of VLBM. Moreover, we also introduce the branching architecture to improve the model's robustness against randomly initialized model weights. The effectiveness of the VLBM is evaluated on the deep OPE (DOPE) benchmark, from which the training trajectories are designed to result in varied coverage of the state-action space. We show that the VLBM outperforms existing state-of-the-art OPE methods in general.
Qitong Gao, Ge Gao, Min Chi, Miroslav Pajic
2023-01-28T02:20:03Z
http://arxiv.org/abs/2301.12056v4
# Variational Latent Branching Model ###### Abstract Model-based methods have recently shown great potential for off-policy evaluation (OPE); offline trajectories induced by behavioral policies are fitted to transitions of Markov decision processes (MDPs), which are used to rollout simulated trajectories and estimate the performance of policies. Model-based OPE methods face two key challenges. First, as offline trajectories are usually fixed, they tend to cover limited state and action space. Second, the performance of model-based methods can be sensitive to the initialization of their parameters. In this work, we propose the variational latent branching model (VLBM) to learn the transition function of MDPs by formulating the environmental dynamics as a compact latent space, from which the next states and rewards are then sampled. Specifically, VLBM leverages and extends the variational inference framework with the _recurrent state alignment (RSA)_, which is designed to capture as much information underlying the limited training data, by smoothing out the information flow between the variational (encoding) and generative (decoding) part of VLBM. Moreover, we also introduce the _branching architecture_ to improve the model's robustness against randomly initialized model weights. The effectiveness of the VLBM is evaluated on the deep OPE (DOPE) benchmark, from which the training trajectories are designed to result in varied coverage of the state-action space. We show that the VLBM outperforms existing state-of-the-art OPE methods in general. ## 1 Introduction Off-policy evaluation (OPE) allows for evaluation of reinforcement learning (RL) policies without online interactions. It is applicable to many domains where on-policy data collection could be prevented due to efficiency and safety concerns, _e.g._, healthcare (Gao et al., 2022c;a; Tang and Wiens, 2021), recommendation systems (Mehrotra et al., 2018; Li et al., 2011), education (Mandel et al., 2014), social science (Segal et al., 2018) and optimal control (Silver et al., 2016; Vinyals et al., 2019; Gao et al., 2020; 2019, 2020b). Recently, as reported in the deep OPE (DOPE) benchmark (Fu et al., 2020b), model-based OPE methods, leveraging feed-forward (Fu et al., 2020b) and auto-regressive (AR) (Zhang et al., 2020a) architectures, have shown promising results toward estimating the return of target policies, by fitting transition functions of MDPs. However, model-based OPE methods remain challenged as they can only be trained using offline trajectory data, which often offers limited coverage of state and action space. Thus, they may perform sub-optimally on tasks where parts of the dynamics are not fully explored (Fu et al., 2020b). Moreover, different initialization of the model weights could lead to varied evaluation performance (Hanin and Rolnick, 2018; Rossi et al., 2019), reducing the robustness of downstream OPE estimations. Some approaches in RL policy optimization literature use latent models trained to capture a compact space from which the dynamics underlying MDPs are extrapolated; this allows learning expressive representations over the state-action space. However, such approaches usually require _online_ data collections as the focus is on quickly navigating to the high-reward regions (Rybkin et al., 2021), as well as on improving coverage of the explored state and action space (Zhang et al., 2019; Hafner et al., 2019, 2020a) or sample efficiency (Lee et al., 2020). 
In this work, we propose the variational latent branching model (VLBM), aiming to learn a compact and disentangled latent representation space from offline trajectories, which can better capture the dynamics underlying environments. VLBM enriches the architectures and optimization objectives for existing latent modeling frameworks, allowing them to learn from a _fixed_ set of _offline_ trajectories. Specifically, VLBM considers learning variational (encoding) and generative (decoding) distributions, both represented by long short-term memories (LSTMs) with reparameterization (Kingma & Welling, 2013), to encode the state-action pairs and enforce the transitions over the latent space, respectively. To train such models, we optimize over the evidence lower bound (ELBO) jointly with a _recurrent state alignment_ (RSA) term defined over the LSTM states; this ensures that the information encoded into the latent space can be effectively teased out by the decoder. Then, we introduce the _branching architecture_ that allows for multiple decoders to jointly infer from the latent space and reach a consensus, from which the next state and reward are generated. This is designed to mitigate the side effects of model-based methods where different weight initializations could lead to varied performance (Fu et al., 2020; Hanin & Rolnick, 2018; Rossi et al., 2019). We focus on using the VLBM to facilitate OPE since it allows to better distinguish the improvements made upon learning dynamics underlying the MDP used for estimating policy returns, as opposed to RL training where performance can be affected by multiple factors, _e.g._, techniques used for exploration and policy optimization. Moreover, model-based OPE methods is helpful for evaluating the safety and efficacy of RL-based controllers before deployments in the real world (Gao et al., 2022), _e.g._, how a surgical robot would react to states that are critical to a successful procedure. The key contributions of this paper are summarized as follows: (\(i\)) to the best of our knowledge, the VLBM is the first method that leverages variational inference for OPE. It can be trained using offline trajectories and capture environment dynamics over latent space, as well as estimate returns of target (evaluation) policies accurately. (\(ii\)) The design of the RSA loss term and branching architecture can effectively smooth the information flow in the latent space shared by the encoder and decoder, increasing the expressiveness and robustness of the model. This is empirically shown in experiments by comparing with ablation baselines. (\(iii\)) Our method generally outperforms existing model-based and model-free OPE methods, for evaluating policies over various D4RL environments (Fu et al., 2020). Specifically, we follow guidelines provided by the DOPE benchmark (Fu et al., 2020), which contains challenging OPE tasks where the training trajectories include varying levels of coverage of the state-action space, and target policies are designed toward resulting in state-action distributions different from the ones induced by behavioral policies. ## 2 Variational Latent Branching Model In this section, we first introduce the objective of OPE and the variational latent model (VLM) we consider. Then, we propose the recurrent state alignment (RSA) term as well as the branching architecture that constitute the variational latent branching model (VLBM). ### OPE Objective We first introduce the MDP used to characterize the environment. 
Specifically, an MDP can be defined as a tuple \(\mathcal{M}=(\mathcal{S},\mathcal{A},\mathcal{P},R,s_{0},\gamma)\), where \(\mathcal{S}\) is the set of states, \(\mathcal{A}\) the set of actions, \(\mathcal{P}:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}\) is the transition distribution usually captured by probabilities \(p(s_{t}|s_{t-1},a_{t-1})\), \(R:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) is the reward function, \(s_{0}\) is the initial state sampled from the initial state distribution \(p(s_{0})\), and \(\gamma\in[0,1)\) is the discounting factor. Finally, the agent interacts with the MDP following some policy \(\pi(a|s)\) which defines the probabilities of taking action \(a\) at state \(s\). Then, the goal of OPE can be formulated as follows. Given trajectories collected by a _behavioral_ policy \(\beta\), \(\rho^{\beta}=\{[(s_{0},a_{0},r_{0},s_{1}),\ldots,(s_{T-1},a_{T-1},r_{T-1},s_{T})]^{(0)},[(s_{0},a_{0},r_{0},s_{1}),\ldots]^{(1)},\ldots|a_{t}\sim\beta(a_{t}|s_{t})\}\)1, estimate the expected total return over the unknown state-action visitation distribution \(\rho^{\pi}\) of the _target_ (evaluation) policy \(\pi\), _i.e._, for \(T\) being the horizon, Footnote 1: We slightly abuse the notation \(\rho^{\beta}\), to represent either the trajectories or state-action visitation distribution under the behavioral policy, depending on the context. \[\mathbb{E}_{(s,a)\sim\rho^{\pi},r\sim R}\left[\sum\nolimits_{t=0}^{T}\gamma^{t}R(s_{t},a_{t})\right]. \tag{1}\] ### Variational Latent Model We consider the VLM consisting of a prior \(p(z)\) over the latent variables \(z\in\mathcal{Z}\subset\mathbb{R}^{l}\), with \(\mathcal{Z}\) representing the latent space and \(l\) the dimension, along with a variational encoder \(q_{\psi}(z_{t}|z_{t-1},a_{t-1},s_{t})\) and a generative decoder \(p_{\phi}(z_{t},s_{t},r_{t-1}|z_{t-1},a_{t-1})\), parameterized by \(\psi\) and \(\phi\) respectively. Basics of variational inference are introduced in Appendix F. Latent Prior \(p(z_{0})\). The prior specifies the distribution from which the latent variable of the _initial_ stage, \(z_{0}\), is sampled. We configure \(p(z_{0})\) to follow a Gaussian with zero mean and identity covariance matrix, which is a common choice under the variational inference framework (Kingma and Welling, 2013; Lee et al., 2020). Variational Encoder for Inference \(q_{\psi}(z_{t}|z_{t-1},a_{t-1},s_{t})\). The encoder is used to approximate the intractable posterior, \(p(z_{t}|z_{t-1},a_{t-1},s_{t})=\frac{p(z_{t-1},a_{t-1},z_{t},s_{t})}{\int_{z_{t}\in\mathcal{Z}}p(z_{t-1},a_{t-1},z_{t},s_{t})dz_{t}}\), where the denominator requires integrating over the unknown latent space. Specifically, the encoder can be decomposed into two parts, given that \[q_{\psi}(z_{0:T}|s_{0:T},a_{0:T-1})\] \[= q_{\psi}(z_{0}|s_{0})\prod_{t=1}^{T}q_{\psi}(z_{t}|z_{t-1},a_{t-1},s_{t}); \tag{2}\] here, \(q_{\psi}(z_{0}|s_{0})\) encodes the initial state \(s_{0}\) into the corresponding latent variable \(z_{0}\); then, \(q_{\psi}(z_{t}|z_{t-1},a_{t-1},s_{t})\) enforces the transition from \(z_{t-1}\) to \(z_{t}\) conditioned on \(a_{t-1}\) and \(s_{t}\). Both distributions are _diagonal_ Gaussians2, with means and diagonals of covariance matrices determined by a multi-layered perceptron (MLP) (Bishop, 2006) and a long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997), respectively. The weights for both neural networks are referred to as \(\psi\) in general. Footnote 2: Assume that different dimensions of the states are non-correlated with each other. Otherwise, the states can be projected to an orthogonal basis, such that non-diagonal elements of the covariance matrix will be zero. 
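To make the inference side concrete, the following is a minimal PyTorch sketch (not the authors' implementation) of such an encoder: an MLP maps \(s_{0}\) to a diagonal Gaussian over \(z_{0}\), and an LSTM cell consumes \((z_{t-1},a_{t-1},s_{t})\) to parameterize the Gaussian over \(z_{t}\), with reparameterized sampling throughout. All module names and dimensions are illustrative.

```python
import torch
import torch.nn as nn

class VariationalEncoder(nn.Module):
    """Illustrative q_psi: an MLP encodes s_0 into z_0, and an LSTM cell consumes
    (z_{t-1}, a_{t-1}, s_t) to parameterize a diagonal Gaussian over z_t."""

    def __init__(self, s_dim, a_dim, z_dim=16, h_dim=64):
        super().__init__()
        self.init_net = nn.Linear(s_dim, 2 * z_dim)            # mean and log-variance of z_0
        self.cell = nn.LSTMCell(z_dim + a_dim + s_dim, h_dim)  # f_psi
        self.head = nn.Linear(h_dim, 2 * z_dim)                # q_psi(z_t | h_t)

    @staticmethod
    def _sample(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick

    def forward(self, states, actions):
        # states: (T+1, batch, s_dim), actions: (T, batch, a_dim)
        z = self._sample(self.init_net(states[0]))
        h = torch.zeros(states.shape[1], self.cell.hidden_size)
        c = torch.zeros_like(h)
        latents, hiddens = [z], []
        for t in range(actions.shape[0]):
            h, c = self.cell(torch.cat([z, actions[t], states[t + 1]], dim=-1), (h, c))
            z = self._sample(self.head(h))
            latents.append(z)
            hiddens.append(h)
        return torch.stack(latents), torch.stack(hiddens)
```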
Consequently, the _inference_ process for \(z_{t}\) can be summarized as \[z_{0}^{\psi}\sim q_{\psi}(z_{0}|s_{0}),\quad h_{t}^{\psi}=f_{\psi}(h_{t-1}^{\psi},z_{t-1}^{\psi},a_{t-1},s_{t}),\quad z_{t}^{\psi}\sim q_{\psi}(z_{t}|h_{t}^{\psi}), \tag{3}\] where \(f_{\psi}\) represents the LSTM layer and \(h_{t}^{\psi}\) the LSTM recurrent (hidden) state. Note that we use \(\psi\) in superscripts to distinguish the variables involved in this _inference_ process from those of the _generative_ process introduced below. Moreover, reparameterization can be used to sample \(z_{0}^{\psi}\) and \(z_{t}^{\psi}\), such that gradients of sampling can be back-propagated, as introduced in (Kingma and Welling, 2013). An overview of the inference and generative processes is illustrated in Fig. 1. Figure 1: Architecture of the variational latent model (VLM) we consider. Generative Decoder for Sampling \(p_{\phi}(z_{t},s_{t},r_{t-1}|z_{t-1},a_{t-1})\). The decoder is used to interact with the target policies and acts as a synthetic environment during policy evaluation, from which the expected returns can be estimated as the mean return of simulated trajectories. The decoder can be represented by the multiplication of three diagonal Gaussian distributions, given that \[p_{\phi}(z_{1:T},s_{0:T},r_{0:T-1}|z_{0},\pi)=\prod_{t=0}^{T}p_{\phi}(s_{t}|z_{t})\prod_{t=1}^{T}p_{\phi}(z_{t}|z_{t-1},a_{t-1})p_{\phi}(r_{t-1}|z_{t}), \tag{4}\] with \(a_{t}\sim\pi(a_{t}|s_{t})\) at each time step. Specifically, \(p_{\phi}(z_{t}|z_{t-1},a_{t-1})\) has its mean and covariance determined by an LSTM, enforcing the transition from \(z_{t-1}\) to \(z_{t}\) in the latent space given action \(a_{t-1}\). In what follows, \(p_{\phi}(s_{t}|z_{t})\) and \(p_{\phi}(r_{t-1}|z_{t})\) generate the current state \(s_{t}\) and reward \(r_{t-1}\) given \(z_{t}\), whose means and covariances are determined by MLPs. As a result, the _generative_ process starts with sampling the initial latent variable from the latent prior, _i.e._, \(z_{0}^{\phi}\sim p(z_{0})\). Then, the initial state \(s_{0}^{\phi}\sim p_{\phi}(s_{0}|z_{0}^{\phi})\) and action \(a_{0}\sim\pi(a_{0}|s_{0}^{\phi})\) are obtained from \(p_{\phi}\) and the target policy \(\pi\), respectively; the rest of the _generative_ process can be summarized as \[h_{t}^{\phi}=f_{\phi}(h_{t-1}^{\phi},z_{t-1}^{\phi},a_{t-1}),\quad\tilde{h}_{t}^{\phi}=g_{\phi}(h_{t}^{\phi}),\quad z_{t}^{\phi}\sim p_{\phi}(\tilde{h}_{t}^{\phi}),\] \[s_{t}^{\phi}\sim p_{\phi}(s_{t}|z_{t}^{\phi}),\quad r_{t-1}^{\phi}\sim p_{\phi}(r_{t-1}|z_{t}^{\phi}),\quad a_{t}\sim\pi(a_{t}|s_{t}^{\phi}), \tag{5}\] where \(f_{\phi}\) is the LSTM layer producing the recurrent state \(h_{t}^{\phi}\). Then, an MLP \(g_{\phi}\) is used to generate a mapping between \(h_{t}^{\phi}\) and \(\tilde{h}_{t}^{\phi}\) that will be used for recurrent state alignment (RSA) introduced below, to augment the information flow between the inference and generative processes. 
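As a sketch of how the decoder acts as a synthetic environment for OPE, the snippet below rolls the target policy out inside the generative process of (5) and averages the discounted returns in (1). The `decoder.init()` and `decoder.step()` interfaces are assumed here purely for illustration and are not part of the paper's released code.

```python
import torch

@torch.no_grad()
def estimate_return(decoder, policy, n_rollouts=256, horizon=1000, gamma=0.995):
    """Monte-Carlo estimate of (1): roll the target policy out inside the learned
    generative model and average the discounted returns."""
    returns = []
    for _ in range(n_rollouts):
        z, s = decoder.init()              # z_0 ~ p(z_0), s_0 ~ p_phi(s_0 | z_0)
        total, discount = 0.0, 1.0
        for _ in range(horizon):
            a = policy(s)                  # a_t ~ pi(a_t | s_t)
            z, s, r = decoder.step(z, a)   # latent transition, then next state and reward
            total += discount * float(r)
            discount *= gamma
        returns.append(total)
    return sum(returns) / len(returns)
```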
Furthermore, to train the elements in the encoder (3) and decoder (5), one can maximize the evidence lower bound (ELBO), a lower bound of the joint log-likelihood \(p(s_{0:T},r_{0:T-1})\), following \[\mathcal{L}_{ELBO}(\psi,\phi)=\mathbb{E}_{q_{\psi}}\Big[\sum\nolimits_{t=0}^{T}\log p_{\phi}(s_{t}|z_{t})+\sum\nolimits_{t=1}^{T}\log p_{\phi}(r_{t-1}|z_{t})-KL\big(q_{\psi}(z_{0}|s_{0})||p(z_{0})\big)\] \[-\sum\nolimits_{t=1}^{T}KL\big(q_{\psi}(z_{t}|z_{t-1},a_{t-1},s_{t})||p_{\phi}(z_{t}|z_{t-1},a_{t-1})\big)\Big]; \tag{6}\] here, the first two terms represent the log-likelihood of reconstructing the states and rewards, and the last two terms regularize the approximated posterior. The proof can be found in Appendix E. ### Recurrent State Alignment The latent model discussed above is somewhat reminiscent of the ones used in model-based RL policy training methods, _e.g._, the recurrent state space model (RSSM) used in PlaNet (Hafner et al., 2019) and Dreamer (Hafner et al., 2020a;b), as well as similar ones in Lee et al. (2020); Lu et al. (2022). Such methods rely on a _growing_ experience buffer for training, which is collected _online_ by the target policy that is being concurrently updated (with exploration noise added); however, OPE aims to extrapolate returns from a fixed set of _offline_ trajectories which may result in limited coverage of the state and action space. Consequently, directly applying VLM for OPE can lead to subpar performance empirically; see results in Sec. 3. Moreover, the encoder above plays a key role in capturing the temporal transitions between latent variables, _i.e._, \(q_{\psi}(z_{t}|z_{t-1},a_{t-1},s_{t})\) from (2). However, it is _absent_ in the generative process, as the decoder leverages a separate network to determine the latent transitions, _i.e._, \(p_{\phi}(z_{t}|z_{t-1},a_{t-1})\). Moreover, from the ELBO (6) above it can be seen that only the KL-divergence terms are used to regularize these two parts, which may not be sufficient for OPE as limited offline trajectories are provided. As a result, we introduce the RSA term as part of the training objective, to further regularize \(q_{\psi}(z_{t}|z_{t-1},a_{t-1},s_{t})\) and \(p_{\phi}(z_{t}|z_{t-1},a_{t-1})\). A graphical illustration of RSA can be found in Fig. 2.3. Footnote 3: Rewards and actions are omitted for conciseness of the presentation. Specifically, RSA is defined as the mean _pairwise_ squared error between \(h_{t}^{\psi}\) from the encoder (3) and \(\tilde{h}_{t}^{\phi}\) from the decoder (5), _i.e._, \[\mathcal{L}_{RSA}(\tilde{h}_{t}^{\phi},h_{t}^{\psi};\psi,\phi)=\frac{1}{N}\sum_{i=1}^{N}\sum_{t=0}^{T}\frac{2}{M(M-1)}\Big[\sum_{j=1}^{M-1}\sum_{k=j+1}^{M}\big((\tilde{h}_{t}^{\phi}[j]-\tilde{h}_{t}^{\phi}[k])-(h_{t}^{\psi}[j]-h_{t}^{\psi}[k])\big)^{2}\Big]; \tag{7}\] here, we assume that both LSTM recurrent states have the same dimension \(\tilde{h}_{t}^{\phi},h_{t}^{\psi}\in\mathbb{R}^{M}\), with \(h_{t}^{(\cdot)}[j]\) referring to the \(j\)-th element of the recurrent state, and \(N\) the number of training trajectories. Here, we choose the pairwise squared loss over the classic mean squared error (MSE), because MSE could be too strong a regularizer for \(h_{t}^{\psi}\) and \(\tilde{h}_{t}^{\phi}\), which support the inference and generative processes respectively and are not supposed to be exactly the same. In contrast, the pairwise loss (7) can promote structural similarity between the LSTM recurrent states of the encoder and decoder, without strictly enforcing them to become the same. 
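As a concrete reference for (7), the following is a minimal PyTorch sketch of the RSA term (not the reference implementation); it computes all pairwise gaps at once and averages over the \(M(M-1)/2\) pairs, summing over time and averaging over the batch of trajectories.

```python
import torch

def rsa_loss(h_dec, h_enc):
    """Mean pairwise squared error of (7). h_dec are the mapped decoder states
    (h~_t^phi) and h_enc the encoder states (h_t^psi), both of shape (T, N, M)."""
    gaps_dec = h_dec.unsqueeze(-1) - h_dec.unsqueeze(-2)   # (T, N, M, M), entry [j, k] = h[j] - h[k]
    gaps_enc = h_enc.unsqueeze(-1) - h_enc.unsqueeze(-2)
    sq = (gaps_dec - gaps_enc) ** 2
    m = h_dec.shape[-1]
    # the full M x M matrix counts each unordered pair twice and the diagonal is zero,
    # so dividing by M(M-1) yields the mean over the M(M-1)/2 pairs
    per_step = sq.sum(dim=(-2, -1)) / (m * (m - 1))
    return per_step.sum(dim=0).mean()                      # sum over time, average over trajectories
```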
In contrast, the pairwise loss (7) can Figure 2: (Left) Recurrent state alignment (RSA) applied over the recurrent hidden states between inference and generative process illustrated separately. (Right) Single-step forward pass of the variational latent branching model (VLBM), the training objectives for each branch and final predictions. promote structural similarity between the LSTM recurrent states of the encoder and decoder, without strictly enforcing them to become the same. Note that this design choice has been justified in Sec. 3 through an ablation study by comparing against models trained with MSE. In general, the pairwise loss has also been adopted in many domains for similar purposes, _e.g._, object detection (Gould et al., 2009; Rocco et al., 2018), ranking systems (Doughty et al., 2018; Saquli et al., 2021) and contrastive learning (Wang et al., 2021; Chen et al., 2020). Similarly, we apply the pairwise loss over \(h_{t}^{\psi}\) and \(\tilde{h}_{t}^{\phi}\), instead of directly over \(h_{t}^{\psi}\) and \(h_{t}^{\phi}\), as the mapping \(g_{\phi}\) (from equation 5) could serve as a regularization layer to ensure optimality over \(\mathcal{L}_{RSA}\) without changing \(h_{t}^{\psi},h_{t}^{\phi}\) significantly. As a result, the objective for training the VLM, following architectures specified in (3) and (5), can be formulated as \[\max_{\psi,\phi}\mathcal{L}_{VLM}(\psi,\phi)=\max_{\psi,\phi}\left(\mathcal{L }_{ELBO}(\psi,\phi)-C\cdot\mathcal{L}_{RSA}(\tilde{h}_{t}^{\phi},h_{t}^{\psi} ;\psi,\phi)\right), \tag{8}\] with \(C>0\) and \(C\in\mathbb{R}\) being the constant balancing the scale of the ELBO and RSA terms. ### Branching for Generative Decoder The performance of model-based methods can vary upon different design factors (Fu et al., 2020; Hanin & Rolnick, 2018). Specifically, Rossi et al. (2019) has found that the convergence speed and optimality of variational models are sensitive to the choice of weight initialization techniques. Moreover, under the typical variational inference setup followed by the VLM above, the latent transitions reconstructed by the decoder, \(p_{\phi}(z_{t}|z_{t-1},a_{t-1})\), are only trained through regularization losses in (6) and (7), but are fully responsible for rolling out trajectories during evaluation. Consequently, in this sub-section we introduce the branching architecture for decoder, with the goal of minimizing the impact brought by random weight initialization of the networks, and allowing the decoder to best reconstruct the latent transitions \(p_{\phi}(z_{t}|z_{t-1},a_{t-1})\) as well as \(s_{t}\)'s and \(r_{t-1}\)'s correctly. Specifically, the branching architecture leverages an ensemble of \(B\in\mathbb{Z}^{+}\) decoders to tease out information from the latent space formulated by the encoder, with final predictions sampled from a mixture of the Gaussian output distributions from (5). Note that the classic setup of ensembles is not considered, _i.e._, train and average over \(B\) VLMs end-to-end; because in this case \(B\) different latent space exist, each of which is still associated with a single decoder, leaving the challenges above unresolved. This design choice is justified by ablations studies in Sec. 3, by comparing VLBM against a (classic) ensemble of VLMs. **Branching Architecture.** Consider the generative process involving \(B\) branches of the decoders parameterized by \(\{\phi_{1},\dots,\phi_{B}\}\). The forward architecture over a single step is illustrated in Fig. 
2.4 Specifically, the procedure of sampling \(z_{t}^{\phi_{k}}\) and \(s_{t}^{\phi_{k}}\) for each \(b\in[1,B]\) follows from (5). Recall that by definition \(p_{\phi_{b}}(s_{t}|z_{t}^{\phi_{b}})\) follows multivariate Gaussian with mean and diagonal of covariance matrix determined by the corresponding MLPs, _i.e._, \(\mu(s_{t}^{\phi_{b}})=\phi_{b,\mu}^{MLP}(z_{t}^{\phi_{b}})\) and \(\Sigma_{diag}(s_{t}^{\phi_{b}})=\phi_{b,\Sigma}^{MLP}(z_{t}^{\phi_{b}})\). In what follows, the final outcome \(s_{t}^{\phi}\) can be sampled following diagonal Gaussian with mean and variance determined by weighted averaging across all branches using weights \(w_{b}\)'s, _i.e._, Footnote 4: For simplicity, the parts generating rewards are omitted without lost of generality. \[s_{t}^{\phi}\sim p_{\phi}(s_{t}|z_{t}^{\phi_{1}},\dots,z_{t}^{\phi_{B}})= \mathcal{N}\Big{(}\boldsymbol{\mu}=\sum_{b}w_{b}\cdot\mu(s_{t}^{\phi_{b}}), \boldsymbol{\Sigma}_{diag}=\sum_{b}w_{b}^{2}\cdot\Sigma_{diag}(s_{t}^{\phi_{b} })\Big{)}. \tag{9}\] The objective below can be used to jointly update, \(w_{b}\)'s, \(\psi\) and \(\phi_{b}\)'s, _i.e._, \[\max_{\psi,\phi,w}\mathcal{L}_{VLBM}(\psi,\phi_{1},\dots,\phi_{B},w_{1},\dots,w_{B})\] \[= \max_{\psi,\phi,w}\Big{(}\sum_{t=0}^{T}\log p_{\phi}(s_{t}^{\phi} |z_{t}^{\phi_{1}},\dots,z_{t}^{\phi_{B}})-C_{1}\cdot\sum_{b}\mathcal{L}_{RSA}( \tilde{h}_{t}^{\phi_{b}},h_{t}^{\psi};\psi,\phi_{b})+C_{2}\sum_{b}\mathcal{L}_ {ELBO}(\psi,\phi_{b})\Big{)},\] \[\text{s.t.}\quad w_{1},\dots,w_{B}>0\,\ \sum_{b}w_{b}=1\text{ and constants }C_{1},C_{2}>0. \tag{10}\] Though the first term above already propagates through all \(w_{b}\)'s and \(\phi_{b}\)'s, the third term and constraints over \(w_{b}\)'s regularize \(\phi_{b}\) in each individual branch such that they are all trained toward maximizing the likelihood \(p_{\phi_{b}}(s_{t}^{\phi_{b}}|z_{t}^{\phi_{b}})\). Pseudo-code for training and evaluating the VLBM can be found in Appendix C. Further, in practice, one can define \(w_{b}=\frac{v_{b}^{2}}{\epsilon+\sum_{b}v_{b}^{2}}\), with \(v_{b}\in\mathbb{R}\) the learnable variables and \(0<\epsilon\ll 1\), \(\epsilon\in\mathbb{R}\), the constant ensuring denominator to be greater than zero, to convert (10) into unconstrained optimization and solve it using gradient descent. Lastly, note that complementary latent modeling methods, _e.g._, latent overshooting from Hafner et al. (2019), could be adopted in (10). However, we keep the objective straightforward, so that the source of performance improvements can be isolated. ## 3 Experiments To evaluate the VLBM, we follow the guidelines from the deep OPE (DOPE) benchmark (Fu et al., 2020). Specifically, we follow the D4RL branch in DOPE and use the Gym-Mujoco and Adroit suites as the test base (Fu et al., 2020). Such environments have long horizons and high-dimensional state and action space, which are usually challenging for model-based methods. The provided offline trajectories for training are collected using behavioral policies at varied scale, including limited exploration, human teleoperation etc., which can result in different levels of coverage over the state-action space. Also, the target (evaluation) policies are generated using online RL training, aiming to reduce the similarity between behavioral and target policies; it introduces another challenge that during evaluation the agent may visit states unseen from training trajectories. 
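Before detailing the experimental setup, we give a minimal sketch of the branch-combination step in (9), together with the unconstrained weight parameterization \(w_{b}=v_{b}^{2}/(\epsilon+\sum_{b}v_{b}^{2})\) from Sec. 2.4; tensor shapes and names are illustrative and not those of the released implementation.

```python
import torch

def combine_branches(mus, stds, v, eps=1e-8):
    """Branch combination of (9): `mus` and `stds` hold the per-branch Gaussian
    means and standard deviations, shape (B, batch, s_dim); `v` holds the
    unconstrained weight variables of shape (B,)."""
    w = v ** 2 / (eps + (v ** 2).sum())                     # nonnegative weights summing (approximately) to one
    mu = (w.view(-1, 1, 1) * mus).sum(dim=0)                # weighted mean
    var = ((w ** 2).view(-1, 1, 1) * stds ** 2).sum(dim=0)  # weighted diagonal covariance
    s_next = mu + var.sqrt() * torch.randn_like(mu)         # sample s_t^phi from the combined Gaussian
    return s_next, mu, var
```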
Environmental and Training Setup. A total of 8 environments are provided by the Gym-Mujoco and Adroit suites (Fu et al., 2020; 2). Moreover, each environment is provided with 5 (for Gym-Mujoco) or 3 (for Adroit) training datasets collected using different behavioral policies, resulting in a total of 32 sets of env-dataset tasks5; a full list can be found in Appendix A. DOPE also provides 11 target policies for each environment, whose performances are to be evaluated by the OPE methods. They in general result in varied scales of returns, as shown in the x-axes of Fig. 7. Moreover, we consider the decoder to have \(B=10\) branches, _i.e._, \(\{p_{\phi_{1}},\dots,p_{\phi_{10}}\}\). The dimension of the latent space is set to be 16, _i.e._, \(z\in\mathcal{Z}\subset\mathbb{R}^{16}\). Other implementation details can be found in Appendix A. Footnote 5: From now on the dataset names are abbreviated by their initials, _e.g._, Ant-M-R refers to Ant-Medium-Replay. Baselines and Evaluation Metrics. In addition to the five baselines reported from DOPE, _i.e._, importance sampling (IS) (Precup, 2000), doubly robust (DR) (Thomas and Brunskill, 2016), variational power method (VPM) (Wen et al., 2020), distribution correction estimation (DICE) (Yang et al., 2020), and fitted Q-evaluation (FQE) (Le et al., 2019), the effectiveness of VLBM is also compared against the state-of-the-art model-based OPE method leveraging the auto-regressive (AR) architecture (Zhang et al., 2020). Specifically, for each task we train an ensemble of 10 AR models, for fair comparisons against VLBM which leverages the branching architecture; see Appendix A for details of the AR ensemble setup. Following the DOPE benchmark (Fu et al., 2020), our evaluation metrics include rank correlation, regret@1, and mean absolute error (MAE). VLBM and all baselines are trained using 3 different random seeds over each task, leading to the results reported below. Ablation. Four ablation baselines are also considered, _i.e._, VLM, VLM+RSA, VLM+RSA(MSE) and VLM+RSA Ensemble. Specifically, VLM refers to the model introduced in Sec. 2.2, trained toward maximizing only the ELBO, _i.e._, (6). Note that, arguably, VLM could be seen as the generalization of directly applying latent models proposed in existing RL policy optimization literature (Lee et al., 2020; Hafner et al., 2019; 2020; 20; Lu et al., 2022); details can be found in Sec. 4 below. The VLM+RSA ablation baseline follows the same model architecture as VLM, but is trained to optimize over both the ELBO and recurrent state alignment (RSA) as introduced in (8), _i.e._, branching is not used compared to VLBM. The design of these two baselines can help analyze the effectiveness of the RSA loss term and branching architecture introduced in Sec. 2.3 and 2.4. Moreover, VLM+RSA(MSE) uses mean squared error to replace the pairwise loss introduced in (7), and the VLM+RSA Ensemble applies classic ensembles by averaging over \(B\) VLM+RSA models end-to-end, instead of branching from the decoder as in VLBM. These two ablation baselines can help justify the use of pairwise loss for RSA, and the benefit of using the branching architecture over classic ensembles. 
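For reference, the snippet below gives illustrative implementations of the three metrics named above, following their common definitions; the exact normalization used by the DOPE benchmark may differ.

```python
import numpy as np
from scipy.stats import spearmanr

def ope_metrics(estimated_returns, true_returns, k=1):
    """Illustrative rank correlation, regret@k and MAE for a set of target policies."""
    est = np.asarray(estimated_returns, dtype=float)
    true = np.asarray(true_returns, dtype=float)
    rank_corr = spearmanr(est, true).correlation
    top_k = np.argsort(est)[::-1][:k]              # policies ranked highest by the OPE estimates
    regret_k = true.max() - true[top_k].max()      # gap to the truly best policy
    mae = np.abs(est - true).mean()
    return rank_corr, regret_k, mae
```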
Figure 3: Mean rank correlation, regret@1 and MAE over all the 32 Gym-Mujoco and Adroit tasks, showing VLBM achieves state-of-the-art performance overall. Results. Fig. 3 shows the mean overall performance attained by VLBM and baselines over all the 32 Gym-Mujoco and Adroit tasks. In general, VLBM leads to significantly increased rank correlations and decreased regret@1's over existing methods, with MAEs maintained at the state-of-the-art level. Specifically, VLBM achieves state-of-the-art performance in 31, 29, and 15 (out of 32) tasks in terms of rank correlation, regret@1 and MAE, respectively. Performance for each task can be found in Tables 1-6 at the end of the Appendices. Note that results for IS, VPM, DICE, DR, and FQE are obtained directly from the DOPE benchmark (Fu et al., 2020), since the same experimental setup is considered. Fig. 4 and 5 visualize the mean performance for each Gym-Mujoco and Adroit environment respectively, over all the associated datasets. It can also be observed that the model-based and FQE baselines generally perform better than the other baselines, which is consistent with findings from DOPE. The fact that VLM+RSA outperforms the VLM ablation baseline, as shown in Fig. 4, illustrates the need for the RSA loss term to smooth the flow of information between the encoder and decoder in the latent space. Moreover, one can observe that VLM+RSA(MSE) sometimes performs worse than VLM, and significantly worse than VLM+RSA in general. Specifically, it has been found that, compared to VLM and VLM+RSA respectively, VLM+RSA(MSE) significantly worsens at least two metrics in 7 and 12 (out of 20) Gym-Mujoco tasks; detailed performance over these tasks can be found in Tables 1-6 at the end of the Appendices. Such a finding backs up the design choice of using the pairwise loss for RSA instead of MSE, as MSE could be overly strong in regularizing the LSTM recurrent states of the encoder and decoder, while the pairwise loss only enforces structural similarities. Moreover, VLBM greatly improves rank correlations and regrets compared to VLM+RSA, illustrating the importance of the branching architecture. In the paragraph below, we show empirically the benefits brought in by branching over classic ensembles. Figure 4: Mean rank correlation, regret@1 and MAE over all datasets, for each Mujoco environment. Figure 5: Mean rank correlation, regret@1 and MAE over all datasets, for each Adroit environment. Figure 6: Distribution of all branching weights, \(w_{b}\)'s, over all VLBMs trained on the 32 tasks. Branching versus Classic Ensembles. Fig. 4 shows that the VLM+RSA Ensemble does not improve performance over VLM+RSA in general, and even leads to worse overall rank correlations and regrets in the Walker2d and Hopper environments. This supports the rationale provided in Sec. 2.4 that each decoder still samples from a different latent space exclusively, and averaging over the output distributions may not help reduce the disturbance brought in by the modeling artifacts under the variational inference framework, _e.g._, random weight initializations (Hanin & Rolnick, 2018; Rossi et al., 2019). In contrast, the VLBM leverages the branching architecture, allowing all the branches to sample from the same latent space formulated by the encoder. Empirically, we find that the branching weights, \(w_{b}\)'s in (9), allow VLBM to kill branches that are not helpful toward reconstructing the trajectories accurately, to possibly overcome bad initializations etc. Over all the 32 tasks we consider, most VLBMs only keep 1-3 branches (out of 10), _i.e._, \(w_{b}<10^{-5}\) for all other branches. The distribution of all \(w_{b}\)'s, from VLBMs trained on the 32 tasks, is shown in Fig. 
6; one can observe that most of the \(w_{b}\)'s are close to zero, while the others generally fall in the range of \((0,0.25]\) and \([0.75,1)\). AR ensembles also lead to compelling rank correlations and regrets, but attains much smaller margins in MAEs over other baselines in general; see Fig. 3. From Fig. 7, one can observe that it tends to significantly under-estimate most of the high-performing policies. Scatter plots for the other tasks can be found in Appendix A, which also show this trend. The reason could be that its model architecture and training objectives are designed to directly learn the transitions of the MDP; thus, may produce biased predictions when the target policies lead to visitation of the states that are not substantially presented in training data, since such data are obtained using behavioral policies that are sub-optimal. In contrast, the VLBM can leverage RSA and branching against such situations, thus outperforming AR ensembles in most of the OPE tasks in terms of all metrics we considered. Interestingly, Fig. 7 also shows that latent models could sometimes over-estimate the returns. For example, in Hopper-M-E and Walker2d-M-E, VLM tends to over-estimate most policies. The VLBM performs consistently well in Hopper-M-E, but is mildly affected by such an effect in Walker2d-M-E, though over fewer policies and smaller margins. It has been found that variational inference may fall short in approximating true distributions that are asymmetric, and produce biased estimations (Yao et al., 2018). So the hypothesis would be that the dynamics used to define certain environments may lead to asymmetry in the true posterior \(p(z_{t}|z_{t-1},a_{t-1},s_{t})\), which could be hard to be captured by the latent modeling framework we consider. More comprehensive understanding of such behavior can be explored in future work. However, the VLBM still significantly outperforms VLM overall, and achieves top-performing rank correlations and regrets; such results illustrate the VLBM's improved robustness as a result of its architectural design and choices over training objective. \(t\)-SNE Visualization of the Latent Space.Fig. 8 illustrates t-SNE visualization of the latent space by rolling out trajectories using all target policies respectively, followed by feeding the state-action pairs into the encoder of VLBM which maps them into the latent space. It shows the encoded state-action pairs induced from policies with similar performance are in general swirled and clustered together, illustrating that VLBM can learn expressive and disentangled representations of its inputs. Figure 8: \(t\)-SNE visualization over the latent space, capturing encoded state-action visitations induced from all target policies. Each point is colored by the corresponding policy from which it is generated. Policies in the legend are sorted in the order of increasing performance. Figure 7: Correlation between the estimated (y-axis) and true returns (x-axis), across different model-based OPE methods and environments. ## 4 Related Work Latent Modeling in RL.Though variational inference has rarely been explored to facilitate model-based OPE methods so far, there exist several latent models designed for RL policy optimization that are related to our work, such as SLAC (Lee et al., 2020), SOLAR (Zhang et al., 2019), LatCo (Rybkin et al., 2021), PlaNet (Hafner et al., 2019), Dreamer (Hafner et al., 2020; 20). 
## 4 Related Work **Latent Modeling in RL.** Though variational inference has rarely been explored to facilitate model-based OPE methods so far, there exist several latent models designed for RL policy optimization that are related to our work, such as SLAC (Lee et al., 2020), SOLAR (Zhang et al., 2019), LatCo (Rybkin et al., 2021), PlaNet (Hafner et al., 2019), and Dreamer (Hafner et al., 2020; 20). Below we discuss the connections and distinctions between VLBM and the latent models leveraged by these methods, with a detailed overview provided in Appendix G. Specifically, SLAC and SOLAR learn latent representations of the dynamics jointly with optimization of the target policies, using the latent information to improve sample efficiency. Similarly, LatCo performs trajectory optimization over the latent space to allow dynamics constraints to be temporarily bypassed. As a result, the latent models used in such methods are not designed toward rolling out trajectories independently, as opposed to the use of VLBM in this paper. PlaNet and Dreamer train the recurrent state space model (RSSM) using a _growing_ experience dataset collected by the target policy that is being concurrently updated (with exploration noise added), which requires _online_ data collection. In contrast, under the OPE setup, VLBM is trained over a _fixed_ set of offline trajectories collected under unknown behavioral policies. Moreover, note that the VLM baseline is somewhat reminiscent of the RSSM and similar models as in Lee et al. (2020); Lu et al. (2022); however, the experiments above show that directly using VLM for OPE can lead to subpar performance. On the other hand, though MOPO (Yu et al., 2020), LOMPO (Rafailov et al., 2021) and COMBO (Yu et al., 2021) can learn from offline data, they focus on quantifying the uncertainty of the model's predictions of next states and rewards, and then incorporate these uncertainties into policy optimization objectives to penalize visiting regions where transitions are not fully captured; thus, such works are also orthogonal to the use case of OPE. **OPE.** Classic OPE methods adopt IS to estimate expectations over the unknown visitation distribution induced by the target policy, resulting in weighted IS, step-wise IS and weighted step-wise IS (Precup, 2000); a schematic sketch of step-wise IS is given at the end of this section. IS can lead to estimations with low (or zero) bias, but with high variance (Kostrikov and Nachum, 2020; Jiang and Li, 2016), which has sparked a long line of research to address this challenge. DR methods propose to reduce variance by coupling IS with a value function approximator (Jiang and Li, 2016; Thomas and Brunskill, 2016; Farajtabar et al., 2018). However, the introduction of such approximations may increase bias, so the method proposed in Tang et al. (2019) attempts to balance the scale of bias and variance for DR. Unlike IS and DR methods, which require the behavioral policies to be fully known, the DICE family of estimators (Zhang et al., 2020; 20; Yang et al., 2021; 20; Nachum et al., 2019; Dai et al., 2020) and VPM (Wen et al., 2020) can be behavioral-agnostic; they directly capture marginalized IS weights as the ratio between the propensity of the target policy to visit particular state-action pairs and the likelihood of those pairs appearing in the logged data. There also exist FQE methods which extrapolate policy returns from approximated Q-functions (Hao et al., 2021; Le et al., 2019; Kostrikov and Nachum, 2020). Existing model-based OPE methods are designed to directly fit MDP transitions using feed-forward (Fu et al., 2020) or auto-regressive (Zhang et al., 2020) models, and have shown promising results over model-free methods, as reported in a recent benchmark (Fu et al., 2020). However, such model-based approaches can be sensitive to the initialization of weights (Hanin and Rolnick, 2018; Rossi et al., 2019) and may produce biased predictions, due to the limited coverage over the state and action space provided by offline trajectories (Fu et al., 2020). Instead, VLBM mitigates such effects by capturing the dynamics over the latent space, such that states and rewards evolve from a compact feature space over time. Moreover, RSA and branching can lead to increased expressiveness and robustness, such that future states and rewards are predicted accurately. There also exist OPE methods proposed for specific applications (Chen et al., 2022; Saito et al., 2021; Gao et al., 2023; 2022).
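The step-wise IS estimator referenced above can be sketched as follows; the trajectory format and the policy-probability callables are assumptions for illustration, not the interfaces of any of the cited implementations.

```python
# Minimal step-wise (per-decision) importance sampling sketch for OPE.
# `trajectories`, `target_prob`, and `behavior_prob` are assumed interfaces.
import numpy as np

def stepwise_is_return(trajectories, target_prob, behavior_prob, gamma=0.99):
    """Each trajectory is a list of (state, action, reward) tuples.

    target_prob(s, a) and behavior_prob(s, a) return pi(a|s) and mu(a|s)."""
    estimates = []
    for traj in trajectories:
        rho, value = 1.0, 0.0
        for t, (s, a, r) in enumerate(traj):
            rho *= target_prob(s, a) / behavior_prob(s, a)  # cumulative IS weight up to step t
            value += (gamma ** t) * rho * r                 # each reward weighted by its own rho
        estimates.append(value)
    return float(np.mean(estimates))  # unbiased for the target return, but often high-variance
```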
## 5 Conclusion and Future Work We have developed the VLBM, which can accurately capture the dynamics underlying environments from offline training data that provide limited coverage of the state and action space; this is achieved by using the RSA term to smooth out the information flow from the encoders to the decoders in the latent space, as well as the branching architecture, which improves VLBM's robustness against random initializations. We have followed the evaluation guidelines provided by the DOPE benchmark, and experimental results have shown that the VLBM generally outperforms the state-of-the-art model-based OPE method using AR architectures, as well as other model-free methods. VLBM can also facilitate off-policy optimization, which can be explored in future work. Specifically, VLBM can serve as a synthetic environment on which optimal controllers (_e.g._, a linear-quadratic regulator) can be deployed. On the other hand, similar to Dreamer and SLAC, policies can be updated jointly with the training of VLBM, but without the need for online interactions with the environment during training. #### Acknowledgments This work is sponsored in part by the AFOSR under award number FA9550-19-1-0169, and by the NSF CNS-1652544, CNS-1837499, DUE-1726550, IIS-1651909 and DUE-2013502 awards, as well as the National AI Institute for Edge Computing Leveraging Next Generation Wireless Networks, Grant CNS-2112562.
2310.04175
Equivariant Nica-Pimsner quotients associated with strong compactly aligned product systems
We parametrise the gauge-invariant ideals of the Toeplitz-Nica-Pimsner algebra of a strong compactly aligned product system over $\mathbb{Z}_+^d$ by using $2^d$-tuples of ideals of the coefficient algebra that are invariant, partially ordered, and maximal. We give an algebraic characterisation of maximality that allows the iteration of a $2^d$-tuple to the maximal one inducing the same gauge-invariant ideal. The parametrisation respects inclusions and intersections, while we characterise the join operation on the $2^d$-tuples that renders the parametrisation a lattice isomorphism. The problem of the parametrisation of the gauge-invariant ideals is equivalent to the study of relative Cuntz-Nica-Pimsner algebras, for which we provide a generalised Gauge-Invariant Uniqueness Theorem. We focus further on equivariant quotients of the Cuntz-Nica-Pimsner algebra and provide applications to regular product systems, C*-dynamical systems, strong finitely aligned higher-rank graphs, and product systems on finite frames. In particular, we provide a description of the parametrisation for (possibly non-automorphic) C*-dynamical systems and row-finite higher-rank graphs, which squares with known results when restricting to crossed products and to locally convex row-finite higher-rank graphs.
Joseph A. Dessi, Evgenios T. A. Kakariadis
2023-10-06T11:46:11Z
http://arxiv.org/abs/2310.04175v1
# Equivariant Nica-Pimsner quotients associated with strong compactly aligned product systems ###### Abstract. We parametrise the gauge-invariant ideals of the Toeplitz-Nica-Pimsner algebra of a strong compactly aligned product system over \(\mathbb{Z}_{+}^{d}\) by using \(2^{d}\)-tuples of ideals of the coefficient algebra that are invariant, partially ordered, and maximal. We give an algebraic characterisation of maximality that allows the iteration of a \(2^{d}\)-tuple to the maximal one inducing the same gauge-invariant ideal. The parametrisation respects inclusions and intersections, while we characterise the join operation on the \(2^{d}\)-tuples that renders the parametrisation a lattice isomorphism. The problem of the parametrisation of the gauge-invariant ideals is equivalent to the study of relative Cuntz-Nica-Pimsner algebras, for which we provide a generalised Gauge-Invariant Uniqueness Theorem. We focus further on equivariant quotients of the Cuntz-Nica-Pimsner algebra and provide applications to regular product systems, C*-dynamical systems, strong finitely aligned higher-rank graphs, and product systems on finite frames. In particular, we provide a description of the parametrisation for (possibly non-automorphic) C*-dynamical systems and row-finite higher-rank graphs, which squares with known results when restricting to crossed products and to locally convex row-finite higher-rank graphs. Key words and phrases: Product systems, Toeplitz-Nica-Pimsner algebras, equivariant ideals. 2010 Mathematics Subject Classification: 46L08, 47L55, 46L05 ###### Contents * 1 Introduction * 2 C*-correspondences and product systems * 3 Relative \(2^{d}\)-tuples and relative Cuntz-Nica-Pimsner algebras * 4 NT-\(2^{d}\)-tuples and gauge-invariant ideals of \(\mathcal{NT}_{X}\) * 5 Applications ## 1. Introduction ### Product systems Product systems offer a uniform language to encode a large number of C*-constructions associated with a unital subsemigroup \(P\) of a discrete group \(G\), e.g., C*-dynamical systems, higher-rank graphs, isometric semigroup representations, non-invertible dynamics, and subshifts. They form the semigroup analogue of group Fell bundles, with the ability to encode transformations that may not be reversible. Their associated C*-algebras provide a vast source for producing examples and counterexamples, and _the current programme seeks to relate their C*-properties to properties of the geometric structure_. Much progress has been made in this endeavour for \(P=\mathbb{Z}_{+}\); however, many questions remain open even for \(P=\mathbb{Z}_{+}^{d}\). When \(P=\mathbb{Z}_{+}\), the product system comes from a single C*-correspondence \(X\). The quantisation is implemented by a Fock space construction in which the elements of \(X\) act as left creation operators, giving rise to the Toeplitz-Pimsner algebra \(\mathcal{T}_{X}\). There is a distinguished equivariant quotient of \(\mathcal{T}_{X}\) that plays the role of the boundary or quasidiagonal universe for \(X\), namely the Cuntz-Pimsner algebra \(\mathcal{O}_{X}\), which is the minimal one for equivariant representations that are injective on \(X\). The Cuntz-Pimsner algebra arises naturally in the C*-theory, as it encapsulates the Cuntz-Krieger algebra of a graph and the C*-crossed product of a single \(*\)-automorphism. Due to their ample applications, C*-algebras of single C*-correspondences have been under thorough study.
Major developments in this direction include the parametrisation of equivariant ideals [34], the study of ideal structure and simplicity [9], K-theory computation [33] and classification [8], necessary and sufficient conditions for nuclearity and exactness [33], and the parametrisation of the KMS-simplex [37]. The provided descriptions relate to structural properties of the C*-correspondence and its coefficient algebra, with direct translations into properties of the inducing geometric object. For example, the gauge-invariant ideals of the Cuntz-Krieger algebra of a row-finite graph are in bijection with the hereditary saturated vertex sets of the ambient graph [3]. Several of these questions have been treated with much success beyond the \(\mathbb{Z}_{+}\)-case. One of the pivotal steps was introduced by Fowler [21] for compactly aligned product systems over quasi-lattices, where the associated C*-algebras admit a Wick ordering due to the Nica-Pimsner relations of the Fock representation. The KMS-simplex of the Fock C*-algebra, and in particular KMS-states of finite type, was studied by Afsar, Neshveyev and Larsen [1], encompassing the work of many hands. Extending further, Kwasniewski and Larsen [35, 36] provided a detailed study of the C*-algebras associated with right LCM semigroups. In this case the boundary quotient and its link to the C*-envelope were identified by Dor-On, the second-named author, Katsoulis, Laca and Li [17], while nuclearity and exactness were answered by the second-named author, Katsoulis, Laca and Li [31]. In her breakthrough results, Sehnem [48, 49] identified in full generality the boundary quotient of the Fock representation through the strong covariance relations linking it to the C*-envelope of the nonselfadjoint tensor algebra. In their recent work, Brix, Carlsen and Sims [6] explore the ideal structure for C*-algebras related to commuting local homeomorphisms, pushing the theory well beyond simplicity. The aforementioned results fit in a wider programme for bringing the C*-algebras of product systems within Elliott's Classification Programme, which has been answered for \(P=\mathbb{Z}_{+}\) by Brown, Tikuisis and Zelenberg [8]. It asserts that if the coefficient algebra \(A\) of a C*-correspondence \(X\) is classifiable, the left action is by compacts, and \(X\) admits a Rokhlin property, then \(\mathcal{O}_{X}\) is classifiable. One of the key features in the examination of \(\mathcal{O}_{X}\) is that the strong covariance is encoded by a single ideal of \(A\) coined by Katsura [33]. However, the picture for general semigroups can be much more complicated, since the strong covariance relations make extensive use of families of submodules induced by the right ideal space of the semigroup [48]. Even their description for compactly aligned product systems over quasi-lattices, as coined by Carlsen, Larsen, Sims and Vittadello [10], is based on checking relations on families of compact operators on all possible finite subsets of the semigroup. ### Motivation When moving towards a classification in terms of [8], it is reasonable to ask whether the picture is simplified when the left actions are by compacts. This has been achieved by Dor-On and the second-named author [16] in the context of strong compactly aligned product systems over \(P=\mathbb{Z}_{+}^{d}\), thus opening a direction of lifting results from the \(P=\mathbb{Z}_{+}\) case.
Strong compact alignment induces an elimination argument on projections, while still accounting for major examples, e.g., C*-dynamical systems, row-finite higher-rank graphs, product systems on finite frames, and regular product systems. The strong covariance relations here are reduced to checking covariance on a family of \(2^{d}\) ideals of \(A\), in direct connection to Katsura's ideal [33]. This leads to our first point of motivation for the current work. The parametrisation of the gauge-invariant ideals for \(P=\mathbb{Z}_{+}\) established by Katsura [34] is implemented by pairs of ideals of \(A\) satisfying a compatibility relation. Due to the analogy established in [16] for the strong covariance relations, it is natural to ask whether a similar parametrisation lifts to the \(P=\mathbb{Z}_{+}^{d}\) case, as has been inquired in [16, Question 9.2]. We draw further motivation from examples of strong compactly aligned product systems that have been examined in this respect. The classical result appears in the case of automorphic dynamical systems over \(\mathbb{Z}^{d}\), where there is a lattice isomorphism between the gauge-invariant ideals of the crossed product \(A\rtimes_{\alpha}\mathbb{Z}^{d}\) and the \(\alpha\)-invariant ideals of \(A\). In the context of graph C*-algebras of locally convex row-finite higher-rank graphs, the parametrisation is obtained by hereditary saturated vertex sets through the work of Raeburn, Sims and Yeend [45, 46]. Sims [50] further provided a parametrisation for finitely aligned higher-rank graphs, encoded by hereditary vertex sets together with satiated path sets. In the current work we wish to encompass C*-dynamics and higher-rank graphs in a uniform way, and provide an alternative to [50] in the row-finite case depending only on properties of vertex sets. Crossed products and graph C*-algebras are incarnations of the Cuntz-Nica-Pimsner algebra \(\mathcal{NO}_{X}\) of the associated product system \(X\). However, here we are interested in covering _all_ possible equivariant quotients of the Toeplitz-Nica-Pimsner algebra \(\mathcal{NT}_{X}\). As a base case, consider \(\mathcal{T}\otimes\mathcal{T}\) (for the Toeplitz algebra \(\mathcal{T}\)) and associate the vertices of the square (as \(d=2\)) to ideals of \(\mathcal{T}\otimes\mathcal{T}\) built from the compact operators \(\mathcal{K}\subseteq\mathcal{T}\) [diagram omitted]. Then the gauge-invariant ideals of \(\mathcal{T}\otimes\mathcal{T}\) can be read off an inclusion-preserving association by considering vertex sets of the square [diagram omitted]. A similar decomposition for the boundary ideal \(\ker\{\mathcal{NT}_{X}\to\mathcal{NO}_{X}\}\) has been established in [16] and is akin to the one of Deaconu [12], further exploited by Fletcher [20]. Such a decomposition has been used successfully for the computation of the K-theory of \(\mathcal{NO}_{X}\) in terms of \(A\) for low-rank cases, such as finite rank-2 graphs by Evans [18], and for two commuting \(*\)-automorphisms by Barlak [2]. The general case still remains unresolved, and our motivation is to shed more light on this construction. At the same time, we are interested in equivariant quotients of \(\mathcal{NT}_{X}\) that may not be injective on \(X\), as they arise naturally in the context of KMS-states. There have been a great number of results on KMS-states for C*-algebras of finite graphs and dynamics, following the seminal work of Exel and Laca [19], and of Laca and Neshveyev [37].
Similar studies have been carried out for finite higher-rank graphs, with Christensen [11] providing the complete picture. A parametrisation of the gauge-invariant KMS-states has been obtained by the second-named author in the presence of finite frames [30]. One of the main tools in [11, 30] is the Wold decomposition of a KMS-state in parts that are \(F\)-finite and \(F^{c}\)-infinite, for \(F\subseteq\{1,\ldots,d\}\). This corresponds to KMS-states annihilating the gauge-invariant ideal generated by the projections along \(F^{c}\). The construction in [30] uses an ambient product system over \(F\) with the coefficient algebra arising from the \(F^{c}\)-core. Here we wish to close the circle with [30], and provide a full characterisation for each \(F\)-quotient of the Toeplitz-Nica-Pimsner algebra as the Cuntz-Nica-Pimsner algebra of the \(F\)-induced product system. There is an interplay between equivariant quotients and relative Cuntz-Nica-Pimsner algebras, for which we wish to establish a Gauge-Invariant Uniqueness Theorem. This type of result was initiated by an Huef and Raeburn for Cuntz-Krieger algebras [25], and various generalisations were given by Doplicher, Pinzari and Zuccante [15], Fowler, Muhly and Raeburn [22], and Fowler and Raeburn [23]. Katsura [34] proved the Gauge-Invariant Uniqueness Theorem in full generality for relative Cuntz-Pimsner algebras in the \(P=\mathbb{Z}_{+}\) case. A further point of motivation has been to establish a similar result for \(P=\mathbb{Z}_{+}^{d}\). This relies on analysing polynomial equations on the cores, as has been pointed out in [16]. Due to the Nica-Pimsner relations, the solutions need to adhere to invariance as well as a partial ordering; however simple examples show that different subsets of solutions may induce the same gauge-invariant ideal, thus posing the question of finding the appropriate compatibility conditions that characterise the maximal solution set. ### Summary of main results In the current work we establish the parametrisation of the gauge-invariant ideals of the Toeplitz-Nica-Pimsner algebra \(\mathcal{NT}_{X}\) of a strong compactly aligned product system \(X\) over \(\mathbb{Z}_{+}^{d}\), continuing the programme of [16]. This class contains product systems where the left actions are by compacts. As a byproduct we obtain the parametrisation of the gauge-invariant ideals for any relative Cuntz-Nica-Pimsner algebra of \(X\), including the boundary quotient \(\mathcal{NO}_{X}\). In the process we produce a Gauge-Invariant Uniqueness Theorem for equivariant quotients in-between \(\mathcal{NT}_{X}\) and \(\mathcal{NO}_{X}\). These results are in direct analogy to (and recover) the one-dimensional case [34]. We apply our results to C*-dynamical systems over \(\mathbb{Z}_{+}^{d}\). When the system is injective, the gauge-invariant ideals of the Cuntz-Nica-Pimsner algebra correspond to positively and negatively invariant ideals of \(A\), hence recovering the classical C*-crossed product result for \(*\)-automorphisms. Moreover, we recover the parametrisation of [46] for locally convex row-finite higher-rank graphs by hereditary saturated sets. We provide the parametrisation in the case of (just) row-finite higher-rank graphs, showing that this too can be achieved through vertex sets alone. Additionally, we interpret the parametrisation when the product system is regular; if in particular the coefficient algebra is simple, then \(\mathcal{NO}_{X}\) does not admit non-trivial gauge-invariant ideals. 
In the presence of finite frames, we address the decomposition of [30], and show that the quotient of \(\mathcal{NT}_{X}\) by the \(F\)-ideal can be realised as the boundary C*-algebra of a product system supported on \(F\) with coefficients in the \(F^{c}\)-core (this is achieved in two ways). ### Description of main results Let us fix notation. We write \([d]:=\{1,\ldots,d\}\) for \(d\in\mathbb{N}\). We write \(\underline{n}\) for the elements of \(\mathbb{Z}_{+}^{d}\) and will denote its generators by \(\underline{i}\) for \(i\in[d]\). We write \(\underline{n}\perp F\) for \(F\subseteq[d]\) if \(\operatorname{supp}\underline{n}\cap F=\emptyset\). Moreover we write \(\underline{1}_{F}:=\sum\{\underline{i}\mid i\in F\}\) for \(\emptyset\neq F\subseteq[d]\). For a product system \(X=\{X_{\underline{n}}\}_{\underline{n}\in\mathbb{Z}_{+}^{d}}\) with coefficients in a C*-algebra \(A\) and an ideal \(I\) of \(A\), we write \[X_{\underline{n}}(I):=[\big{\langle}X_{\underline{n}},IX_{\underline{n}}\big{\rangle}]\quad\text{and}\quad X_{\underline{n}}^{-1}(I):=\{a\in A\mid\big{\langle}X_{\underline{n}},aX_{\underline{n}}\big{\rangle}\subseteq I\}.\] A strong compactly aligned product system \(X=\{X_{\underline{n}}\}_{\underline{n}\in\mathbb{Z}_{+}^{d}}\) with coefficients in \(A\) is a compactly aligned product system that in addition satisfies \[\mathcal{K}(X_{\underline{n}})\otimes\operatorname{id}_{X_{\underline{i}}}\subseteq\mathcal{K}(X_{\underline{n}}\otimes_{A}X_{\underline{i}})\text{ whenever }\underline{n}\perp\underline{i}\text{, where }i\in[d],\underline{n}\in\mathbb{Z}_{+}^{d}\setminus\{\underline{0}\}.\] A \(2^{d}\)-tuple \(\mathcal{L}:=\{\mathcal{L}_{F}\}_{F\subseteq[d]}\) of \(X\) is a family of \(2^{d}\) non-empty subsets of \(A\). If \((\pi,t)\) is a Nica-covariant representation of \(X\) and \(i\in[d]\), we use an approximate unit \((k_{\underline{i},\lambda})_{\lambda\in\Lambda}\) of \(\mathcal{K}(X_{\underline{i}})\) to define the projection \(p_{\underline{i}}:=\text{w*-}\lim_{\lambda}\psi_{\underline{i}}(k_{\underline{i},\lambda})\), and we set \[q_{\emptyset}:=I,q_{\underline{i}}:=I-p_{\underline{i}},\text{ and }q_{F}:=\prod_{i\in F}(I-p_{\underline{i}})\text{ for }\emptyset\neq F\subseteq[d].\] The key relation is that if \(a\in\bigcap\{\phi_{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}}))\mid i\in F\}\), then \[\pi(a)q_{F}=\pi(a)+\sum\{(-1)^{|\underline{n}|}\psi_{\underline{n}}(\phi_{\underline{n}}(a))\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\},\] and thus \(\pi(a)q_{F}\in\mathrm{C}^{*}(\pi,t)\), although it may not be that \(q_{F}\in\mathrm{C}^{*}(\pi,t)\). We reserve \((\overline{\pi}_{X},\overline{t}_{X})\) for the universal Nica-covariant representation of \(X\). Due to the aforementioned relation, if \(\mathcal{L}\) is a \(2^{d}\)-tuple such that \(\mathcal{L}_{F}\subseteq\bigcap\{\phi_{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}}))\mid i\in F\}\) for every \(\emptyset\neq F\subseteq[d]\), then the ideal \(\big{\langle}\overline{\pi}_{X}(\mathcal{L}_{F})\overline{q}_{X,F}\mid F\subseteq[d]\big{\rangle}\) is a gauge-invariant ideal of \(\mathcal{NT}_{X}\). We call such tuples _relative_, and write \(\mathcal{NO}(\mathcal{L},X)\) for the corresponding equivariant quotient.
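For orientation, we record the smallest non-trivial instance of the displayed relation; the following computation is our own illustration and is not quoted from the paper. For a singleton \(F=\{i\}\), the only \(\underline{0}\neq\underline{n}\leq\underline{1}_{F}\) is \(\underline{n}=\underline{i}\), so for \(a\in\phi_{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}}))\) we get
\[\pi(a)q_{\{i\}}=\pi(a)(I-p_{\underline{i}})=\pi(a)-\psi_{\underline{i}}(\phi_{\underline{i}}(a)),\]
and asking that these elements vanish for all \(a\) in a prescribed ideal is exactly the one-variable covariance condition \(\pi(a)=\psi(\phi_{X}(a))\) recalled for relative Cuntz-Pimsner algebras in Section 2.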
The main result in [16] is that \(\mathcal{NO}_{X}\cong\mathcal{NO}(\mathcal{I},X)\) for the family \(\mathcal{I}:=\{\mathcal{I}_{F}\}_{F\subseteq[d]}\), where \[\mathcal{I}_{F}:=\bigcap\{X_{\underline{n}}^{-1}(\mathcal{J}_{F})\mid\underline {n}\perp F\}\quad\text{for}\quad\mathcal{J}_{F}:=(\bigcap_{i\in F}\ker\phi_{ \underline{i}})^{\perp}\cap(\bigcap_{i\in[d]}\phi_{\underline{i}}^{-1}( \mathcal{K}(X_{\underline{i}}))).\] We note that \(\mathcal{I}_{\emptyset}=\mathcal{J}_{\emptyset}=\{0\}\). Every \(\mathcal{I}_{F}\) is \(F^{\perp}\)-invariant (in fact the largest \(F^{\perp}\)-invariant ideal of \(\mathcal{J}_{F}\)), and the family \(\mathcal{I}\) is partially ordered. In order to understand a general equivariant quotient, we pare down these properties. First we consider the case where \(\mathcal{L}\) is a \(2^{d}\)-tuple of \(X\) satisfying \(\mathcal{L}\subseteq\mathcal{I}\); we term such tuples as (E)-\(2^{d}\)-tuples (where "E" stands for embedding). By definition the quotient \(\mathcal{N}\mathcal{O}(\mathcal{L},X)\) lies in-between \(\mathcal{N}\mathcal{T}_{X}\) and \(\mathcal{N}\mathcal{O}_{X}\), and thus is injective on \(X\). In Propositions 3.2.4 and 3.2.6, we show that we can then induce (E)-\(2^{d}\)-tuples \(\mathrm{Inv}(\mathcal{L})\) and \(\mathrm{PO}(\mathcal{L})\) defined by \[\mathrm{Inv}(\mathcal{L})_{F}:=\overline{\mathrm{span}}\{X_{\underline{n}}( \mathcal{L}_{F})\mid\underline{n}\perp F\}\quad\text{and}\quad\mathrm{PO}( \mathcal{L})_{F}:=\sum\{\langle\mathcal{L}_{D}\rangle\mid D\subseteq F\}\] for all \(F\subseteq[d]\), such that \(\mathcal{L}\subseteq\mathrm{Inv}(\mathcal{L})\), \(\mathcal{L}\subseteq\mathrm{PO}(\mathcal{L})\) and \[\mathcal{N}\mathcal{O}(\mathcal{L},X)=\mathcal{N}\mathcal{O}(\mathrm{Inv}( \mathcal{L}),X)=\mathcal{N}\mathcal{O}(\mathrm{PO}(\mathcal{L}),X).\] In particular \(\mathrm{Inv}(\mathcal{L})\) is invariant, \(\mathrm{PO}(\mathcal{L})\) is partially ordered, and \(\mathrm{PO}(\mathcal{L})\) is invariant if \(\mathcal{L}\) is invariant. Hence without loss of generality we may restrict to invariant, partially ordered (E)-\(2^{d}\)-tuples of ideals by replacing \(\mathcal{L}\) with \(\mathrm{PO}(\mathrm{Inv}(\mathcal{L}))\). However, as we demonstrate in Example 3.1.11, these properties alone do not provide injectivity of the association \(\mathcal{L}\mapsto\mathcal{N}\mathcal{O}(\mathcal{L},X)\). It is not hard to see that there is a unique maximal (E)-\(2^{d}\)-tuple \(\mathcal{M}\) such that \(\mathrm{PO}(\mathrm{Inv}(\mathcal{L}))\subseteq\mathcal{M}\) and \[\mathcal{N}\mathcal{O}(\mathcal{L},X)=\mathcal{N}\mathcal{O}(\mathrm{PO}( \mathrm{Inv}(\mathcal{L})),X)=\mathcal{N}\mathcal{O}(\mathcal{M},X).\] The prototypical example of maximal (E)-\(2^{d}\)-tuples is obtained from equivariant injective Nica-covariant representations \((\pi,t)\), by defining \[\mathcal{L}_{F}^{(\pi,t)}:=\pi^{-1}(B_{(\underline{0}\,\underline{1}_{F}]}^{( \pi,t)})\text{ for all }F\subseteq[d].\] Here \(B_{(\underline{0}\,\underline{1}_{F}]}^{(\pi,t)}\) denotes the \((\underline{0},\underline{1}_{F}]\)-core of \(\mathrm{C}^{*}(\pi,t)\), and \(\mathcal{L}_{\emptyset}^{(\pi,t)}=\{0\}\) since \(\pi\) is injective. With this in hand, we make the following key observation towards a Gauge-Invariant Uniqueness Theorem (GIUT) for relative Cuntz-Nica-Pimsner algebras. **Theorem A**.: _(Theorem 3.2.11) Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). 
Let \(\mathcal{L}\) be a maximal (E)-\(2^{d}\)-tuple of \(X\) and suppose that \((\pi,t)\) is a Nica-covariant representation of \(X\). Then \(\mathcal{N}\mathcal{O}(\mathcal{L},X)\cong\mathrm{C}^{*}(\pi,t)\) via a (unique) canonical \(*\)-isomorphism if and only if \((\pi,t)\) admits a gauge action and \(\mathcal{L}^{(\pi,t)}=\mathcal{L}\)._ Since maximality is a necessary ingredient for the GIUT, we give an algebraic characterisation without reference to a Nica-covariant representation. To this end, for every \(\emptyset\neq F\subsetneq[d]\) we define \[\mathcal{L}_{\mathrm{inv},F}:=\bigcap_{\underline{m}\perp F}X_{\underline{m}}^ {-1}(\cap_{F\subsetneq D}\mathcal{L}_{D})\quad\text{and}\quad\mathcal{L}_{ \mathrm{lim},F}:=\{a\in A\mid\lim_{\underline{m}\perp F}\|\phi_{\underline{m} }(a)+\mathcal{K}(X_{\underline{m}}\mathcal{L}_{F})\|=0\}.\] Note that the definitions of \(\mathcal{L}_{\mathrm{inv},F}\) and \(\mathcal{L}_{\mathrm{lim},F}\) do not require \(\mathcal{L}\) to be an (E)-\(2^{d}\)-tuple. When \(\mathcal{L}\) is an (E)-\(2^{d}\)-tuple, we define the \(2^{d}\)-tuple \(\mathcal{L}^{(1)}\) by \[\mathcal{L}_{F}^{(1)}:=\begin{cases}\{0\}&\text{if }F=\emptyset,\\ \mathcal{I}_{F}\cap\mathcal{L}_{\mathrm{inv},F}\cap\mathcal{L}_{\mathrm{lim},F} &\text{if }\emptyset\neq F\subsetneq[d],\\ \mathcal{L}_{[d]}&\text{if }F=[d].\end{cases}\] In Proposition 3.4.5 we show that \(\mathcal{L}^{(1)}\) is an (E)-\(2^{d}\)-tuple of ideals that is invariant and partially ordered when \(\mathcal{L}\) is so, satisfying \[\left\langle\overline{\pi}_{X}(\mathcal{L}_{F})\overline{q}_{X,F}\mid F \subseteq[d]\right\rangle=\left\langle\overline{\pi}_{X}(\mathcal{L}_{F}^{(1) })\overline{q}_{X,F}\mid F\subseteq[d]\right\rangle.\] Maximality is then described in terms of the first iteration. **Theorem B**.: _(Theorem 3.4.6) Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and suppose that \(\mathcal{L}\) is a \(2^{d}\)-tuple of \(X\). Then \(\mathcal{L}\) is a maximal (E)-\(2^{d}\)-tuple of \(X\) if and only if \(\mathcal{L}\) satisfies the following four conditions:_ * \(\mathcal{L}\) _consists of ideals and_ \(\mathcal{L}\subseteq\mathcal{J}\)_,_ * \(\mathcal{L}\) _is invariant,_ * \(\mathcal{L}\) _is partially ordered,_ * \(\mathcal{L}^{(1)}\subseteq\mathcal{L}\)_._ For \(k\in\mathbb{Z}_{+}\) we write \(\mathcal{L}^{(k+1)}:=(\mathcal{L}^{(k)})^{(1)}\), and by convention we set \(\mathcal{L}^{(0)}:=\mathcal{L}\). By induction we have that \[\mathcal{L}^{(k)}\subseteq\mathcal{L}^{(k+1)}\text{ and }\mathcal{N}\mathcal{O}( \mathcal{L},X)=\mathcal{N}\mathcal{O}(\mathcal{L}^{(k)},X),\text{ for all }k\in\mathbb{Z}_{+}.\] Thus it is natural to ask if these iterations stabilise to the maximal (E)-\(2^{d}\)-tuple inducing \(\mathcal{N}\mathcal{O}(\mathcal{L},X)\). In Theorem 3.4.7 we show that \[\mathcal{L}^{(d-1)}=\mathcal{L}^{(m)}\text{ for all }m\geq d-1,\] and thus \(\mathcal{L}^{(d-1)}\) is the maximal (E)-\(2^{d}\)-tuple inducing \(\mathcal{N}\mathcal{O}(\mathcal{L},X)\). Hence we obtain the maximal (E)-\(2^{d}\)-tuple inducing \(\mathcal{N}\mathcal{O}(\mathcal{L},X)\) by first enlarging \(\mathcal{L}\) to an (E)-\(2^{d}\)-tuple that is invariant, partially ordered and consists of ideals, and then taking its \((d-1)\)-iteration. Combining Theorem A with Theorem B then gives the full form of the Gauge-Invariant Uniqueness Theorem. **Theorem C**.: _(Theorem 3.4.9) Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). 
Let \(\mathcal{L}\) be an (E)-\(2^{d}\)-tuple of \(X\) and \((\pi,t)\) be a Nica-covariant representation of \(X\). Then \(\mathcal{N}\mathcal{O}(\mathcal{L},X)\cong\mathrm{C}^{*}(\pi,t)\) via a (unique) canonical \(*\)-isomorphism if and only if \((\pi,t)\) admits a gauge action and_ \[\mathcal{L}^{(\pi,t)}=\bigg{(}\operatorname{PO}\left(\operatorname{Inv}\left(\mathcal{L}\right)\right)\bigg{)}^{(d-1)}.\] Next we pass to the parametrisation of the quotients that may not be injective on \(X\). We circumvent this by "deleting the kernel", i.e., by utilising the quotient product system construction. To this end, we write \([\,\cdot\,]_{I}\) for the quotient maps induced by a positively invariant ideal \(I\subseteq A\), following the notation of [34]. We say that \(\mathcal{L}\) is an _NT-\(2^{d}\)-tuple (of \(X\))_ if the family \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}:=\{[\mathcal{L}_{F}]_{\mathcal{L}_{\emptyset}}\}_{F\subseteq[d]}\) is a maximal (E)-\(2^{d}\)-tuple of \([X]_{\mathcal{L}_{\emptyset}}\). In Subsection 4.1 we provide a detailed description of the structural properties that render a \(2^{d}\)-tuple \(\mathcal{L}\) an NT-\(2^{d}\)-tuple. In particular, in Proposition 4.1.5 we provide a characterisation of an NT-\(2^{d}\)-tuple with no reference to the quotient product system construction when \(A\) acts on \(X\) by compacts. By performing this reduction and combining with the results for (E)-\(2^{d}\)-tuples, we obtain the parametrisation in its full generality. **Theorem C**.: _(Proposition 4.2.2, Theorem 4.2.3). Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Then there is a bijection between the set of NT-\(2^{d}\)-tuples of \(X\) and the set of gauge-invariant ideals of \(\mathcal{N}\mathcal{T}_{X}\) given by_ \[\mathcal{L}\mapsto\mathfrak{J}^{\mathcal{L}}:=\ker\pi^{\mathcal{L}}\times t^{\mathcal{L}},\text{ for }\pi^{\mathcal{L}}\times t^{\mathcal{L}}\colon\mathcal{N}\mathcal{T}_{X}\rightarrow\mathcal{N}\mathcal{O}([\mathcal{L}]_{\mathcal{L}_{\emptyset}},[X]_{\mathcal{L}_{\emptyset}})\] \[\mathfrak{J}\mapsto\mathcal{L}^{\mathfrak{J}}:=\mathcal{L}^{(Q_{\mathfrak{J}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}}\circ\overline{t}_{X})},\text{ for }Q_{\mathfrak{J}}\colon\mathcal{N}\mathcal{T}_{X}\rightarrow\mathcal{N}\mathcal{T}_{X}/\mathfrak{J}.\] _Moreover, if \(\mathcal{L}\) is an NT-\(2^{d}\)-tuple of \(X\), then we have that_ \[\mathfrak{J}^{\mathcal{L}}=\langle\overline{\pi}_{X}(a)+\sum_{\underline{0}\neq\underline{n}\leq\underline{1}_{F}}(-1)^{|\underline{n}|}\overline{\psi}_{X,\underline{n}}(k_{\underline{n}})\;\mid\;F\subseteq[d],a\in\mathcal{L}_{F},k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}}),\] \[[\phi_{\underline{n}}(a)]_{\mathcal{L}_{\emptyset}}=[k_{\underline{n}}]_{\mathcal{L}_{\emptyset}}\text{ for all }\underline{0}\neq\underline{n}\leq\underline{1}_{F}\rangle.\] We note here that the description of \(\mathfrak{J}^{\mathcal{L}}\) is new even for the \(d=1\) case. Moreover, the Nica-covariant representation \((\pi^{\mathcal{L}},t^{\mathcal{L}})\) is well-defined since the association \(X\rightarrow[X]_{\mathcal{L}_{\emptyset}}\) lifts to a canonical \(*\)-epimorphism \(\mathcal{N}\mathcal{T}_{X}\rightarrow\mathcal{N}\mathcal{T}_{[X]_{\mathcal{L}_{\emptyset}}}\). It is known that this is not the case in general for \(\mathcal{N}\mathcal{O}_{X}\) and \(\mathcal{N}\mathcal{O}_{[X]_{\mathcal{L}_{\emptyset}}}\) (even for \(d=1\), see Example 5.3.6).
Nevertheless, by using the NT-\(2^{d}\)-tuple machinery, we determine precisely when this holds as part of our applications in Subsection 5.1. The key requirement is that \(\mathcal{L}_{\emptyset}\) is both positively and negatively invariant, and \(\mathcal{I}\subseteq\mathcal{L}\). The bijection of Theorem C induces a lattice structure on the NT-\(2^{d}\)-tuples that renders it a lattice isomorphism, where we use the usual lattice operations for the gauge-invariant ideals. It is then important to understand the join and meet operations in this setting. The bijection is easily seen to preserve inclusions and intersections. For the join operation we use the iteration process that describes the maximality property. **Theorem D**.: _(Proposition 4.2.6, Proposition 4.2.7) Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). We equip the set of NT-\(2^{d}\)-tuples of \(X\) with the lattice structure determined by the operations_ \[\mathcal{L}_{1}\vee\mathcal{L}_{2}:=\mathcal{L}^{\mathfrak{J}^{\mathcal{L}_{1}}+\mathfrak{J}^{\mathcal{L}_{2}}}\quad\text{and}\quad\mathcal{L}_{1}\wedge\mathcal{L}_{2}:=\mathcal{L}^{\mathfrak{J}^{\mathcal{L}_{1}}\cap\mathfrak{J}^{\mathcal{L}_{2}}}.\] _Then_ \[(\mathcal{L}_{1}\wedge\mathcal{L}_{2})_{F}=\mathcal{L}_{1,F}\cap\mathcal{L}_{2,F}\text{ for all }F\subseteq[d],\] _and_ \[(\mathcal{L}_{1}\vee\mathcal{L}_{2})_{F}=\begin{cases}\overline{\pi}_{X}^{-1}(\mathfrak{J}^{\mathcal{L}_{1}}+\mathfrak{J}^{\mathcal{L}_{2}})&\text{ if }F=\emptyset,\\ \left[\,\cdot\right]^{-1}_{(\mathcal{L}_{1}\vee\mathcal{L}_{2})_{\emptyset}}\left[\big{(}\left(\mathcal{L}_{1,F}+\mathcal{L}_{2,F}+(\mathcal{L}_{1}\vee\mathcal{L}_{2})_{\emptyset}\right)/(\mathcal{L}_{1}\vee\mathcal{L}_{2})_{\emptyset}\big{)}^{(d-1)}\right]&\text{ if }\emptyset\neq F\subseteq[d].\end{cases}\] The parametrisation on \(\mathcal{NT}_{X}\) descends naturally to \(\mathcal{NO}(\mathcal{K},X)\) for any relative \(2^{d}\)-tuple \(\mathcal{K}\). The only difference is that the lattice isomorphism is by NT-\(2^{d}\)-tuples that contain \(\mathcal{K}\). By choosing \(\mathcal{K}=\mathcal{I}\) we obtain the parametrisation on \(\mathcal{NO}_{X}\) by what we call _NO-\(2^{d}\)-tuples_. Next we pass to applying our results to specific classes of product systems. First we deal with regular product systems, i.e., when each left action is injective and by compacts. In Corollary 5.2.3 we show that the parametrisation for the gauge-invariant ideals of \(\mathcal{NO}_{X}\) is given by single ideals of \(A\) that are positively and negatively invariant. This relies on the fact that the positively and negatively invariant ideal will be the \(\mathcal{L}_{\emptyset}\)-member of an NO-\(2^{d}\)-tuple \(\mathcal{L}\), while by regularity \(\mathcal{L}_{F}=A\) for all \(\emptyset\neq F\subseteq[d]\). This implies that \(\mathcal{NO}_{X}\) does not admit non-trivial gauge-invariant ideals when \(A\) is simple. Passing to \(\mathcal{NT}_{X}\), in Corollary 5.2.6 we show that the gauge-invariant ideals of \(\mathcal{NT}_{X}\) are in bijection with the families of incomparable subsets of \([d]\), when \(A\) is non-zero and simple. A direct application for the Toeplitz algebra \(\mathcal{T}^{\otimes d}\) of \(\mathbb{Z}_{+}^{d}\) produces the parametrisation of its gauge-invariant ideals by vertex sets on the \(d\)-hypercube. The second class of examples we consider arises from semigroup actions \(\alpha\colon\mathbb{Z}_{+}^{d}\to\operatorname{End}(A)\). We write \(X_{\alpha}\) for the associated product system.
We characterise the NT-\(2^{d}\)-tuples as follows. **Corollary E**.: _(Corollary 5.3.5) Let \((A,\alpha,\mathbb{Z}_{+}^{d})\) be a C*-dynamical system. Let \(\mathcal{K}\) and \(\mathcal{L}\) be \(2^{d}\)-tuples of \(X_{\alpha}\). Then \(\mathcal{L}\) is a \(\mathcal{K}\)-relative NO-\(2^{d}\)-tuple of \(X_{\alpha}\) if and only if \(\mathcal{K}\subseteq\mathcal{L}\) and the following hold:_ * \(\mathcal{L}\) _consists of ideals and_ \(\mathcal{L}_{F}\cap(\bigcap_{i\in F}\alpha_{\underline{i}}^{-1}(\mathcal{L}_ {\emptyset}))\subseteq\mathcal{L}_{\emptyset}\) _for all_ \(\emptyset\neq F\subseteq[d]\)_,_ * \(\mathcal{L}_{F}\subseteq\bigcap_{\underline{n}\perp F}\alpha_{\underline{n}} ^{-1}(\mathcal{L}_{F})\) _for all_ \(F\subseteq[d]\)_,_ * \(\mathcal{L}\) _is partially ordered,_ * \(I_{1,F}\cap I_{2,F}\cap I_{3,F}\subseteq\mathcal{L}_{F}\) _for all_ \(\emptyset\neq F\subsetneq[d]\)_, where_ * \(I_{1,F}:=\bigcap_{\underline{n}\perp F}\alpha_{\underline{n}}^{-1}(\{a\in A \mid a(\bigcap_{i\in F}\alpha_{\underline{i}}^{-1}(\mathcal{L}_{\emptyset}) )\subseteq\mathcal{L}_{\emptyset}\})\)_,_ * \(I_{2,F}:=\bigcap_{\underline{m}\perp F}\alpha_{\underline{m}}^{-1}(\cap_{F \subset D}\mathcal{L}_{D})\)_,_ * \(I_{3,F}:=\{a\in A\mid\lim_{\underline{m}\perp F}\|\alpha_{\underline{m}}(a)+[ \alpha_{\underline{m}}(A)\mathcal{L}_{F}\alpha_{\underline{m}}(A)]\|=0\}\)_._ If the dynamical system is in particular injective, then the associated product system is regular. Thus we derive a bijection between the gauge-invariant ideals of \(\mathcal{NO}_{X_{\alpha}}\) and the single ideals \(I\) of \(A\) such that \(\alpha_{\underline{n}}(I)\subseteq I\) and \(\alpha_{\underline{n}}^{-1}(I)\subseteq I\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\). When the system is automorphic we recover the well-known parametrisation of the gauge-invariant ideals of the crossed product \(A\rtimes_{\alpha}\mathbb{Z}^{d}\) by ideals \(I\subseteq A\) such that \(\alpha_{\underline{n}}(I)=I\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\). The third class of examples we consider relates to finitely aligned higher-rank graphs \((\Lambda,d)\). We write \(X(\Lambda)\) for the associated product system. In keeping with the literature, in this case we write \(k\) for the rank and reserve \(d\) for the degree map. We use \(r\) and \(s\) for the range and source maps. One of the main subclasses here is that of row-finite higher-rank graphs. The key property is that the strong covariance ideals are in bijection with \(F\)-tracing vertices, i.e., vertices that are not \(F\)-sources and the source of any \(F^{c}\)-path ending on them is not an \(F\)-source as well **[16]**. A family \(H=\{H_{F}\}_{F\subseteq[k]}\) of subsets of vertices is called _absorbent (in \(\Lambda\))_ if the following holds for every \(\emptyset\neq F\subsetneq[k]\): a vertex \(v\in\Lambda^{\underline{0}}\) belongs to \(H_{F}\) whenever it satisfies * \(v\) is \(F\)-tracing, * \(s(v\Lambda^{\underline{m}})\subseteq\cap_{F\subsetneq D}H_{D}\) for all \(\underline{m}\perp F\), and * there exists \(\underline{m}\perp F\) such that whenever \(\underline{n}\perp F\) and \(\underline{n}\geq\underline{m}\), we have that \(s(v\Lambda^{\underline{n}})\subseteq H_{F}\) and \(|v\Lambda^{\underline{n}}|<\infty\). By translating our characterisation of NT-\(2^{k}\)-tuples, we obtain the following corollary. **Corollary F**.: _(Proposition 5.4.12) Let \((\Lambda,d)\) be a strong finitely aligned \(k\)-graph. 
Let \(\mathcal{L}\) be a \(2^{k}\)-tuple of \(X(\Lambda)\) that consists of ideals and let \(H_{\mathcal{L}}\) be the corresponding family of sets of vertices of \(\Lambda\). Then \(\mathcal{L}\) is an NT-\(2^{k}\)-tuple of \(X(\Lambda)\) if and only if the following four conditions hold:_ * _for each_ \(\emptyset\neq F\subseteq[k]\)_, we have that_ * \(H_{\mathcal{L},F}\subseteq H_{\mathcal{L},\emptyset}\cup\{v\notin H_{ \mathcal{L},\emptyset}\mid|v\Gamma(\Lambda\setminus H_{\mathcal{L},\emptyset}) ^{\underline{i}}|<\infty\ \forall i\in[k]\ \mathrm{and}\ v\ \mathrm{is\ not\ an\ }F\mathrm{\text{\rm-source\ in\ }}\Gamma(\Lambda\setminus H_{ \mathcal{L},\emptyset})\}\)_,_ * \(H_{\mathcal{L}}\) _is hereditary in_ \(\Lambda\)_,_ * \(H_{\mathcal{L}}\) _is partially ordered,_ * \(H_{\mathcal{L}}\setminus H_{\mathcal{L},\emptyset}:=\{H_{\mathcal{L},F} \setminus H_{\mathcal{L},\emptyset}\}_{F\subseteq[k]}\) _is absorbent in_ \(\Gamma(\Lambda\setminus H_{\mathcal{L},\emptyset})\)_._ Due to Proposition 4.1.5 we can provide a further translation for the case of row-finite graphs. **Corollary G**.: _(Proposition 5.4.13) Let \((\Lambda,d)\) be a row-finite \(k\)-graph. Let \(\mathcal{L}\) be a \(2^{k}\)-tuple of \(X(\Lambda)\) that consists of ideals and let \(H_{\mathcal{L}}\) be the corresponding family of sets of vertices of \(\Lambda\). Then \(\mathcal{L}\) is an NT-\(2^{k}\)-tuple of \(X(\Lambda)\) if and only if the following four conditions hold:_ * _for each_ \(\emptyset\neq F\subseteq[k]\)_, we have that_ * \(H_{\mathcal{L},F}\subseteq H_{\mathcal{L},\emptyset}\cup\{v\notin H_{ \mathcal{L},\emptyset}\mid v\ \mathrm{is\ not\ an\ }F\mathrm{\text{\rm-source\ in\ }}\Gamma:=\Gamma(\Lambda\setminus H_{ \mathcal{L},\emptyset})\}\)_,_ * \(H_{\mathcal{L}}\) _is hereditary in_ \(\Lambda\)_,_ * \(H_{\mathcal{L}}\) _is partially ordered,_ * \(H_{1,F}\cap H_{2,F}\cap H_{3,F}\subseteq H_{\mathcal{L},F}\) _for all_ \(\emptyset\neq F\subsetneq[k]\)_, where_ * \(H_{1,F}:=\bigcap_{\underline{n}\perp F}\{v\in\Lambda^{\underline{0}}\mid s(v \Lambda^{\underline{n}})\subseteq H_{\mathcal{L},\emptyset}\cup\{v\notin H_{ \mathcal{L},\emptyset}\mid v\ \mathrm{is\ not\ an\ }F\mathrm{\text{\rm-source\ in\ }}\Gamma\}\}\)_,_ * \(H_{2,F}:=\bigcap_{\underline{m}\perp F}\{v\in\Lambda^{\underline{0}}\mid s(v \Lambda^{\underline{m}})\subseteq\cap_{F\subsetneq D}H_{\mathcal{L},D}\}\)_,_ * \(H_{3,F}\) _is the set of all_ \(v\in\Lambda^{\underline{0}}\) _for which there exists_ \(\underline{m}\perp F\) _such that whenever_ \(\underline{n}\perp F\) _and_ \(\underline{n}\geq\underline{m}\)_, we have that_ \(s(v\Lambda^{\underline{n}})\subseteq H_{\mathcal{L},F}\)_._ If \(\Lambda\) is locally convex and row-finite, then positive (and negative) invariance of an ideal is equivalent to the related set of vertices being hereditary (and saturated). In this case the NO-\(2^{k}\)-tuples \(\mathcal{L}\) are determined just by \(\mathcal{L}_{\emptyset}\), and thus our results recover the parametrisation of Raeburn, Sims and Yeend **[46]** (Corollary 5.4.20). Finally, we restrict our attention to product systems wherein each fibre admits a finite frame (except perhaps for \(A\)). In connection with the parametrisation of the KMS-states, we exploit a decomposition of \(X\) with respect to a given \(\emptyset\neq F\subseteq[d]\). One direction of this construction was implicit in **[30]**, and here we close this circle. 
Namely, given \(X\) and \(F\subseteq[d]\), we define \[B^{F^{\perp}}:=\mathrm{C}^{*}(\overline{\pi}_{X}(A),\overline{t}_{X,\underline{i}}(X_{\underline{i}})\mid i\in F^{c})\subseteq\mathcal{NT}_{X},\] a collection \(Z_{X}^{F^{\perp}}:=\{X_{\underline{n}}\}_{\underline{n}\perp F}\), and a collection \(Y_{X}^{F}:=\{Y_{X,\underline{n}}^{F}\}_{\underline{n}\in\mathrm{supp}^{-1}(F)}\) by \[Y_{X,\underline{0}}^{F}:=B^{F^{\perp}}\quad\mathrm{and}\quad Y_{X,\underline{n}}^{F}:=[\overline{t}_{X,\underline{n}}(X_{\underline{n}})B^{F^{\perp}}]\subseteq\mathcal{NT}_{X}\ \mathrm{for\ all\ }\underline{0}\neq\underline{n}\in\mathrm{supp}^{-1}(F).\] Then the collections \(Z_{X}^{F^{\perp}}\) and \(Y_{X}^{F}\) become product systems with the structure inherited from \(X\) and \(\mathcal{NT}_{X}\), with \[\mathcal{NT}_{Z_{X}^{F^{\perp}}}\cong B^{F^{\perp}}\quad\mathrm{and}\quad\mathcal{NT}_{Y_{X}^{F}}\cong\mathcal{NT}_{X}.\] We are interested in describing the quotient of \(\mathcal{NT}_{X}\) by \(\left<\overline{\pi}_{X}(A)\overline{q}_{X,\underline{i}}\mid i\in F\right>\) as a Cuntz-Nica-Pimsner algebra. We have two approaches in this endeavour, differing only at which point we wish to delete the kernel. **Corollary H**.: _(Corollary 5.5.23) Let \(X\) be a product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\), wherein \(X_{\underline{i}}\) admits a finite frame for all \(i\in[d]\), and fix \(\emptyset\neq F\subseteq[d]\). On the one hand, define the positively invariant ideal_ \[I_{Y^{F}_{X}}:=\ker\{Y^{F}_{X,\underline{0}}\to\mathcal{N}\mathcal{T}_{Y^{F}_{X}}/\left\langle\overline{\pi}_{Y^{F}_{X}}(Y^{F}_{X,\underline{0}})\overline{q}_{Y^{F}_{X},\underline{i}}\mid i\in F\right\rangle\}\] _for the product system \(Y^{F}_{X}\) related to \(X\) and \(F\). On the other hand, define the positively invariant ideal_ \[I^{F}_{X}:=\ker\{A\to\mathcal{N}\mathcal{T}_{X}/\left\langle\overline{\pi}_{X}(A)\overline{q}_{X,\underline{i}}\mid i\in F\right\rangle\}\] _for \(X\), and consider the product system \(Y^{F}_{[X]_{I^{F}_{X}}}\) related to \([X]_{I^{F}_{X}}\) and \(F\). Then there are canonical \(*\)-isomorphisms_ \[\mathcal{NO}_{[Y^{F}_{X}]_{I_{Y^{F}_{X}}}}\cong\mathcal{N}\mathcal{T}_{X}/\left\langle\overline{\pi}_{X}(A)\overline{q}_{X,\underline{i}}\mid i\in F\right\rangle\cong\mathcal{NO}_{Y^{F}_{[X]_{I^{F}_{X}}}}.\] _If, in addition, \(X_{\underline{i}}\) is injective for all \(i\in F\), then \(Y^{F}_{X}\) is regular, \(I_{Y^{F}_{X}}=\{0\}\) and \(I^{F}_{X}=\{0\}\)._ This gives another avenue for obtaining the results of [30]. The \(F^{c}\)-equivariant KMS-states of \(\mathcal{N}\mathcal{T}_{X}\) that annihilate \(\left\langle\overline{\pi}_{X}(A)\overline{q}_{X,\underline{i}}\mid i\in F\right\rangle\) can be obtained from tracial states of \(A\) annihilating \(I^{F}_{X}\) by first inducing a KMS-state of finite type on the Toeplitz-Nica-Pimsner algebra of \(Z^{F^{\perp}}_{[X]_{I^{F}_{X}}}\) (by using the Fock space construction) and then extending it to a KMS-state on the Cuntz-Nica-Pimsner algebra of \(Y^{F}_{[X]_{I^{F}_{X}}}\) (by using a direct limit argument on the fixed point algebra). We close with a note that several of the construction arguments we use apply to the general setting of product systems and we have opted to include them at that generality. The problem of parametrising the gauge-invariant ideals in the general case requires a different approach, as there is no direct connection to ideals of just the coefficient algebra.
Indeed, this is the case even for the reduced strong covariant algebra [17, 48, 49]. ### Contents of sections In Section 2 we present the constructions that we will need. We place extra attention on the quotient product system construction, which we present at full generality. We then give a presentation of the main results of [16], and elucidate some of the key points that are used in the sequel. Most importantly, we show that injectivity on the fixed point algebra reduces to checking just on the \([\underline{0},\underline{1}_{[d]}]\)-core, a trick that was used implicitly in [16]. Moreover, we study the \(IXI\) construction and show an association between \(\mathcal{N}\mathcal{T}_{IXI}\) and \(\mathcal{N}\mathcal{T}_{X}\) (resp. \(\mathcal{NO}_{IXI}\) and \(\mathcal{NO}_{X}\)) that is used in Section 5. In Section 3 we focus on \(2^{d}\)-tuples and the ideals that they induce. We give a step-by-step analysis of relativity, invariance, and partial ordering, and we demonstrate the maximality condition required for our parametrisation. After presenting (E)-\(2^{d}\)-tuples and maximal (E)-\(2^{d}\)-tuples, we prove the Gauge-Invariant Uniqueness Theorem for equivariant quotients in-between \(\mathcal{N}\mathcal{T}_{X}\) and \(\mathcal{NO}_{X}\). We also capture maximal (E)-\(2^{d}\)-tuples algebraically and show how they can be constructed. In Section 4 we give the full parametrisation by NT-\(2^{d}\)-tuples, and show how this descends to the relative Cuntz-Nica-Pimsner algebras. Moreover, we study the induced lattice isomorphism and give a description of the join and meet operations. Finally, in Section 5 we give applications and connections with the literature. First we study a case where the NO-\(2^{d}\)-tuples depend only on the selection of a single ideal, which is then used in the specific cases of injective C*-dynamical systems and locally convex row-finite graphs. Moving beyond that point, we interpret our parametrisation in terms of the structural data for C*-dynamical systems and row-finite higher-rank graphs in general. As a special case we also study the impact on regular product systems. We close this section by examining product systems on finite frames. **Acknowledgements.** Joseph Dessi acknowledges support from EPSRC as part of his PhD thesis on the programme "The Structure of C*-Algebras of Product Systems" (Ref. 2441268). Evgenios Kakariadis acknowledges support from EPSRC as part of the programme "Operator Algebras for Product Systems" (EP/T02576X/1). The main results of the manuscript were pre-announced by Joseph Dessi at the "Algebras of operators on Banach spaces and C*-algebras" workshop that took place at the Institute of Mathematics at the Czech Academy of Sciences on July 7-8, 2022. Joseph Dessi would like to thank the organisers for their hospitality. After communicating a finalised version of the manuscript to Adam Dor-On, we were informed that Boris Bilich has been independently investigating the parametrisation of gauge-invariant ideals by using product system extensions when the left action is by compacts [4]. The authors would like to thank Boris Bilich, Adam Dor-On and Ralf Meyer for bringing this line of research to their attention. **Open access statement.** For the purpose of open access, the authors have applied a Creative Commons Attribution (CC BY) license to any Author Accepted Manuscript (AAM) version arising. ## 2. 
C*-correspondences and product systems ### Notation By a lattice we will always mean a distributive lattice with operations \(\vee\) and \(\wedge\). We write \(\mathbb{Z}_{+}\) for the nonnegative integers \(\{0,1,\dots\}\) and \(\mathbb{N}\) for the positive integers \(\{1,2,\dots\}\). We denote the unit circle in the complex plane by \(\mathbb{T}\). If \(A,B\) and \(C\) are sets and \(f\colon A\times B\to C\) is a map, then we set \[f(A,B):=\{f(a,b)\mid a\in A,b\in B\};\] for example, if \(H\) is a Hilbert space, then \(\langle H,H\rangle:=\{\langle\xi,\eta\rangle\mid\xi,\eta\in H\}\). If \(V\) is a normed vector space and \(S\subseteq V\) is a subset, then \([S]\) denotes the norm-closed linear span of \(S\) inside \(V\). If we only wish to take the linear span then this will always be clearly stated. We recall the Hewitt-Cohen Factorisation Theorem, e.g., [47, Proposition 2.33]. **Theorem 2.1.1** (Hewitt-Cohen Factorisation Theorem).: _Let \(A\) be a C*-algebra, \(X\) be a Banach space and \(\pi\colon A\to\mathcal{B}(X)\) be a bounded homomorphism. Then \([\pi(A)X]=\pi(A)X\)._ All ideals of C*-algebras are taken to be two-sided and norm-closed. If \(A\) is a C*-algebra and \(S\subseteq A\) is a subset, then \(\langle S\rangle\) denotes the ideal of \(A\) generated by \(S\). If \(I\subseteq A\) is an ideal, then we set \(I^{\perp}:=\{a\in A\mid aI=\{0\}\}\). If \(A=\mathrm{C}^{*}(a_{i}\mid i\in\mathbb{I})\) and \(B=\mathrm{C}^{*}(b_{i}\mid i\in\mathbb{I})\) are C*-algebras, then a map \(\Phi\colon A\to B\) will be called _canonical_ if it preserves generators of the same index, i.e., \(\Phi(a_{i})=b_{i}\) for all \(i\in\mathbb{I}\). ### C*-correspondences We assume familiarity with the elementary theory of right Hilbert C*-modules. The reader is addressed to [38, 41] for an excellent introduction to the subject. We will briefly outline the fundamentals of the theory of C*-correspondences. We also recount Katsura's parametrisation in the C*-correspondences context [34]. Let \(A\) be a C*-algebra and \(X\) be a right Hilbert \(A\)-module. We write \(\mathcal{L}(X)\) for the C*-algebra of adjointable operators on \(X\), and \(\mathcal{K}(X)\) for the ideal of (generalised) compact operators on \(X\). Recall that \(\mathcal{K}(X)\) is densely spanned by the rank-one operators \(\Theta^{X}_{\xi,\eta}\colon\zeta\mapsto\xi\,\langle\eta,\zeta\rangle\), for \(\xi,\eta,\zeta\in X\). When the right Hilbert C*-module \(X\) is clear from the context, we will write \(\Theta_{\xi,\eta}\) instead of \(\Theta_{\xi,\eta}^{X}\). A _C*-correspondence_\(X\) over a C*-algebra \(A\) is a right Hilbert \(A\)-module equipped with a left action implemented by a \(*\)-homomorphism \(\phi_{X}\colon A\to\mathcal{L}(X)\). When the left action is clear from the context, we will abbreviate \(\phi_{X}(a)\xi\) as \(a\xi\), for \(a\in A\) and \(\xi\in X\). The C*-correspondence \(X\) is said to be _non-degenerate_ if \([\phi_{X}(A)X]=X\). If \(\phi_{X}\) is injective, then we say that \(X\) is _injective_. If \(X\) is injective and \(\phi_{X}(A)\subseteq\mathcal{K}(X)\), then we say that \(X\) is _regular_. Any C*-algebra \(A\) can be viewed as a non-degenerate C*-correspondence over itself, with the right (resp. left) action given by right (resp. left) multiplication in \(A\), and \(A\)-valued inner product given by \(\langle a,b\rangle=a^{*}b\) for all \(a,b\in A\). Then \(A\cong\mathcal{K}(A)\) by the left action \(\phi_{A}\), and thus \(A\) is non-degenerate by an application of an approximate unit. 
Let \(X\) and \(Y\) be C*-correspondences over a C*-algebra \(A\). We call an \(A\)-bimodule linear map \(u\colon X\to Y\) a _unitary_ if it is an inner-product-preserving surjection. If such a \(u\) exists, then it is adjointable, and we say that \(X\) and \(Y\) are _unitarily equivalent_ (symbol. \(X\cong Y\)). We write \(X\otimes_{A}Y\) for the \(A\)-balanced tensor product. Given \(S\in\mathcal{L}(X)\), there exists an operator \(S\otimes\mathrm{id}_{Y}\in\mathcal{L}(X\otimes_{A}Y)\) determined on simple tensors by \(\xi\otimes\eta\mapsto(S\xi)\otimes\eta\) for all \(\xi\in X\) and \(\eta\in Y\), e.g., [38, p. 42]. The assignment \(S\mapsto S\otimes\mathrm{id}_{Y}\) constitutes a unital \(*\)-homomorphism from \(\mathcal{L}(X)\) to \(\mathcal{L}(X\otimes_{A}Y)\). In this way we can define a left action \(\phi_{X\otimes_{A}Y}\) on \(X\otimes_{A}Y\) by \(\phi_{X\otimes_{A}Y}(a)=\phi_{X}(a)\otimes\mathrm{id}_{Y}\) for all \(a\in A\), thereby endowing \(X\otimes_{A}Y\) with the structure of a C*-correspondence over \(A\). The \(A\)-balanced tensor product is associative. Moreover, the right action of \(X\) yields a unitary \(X\otimes_{A}A\to X\) determined by \(\xi\otimes a\mapsto\xi a\) for all \(a\in A\) and \(\xi\in X\). The left action of \(X\) yields a unitary \(A\otimes_{A}X\to[\phi_{X}(A)X]\) determined by \(a\otimes\xi\mapsto\phi_{X}(a)\xi\) for all \(a\in A\) and \(\xi\in X\). A _(Toeplitz) representation_\((\pi,t)\) of the C*-correspondence \(X\) on \(\mathcal{B}(H)\) is a pair of a \(*\)-homomorphism \(\pi\colon A\to\mathcal{B}(H)\) and a linear map \(t\colon X\to\mathcal{B}(H)\) that preserves the left action and inner product of \(X\). Then \((\pi,t)\) automatically preserves the right action of \(X\). Moreover, there exists a \(*\)-homomorphism \(\psi\colon\mathcal{K}(X)\to\mathcal{B}(H)\) such that \(\psi(\Theta_{\xi,\eta})=t(\xi)t(\eta)^{*}\) for all \(\xi,\eta\in X\), e.g., [27, Lemma 2.2]. We say that \((\pi,t)\) is _injective_ if \(\pi\) is injective; then both \(t\) and \(\psi\) are isometric. Given a representation \((\pi,t)\) of \(X\), we write \(\mathrm{C}^{*}(\pi,t)\) for the C*-algebra generated by \(\pi(A)\) and \(t(X)\). A representation \((\pi,t)\) is said to _admit a gauge action_\(\gamma\colon\mathbb{T}\to\mathrm{Aut}(\mathrm{C}^{*}(\pi,t))\) if \(\gamma\) is a group homomorphism, \(\{\gamma_{z}\}_{z\in\mathbb{T}}\) is point-norm continuous and \[\gamma_{z}(\pi(a))=\pi(a)\text{ for all }a\in A\text{ and }\gamma_{z}(t(\xi))=zt(\xi)\text{ for all }\xi\in X,\] for all \(z\in\mathbb{T}\). When a gauge action \(\gamma\) exists, it is necessarily unique. An ideal \(\mathfrak{J}\subseteq\mathrm{C}^{*}(\pi,t)\) is called _gauge-invariant_ or _equivariant_ if \(\gamma_{z}(\mathfrak{J})\subseteq\mathfrak{J}\) for all \(z\in\mathbb{T}\) (thus \(\gamma_{z}(\mathfrak{J})=\mathfrak{J}\) for all \(z\in\mathbb{T}\)). The _Toeplitz-Pimsner algebra_\(\mathcal{T}_{X}\) is the universal C*-algebra with respect to the representations of \(X\). Let \(J\subseteq A\) be such that \(J\subseteq\phi_{X}^{-1}(\mathcal{K}(X))\). The _\(J\)-relative Cuntz-Pimsner algebra_\(\mathcal{O}(J,X)\) is the universal C*-algebra with respect to the _\(J\)-covariant_ representations of \(X\); that is, the representations \((\pi,t)\) of \(X\) satisfying \(\pi(a)=\psi(\phi_{X}(a))\) for all \(a\in J\). 
Traditionally the relative Cuntz-Pimsner algebras are defined with respect to ideals of \(A\) rather than just subsets, however the two versions are equivalent since \(\mathcal{O}(J,X)=\mathcal{O}(\left\langle J\right\rangle,X)\). When \(J=\{0\}\), we have that \(\mathcal{O}(J,X)=\mathcal{T}_{X}\). For the ideal \[J_{X}:=(\ker\phi_{X})^{\perp}\cap\phi_{X}^{-1}(\mathcal{K}(X)),\] we obtain that \(\mathcal{O}(J_{X},X)\) is the _Cuntz-Pimsner algebra_ \(\mathcal{O}_{X}\) [33]. Katsura's ideal \(J_{X}\) is the largest ideal on which the restriction of \(\phi_{X}\) is injective with image contained in \(\mathcal{K}(X)\) [32]. One of the main tools in the theory is the Gauge-Invariant Uniqueness Theorem, obtained in its full generality by Katsura [34]. An alternative proof can be found in [28], and Frei [24] extended this method to include all relative Cuntz-Pimsner algebras, in connection with [34].

**Theorem 2.2.1** (\(\mathbb{Z}_{+}\)-GIUT).: _[_34_, Corollary 11.8]_ _Let \(X\) be a C*-correspondence over a C*-algebra \(A\), let \(J\subseteq A\) be an ideal satisfying \(J\subseteq J_{X}\) and let \((\pi,t)\) be a representation of \(X\). Then \(\mathcal{O}(J,X)\cong\mathrm{C}^{*}(\pi,t)\) via a (unique) canonical \(*\)-isomorphism if and only if \((\pi,t)\) is injective, admits a gauge action and satisfies \(J=\pi^{-1}(\psi(\mathcal{K}(X)))\)._

Let \(A\) be a C*-algebra, let \(I\subseteq A\) be an ideal and let \(X\) be a right Hilbert \(A\)-module. Then the set \(XI\) is a closed linear subspace of \(X\) that is invariant under the right action of \(A\), e.g., [22, p. 576] or [34, Corollary 1.4]. In particular, we have that \([XI]=XI\). Consequently, \(XI\) is itself a right Hilbert \(A\)-module under the operations and \(A\)-valued inner product inherited from \(X\). We may also view \(XI\) as a right Hilbert \(I\)-module. Due to [22, Lemma 2.6], we will identify \(\mathcal{K}(XI)\) as an ideal of \(\mathcal{K}(X)\) in the following natural way: \[\mathcal{K}(XI)=\overline{\mathrm{span}}\{\Theta_{\xi,\eta}^{X}\mid\xi,\eta\in XI\}\subseteq\mathcal{K}(X).\] When \(X\) is in addition a C*-correspondence over \(A\), we may equip \(XI\) with a C*-correspondence structure via the left action \[\phi_{XI}\colon A\to\mathcal{L}(XI);\phi_{XI}(a)=\phi_{X}(a)|_{XI}\text{ for all }a\in A.\] By restricting \(\phi_{XI}\) to \(I\), we may also view \(XI\) as a C*-correspondence over \(I\). Following [34], and in order to ease notation, we will use the symbol \([\,\cdot\,]_{I}\) to denote the quotient maps associated with a right Hilbert \(A\)-module \(X\) and an ideal \(I\subseteq A\). For example, we use it for both the quotient map \(A\to A/I\equiv[A]_{I}\) and the quotient map \(X\to X/XI\equiv[X]_{I}\). We equip the complex vector space \([X]_{I}\) with the following right \([A]_{I}\)-module multiplication: \[[\xi]_{I}[a]_{I}=[\xi a]_{I}\text{ for all }a\in A,\xi\in X,\] as well as the following \([A]_{I}\)-valued inner product: \[\langle[\xi]_{I},[\eta]_{I}\rangle=[\langle\xi,\eta\rangle]_{I}\text{ for all }\xi,\eta\in X.\] Consequently \([X]_{I}\) carries the structure of an inner-product right \([A]_{I}\)-module. By [34, Lemma 1.5], the canonical norm on \([X]_{I}\) induced by the \([A]_{I}\)-valued inner product coincides with the usual quotient norm. Thus \([X]_{I}\) is a right Hilbert \([A]_{I}\)-module.
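Before proceeding, we illustrate these constructions in the simplest case (a straightforward example of our own, included only for orientation). If \(X=A\) is the identity correspondence and \(I\subseteq A\) is an ideal, then
\[XI=I,\qquad[X]_{I}=X/XI\cong A/I,\qquad\mathcal{K}(XI)\cong I\subseteq A\cong\mathcal{K}(X),\]
where the identifications \(\mathcal{K}(X)\cong A\) and \(\mathcal{K}(XI)\cong I\) are given by left multiplication.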
We may define a \(*\)-homomorphism \([\,\cdot\,]_{I}\colon\mathcal{L}(X)\to\mathcal{L}([X]_{I})\) by \[[S]_{I}[\xi]_{I}=[S\xi]_{I}\text{ for all }S\in\mathcal{L}(X),\xi\in X.\] We include [34, Lemma 1.6] in its entirety, as we will be making frequent reference to it. **Lemma 2.2.2**.: **[**34**, Lemma 1.6]** _Let \(X\) be a right Hilbert module over a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal. Then for all \(\xi,\eta\in X\), we have that \([\Theta^{X}_{\xi,\eta}]_{I}=\Theta^{[X]_{I}}_{[\xi]_{I},[\eta]_{I}}\). The restriction of the map \([\,\cdot\,]_{I}\colon\mathcal{L}(X)\to\mathcal{L}([X]_{I})\) to \(\mathcal{K}(X)\) is a surjection onto \(\mathcal{K}([X]_{I})\) with kernel \(\mathcal{K}(XI)\)._ Therefore, given an ideal \(I\subseteq A\), we obtain the surjective maps \[A\to A/I\text{ with kernel }I,\] \[X\to X/XI\text{ with kernel }XI,\] \[\mathcal{K}(X)\to\mathcal{K}(X/XI)\text{ with kernel }\mathcal{K}(XI),\] as well as the map \(\mathcal{L}(X)\to\mathcal{L}(X/XI)\) (which may _not_ be surjective), all of which will be denoted by the same symbol \([\,\cdot\,]_{I}\). Lemma 2.2.2 implies that if \(k\in\mathcal{K}(X)\), then \[k\in\mathcal{K}(XI)\iff\langle X,kX\rangle\subseteq I. \tag{2.1}\] Lemma 2.2.2 provides a straightforward way of seeing \(\mathcal{K}(XI)\) as an ideal in \(\mathcal{K}(X)\), and in turn as an ideal in \(\mathcal{L}(X)\). Hence we may consider the quotient C*-algebra \(\mathcal{L}(X)/\mathcal{K}(XI)\). We recall the following result extracted from [38, Lemma 4.6], slightly rewritten to match our setting. **Lemma 2.2.3**.: **[**38**, Lemma 4.6]** _Let \(X\) and \(Y\) be C*-correspondences over a C*-algebra \(A\). For \(x\in X\), the equation \(\Theta_{x}(y)=x\otimes y\;(y\in Y)\) defines an element \(\Theta_{x}\in\mathcal{L}(Y,X\otimes_{A}Y)\) satisfying_ \[\|\Theta_{x}\|=\|\phi_{Y}(\langle x,x\rangle^{1/2})\|\leq\|x\|\quad\text{and} \quad\Theta^{*}_{x}(x^{\prime}\otimes y)=\phi_{Y}(\langle x,x^{\prime}\rangle )y\;(x^{\prime}\in X,y\in Y).\] As a consequence, we obtain the following lemma and corollary. **Lemma 2.2.4**.: _Let \(X\) and \(Y\) be C*-correspondences over a C*-algebra \(A\). Let \(I\subseteq A\) be an ideal and suppose \(a\in A\) satisfies \(\phi_{Y}(a)\in\mathcal{K}(YI)\). Then \(\Theta^{X}_{\xi a,\eta}\otimes\mathrm{id}_{Y}\in\mathcal{K}((X\otimes_{A}Y)I)\) for all \(\xi,\eta\in X\)._ Proof.: One may directly verify that \(\Theta^{X}_{\xi a,\eta}\otimes\mathrm{id}_{Y}=\Theta_{\xi}\phi_{Y}(a)\Theta^{ *}_{\eta}\in\mathcal{K}(X\otimes_{A}Y)\), where \(\Theta_{\xi},\Theta_{\eta}\in\mathcal{L}(Y,X\otimes_{A}Y)\) are defined as in Lemma 2.2.3. Here we use that \(\phi_{Y}(a)\in\mathcal{K}(YI)\subseteq\mathcal{K}(Y)\) together with [38, p. 9, (1.6)]. Hence, for \(\zeta,\zeta^{\prime}\in X\otimes_{A}Y\), we have that \[\big{\langle}\zeta,(\Theta^{X}_{\xi a,\eta}\otimes\mathrm{id}_{Y})(\zeta^{ \prime})\big{\rangle}=\big{\langle}\Theta^{*}_{\xi}\zeta,\phi_{Y}(a)\Theta^{*}_ {\eta}\zeta^{\prime}\big{\rangle}\in I,\] where we have used the forward implication of (2.1) applied to the C*-correspondence \(Y\) and \(k=\phi_{Y}(a)\in\mathcal{K}(YI)\) to establish the membership to \(I\). An application of the converse implication of (2.1) gives that \(\Theta^{X}_{\xi a,\eta}\otimes\mathrm{id}_{Y}\in\mathcal{K}((X\otimes_{A}Y)I)\), as required. **Corollary 2.2.5**.: _Let \(X\) and \(Y\) be C*-correspondences over a C*-algebra \(A\). Let \(I\subseteq A\) be an ideal and suppose that \(\phi_{Y}(I)\subseteq\mathcal{K}(YI)\). 
Then \(k\otimes\mathrm{id}_{Y}\in\mathcal{K}((X\otimes_{A}Y)I)\) for all \(k\in\mathcal{K}(XI)\)._ **Proof.** The claim follows by applying Lemma 2.2.4 to finite linear combinations of rank-one operators in \(\mathcal{K}(XI)\) and their norm-limits. If \(X\) is a C*-correspondence over \(A\), then we need to make an additional imposition on \(I\) in order for \([X]_{I}\) to carry a canonical structure as a C*-correspondence over \([A]_{I}\). More specifically, we say that \(I\) is _positively invariant (for \(X\))_ if it satisfies \[X(I):=[\langle X,IX\rangle]\subseteq I.\] Notice that the space \(X(I)\) is an ideal of \(A\) (in fact, this is true even when \(I\) is replaced by any subset of \(A\)). When \(I\) is positively invariant, we can define a left action on \([X]_{I}\) by \[\phi_{[X]_{I}}\colon[A]_{I}\to\mathcal{L}([X]_{I});[a]_{I}\mapsto[\phi_{X}(a )]_{I}\text{ for all }a\in A.\] To ease notation, we will denote \(\phi_{[X]_{I}}\) by \([\phi_{X}]_{I}\). **Lemma 2.2.6**.: _Let \(X\) be a C*-correspondence over a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal. Then the following are equivalent:_ * \(I\) _is positively invariant for_ \(X\)_;_ * \(IX\subseteq XI\)_;_ * \(IXI=IX\)_._ **Proof.** [(i) \(\Rightarrow\) (ii)]: Fix \(a\in I\) and \(\xi\in X\). By [38, Lemma 4.4], there exists \(\eta\in X\) such that \[\phi_{X}(a)\xi=\eta\left\langle\phi_{X}(a)\xi,\phi_{X}(a)\xi\right\rangle^{ \frac{1}{4}}.\] Notice that \(\left\langle\phi_{X}(a)\xi,\phi_{X}(a)\xi\right\rangle=\left\langle\xi,\phi_{ X}(a^{*}a)\xi\right\rangle\in I\) using positive invariance of \(I\), and hence \(\phi_{X}(a)\xi\in XI\). This shows that \(IX\subseteq XI\), as required. [(ii) \(\Rightarrow\) (iii)]: Trivially \(IXI\subseteq IX\). On the other hand, we have that \(IX=IIX\subseteq IXI\) by assumption, and thus we obtain the required equality. [(iii) \(\Rightarrow\) (i)]: A direct computation yields that \([\langle X,IX\rangle]=[\langle X,IXI\rangle]=[\langle X,IX\rangle\,I]\subseteq I\), and thus \(I\) is positively invariant. **Corollary 2.2.7**.: _Let \(X\) be a C*-correspondence over a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal that is positively invariant for \(X\). Then we have that_ \[\phi_{X}(I)\mathcal{K}(X)\phi_{X}(I)\subseteq\overline{\mathrm{span}}\{\Theta_ {\xi,\eta}^{X}\mid\xi,\eta\in IXI\}.\] **Proof.** The claim follows by checking on rank-one operators and using that \(IXI=IX\) by Lemma 2.2.6. The following lemma will be useful in Section 4. **Lemma 2.2.8**.: _Let \(X\) be a C*-correspondence over a C*-algebra \(A\) such that \(\phi_{X}(A)\subseteq\mathcal{K}(X)\), and let \(I,J\subseteq A\) be ideals. If \(I\) is positively invariant for \(X\) and \(I\subseteq J\), then_ \[\|[\phi_{X}]_{I}([a]_{I})+\mathcal{K}([X]_{I}[J]_{I})\|=\|\phi_{X}(a)+\mathcal{ K}(XJ)\|\text{ for all }a\in A.\] **Proof.** Fix \(a\in A\). Recall that \[M:=\|\phi_{X}(a)+\mathcal{K}(XJ)\|=\inf\{\|\phi_{X}(a)+k\|\mid k\in\mathcal{K} (XJ)\},\] where the norm on the right hand side is taken in \(\mathcal{K}(X)\) since \(\phi_{X}(A)\subseteq\mathcal{K}(X)\). Likewise, we have that \[N:=\|[\phi_{X}]_{I}([a]_{I})+\mathcal{K}([X]_{I}[J]_{I})\|=\inf\{\|[\phi_{X}]_ {I}([a]_{I})+\dot{k}\|\mid\dot{k}\in\mathcal{K}([X]_{I}[J]_{I})\},\] where the norm on the right hand side is taken in \(\mathcal{K}([X]_{I})\) by Lemma 2.2.2. First we show that \(N\leq M\). To this end, it suffices to show that \[N\leq\|\phi_{X}(a)+k\|\text{ for all }k\in\mathcal{K}(XJ).\] Accordingly, fix \(k\in\mathcal{K}(XJ)\). 
Note that \[\mathcal{K}([X]_{I}[J]_{I})=\mathcal{K}([XJ]_{I})=[\mathcal{K}(XJ)]_{I}\] for the usual \(*\)-epimorphism \([\cdot\,]_{I}\colon\mathcal{K}(X)\to\mathcal{K}([X]_{I})\). Thus \([k]_{I}\in\mathcal{K}([X]_{I}[J]_{I})\). It follows that \[N\leq\|[\phi_{X}]_{I}([a]_{I})+[k]_{I}\|=\|[\phi_{X}(a)+k]_{I}\|\leq\|\phi_{X} (a)+k\|,\] using that \([\cdot\,]_{I}\) is contractive in the second inequality, as required. Finally, to prove that \(M\leq N\) it suffices to show that \[M\leq\|[\phi_{X}]_{I}([a]_{I})+\dot{k}\|\mbox{ for all }\dot{k}\in\mathcal{K}([X]_ {I}[J]_{I}).\] Accordingly, fix \(\dot{k}\in\mathcal{K}([X]_{I}[J]_{I})\) and note that \(\dot{k}=[k]_{I}\) for some \(k\in\mathcal{K}(XJ)\) since \(\mathcal{K}([X]_{I}[J]_{I})=[\mathcal{K}(XJ)]_{I}\). By Lemma 2.2.2, we have a \(*\)-isomorphism \[j\colon\mathcal{K}(X)/\mathcal{K}(XI)\to\mathcal{K}([X]_{I});k^{\prime}+ \mathcal{K}(XI)\mapsto[k^{\prime}]_{I}\mbox{ for all }k^{\prime}\in\mathcal{K}(X).\] In turn, we deduce that \[\|[\phi_{X}]_{I}([a]_{I})+\dot{k}\| =\|[\phi_{X}]_{I}([a]_{I})+[k]_{I}\|=\|[\phi_{X}(a)+k]_{I}\|\] \[=\|j((\phi_{X}(a)+k)+\mathcal{K}(XI))\|=\|(\phi_{X}(a)+k)+\mathcal{ K}(XI)\|\] \[=\inf\{\|\phi_{X}(a)+k+k^{\prime}\|\;|\;k^{\prime}\in\mathcal{K}( XI)\}.\] Note that we use the assumption that \(\phi_{X}(A)\subseteq\mathcal{K}(X)\) to pass to the second line. Thus it suffices to show that \[M\leq\|\phi_{X}(a)+k+k^{\prime}\|\mbox{ for all }k^{\prime}\in\mathcal{K}(XI).\] However, this is immediate since \(I\subseteq J\) and so \(\mathcal{K}(XI)\subseteq\mathcal{K}(XJ)\). In total, we have that \(M=N\), finishing the proof. Moreover, we define two ideals of \(A\) that are related to \(I\) and \(X\), namely \[X^{-1}(I):=\{a\in A\;|\;\;\langle X,aX\rangle\subseteq I\},\] and \[J(I,X):=\{a\in A\;|\;[\phi_{X}(a)]_{I}\in\mathcal{K}([X]_{I}),aX^{-1}(I) \subseteq I\}.\] The use of the ideal \(J(I,X)\) is pivotal in the work of Katsura [34] for accounting for \(*\)-representations of \(\mathcal{T}_{X}\) that may not be injective on \(X\). When \(I\) is positively invariant and \(J\subseteq A\) is an ideal satisfying \(I\subseteq J\), the following lemma illustrates the relationship between \(X^{-1}(J)\) and \([X]_{I}^{-1}([J]_{I})\). **Lemma 2.2.9**.: _Let \(X\) be a C*-correspondence over a C*-algebra \(A\) and let \(I,J\subseteq A\) be ideals. If \(I\) is positively invariant for \(X\) and \(I\subseteq J\), then_ \[X^{-1}(J)=[\cdot\,]_{I}^{-1}([X]_{I}^{-1}([J]_{I})).\] **Proof.** The forward inclusion is immediate by definition of the C*-correspondence operations of \([X]_{I}\). For the reverse inclusion, take \(a\in A\) such that \([a]_{I}\in[X]_{I}^{-1}([J]_{I})\). Then we have that \[[\langle X,aX\rangle]_{I}=\langle[X]_{I},[aX]_{I}\rangle=\langle[X]_{I},[a]_{I }[X]_{I}\rangle\subseteq[J]_{I}.\] In turn, we have that \(\langle X,aX\rangle\subseteq J+I\), and the fact that \(I\subseteq J\) then yields that \(\langle X,aX\rangle\subseteq J\). Hence \(a\in X^{-1}(J)\) by definition, completing the proof. As per [34, Definition 5.6, Definition 5.12], we define a _T-pair_ of \(X\) to be a pair \(\mathcal{L}=\{\mathcal{L}_{\emptyset},\mathcal{L}_{\{1\}}\}\) of ideals of \(A\) such that \(\mathcal{L}_{\emptyset}\) is positively invariant for \(X\) and \(\mathcal{L}_{\emptyset}\subseteq\mathcal{L}_{\{1\}}\subseteq J(\mathcal{L}_{ \emptyset},X)\); a T-pair \(\mathcal{L}\) that satisfies \(J_{X}\subseteq\mathcal{L}_{\{1\}}\) is called an _O-pair_. Our choice of notation here will be clarified in the sequel. 
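To illustrate these notions in the simplest case (a sketch of our own, recorded only for orientation), take \(X=A\) to be the identity correspondence. Then every ideal \(I\subseteq A\) is positively invariant, since
\[X(I)=[\langle A,IA\rangle]=[AIA]\subseteq I,\]
and moreover \(X^{-1}(I)=I\) and \(J(I,X)=A\), because \([\phi_{X}(a)]_{I}\in\mathcal{K}([X]_{I})\) holds automatically and \(aX^{-1}(I)=aI\subseteq I\) for all \(a\in A\). Consequently the T-pairs of \(X\) are exactly the pairs \(\{\mathcal{L}_{\emptyset},\mathcal{L}_{\{1\}}\}\) of ideals of \(A\) with \(\mathcal{L}_{\emptyset}\subseteq\mathcal{L}_{\{1\}}\), and since \(J_{X}=A\) in this case, the O-pairs are precisely those T-pairs with \(\mathcal{L}_{\{1\}}=A\).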
Proposition 8.8 of [34] is fundamental to the current work. **Theorem 2.2.10**.: **[**34**, Theorem 8.6, Proposition 8.8]** _Let \(X\) be a C*-correspondence over a C*-algebra \(A\). Then there is a bijection between the set of T-pairs (resp. O-pairs) of \(X\) and the set of gauge-invariant ideals of \(\mathcal{T}_{X}\) (resp. \(\mathcal{O}_{X}\)) that preserves inclusions and intersections._ The bijection of Theorem 2.2.10 restricts appropriately to a parametrisation of the gauge-invariant ideals of any relative Cuntz-Pimsner algebra [34, Proposition 11.9]. The parametrisation of the gauge-invariant ideals of \(\mathcal{T}_{X}\) can be implemented as follows. Firstly, if \(\mathfrak{J}\subseteq\mathcal{T}_{X}\) is a gauge-invariant ideal, then we consider the representation \((Q_{\mathfrak{J}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}}\circ\overline{t}_{ X})\), where \(Q_{\mathfrak{J}}\colon\mathcal{T}_{X}\to\mathcal{T}_{X}/\mathfrak{J}\) is the quotient map and \((\overline{\pi}_{X},\overline{t}_{X})\) is the universal representation of \(X\). We set \[\mathcal{L}_{\emptyset}^{(Q_{\mathfrak{J}}\circ\overline{\pi}_{X},Q_{\mathfrak{ J}}\circ\overline{t}_{X})}:=\ker Q_{\mathfrak{J}}\circ\overline{\pi}_{X}\quad \text{and}\quad\mathcal{L}_{\{1\}}^{(Q_{\mathfrak{J}}\circ\overline{\pi}_{X},Q _{\mathfrak{J}}\circ\overline{t}_{X})}:=(Q_{\mathfrak{J}}\circ\overline{\pi}_ {X})^{-1}((Q_{\mathfrak{J}}\circ\overline{\psi}_{X})(\mathcal{K}(X))).\] It follows that the pair \[\mathcal{L}^{\mathfrak{J}}:=\{\mathcal{L}_{\emptyset}^{(Q_{\mathfrak{J}} \circ\overline{\pi}_{X},Q_{\mathfrak{J}}\circ\overline{t}_{X})},\mathcal{L}_{ \{1\}}^{(Q_{\mathfrak{J}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}}\circ \overline{t}_{X})}\}\] is a T-pair of \(X\)[34, Proposition 5.11]. Next, let \(\mathcal{L}=\{\mathcal{L}_{\emptyset},\mathcal{L}_{\{1\}}\}\) be a T-pair of \(X\). Then \([\mathcal{L}_{\{1\}}]_{\mathcal{L}_{\emptyset}}\) is an ideal of \([A]_{\mathcal{L}_{\emptyset}}\). Additionally, we have that \([\mathcal{L}_{\{1\}}]_{\mathcal{L}_{\emptyset}}\subseteq J_{[X]_{\mathcal{L}_{ \emptyset}}}\) by using [34, Lemma 5.2]. Let \((\tilde{\pi},\tilde{t})\) be the universal \([\mathcal{L}_{\{1\}}]_{\mathcal{L}_{\emptyset}}\)-covariant representation of \([X]_{\mathcal{L}_{\emptyset}}\). Then we may form a representation \((\pi^{\mathcal{L}},t^{\mathcal{L}})\) of \(X\) generating \(\mathcal{O}([\mathcal{L}_{\{1\}}]_{\mathcal{L}_{\emptyset}},[X]_{\mathcal{L}_{ \emptyset}})\) via \[\pi^{\mathcal{L}}(a)=\tilde{\pi}([a]_{\mathcal{L}_{\emptyset}})\text{ and }t^{\mathcal{L}}(\xi)=\tilde{t}([\xi]_{\mathcal{L}_{\emptyset}})\text{ for all }a\in A,\xi\in X.\] The universal property of \(\mathcal{T}_{X}\) then guarantees a (unique) canonical \(*\)-epimorphism \[\pi^{\mathcal{L}}\times t^{\mathcal{L}}\colon\mathcal{T}_{X}\to\mathcal{O}([ \mathcal{L}_{\{1\}}]_{\mathcal{L}_{\emptyset}},[X]_{\mathcal{L}_{\emptyset}}).\] We set \[\mathfrak{J}^{\mathcal{L}}:=\ker\pi^{\mathcal{L}}\times t^{\mathcal{L}}\] and observe that \(\mathfrak{J}^{\mathcal{L}}\) is a gauge-invariant ideal of \(\mathcal{T}_{X}\). By [34, Proposition 8.8], the maps \[\mathfrak{J} \mapsto\mathcal{L}^{\mathfrak{J}}\text{ for all gauge-invariant ideals }\mathfrak{J}\text{ of }\mathcal{T}_{X},\] \[\mathcal{L} \mapsto\mathfrak{J}^{\mathcal{L}}\text{ for all T-pairs }\mathcal{L}\text{ of }X,\] are mutually inverse. The notation we have used here will be revisited in the sequel. 
### Product systems Let \(P\) be a subsemigroup of a discrete group \(G\) that contains the identity \(e\) of \(G\) (i.e., \(P\) is _unital_). A _product system \(X\) over \(P\) with coefficients in a C*-algebra \(A\)_ is a family \(\{X_{p}\}_{p\in P}\) of C*-correspondences over \(A\) together with multiplication maps \(u_{p,q}\colon X_{p}\otimes_{A}X_{q}\to X_{pq}\) for all \(p,q\in P\), such that: 1. \(X_{e}=A\), viewing \(A\) as a C*-correspondence over itself; 2. if \(p=e\), then \(u_{e,q}\colon A\otimes_{A}X_{q}\to[\phi_{q}(A)X_{q}]\) is the unitary implementing the left action of \(A\) on \(X_{q}\); 3. if \(q=e\), then \(u_{p,e}\colon X_{p}\otimes_{A}A\to X_{p}\) is the unitary implementing the right action of \(A\) on \(X_{p}\); 4. if \(p,q\in P\setminus\{e\}\), then \(u_{p,q}\colon X_{p}\otimes_{A}X_{q}\to X_{pq}\) is a unitary; 5. the multiplication maps are associative in the sense that \[u_{pq,r}(u_{p,q}\otimes\mathrm{id}_{X_{r}})=u_{p,qr}(\mathrm{id}_{X_{p}} \otimes u_{q,r})\text{ for all }p,q,r\in P.\] Note that we use \(\phi_{p}\) to denote the left action \(\phi_{X_{p}}\) of \(X_{p}\) for each \(p\in P\). We refer to the C*-correspondences \(X_{p}\) as the _fibres_ of \(X\). We do _not_ assume that the fibres are non-degenerate. If \(X_{p}\) is injective (resp. regular) for all \(p\in P\), then we say that \(X\) is _injective_ (resp. _regular_). Axioms (i) and (ii) imply that the unitary \(u_{e,e}\colon A\otimes_{A}A\to A\) is simply multiplication in \(A\). For brevity, we will write \(u_{p,q}(\xi_{p}\otimes\xi_{q})\) as \(\xi_{p}\xi_{q}\) for all \(p,q\in P\), \(\xi_{p}\in X_{p}\) and \(\xi_{q}\in X_{q}\), with the understanding that \(\xi_{p}\) and \(\xi_{q}\) are allowed to differ when \(p=q\). Axioms (ii) and (v) imply that \[\phi_{pq}(a)(\xi_{p}\xi_{q})=(\phi_{p}(a)\xi_{p})\xi_{q}\text{ for all }\xi_{p} \in X_{p},\xi_{q}\in X_{q},p,q\in P.\] For \(p\in P\setminus\{e\}\) and \(q\in P\), we use the product system structure to define a \(*\)-homomorphism \(\iota_{p}^{pq}\colon\mathcal{L}(X_{p})\to\mathcal{L}(X_{pq})\) by \[\iota_{p}^{pq}(S)=u_{p,q}(S\otimes\mathrm{id}_{X_{q}})u_{p,q}^{*}\text{ for all }S\in\mathcal{L}(X_{p}).\] This is also captured by the formula \[\iota_{p}^{pq}(S)(\xi_{p}\xi_{q})=(S\xi_{p})\xi_{q}\text{ for all }\xi_{p}\in X _{p}\text{ and }\xi_{q}\in X_{q}.\] For each \(q\in P\) we also define a \(*\)-homomorphism \(\iota_{e}^{q}\colon\mathcal{K}(A)\to\mathcal{L}(X_{q})\) by \(\iota_{e}^{q}(\phi_{e}(a))=\phi_{q}(a)\) for all \(a\in A\). Moreover, we have that \[\iota_{p}^{p}=\operatorname{id}_{\mathcal{L}(X_{p})}\text{ for all }p\in P \setminus\{e\}\quad\text{and}\quad\iota_{e}^{e}=\operatorname{id}_{\mathcal{K }(A)}.\] The theory of product systems includes that of C*-correspondences in the sense that every C*-correspondence \(X\) over a C*-algebra \(A\) can be viewed as the product system \(\{X_{n}\}_{n\in\mathbb{Z}_{+}}\) with \[X_{0}=A\quad\text{and}\quad X_{n}=X^{\otimes n}\text{ for all }n\in\mathbb{N},\] and multiplication maps \(u_{n,m}\) for \(n,m\neq 0\) given by the natural inclusions. The notion of isomorphism for product systems is given by unitary equivalence. **Definition 2.3.1**.: Let \(P\) be a unital subsemigroup of a discrete group \(G\) and let \(A\) and \(B\) be C*-algebras. Let \(X\) and \(Y\) be product systems over \(P\) with coefficients in \(A\) and \(B\), respectively. Denote the multiplication maps of \(X\) by \(\{u_{p,q}^{X}\}_{p,q\in P}\) and the multiplication maps of \(Y\) by \(\{u_{p,q}^{Y}\}_{p,q\in P}\). 
We say that \(X\) and \(Y\) are _unitarily equivalent_ (symb. \(X\cong Y\)) if there exist surjective linear maps \(W_{p}\colon X_{p}\to Y_{p}\) for all \(p\in P\) with the following properties: 1. \(W_{e}\colon A\to B\) is a \(*\)-isomorphism; 2. \(\big{\langle}W_{p}(\xi_{p}),W_{p}(\xi_{p}^{\prime})\big{\rangle}=W_{e}\big{(}\langle\xi_{p},\xi_{p}^{\prime}\rangle\big{)}\) for all \(\xi_{p},\xi_{p}^{\prime}\in X_{p}\) and \(p\in P\setminus\{e\}\); 3. \(\phi_{Y_{p}}(W_{e}(a))W_{p}(\xi_{p})=W_{p}(\phi_{X_{p}}(a)\xi_{p})\) for all \(a\in A,\xi_{p}\in X_{p}\) and \(p\in P\setminus\{e\}\); 4. \(W_{p}(\xi_{p})W_{e}(a)=W_{p}(\xi_{p}a)\) for all \(a\in A,\xi_{p}\in X_{p}\) and \(p\in P\setminus\{e\}\); 5. \(u_{p,q}^{Y}\circ(W_{p}\otimes W_{q})=W_{pq}\circ u_{p,q}^{X}\) for all \(p,q\in P\). In this case, we say that \(\{W_{p}\}_{p\in P}\) implements a unitary equivalence between \(X\) and \(Y\).

**Remark 2.3.2**.: Many structural properties of product systems are preserved under unitary equivalence. Let \(\{W_{p}\}_{p\in P}\) implement a unitary equivalence between product systems \(X\) and \(Y\). Item (i) guarantees that items (ii)-(iv) hold when \(p=e\). Likewise, items (i) and (ii) imply that \(W_{p}\) is injective for all \(p\in P\). Hence \(W_{p}\) is bijective for all \(p\in P\). Consequently, the collection \(\{W_{p}^{-1}\}_{p\in P}\) defines a unitary equivalence between \(Y\) and \(X\). For all \(p,q\in P\), the map \(W_{p}\otimes W_{q}\in\mathcal{B}(X_{p}\otimes_{A}X_{q},Y_{p}\otimes_{B}Y_{q})\) is defined on simple tensors via \[(W_{p}\otimes W_{q})(\xi_{p}\otimes\xi_{q})=W_{p}(\xi_{p})\otimes W_{q}(\xi_{q})\text{ for all }\xi_{p}\in X_{p},\xi_{q}\in X_{q}.\] It follows that \((W_{p}\otimes W_{q})^{-1}=W_{p}^{-1}\otimes W_{q}^{-1}\), and that \[W_{p}\mathcal{K}(X_{p})W_{p}^{-1}=\mathcal{K}(Y_{p}). \tag{2.2}\] Fixing \(a\in A\), we also have that \[W_{p}\phi_{X_{p}}(a)W_{p}^{-1}=\phi_{Y_{p}}(W_{e}(a)). \tag{2.3}\] Finally, let \(\{\iota_{p}^{pq}\}_{p,q\in P}\) denote the connecting \(*\)-homomorphisms of \(X\) and let \(\{j_{p}^{pq}\}_{p,q\in P}\) denote the connecting \(*\)-homomorphisms of \(Y\). For \(p\in P\setminus\{e\}\) and \(q\in P\), we have that \[\iota_{p}^{pq}(\Theta^{X_{p}}_{\xi_{p},\xi_{p}^{\prime}})=W_{pq}^{-1}j_{p}^{pq}(\Theta^{Y_{p}}_{W_{p}(\xi_{p}),W_{p}(\xi_{p}^{\prime})})W_{pq}\text{ for all }\xi_{p},\xi_{p}^{\prime}\in X_{p}. \tag{2.4}\]

A _(Toeplitz) representation_\((\pi,t)\) of \(X\) on \(\mathcal{B}(H)\) consists of a family \(\{(\pi,t_{p})\}_{p\in P}\), where \((\pi,t_{p})\) is a representation of \(X_{p}\) on \(\mathcal{B}(H)\) for all \(p\in P\), \(t_{e}=\pi\) and \[t_{p}(\xi_{p})t_{q}(\xi_{q})=t_{pq}(\xi_{p}\xi_{q})\text{ for all }\xi_{p}\in X_{p},\xi_{q}\in X_{q},p,q\in P.\] We write \(\psi_{p}\) for the induced \(*\)-homomorphism \(\mathcal{K}(X_{p})\to\mathcal{B}(H)\) for all \(p\in P\). We say that \((\pi,t)\) is _injective_ if \(\pi\) is injective; in this case \(t_{p}\) and \(\psi_{p}\) are isometric for all \(p\in P\). We denote the C*-algebra generated by \(\pi(A)\) and every \(t_{p}(X_{p})\) by \(\mathrm{C}^{*}(\pi,t)\). We write \(\mathcal{T}_{X}\) for the universal C*-algebra with respect to the Toeplitz representations of \(X\), and refer to it as the _Toeplitz algebra (of \(X\))_. The following proposition is well known.
Then there exists a C*-algebra \(\mathcal{T}_{X}\) and a representation \((\pi_{X},t_{X})\) of \(X\) on \(\mathcal{T}_{X}\) such that:_ * \(\mathcal{T}_{X}=\mathrm{C}^{*}(\pi_{X},t_{X})\)_;_ * _if_ \((\pi,t)\) _is a representation of_ \(X\)_, then there exists a (unique) canonical_ \(*\)_-epimorphism_ \(\pi\times t\colon\mathcal{T}_{X}\to\mathrm{C}^{*}(\pi,t)\)_, i.e.,_ \((\pi\times t)(t_{X,p}(\xi_{p}))=t_{p}(\xi_{p})\) _for all_ \(\xi_{p}\in X_{p}\) _and_ \(p\in P\)_._ _The pair \((\mathcal{T}_{X},(\pi_{X},t_{X}))\) is unique up to canonical \(*\)-isomorphism._ Unitary equivalence induces a bijection between representations in the following sense. **Proposition 2.3.4**.: _Let \(P\) be a unital subsemigroup of a discrete group \(G\) and let \(A\) and \(B\) be C*-algebras. Let \(X\) and \(Y\) be product systems over \(P\) with coefficients in \(A\) and \(B\), respectively. Suppose that \(X\) and \(Y\) are unitarily equivalent by a collection \(\{W_{p}\colon X_{p}\to Y_{p}\}_{p\in P}\). Then there is a bijection between their sets of representations, i.e., \(\{(\pi,t_{p})\}_{p\in P}\) is a representation of \(X\) if and only if \(\{(\pi\circ W_{e}^{-1},t_{p}\circ W_{p}^{-1})\}_{p\in P}\) is a representation of \(Y\)._ **Proof.** The result follows by straightforward algebraic computations and Remark 2.3.2. Under the assumptions of Proposition 2.3.4, the latter guarantees that \(\mathcal{T}_{X}\) and \(\mathcal{T}_{Y}\) are universal with respect to the "same" representations, and hence \(\mathcal{T}_{X}\cong\mathcal{T}_{Y}\) canonically. The quintessential example of a representation that we will be using throughout is the Fock representation. Here we define this representation and collect its basic properties. Firstly, we set \(\mathcal{F}X:=\sum_{p\in P}X_{p}\), as the direct sum of right Hilbert \(A\)-modules, e.g., [38, p. 6]. For each \(p\in P\), we identify \(X_{p}\) with the \(p\)-th direct summand of \(\mathcal{F}X\). For each \(p,q\in P\), we define \(\iota_{p,q}\colon\mathcal{L}(X_{p},X_{q})\to\mathcal{L}(\mathcal{F}X)\) by \[\iota_{p,q}(S)\xi_{r}=\begin{cases}S\xi_{p}&\text{if $r=p$},\\ 0&\text{otherwise},\end{cases}\] for all \(S\in\mathcal{L}(X_{p},X_{q}),\xi_{r}\in X_{r}\) and \(r\in P\), and note that \(\iota_{p,q}(S)^{*}=\iota_{q,p}(S^{*})\). It follows that \(\iota_{p,q}\) is an isometric linear map, and hence we may identify \(\mathcal{L}(X_{p},X_{q})\) as a subspace of \(\mathcal{L}(\mathcal{F}X)\). When \(p=q\), we have that \(\iota_{p,p}\) is a \(*\)-homomorphism, and thus we may identify \(\mathcal{L}(X_{p})\) as a C*-subalgebra of \(\mathcal{L}(\mathcal{F}X)\). More generally, we may identify \(\bigoplus_{p\in Q}\mathcal{L}(X_{p})\) as a C*-subalgebra of \(\mathcal{L}(\mathcal{F}X)\), for any subset \(Q\) of \(P\). The _Fock representation_\((\overline{\pi},\overline{t})\) is the representation of \(X\) on \(\mathcal{L}(\mathcal{F}X)\) determined by \[\overline{\pi}(a)\xi_{q}=\phi_{q}(a)\xi_{q}\quad\text{and}\quad\overline{t}_{p }(\xi_{p})\xi_{q}=\xi_{p}\xi_{q},\] for all \(a\in A,\xi_{p}\in X_{p},\xi_{q}\in X_{q}\) and \(p,q\in P\). For \(\xi_{p}\in X_{p}\) with \(p\neq e\), we have that \(\overline{t}_{p}(\xi_{p})^{*}\) maps \(X_{r}\) to \(0\) whenever \(r\notin pP\). 
Conversely, if \(r=pq\) for some \(q\in P\), then \[\overline{t}_{p}(\xi_{p})^{*}(\eta_{p}\eta_{q})=\phi_{q}(\langle\xi_{p},\eta_{p}\rangle)\eta_{q}\text{ for all }\eta_{p}\in X_{p},\eta_{q}\in X_{q}.\] Since \(X_{pq}\) is unitarily equivalent to \(X_{p}\otimes_{A}X_{q}\), and \(X_{pq}\) embeds isometrically in \(\mathcal{F}X\), this formula is sufficient to determine \(\overline{t}_{p}(\xi_{p})^{*}\) on all of \(X_{pq}\) and hence on all of \(\mathcal{F}X\). The Fock representation is injective, for if \(\overline{\pi}(a)=0\) for some \(a\in A\) then in particular \(aa^{*}=\overline{\pi}(a)a^{*}=0\), and thus \(a=0\). Next we illustrate how the quotient construction for C*-correspondences extends to the setting of product systems. The following definition is motivated by the notion of positive invariance for C*-correspondences.

**Definition 2.3.5**.: Let \(P\) be a unital subsemigroup of a discrete group \(G\). Let \(X\) be a product system over \(P\) with coefficients in a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal. We say that \(I\) is _positively invariant (for \(X\))_ if it satisfies \[X(I):=\overline{\operatorname{span}}\{\langle X_{p},IX_{p}\rangle\mid p\in P\}\subseteq I.\] Notice that the space \(X(I)\) is an ideal of \(A\) (and in fact, this is true even when \(I\) is replaced by any subset of \(A\)), and that an ideal \(I\) is positively invariant for \(X\) if and only if it is positively invariant for every fibre of \(X\). This observation lies at the heart of the following proposition.

**Proposition 2.3.6**.: _Let \(P\) be a unital subsemigroup of a discrete group \(G\). Let \(X\) be a product system over \(P\) with coefficients in a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal that is positively invariant for \(X\). Set_ \[[X]_{I}:=\{[X_{p}]_{I}\}_{p\in P},\;\text{where}\;\;[X_{p}]_{I}=X_{p}/X_{p}I\;\text{for all}\;p\in P.\] _Then \([X]_{I}\) carries a canonical structure as a product system over \(P\) with coefficients in \([A]_{I}\), given by the multiplication maps_ \[[X_{p}]_{I}\otimes_{[A]_{I}}[X_{q}]_{I}\to[X_{pq}]_{I};[\xi_{p}]_{I}\otimes[\xi_{q}]_{I}\mapsto[\xi_{p}\xi_{q}]_{I}\;\text{for all}\;p,q\in P.\]

**Proof.** We will denote the multiplication maps of \(X\) by \(\{u_{p,q}\}_{p,q\in P}\). The quotient construction for C*-correspondences covered in Subsection 2.2 renders a canonical structure on \([X_{p}]_{I}\) as a C*-correspondence over \([A]_{I}\) for all \(p\in P\backslash\{e\}\). Thus \([X]_{I}\) constitutes a family of C*-correspondences over \([A]_{I}\). It remains to show that \([X]_{I}\), together with the maps of the statement, satisfies axioms (i)-(v) of a product system. First we show well-definedness of the multiplication maps for \([X]_{I}\). This is immediate for \(p=e\) or \(q=e\). So fix \(p,q\in P\setminus\{e\}\) and define the map \[v_{p,q}\colon[X_{p}]_{I}\times[X_{q}]_{I}\to[X_{pq}]_{I};([\xi_{p}]_{I},[\xi_{q}]_{I})\mapsto[\xi_{p}\xi_{q}]_{I}\equiv[u_{p,q}(\xi_{p}\otimes\xi_{q})]_{I}\;\text{for all}\;\xi_{p}\in X_{p},\xi_{q}\in X_{q}.\] To see that \(v_{p,q}\) is well-defined, fix \(\xi_{p}\in X_{p},\xi_{q}\in X_{q}\) and suppose that \[[\xi_{p}]_{I}=[\eta_{p}]_{I}\quad\text{and}\quad[\xi_{q}]_{I}=[\eta_{q}]_{I}\] for some \(\eta_{p}\in X_{p}\) and \(\eta_{q}\in X_{q}\). Then \(\xi_{p}=\eta_{p}+\zeta_{p}a\) and \(\xi_{q}=\eta_{q}+\zeta_{q}b\) for some \(\zeta_{p}\in X_{p}\), \(\zeta_{q}\in X_{q}\) and \(a,b\in I\).
A direct computation yields that \[[\xi_{p}\xi_{q}]_{I}=[(\eta_{p}+\zeta_{p}a)(\eta_{q}+\zeta_{q}b)]_{I}=[\eta_{p}\eta_{q}+\eta_{p}(\zeta_{q}b)+(\zeta_{p}a)\eta_{q}+(\zeta_{p}a)(\zeta_{q}b)]_{I}.\] Observe that \[\eta_{p}(\zeta_{q}b)=(\eta_{p}\zeta_{q})b\in X_{pq}I\quad\text{and}\quad(\zeta_{p}a)(\zeta_{q}b)=((\zeta_{p}a)\zeta_{q})b\in X_{pq}I.\] Moreover, by Lemma 2.2.6 we have that \[(\zeta_{p}a)\eta_{q}=\zeta_{p}(a\eta_{q})\in u_{p,q}(X_{p}\otimes_{A}IX_{q})\subseteq u_{p,q}(X_{p}\otimes_{A}X_{q}I)=X_{pq}I\] using positive invariance of \(I\). Hence \[[\eta_{p}(\zeta_{q}b)]_{I}=[(\zeta_{p}a)\eta_{q}]_{I}=[(\zeta_{p}a)(\zeta_{q}b)]_{I}=0,\] and thus \([\xi_{p}\xi_{q}]_{I}=[\eta_{p}\eta_{q}]_{I}\), as required. Next observe that \(v_{p,q}\) is bilinear and \([A]_{I}\)-balanced, and hence it induces a unique linear map \[v_{p,q}\colon[X_{p}]_{I}\odot_{[A]_{I}}[X_{q}]_{I}\to[X_{pq}]_{I};[\xi_{p}]_{I}\otimes[\xi_{q}]_{I}\mapsto[\xi_{p}\xi_{q}]_{I}\;\text{for all}\;\xi_{p}\in X_{p},\xi_{q}\in X_{q}.\] For \(\xi_{p},\eta_{p}\in X_{p}\) and \(\xi_{q},\eta_{q}\in X_{q}\), we have that \[\langle v_{p,q}([\xi_{p}]_{I}\otimes[\xi_{q}]_{I}),v_{p,q}([\eta_{p}]_{I}\otimes[\eta_{q}]_{I})\rangle =\langle[\xi_{p}\xi_{q}]_{I},[\eta_{p}\eta_{q}]_{I}\rangle=[\langle\xi_{p}\otimes\xi_{q},\eta_{p}\otimes\eta_{q}\rangle]_{I}\] \[=[\langle\xi_{q},\phi_{q}(\langle\xi_{p},\eta_{p}\rangle)\eta_{q}\rangle]_{I}=\langle[\xi_{p}]_{I}\otimes[\xi_{q}]_{I},[\eta_{p}]_{I}\otimes[\eta_{q}]_{I}\rangle\,,\] and thus \(\langle v_{p,q}(\zeta),v_{p,q}(\zeta^{\prime})\rangle=\langle\zeta,\zeta^{\prime}\rangle\) for all \(\zeta,\zeta^{\prime}\in[X_{p}]_{I}\odot_{[A]_{I}}[X_{q}]_{I}\). In particular, \(v_{p,q}\) is bounded with respect to the norm on \([X_{p}]_{I}\odot_{[A]_{I}}[X_{q}]_{I}\) induced by the \([A]_{I}\)-valued inner product. Hence it extends to a bounded linear map given by \[v_{p,q}\colon[X_{p}]_{I}\otimes_{[A]_{I}}[X_{q}]_{I}\to[X_{pq}]_{I};[\xi_{p}]_{I}\otimes[\xi_{q}]_{I}\mapsto[\xi_{p}\xi_{q}]_{I}\;\text{for all}\;\xi_{p}\in X_{p},\xi_{q}\in X_{q}.\] From the preceding calculations it also follows that each \(v_{p,q}\) is isometric, and similar reasoning shows that each \(v_{p,q}\) is an \([A]_{I}\)-bimodule map. Surjectivity of \(v_{p,q}\) onto the corresponding space follows from surjectivity of \(u_{p,q}\). Associativity of the multiplication maps \(\{v_{p,q}\}_{p,q\in P}\) follows from associativity of \(\{u_{p,q}\}_{p,q\in P}\), and the proof is complete. We will use the notation \([\cdot]_{I}\colon X\to[X]_{I}\) as shorthand for the family of quotient maps \([\cdot]_{I}\colon X_{p}\to[X_{p}]_{I}\) for all \(p\in P\).

### Product systems over right LCM semigroups

A unital semigroup \(P\) is said to be a _right LCM semigroup_ if it is left cancellative and satisfies Clifford's condition, that is: for every \(p,q\in P\) with \(pP\cap qP\neq\emptyset\), there exists \(w\in P\) such that \(pP\cap qP=wP\). The element \(w\) is referred to as a _right least common multiple_ or _right LCM_ of \(p\) and \(q\). Right LCM semigroups include as a special case the quasi-lattice ordered semigroups considered in [42]. Indeed, quasi-lattice ordered semigroups are right LCM semigroups with the property that the only invertible element in \(P\) is the unit. Further examples include the Artin monoids [5], the Baumslag-Solitar monoids \(B(m,n)^{+}\) [26, 40, 52], and the semigroup \(R\rtimes R^{\times}\) of affine transformations of an integral domain \(R\) that satisfies the GCD condition [39, 43].
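Two concrete instances are worth keeping in mind (brief illustrations of our own). In the free monoid \(\mathbb{F}_{2}^{+}\) on two generators \(a,b\), we have \(p\mathbb{F}_{2}^{+}\cap q\mathbb{F}_{2}^{+}\neq\emptyset\) precisely when one of the words \(p,q\) is a prefix of the other, in which case the longer word is a right LCM; for instance
\[a\mathbb{F}_{2}^{+}\cap ab\,\mathbb{F}_{2}^{+}=ab\,\mathbb{F}_{2}^{+},\quad\text{whereas}\quad a\mathbb{F}_{2}^{+}\cap b\mathbb{F}_{2}^{+}=\emptyset.\]
In the additive semigroup \(\mathbb{Z}_{+}^{d}\) every pair admits a right LCM, namely
\[(n+\mathbb{Z}_{+}^{d})\cap(m+\mathbb{Z}_{+}^{d})=(n\vee m)+\mathbb{Z}_{+}^{d}\text{ for all }n,m\in\mathbb{Z}_{+}^{d},\]
where \(n\vee m\) denotes the coordinatewise maximum.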
Product systems over right LCM semigroups were introduced and studied by Kwasniewski and Larsen [35, 36], extending the construction of Fowler [21]. They have been investigated further in [17, 31]. The interest lies in that they retain several of the structural properties from the single C*-correspondence case. Let \(A\) be a C*-algebra and let \(P\) be a unital right LCM subsemigroup of a discrete group \(G\). Let \(X\) be a product system over \(P\) with coefficients in \(A\). We say that \(X\) is _compactly aligned_ if, for all \(p,q\in P\setminus\{e\}\) with the property that \(pP\cap qP=wP\) for some \(w\in P\), we have that \[\iota_{p}^{w}(\mathcal{K}(X_{p}))\iota_{q}^{w}(\mathcal{K}(X_{q}))\subseteq \mathcal{K}(X_{w}).\] This condition is independent of the choice of right LCM \(w\in P\) of \(p,q\in P\), e.g., [17, p. 11]. Notice that we disregard the case where \(p\) or \(q\) equals \(e\), as the compact alignment condition holds automatically in this case. When \((G,P)\) is totally ordered, \(X\) is automatically compactly aligned, e.g., when \(P=\mathbb{Z}_{+}\). Moreover, it is a standard fact that if \(\phi_{p}(A)\subseteq\mathcal{K}(X_{p})\) for all \(p\in P\), then \(X\) is automatically compactly aligned. We provide a short proof. **Proposition 2.4.1**.: _Let \(P\) be a unital right LCM subsemigroup of a discrete group \(G\). Let \(X\) be a product system over \(P\) with coefficients in a C*-algebra \(A\). If \(\phi_{p}(A)\subseteq\mathcal{K}(X_{p})\) for all \(p\in P\), then \(\iota_{p}^{pq}(\mathcal{K}(X_{p}))\subseteq\mathcal{K}(X_{pq})\) for all \(p,q\in P\), and thus \(X\) is compactly aligned._ Proof.: First note that \(\iota_{e}^{q}(\phi_{e}(a))=\phi_{q}(a)\in\mathcal{K}(X_{q})\) for all \(q\in P\) and \(a\in A\). Now fix \(p\in P\setminus\{e\},q\in P\) and \(k_{p}\in K(X_{p})\). Since \(\phi_{q}(A)\subseteq\mathcal{K}(X_{q})\) and \(k_{p}\in\mathcal{K}(X_{p})\), an application of [38, Proposition 4.7] gives that \(k_{p}\otimes\operatorname{id}_{X_{q}}\in\mathcal{K}(X_{p}\otimes_{A}X_{q})\). Hence an application of [38, p. 9, (1.6)] yields that \(\iota_{p}^{pq}(k_{p})=u_{p,q}(k_{p}\otimes\operatorname{id}_{X_{q}})u_{p,q}^{ *}\in\mathcal{K}(X_{pq})\), as required. Compact alignment is preserved by unitary equivalence. **Proposition 2.4.2**.: _Let \(P\) be a unital right LCM subsemigroup of a discrete group \(G\) and let \(A\) and \(B\) be C*-algebras. Let \(X\) and \(Y\) be unitarily equivalent product systems over \(P\) with coefficients in \(A\) and \(B\), respectively. Then \(X\) is compactly aligned if and only if \(Y\) is compactly aligned._ Proof.: This is immediate by applying (2.2) and (2.4). We say that a representation \((\pi,t)\) of a compactly aligned product system \(X\) is _Nica-covariant_ if for all \(p,q\in P\setminus\{e\},k_{p}\in\mathcal{K}(X_{p})\) and \(k_{q}\in\mathcal{K}(X_{q})\) we have that \[\psi_{p}(k_{p})\psi_{q}(k_{q})=\begin{cases}\psi_{w}(\iota_{p}^{w}(k_{p})\iota _{q}^{w}(k_{q}))&\text{if $pP\cap qP=wP$ for some $w\in P$},\\ 0&\text{otherwise}.\end{cases} \tag{2.5}\] This condition is independent of the choice of right LCM \(w\in P\) of \(p,q\in P\), e.g., [17, Proposition 2.4]. We disregard the case where \(p\) or \(q\) equals \(e\), as (2.5) holds automatically in this case. It is straightforward to see that the Fock representation is Nica-covariant. The Nica-covariance condition induces a Wick ordering on \(\mathrm{C}^{*}(\pi,t)\), e.g, [17, 21, 35, 36]. Let \(p,q\in P\) and suppose firstly that \(pP\cap qP=wP\) for some \(w\in P\). 
Then \[t_{p}(X_{p})^{*}t_{q}(X_{q})\subseteq[t_{p^{\prime}}(X_{p^{\prime}})t_{q^{\prime}}(X_{q^{\prime}})^{*}],\text{ where $p^{\prime}=p^{-1}w,q^{\prime}=q^{-1}w$}.\] On the other hand, if \(pP\cap qP=\emptyset\), then \(t_{p}(X_{p})^{*}t_{q}(X_{q})=\{0\}\). From this it follows that \[\mathrm{C}^{*}(\pi,t)=\overline{\mathrm{span}}\{t_{p}(X_{p})t_{q}(X_{q})^{*}\ |\ p,q\in P\}.\] We write \(\mathcal{N}\mathcal{T}_{X}\) for the universal C*-algebra with respect to the Nica-covariant representations of \(X\), and refer to it as the _Toeplitz-Nica-Pimsner algebra (of \(X\))_. Since the Nica-covariance relations are graded, the existence of \(\mathcal{N}\mathcal{T}_{X}\) and its universal property follow from Proposition 2.3.3. We write \((\overline{\pi}_{X},\overline{t}_{X})\) for the _universal Nica-covariant representation (of \(X\))_. If \((\pi,t)\) is a Nica-covariant representation of \(X\), we will write (in a slight abuse of notation) \(\pi\times t\) for the canonical \(*\)-epimorphism \(\mathcal{N}\mathcal{T}_{X}\to\mathrm{C}^{*}(\pi,t)\). Injectivity of the Fock representation \((\overline{\pi},\overline{t})\) implies that \((\overline{\pi}_{X},\overline{t}_{X})\) is injective. In fact, when \(P\) is contained in an amenable discrete group, the \(*\)-representation \(\overline{\pi}\times\overline{t}\) is faithful, e.g., [31]. If \(X\) and \(Y\) are unitarily equivalent product systems, then \(\mathcal{T}_{X}\cong\mathcal{T}_{Y}\) by Proposition 2.3.4. When \(X\) or \(Y\) is compactly aligned (and hence both are by Proposition 2.4.2), we moreover have that \(\mathcal{N}\mathcal{T}_{X}\cong\mathcal{N}\mathcal{T}_{Y}\) canonically.

**Proposition 2.4.3**.: _Let \(P\) be a unital right LCM subsemigroup of a discrete group \(G\) and let \(A\) and \(B\) be C*-algebras. Let \(X\) and \(Y\) be unitarily equivalent compactly aligned product systems over \(P\) with coefficients in \(A\) and \(B\), respectively. Then the bijection of Proposition 2.3.4 preserves the Nica-covariant representations._

**Proof.** This follows by Remark 2.3.2.

**Proposition 2.4.4**.: _Let \(P\) be a unital right LCM subsemigroup of a discrete group \(G\). Let \(X\) be a compactly aligned product system over \(P\) with coefficients in a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal that is positively invariant for \(X\). Let \(\{\iota_{p}^{pq}\}_{p,q\in P}\) denote the connecting \(*\)-homomorphisms of \(X\) and \(\{j_{p}^{pq}\}_{p,q\in P}\) denote the connecting \(*\)-homomorphisms of \([X]_{I}\). Then_ \[j_{e}^{q}([\phi_{e}(a)]_{I})=[\iota_{e}^{q}(\phi_{e}(a))]_{I},\ \text{for all}\ a\in A,\quad\text{and}\quad j_{p}^{pq}([S_{p}]_{I})=[\iota_{p}^{pq}(S_{p})]_{I},\ \text{for all}\ S_{p}\in\mathcal{L}(X_{p}),\] _for all \(p\in P\setminus\{e\}\) and \(q\in P\), and thus \([X]_{I}\) is compactly aligned._

**Proof.** Fix \(q\in P\) and \(a\in A\), and observe that \[j_{e}^{q}([\phi_{e}(a)]_{I})=[\phi_{q}]_{I}([a]_{I})=[\phi_{q}(a)]_{I}=[\iota_{e}^{q}(\phi_{e}(a))]_{I}.\] Next, fix \(p\in P\setminus\{e\},q\in P\) and \(S_{p}\in\mathcal{L}(X_{p})\). For \(\xi_{p}\in X_{p}\) and \(\xi_{q}\in X_{q}\), we have that \[j_{p}^{pq}([S_{p}]_{I})([\xi_{p}]_{I}[\xi_{q}]_{I})=[(S_{p}\xi_{p})\xi_{q}]_{I}=[\iota_{p}^{pq}(S_{p})(\xi_{p}\xi_{q})]_{I}=[\iota_{p}^{pq}(S_{p})]_{I}([\xi_{p}]_{I}[\xi_{q}]_{I}),\] from which it follows that \(j_{p}^{pq}([S_{p}]_{I})=[\iota_{p}^{pq}(S_{p})]_{I}\). This proves the first claim.
Finally, take \(p,q\in P\setminus\{e\}\) such that \(pP\cap qP=wP\) for some \(w\in P\), and fix \(\dot{k}_{p}\in\mathcal{K}([X_{p}]_{I})\) and \(\dot{k}_{q}\in\mathcal{K}([X_{q}]_{I})\). By Lemma 2.2.2, we have that \[\dot{k}_{p}=[k_{p}]_{I}\ \text{and}\ \dot{k}_{q}=[k_{q}]_{I}\ \text{for some}\ k_{p}\in\mathcal{K}(X_{p})\ \text{and}\ k_{q}\in\mathcal{K}(X_{q}),\] and therefore we obtain that \[j_{p}^{w}(\dot{k}_{p})j_{q}^{w}(\dot{k}_{q})=j_{p}^{w}([k_{p}]_{I})j_{q}^{w}([ k_{q}]_{I})=[\iota_{p}^{w}(k_{p})]_{I}[\iota_{q}^{w}(k_{q})]_{I}=[\iota_{p}^{w}(k_{ p})\iota_{q}^{w}(k_{q})]_{I}.\] Since \(X\) is compactly aligned, we have that \(\iota_{p}^{w}(k_{p})\iota_{q}^{w}(k_{q})\in\mathcal{K}(X_{w})\). Another application of Lemma 2.2.2 then gives that \(j_{p}^{w}(\dot{k}_{p})j_{q}^{w}(\dot{k}_{q})\in\mathcal{K}([X_{w}]_{I})\). Hence \([X]_{I}\) is compactly aligned and the proof is complete. ### Product systems over \(\mathbb{Z}_{+}^{d}\) For \(d\in\mathbb{N}\), we write \([d]:=\{1,2,\ldots,d\}\). We denote the usual free generators of \(\mathbb{Z}_{+}^{d}\) by \(\underline{1},\ldots,\underline{d}\), and we set \(\underline{0}=(0,\ldots,0)\). For \(\underline{n}=(n_{1},\ldots,n_{d})\in\mathbb{Z}_{+}^{d}\), we define the _length_ of \(\underline{n}\) by \[|\underline{n}|:=\sum\{n_{i}\ |\ i\in[d]\}.\] For \(\emptyset\neq F\subseteq[d]\), we write \[\underline{1}_{F}:=\sum\{\underline{i}\ |\ i\in F\}\ \text{and}\ \underline{1}_{0}:= \underline{0}.\] We consider the lattice structure on \(\mathbb{Z}_{+}^{d}\) given by \[\underline{n}\vee\underline{m}:=(\max\{n_{i},m_{i}\})_{i=1}^{d}\quad\text{and} \quad\underline{n}\wedge\underline{m}:=(\min\{n_{i},m_{i}\})_{i=1}^{d}.\] The semigroup \(\mathbb{Z}_{+}^{d}\) imposes a partial order on \(\mathbb{Z}^{d}\) that is compatible with the lattice structure. Specifically, we say that \(\underline{n}\leq\underline{m}\) (resp. \(\underline{n}<\underline{m}\)) if and only if \(n_{i}\leq m_{i}\) for all \(i\in[d]\) (resp. \(\underline{n}\leq\underline{m}\) and \(\underline{n}\neq\underline{m}\)). We denote the _support_ of \(\underline{n}\) by \[\operatorname{supp}\underline{n}:=\{i\in[d]\mid n_{i}\neq 0\},\] and we write \[\underline{n}\perp\underline{m}\iff\operatorname{supp}\underline{n}\bigcap \operatorname{supp}\underline{m}=\emptyset.\] We write \(\underline{n}\perp F\) if \(\operatorname{supp}\underline{n}\bigcap F=\emptyset\). Notice that the set \(\{\underline{n}\in\mathbb{Z}_{+}^{d}\mid\underline{n}\perp F\}\) is directed; indeed, if \(\underline{n},\underline{m}\perp F\) then \(\underline{n},\underline{m}\leq\underline{n}\vee\underline{m}\perp F\). Therefore we can make sense of limits with respect to \(\underline{n}\perp F\). Many desirable properties of the fibres of a product system \(X=\{X_{\underline{n}}\}_{\underline{n}\in\mathbb{Z}_{+}^{d}}\) are inherited from the corresponding properties of the fibres \(X_{\underline{i}}\), where \(i\in[d]\). **Proposition 2.5.1**.: _Let \(X\) be a product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\). 
Then the following hold:_

* \(X_{\underline{i}}\) _is injective for all_ \(i\in[d]\) _if and only if_ \(X_{\underline{n}}\) _is injective for all_ \(\underline{n}\in\mathbb{Z}_{+}^{d}\)_._
* \(\phi_{\underline{i}}(A)\subseteq\mathcal{K}(X_{\underline{i}})\) _for all_ \(i\in[d]\) _if and only if_ \(\phi_{\underline{n}}(A)\subseteq\mathcal{K}(X_{\underline{n}})\) _for all_ \(\underline{n}\in\mathbb{Z}_{+}^{d}\)_._

_In particular, \(X_{\underline{i}}\) is regular for all \(i\in[d]\) if and only if \(X\) is regular._

**Proof.** Both items follow by induction on \(|\underline{n}|\). Specifically, item (i) follows by [38, p. 42], and item (ii) follows by [38, Proposition 4.7 and p. 9, (1.6)].

The following proposition will be used frequently in the sequel.

**Proposition 2.5.2**.: _Let \(X\) be a product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\). Fix \(F\subseteq[d]\) and let \(I\subseteq A\) be an ideal such that_ \[I\subseteq\bigcap\{\phi_{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}}))\mid i\in F^{c}\}\quad\text{and}\quad\big{\langle}X_{\underline{n}},IX_{\underline{n}}\big{\rangle}\subseteq I\text{ for all }\underline{n}\perp F.\] _Then_ \[\phi_{\underline{n}}(I)\subseteq\mathcal{K}(X_{\underline{n}}I)\text{ for all }\underline{n}\perp F.\]

**Proof.** We will proceed by induction on \(|\underline{n}|\). The cases where \(|\underline{n}|=0\) and \(F=[d]\) follow trivially. The statement holds when \(|\underline{n}|=1\) by (2.1). Now assume that the claim holds for all \(\underline{n}\perp F\) with \(|\underline{n}|=N\), where \(N\geq 1\). Fix \(\underline{m}\perp F\) with \(|\underline{m}|=N+1\), so that \(\underline{m}=\underline{n}+\underline{i}\) for some \(\underline{n}\perp F\) and \(i\in F^{c}\) such that \(|\underline{n}|=N\). We must show that \(\phi_{\underline{m}}(a)\in\mathcal{K}(X_{\underline{m}}I)\) for all \(a\in I\). By definition we have that \[\phi_{\underline{m}}(a)=\phi_{\underline{n}+\underline{i}}(a)=\iota_{\underline{n}}^{\underline{n}+\underline{i}}(\phi_{\underline{n}}(a))=u_{\underline{n},\underline{i}}(\phi_{\underline{n}}(a)\otimes\operatorname{id}_{X_{\underline{i}}})u_{\underline{n},\underline{i}}^{*}.\] Consider the element \(k_{\underline{n}}\otimes\operatorname{id}_{X_{\underline{i}}}\) for some \(k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}}I)\). From the length \(1\) case, we have that \(\phi_{\underline{i}}(I)\subseteq\mathcal{K}(X_{\underline{i}}I)\). An application of Corollary 2.2.5 then yields that \(k_{\underline{n}}\otimes\operatorname{id}_{X_{\underline{i}}}\in\mathcal{K}((X_{\underline{n}}\otimes_{A}X_{\underline{i}})I)\), and thus \(u_{\underline{n},\underline{i}}(k_{\underline{n}}\otimes\operatorname{id}_{X_{\underline{i}}})u_{\underline{n},\underline{i}}^{*}\in\mathcal{K}(X_{\underline{m}}I)\). Returning to the proof, by the inductive hypothesis we have that \(\phi_{\underline{n}}(a)\in\mathcal{K}(X_{\underline{n}}I)\) and hence, applying the preceding comment for \(k_{\underline{n}}=\phi_{\underline{n}}(a)\), we deduce that \(\phi_{\underline{m}}(a)\in\mathcal{K}(X_{\underline{m}}I)\). By induction, the proof is complete.

Let \(X\) be a compactly aligned product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\) and let \((\pi,t)\) be a Nica-covariant representation of \(X\).
We say that \((\pi,t)\)_admits a gauge action_\(\gamma\colon\mathbb{T}^{d}\to\operatorname{Aut}(\mathrm{C}^{*}(\pi,t))\) if \(\gamma\) is a group homomorphism, \(\{\gamma_{\underline{z}}\}_{\underline{z}\in\mathbb{T}^{d}}\) is point-norm continuous and \[\gamma_{\underline{z}}(\pi(a))=\pi(a)\text{ for all }a\in A\text{ and }\gamma_{ \underline{z}}(t_{\underline{n}}(\xi_{\underline{n}}))=\underline{z}^{ \underline{n}}t_{\underline{n}}(\xi_{\underline{n}})\text{ for all }\xi_{\underline{n}}\in X_{ \underline{n}},\underline{n}\in\mathbb{Z}_{+}^{d}\setminus\{\underline{0}\},\] for each \(\underline{z}\in\mathbb{T}^{d}\). If \(\underline{z}=(z_{1},\ldots,z_{d})\in\mathbb{T}^{d}\) and \(\underline{n}=(n_{1},\ldots,n_{d})\in\mathbb{Z}_{+}^{d}\), then we have that \(\underline{z}^{\underline{n}}:=z_{1}^{n_{1}}\ldots z_{d}^{n_{d}}\). When such a gauge action \(\gamma\) exists, it is necessarily unique. The Fock representation and the universal Nica-covariant representation admit gauge actions. We say that an ideal \(\mathfrak{J}\subseteq\mathrm{C}^{*}(\pi,t)\) is _gauge-invariant_ or _equivariant_ if \(\gamma_{\underline{z}}(\mathfrak{J})\subseteq\mathfrak{J}\) for all \(\underline{z}\in\mathbb{T}^{d}\) (and so \(\gamma_{\underline{z}}(\mathfrak{J})=\mathfrak{J}\) for all \(\underline{z}\in\mathbb{T}^{d}\)). Given \(\underline{m},\underline{m}^{\prime}\in\mathbb{Z}_{+}^{d}\) with \(\underline{m}\leq\underline{m}^{\prime}\), we write \[B^{(\pi,t)}_{[\underline{m},\underline{m}^{\prime}]}:=\mathrm{span}\{\psi_{ \underline{n}}(\mathcal{K}(X_{\underline{n}}))\mid\underline{m}\leq\underline {n}\leq\underline{m}^{\prime}\}\quad\text{and}\quad B^{(\pi,t)}_{(\underline{m },\underline{m}^{\prime}]}:=\mathrm{span}\{\psi_{\underline{n}}(\mathcal{K}(X _{\underline{n}}))\mid\underline{m}<\underline{n}\leq\underline{m}^{\prime}\}.\] These spaces are in fact C*-subalgebras of \(\mathrm{C}^{*}(\pi,t)\), e.g, [10]. By convention we take the linear span of \(\emptyset\) to be \(\{0\}\), so that \(B^{(\pi,t)}_{(\underline{m},\underline{m})}=\{0\}\) for all \(\underline{m}\in\mathbb{Z}_{+}^{d}\). We also define \[B^{(\pi,t)}_{[\underline{m},\infty]}:=\overline{\mathrm{span}}\{\psi_{ \underline{n}}(\mathcal{K}(X_{\underline{n}}))\mid\underline{m}\leq\underline{ n}\}\quad\text{and}\quad B^{(\pi,t)}_{(\underline{m},\infty]}:=\overline{\mathrm{ span}}\{\psi_{\underline{n}}(\mathcal{K}(X_{\underline{n}}))\mid\underline{m}< \underline{n}\}.\] We refer to these C*-algebras as the _cores_ of \((\pi,t)\). When \((\pi,t)\) admits a gauge action \(\gamma\), we have that \[B^{(\pi,t)}_{[\underline{0},\infty]}=\mathrm{C}^{*}(\pi,t)^{\gamma}:=\{f\in \mathrm{C}^{*}(\pi,t)\mid\gamma_{\underline{z}}(f)=f\text{ for all }\underline{z}\in \mathbb{T}^{d}\},\] where \(\mathrm{C}^{*}(\pi,t)^{\gamma}\) is the _fixed point algebra_ of \(\mathrm{C}^{*}(\pi,t)\) (with respect to \(\gamma\)). In the current work we will consider a subclass of compactly aligned product systems over \(\mathbb{Z}_{+}^{d}\) introduced by Dor-On and the second-named author [16]. Let \(X\) be a product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\). We say that \(X\) is _strong compactly aligned_ if it is compactly aligned and satisfies: \[\iota_{\underline{n}}^{\underline{n}+\underline{i}}(\mathcal{K}(X_{\underline {n}}))\subseteq\mathcal{K}(X_{\underline{n}+\underline{i}})\text{ whenever }\underline{n}\perp\underline{i},\text{ where }i\in[d], \underline{n}\in\mathbb{Z}_{+}^{d}\setminus\{\underline{0}\}. 
\tag{2.6}\] We disallow \(\underline{n}=\underline{0}\), as then (2.6) would imply that the left action of each fibre of \(X\) is by compact operators. Note that (2.6) does not imply compact alignment (rather, a strong compactly aligned product system is _a priori_ assumed to be compactly aligned). Any C*-correspondence, when viewed as a product system over \(\mathbb{Z}_{+}\), is vacuously strong compactly aligned. Moreover, strong compactly aligned product systems include those product systems over \(\mathbb{Z}_{+}^{d}\) whose left actions are by compacts.

**Corollary 2.5.3**.: _Let \(X\) be a product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\). If \(\phi_{\underline{n}}(A)\subseteq\mathcal{K}(X_{\underline{n}})\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\), then \(X\) is strong compactly aligned._

**Proof.** This is immediate by Proposition 2.4.1.

Strong compact alignment is preserved under unitary equivalence.

**Proposition 2.5.4**.: _Let \(X\) and \(Y\) be unitarily equivalent product systems over \(\mathbb{Z}_{+}^{d}\) with coefficients in C*-algebras \(A\) and \(B\), respectively. Then \(X\) is strong compactly aligned if and only if \(Y\) is strong compactly aligned._

**Proof.** Suppose that \(X\) and \(Y\) are unitarily equivalent by a collection \(\{W_{\underline{n}}\colon X_{\underline{n}}\to Y_{\underline{n}}\}_{\underline{n}\in\mathbb{Z}_{+}^{d}}\). We will use \(\{\iota_{\underline{n}}^{\underline{n}+\underline{m}}\}_{\underline{n},\underline{m}\in\mathbb{Z}_{+}^{d}}\) to denote the connecting \(*\)-homomorphisms of \(X\) and \(\{j_{\underline{n}}^{\underline{n}+\underline{m}}\}_{\underline{n},\underline{m}\in\mathbb{Z}_{+}^{d}}\) to denote the connecting \(*\)-homomorphisms of \(Y\). Suppose that \(X\) is strong compactly aligned. Then \(Y\) is compactly aligned by Proposition 2.4.2, so it remains to check strong compact alignment. Accordingly, fix \(\underline{n}\in\mathbb{Z}_{+}^{d}\setminus\{\underline{0}\}\) and \(i\in[d]\) such that \(\underline{n}\perp\underline{i}\). We must show that \[j_{\underline{n}}^{\underline{n}+\underline{i}}(\mathcal{K}(Y_{\underline{n}}))\subseteq\mathcal{K}(Y_{\underline{n}+\underline{i}}).\] It suffices to show this for rank-one operators. For all \(y_{\underline{n}},y_{\underline{n}}^{\prime}\in Y_{\underline{n}}\), we have that \[j_{\underline{n}}^{\underline{n}+\underline{i}}(\Theta^{Y_{\underline{n}}}_{y_{\underline{n}},y_{\underline{n}}^{\prime}})=W_{\underline{n}+\underline{i}}\,\iota_{\underline{n}}^{\underline{n}+\underline{i}}(\Theta^{X_{\underline{n}}}_{W_{\underline{n}}^{-1}(y_{\underline{n}}),W_{\underline{n}}^{-1}(y_{\underline{n}}^{\prime})})W_{\underline{n}+\underline{i}}^{-1}\in W_{\underline{n}+\underline{i}}\mathcal{K}(X_{\underline{n}+\underline{i}})W_{\underline{n}+\underline{i}}^{-1}=\mathcal{K}(Y_{\underline{n}+\underline{i}}),\] where we have used (2.4) in the first equality, strong compact alignment of \(X\) in the second assertion, and (2.2) in the final equality. This proves that \(Y\) is strong compactly aligned. The converse is proven in a dual way by using the collection \(\{W_{\underline{n}}^{-1}\}_{\underline{n}\in\mathbb{Z}_{+}^{d}}\).

The quotient construction of Proposition 2.3.6 also preserves strong compact alignment.

**Proposition 2.5.5**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal that is positively invariant for \(X\)._
Then \([X]_{I}\) is strong compactly aligned._

**Proof.** We use \(\{\iota_{\underline{n}}^{\underline{n}+\underline{m}}\}_{\underline{n},\underline{m}\in\mathbb{Z}_{+}^{d}}\) (resp. \(\{j_{\underline{n}}^{\underline{n}+\underline{m}}\}_{\underline{n},\underline{m}\in\mathbb{Z}_{+}^{d}}\)) to denote the connecting \(*\)-homomorphisms of \(X\) (resp. \([X]_{I}\)). Proposition 2.4.4 guarantees that \([X]_{I}\) is compactly aligned, so it remains to check that \([X]_{I}\) is strong compactly aligned. To this end, fix \(\underline{n}\in\mathbb{Z}_{+}^{d}\setminus\{\underline{0}\},i\in[d]\) and \(\dot{k}_{\underline{n}}\in\mathcal{K}([X_{\underline{n}}]_{I})\), and suppose that \(\underline{n}\perp\underline{i}\). By Lemma 2.2.2, we have that \(\dot{k}_{\underline{n}}=[k_{\underline{n}}]_{I}\) for some \(k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}})\). Thus, by Proposition 2.4.4, we have that \[j_{\underline{n}}^{\underline{n}+\underline{i}}(\dot{k}_{\underline{n}})=j_{\underline{n}}^{\underline{n}+\underline{i}}([k_{\underline{n}}]_{I})=[\iota_{\underline{n}}^{\underline{n}+\underline{i}}(k_{\underline{n}})]_{I}\in\mathcal{K}([X_{\underline{n}+\underline{i}}]_{I}),\] where we have used the strong compact alignment of \(X\) together with Lemma 2.2.2 in the final assertion. Hence \(j_{\underline{n}}^{\underline{n}+\underline{i}}(\mathcal{K}([X_{\underline{n}}]_{I}))\subseteq\mathcal{K}([X_{\underline{n}+\underline{i}}]_{I})\), completing the proof.

We will require some notation and results from [16]. Henceforth we assume that \(X\) is strong compactly aligned. For each \(\emptyset\neq F\subseteq[d]\), we define \[\mathcal{J}_{F}:=(\bigcap_{i\in F}\ker\phi_{\underline{i}})^{\perp}\cap(\bigcap_{i\in[d]}\phi_{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}})))\quad\text{and}\quad\mathcal{J}_{\emptyset}:=\{0\},\] which are ideals of \(A\). Notice that strong compact alignment of \(X\) implies that \[\bigcap_{i\in F}\phi_{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}}))=\bigcap\{\phi_{\underline{n}}^{-1}(\mathcal{K}(X_{\underline{n}}))\mid\underline{0}\leq\underline{n}\leq\underline{1}_{F}\}\text{ for all }\emptyset\neq F\subseteq[d].\] In turn, for each \(F\subseteq[d]\), we define \[\mathcal{I}_{F}:=\{a\in A\mid\big{\langle}X_{\underline{n}},aX_{\underline{n}}\big{\rangle}\subseteq\mathcal{J}_{F}\text{ for all }\underline{n}\perp F\}=\bigcap\{X_{\underline{n}}^{-1}(\mathcal{J}_{F})\mid\underline{n}\perp F\}.\] In particular, we have that \(\mathcal{I}_{\emptyset}=\{0\}\) and \(\mathcal{I}_{F}\subseteq\mathcal{J}_{F}\) for all \(F\subseteq[d]\). The ideal \(\mathcal{I}_{F}\) is the largest ideal in \(\mathcal{J}_{F}\) that is \(F^{\perp}\)-invariant, i.e., \(\big{\langle}X_{\underline{n}},\mathcal{I}_{F}X_{\underline{n}}\big{\rangle}\subseteq\mathcal{I}_{F}\) for all \(\underline{n}\perp F\) [16, Proposition 2.7]. To avoid ambiguity, given two strong compactly aligned product systems \(X\) and \(Y\), we will denote the ideals \(\mathcal{J}_{F}\) for \(X\) and \(Y\) by \(\mathcal{J}_{F}(X)\) and \(\mathcal{J}_{F}(Y)\), respectively. We will use the same convention for the ideals \(\mathcal{I}_{F}\). The ideals \(\mathcal{J}_{F}\) and \(\mathcal{I}_{F}\) are preserved under unitary equivalence.

**Proposition 2.5.6**.: _Let \(X\) and \(Y\) be strong compactly aligned product systems over \(\mathbb{Z}_{+}^{d}\) with coefficients in C*-algebras \(A\) and \(B\), respectively.
If \(X\) and \(Y\) are unitarily equivalent by a collection \(\{W_{\underline{n}}\colon X_{\underline{n}}\to Y_{\underline{n}}\}_{\underline{n}\in\mathbb{Z}_{+}^{d}}\), then_ \[W_{\underline{0}}(\mathcal{J}_{F}(X))=\mathcal{J}_{F}(Y)\quad\text{and}\quad W_{\underline{0}}(\mathcal{I}_{F}(X))=\mathcal{I}_{F}(Y)\text{ for all }F\subseteq[d].\] **Proof.** This is a direct consequence of the properties of \(\{W_{\underline{n}}\}_{\underline{n}\in\mathbb{Z}_{+}^{d}}\) from Remark 2.3.2. The ideals \(\mathcal{I}_{F}\) emerge naturally when solving polynomial equations, originating in [14] in the case of C*-dynamical systems. In order to make this precise, we require the following notation. Following the conventions of [16, Section 3], we introduce an approximate unit \((k_{\underline{i},\lambda})_{\lambda\in\Lambda}\) of \(\mathcal{K}(X_{\underline{i}})\) for each generator \(\underline{i}\) of \(\mathbb{Z}_{+}^{d}\). Without loss of generality, we may assume that these approximate units are indexed by the same directed set \(\Lambda\), by passing to their product. **Proposition 2.5.7**.: _[_16_, Proposition 2.4]_ _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \((k_{\underline{i},\lambda})_{\lambda\in\Lambda}\) be an approximate unit of \(\mathcal{K}(X_{\underline{i}})\) for all \(i\in[d]\). Fix \(\emptyset\neq F\subseteq[d]\) and \(\underline{0}\neq\underline{n}\in\mathbb{Z}_{+}^{d}\), and set \(\underline{m}=\underline{n}\vee\underline{1}_{F}\). Then the net \((e_{F,\lambda})_{\lambda\in\Lambda}\) defined by_ \[e_{F,\lambda}:=\prod\{\iota_{\underline{i}}^{\underline{1}_{F}}(k_{\underline{i},\lambda})\mid i\in F\}\text{ for all }\lambda\in\Lambda,\] _is contained in \(\mathcal{K}(X_{\underline{1}_{F}})\), and we have that_ \[\|\cdot\|\text{-}\lim_{\lambda}\iota_{\underline{1}_{F}}^{\underline{m}}(e_{F,\lambda})\iota_{\underline{n}}^{\underline{m}}(k_{\underline{n}})=\iota_{\underline{n}}^{\underline{m}}(k_{\underline{n}})\text{ for all }k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}}). \tag{2.7}\] _Moreover, it follows that \(\iota_{\underline{n}}^{\underline{m}}(k_{\underline{n}})\in\mathcal{K}(X_{\underline{m}})\) for all \(k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}})\)._ Note that (2.7) holds independently of the order of the product defining \(e_{F,\lambda}\). Let \((\pi,t)\) be a Nica-covariant representation of \(X\). Fixing \(i\in[d]\) and an approximate unit \((k_{\underline{i},\lambda})_{\lambda\in\Lambda}\) of \(\mathcal{K}(X_{\underline{i}})\), we define \[p_{\underline{i},\lambda}:=\psi_{\underline{i}}(k_{\underline{i},\lambda})\text{ for all }\lambda\in\Lambda,\text{ and }p_{\underline{i}}:=\text{w*-}\lim_{\lambda}p_{\underline{i},\lambda}; \tag{2.8}\] i.e., \(p_{\underline{i}}\) is the projection on the space \([\psi_{\underline{i}}(\mathcal{K}(X_{\underline{i}}))H]\) for the Hilbert space \(H\) on which \((\pi,t)\) acts. In turn, we set \[q_{\emptyset}:=I,q_{\underline{i}}:=I-p_{\underline{i}},\text{ and }q_{F}:=\prod_{i\in F}(I-p_{\underline{i}})\text{ for all }\emptyset\neq F\subseteq[d]. \tag{2.9}\] **Remark 2.5.8**.: Note that the projections \(p_{\underline{i}}\) can be defined for a (just) compactly aligned product system \(X\). When \(X\) is in addition strong compactly aligned, the projections \(p_{\underline{i}}\) commute, so there is no ambiguity regarding the order of the product defining each \(q_{F}\).
In particular, we claim that \(p_{\underline{i}}p_{\underline{j}}\) coincides with the projection on the space \([\psi_{\underline{i}+\underline{j}}(\mathcal{K}(X_{\underline{i}+\underline{j}}))H]\) whenever \(i\neq j\); due to symmetry, we deduce that the same holds for \(p_{\underline{j}}p_{\underline{i}}\) and thus \(p_{\underline{i}}p_{\underline{j}}=p_{\underline{j}}p_{\underline{i}}\). Indeed, for \(i,j\in[d]\) with \(i\neq j\), consider the approximate units \((k_{\underline{i},\lambda})_{\lambda\in\Lambda}\) and \((k_{\underline{j},\lambda})_{\lambda\in\Lambda}\) of \(\mathcal{K}(X_{\underline{i}})\) and \(\mathcal{K}(X_{\underline{j}})\), respectively. We see that the nets \[\big{(}\iota_{\underline{i}}^{\underline{i}+\underline{j}}(k_{\underline{i},\lambda})\big{)}_{\lambda\in\Lambda}\quad\text{and}\quad\big{(}\iota_{\underline{j}}^{\underline{i}+\underline{j}}(k_{\underline{j},\lambda})\big{)}_{\lambda\in\Lambda}\] are contained in \(\mathcal{K}(X_{\underline{i}+\underline{j}})\) due to strong compact alignment. In particular, we claim that they provide approximate units for \(\mathcal{K}(X_{\underline{i}+\underline{j}})\). Indeed, we have that \(\|\iota_{\underline{i}}^{\underline{i}+\underline{j}}(k_{\underline{i},\lambda})\|\leq\|k_{\underline{i},\lambda}\|\leq 1\) for all \(\lambda\in\Lambda\), and for \(\xi_{\underline{i}},\eta_{\underline{i}}\in X_{\underline{i}}\) and \(\xi_{\underline{j}},\eta_{\underline{j}}\in X_{\underline{j}}\), we have that \[\|\iota_{\underline{i}}^{\underline{i}+\underline{j}}(k_{\underline{i},\lambda})\Theta^{X_{\underline{i}+\underline{j}}}_{\xi_{\underline{i}}\otimes\xi_{\underline{j}},\eta_{\underline{i}}\otimes\eta_{\underline{j}}}-\Theta^{X_{\underline{i}+\underline{j}}}_{\xi_{\underline{i}}\otimes\xi_{\underline{j}},\eta_{\underline{i}}\otimes\eta_{\underline{j}}}\|=\|\Theta^{X_{\underline{i}}\otimes X_{\underline{j}}}_{(k_{\underline{i},\lambda}\xi_{\underline{i}}-\xi_{\underline{i}})\otimes\xi_{\underline{j}},\eta_{\underline{i}}\otimes\eta_{\underline{j}}}\|\leq\|k_{\underline{i},\lambda}\xi_{\underline{i}}-\xi_{\underline{i}}\|\cdot\|\xi_{\underline{j}}\|\cdot\|\eta_{\underline{i}}\|\cdot\|\eta_{\underline{j}}\|\stackrel{{\lambda}}{{\to}}0.\] Taking finite linear combinations of rank-one operators and their norm-limits establishes the claim for \(\underline{i}\), and the case of \(\underline{j}\) is dealt with by symmetry. Therefore, due to Nica-covariance, we have that \[\|\!\cdot\!\|\text{-}\lim_{\lambda}\psi_{\underline{i}}(k_{\underline{i},\lambda})\psi_{\underline{i}+\underline{j}}(k_{\underline{i}+\underline{j}})=\|\!\cdot\!\|\text{-}\lim_{\lambda}\psi_{\underline{i}+\underline{j}}(\iota_{\underline{i}}^{\underline{i}+\underline{j}}(k_{\underline{i},\lambda})k_{\underline{i}+\underline{j}})=\psi_{\underline{i}+\underline{j}}(k_{\underline{i}+\underline{j}})\text{ for all }k_{\underline{i}+\underline{j}}\in\mathcal{K}(X_{\underline{i}+\underline{j}}),\] and likewise for \(\underline{j}\).
Thus, for every \(h\in H\) and \(k_{\underline{i}+\underline{j}}\in\mathcal{K}(X_{\underline{i}+\underline{j}})\), we deduce that \[p_{\underline{i}}\psi_{\underline{i}+\underline{j}}(k_{\underline{i}+\underline {j}})h=(\text{w*-}\lim_{\lambda}\psi_{\underline{i}}(k_{\underline{i}, \lambda}))\psi_{\underline{i}+\underline{j}}(k_{\underline{i}+\underline{j}})h =\big{[}\,\|\!\cdot\!\|-\lim_{\lambda}\psi_{\underline{i}}(k_{\underline{i}, \lambda})\psi_{\underline{i}+\underline{j}}(k_{\underline{i}+\underline{j}}) \big{]}h=\psi_{\underline{i}+\underline{j}}(k_{\underline{i}+\underline{j}})h.\] Likewise for \(\underline{j}\), we have that \(p_{\underline{i}}\psi_{\underline{i}+\underline{j}}(k_{\underline{i}+ \underline{j}})h=\psi_{\underline{i}+\underline{j}}(k_{\underline{i}+\underline{ j}})h\), and therefore \[p_{\underline{i}}p_{\underline{j}}h=h\text{ for all }h\in[\psi_{\underline{i}+\underline{j}}(\mathcal{K}(X_{ \underline{i}+\underline{j}}))H].\] On the other hand, for \(h\perp[\psi_{\underline{i}+\underline{j}}(\mathcal{K}(X_{\underline{i}+ \underline{j}}))H]\) and \(h^{\prime}\in H\), we have that \[\Big{\langle}p_{\underline{i}}p_{\underline{j}}h,h^{\prime}\Big{\rangle}=\Big{\langle} h,p_{\underline{j}}p_{\underline{i}}h^{\prime}\Big{\rangle}=\lim_{\lambda}\lim_{ \lambda^{\prime}}\Big{\langle}h,\psi_{\underline{j}}(k_{\underline{j},\lambda}) \psi_{\underline{i}}(k_{\underline{i},\lambda^{\prime}})h^{\prime}\Big{\rangle}=0,\] where we have used that \[\psi_{\underline{j}}(k_{\underline{j},\lambda})\psi_{\underline{i}}(k_{\underline{i},\lambda^{\prime}})h^{\prime}=\psi_{\underline{i}+\underline{j}}(\iota_{ \underline{j}}^{\underline{i}+\underline{j}}(k_{\underline{j},\lambda})\iota_{ \underline{i}}^{\underline{i}+\underline{j}}(k_{\underline{i},\lambda^{\prime}}))h^{ \prime}\in\psi_{\underline{i}+\underline{j}}(\mathcal{K}(X_{\underline{i}+ \underline{j}}))H\text{ for all }\lambda,\lambda^{\prime}\in\Lambda,\] due to Nica-covariance. Hence we have that \[p_{\underline{i}}p_{\underline{j}}h=0\text{ for all }h\perp[\psi_{\underline{i}+\underline{j}}( \mathcal{K}(X_{\underline{i}+\underline{j}}))H].\] Consequently \(p_{\underline{i}}p_{\underline{j}}\) coincides with the projection on the space \([\psi_{\underline{i}+\underline{j}}(\mathcal{K}(X_{\underline{i}+\underline{j}}))H]\), as required. We gather some useful algebraic relations proved in [16]. **Proposition 2.5.9**.: **[**16**, Proposition 4.4, proof of Proposition 4.6]** _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \((\pi,t)\) be a Nica-covariant representation of \(X\) and fix \(F\subseteq[d]\). Then for all \(\underline{m}\in\mathbb{Z}_{+}^{d}\) and \(\xi_{\underline{m}}\in X_{\underline{m}}\), we have that_ \[q_{F}t_{\underline{m}}(\xi_{\underline{m}})=\begin{cases}t_{\underline{m}}(\xi_ {\underline{m}})q_{F}&\text{ if }\underline{m}\perp F,\\ 0&\text{ if }\underline{m}\not\perp F,\end{cases}\] _so that in particular \(q_{F}\in\pi(A)^{\prime}\). Moreover, for the ideal \(\mathcal{I}_{F}\) we have that_ \[\pi(\mathcal{I}_{F})t_{\underline{m}}(X_{\underline{m}})\subseteq t_{ \underline{m}}(X_{\underline{m}})\pi(\mathcal{I}_{F})\text{ for all }\underline{m}\perp F.\] **Proposition 2.5.10**.: **[**16**, Section 3]** _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \((\pi,t)\) be a Nica-covariant representation of \(X\). 
Let \(p_{\underline{i}\lambda}\) and \(p_{\underline{i}}\) be the associated operators of (2.8), and fix \(\emptyset\neq F\subseteq[d]\). Then_ \[\|\!\cdot\!\|\!\cdot\!\lim_{\lambda}\psi_{\underline{n}}(k_{\underline{n}}) \prod_{i\in F}p_{\underline{i}\lambda}=\psi_{\underline{n}}(k_{\underline{n}}) \prod_{i\in F}p_{\underline{i}}\text{ for all }\underline{n}\in\mathbb{Z}_{+}^{d} \setminus\{\underline{0}\},k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}}).\] _If \(a\in\bigcap\{\phi_{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}}))\mid i \in F\}\), then_ \[\pi(a)\prod_{i\in D}p_{\underline{i}}=\psi_{\underline{1}_{D}}(\phi_{ \underline{1}_{D}}(a))\text{ for all }\emptyset\neq D\subseteq F,\] _and so_ \[\pi(a)q_{F}=\pi(a)+\sum\{(-1)^{[\underline{n}]}\psi_{\underline{n}}(\phi_{ \underline{n}}(a))\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\} \in\mathrm{C}^{*}(\pi,t).\] Note that the product of the \(p_{\underline{i}\lambda}\) in the first statement of Proposition 2.5.10 can be taken in any order. To elaborate on the second point, let \(a\in\bigcap\{\phi_{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}}))\mid i \in F\}\). Then \(a\in\bigcap\{\phi_{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}}))\mid i \in D\}\) for a fixed \(\emptyset\neq D\subseteq F\), and so we have that \[\pi(a)\prod_{i\in D}p_{\underline{i}\lambda}=\pi(a)\prod_{i\in D}\psi_{ \underline{i}}(k_{\underline{i}\lambda})=\pi(a)\psi_{\underline{1}_{D}}(e_{D, \lambda})=\psi_{\underline{1}_{D}}(\phi_{\underline{1}_{D}}(a)e_{D,\lambda}) \text{ for all }\lambda\in\Lambda,\] using Nica-covariance in the second equality, as well as the notation of Proposition 2.5.7. We have that \[\|\!\cdot\!\|\!\cdot\!\lim_{\lambda}\psi_{\underline{1}_{D}}(\phi_{\underline {1}_{D}}(a)e_{D,\lambda})=\psi_{\underline{1}_{D}}(\|\!\cdot\!\|\!\cdot\!\lim_ {\lambda}\phi_{\underline{1}_{D}}(a)e_{D,\lambda})=\psi_{\underline{1}_{D}}( \phi_{\underline{1}_{D}}(a)),\] by [16, (2.4)], where we are now using the fact that \(\phi_{\underline{1}_{D}}(a)\in\mathcal{K}(X_{\underline{1}_{D}})\) by strong compact alignment. Note that the w*-limit of the net \((\prod_{i\in D}p_{\underline{i}\lambda})_{\lambda\in\Lambda}\) is the projection on the space \([\psi_{\underline{1}_{D}}(\mathcal{K}(X_{\underline{1}_{D}}))H]\) and coincides with \(\prod_{i\in D}p_{\underline{i}}\), as in Remark 2.5.8. Hence we obtain that \[\pi(a)\prod_{i\in D}p_{\underline{i}} =\text{w*-}\lim_{\lambda}\pi(a)\prod_{i\in D}p_{\underline{i} \lambda}=\text{w*-}\lim_{\lambda}\psi_{\underline{1}_{D}}(\phi_{\underline{1 }_{D}}(a)e_{D,\lambda})\] \[=\|\!\cdot\!\|\!-\!\lim_{\lambda}\psi_{\underline{1}_{D}}(\phi_{ \underline{1}_{D}}(a)e_{D,\lambda})=\|\!\cdot\!\|\!-\!\lim_{\lambda}\pi(a)\prod _{i\in D}p_{\underline{i}\lambda}=\psi_{\underline{1}_{D}}(\phi_{\underline{1 }_{D}}(a)),\] as claimed. Let \((\pi,t)\) be a Nica-covariant representation of \(X\). In the process of studying the kernel of \(\pi\times t\), one needs to solve equations of the form \[\pi(a)\in B^{(\pi,t)}_{(\underline{0},\underline{m}]}\text{ for }a\in A, \underline{m}\in\mathbb{Z}_{+}^{d}. \tag{2.10}\] Due to the structure of the cores, an element \(\pi(a)\) satisfies (2.10) if and only if there are \(k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}})\) for all \(\underline{0}\neq\underline{n}\leq\underline{m}\) such that \[\pi(a)+\sum\{\psi_{\underline{n}}(k_{\underline{n}})\mid\underline{0}\neq \underline{n}\leq\underline{m}\}=0. 
\tag{2.11}\] Recall that \(\psi_{\underline{n}}(k_{\underline{n}})p_{\underline{i}}=\psi_{\underline{n}}(k_{\underline{n}})\) when \(i\in\operatorname{supp}\underline{n}\). Let \(F:=\operatorname{supp}\underline{m}\). Since the \(p_{\underline{i}}\) for \(i\in[d]\) are commuting projections, we have that \(\psi_{\underline{n}}(k_{\underline{n}})q_{F}=0\) for all \(\underline{0}\neq\underline{n}\leq\underline{m}\), and therefore (2.11) implies that \[\pi(a)q_{F}=0. \tag{2.12}\] Moreover, by [16, Proposition 3.2], if \((\pi,t)\) is injective then (under the assumption of strong compact alignment) (2.11) implies that \[\phi_{\underline{n}}(a)\in\mathcal{K}(X_{\underline{n}})\text{ for all }\underline{0}\leq\underline{n}\leq\underline{1}_{[d]}. \tag{2.13}\] Conversely, if (2.12) and (2.13) hold for some \(F\subseteq[d]\) and \(a\in A\), then Proposition 2.5.10 gives that \[\pi(a)+\sum\{(-1)^{|\underline{n}|}\psi_{\underline{n}}(\phi_{\underline{n}}(a))\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}=\pi(a)q_{F}=0, \tag{2.14}\] and so \(\pi(a)\in B^{(\pi,t)}_{(\underline{0},\underline{m}]}\) for any \(\underline{m}\geq\underline{1}_{F}\). The following proposition justifies the usage of \(\{\mathcal{I}_{F}\}_{F\subseteq[d]}\). **Proposition 2.5.11**.: _[_16_, Proposition 3.4]_ _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Suppose that \((\pi,t)\) is an injective Nica-covariant representation of \(X\) and fix \(a\in A\). If \(\pi(a)\in B^{(\pi,t)}_{(\underline{0},\underline{m}]}\) for some \(\underline{m}\in\mathbb{Z}_{+}^{d}\), then \(a\in\mathcal{I}_{F}\) for \(F=\operatorname{supp}\underline{m}\)._ We define the _ideal of the CNP-relations with respect to \((\pi,t)\)_ as follows: \[\mathfrak{J}_{\mathcal{I}}^{(\pi,t)}:=\langle\pi(\mathcal{I}_{F})q_{F}\mid F\subseteq[d]\rangle\subseteq\operatorname{C}^{*}(\pi,t).\] We will say that \((\pi,t)\) is a _CNP-representation (of \(X\))_ if it satisfies \[\pi(a)q_{F}=\pi(a)+\sum\{(-1)^{|\underline{n}|}\psi_{\underline{n}}(\phi_{\underline{n}}(a))\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}=0\text{ for all }a\in\mathcal{I}_{F},F\subseteq[d].\] It follows that \(\mathfrak{J}_{\mathcal{I}}^{(\pi,t)}=\{0\}\) if and only if \((\pi,t)\) is a CNP-representation. We write \(\mathcal{NO}_{X}\) for the universal C*-algebra with respect to the CNP-representations, and refer to it as the _Cuntz-Nica-Pimsner algebra of \(X\)_, i.e., \[\mathcal{NO}_{X}\equiv\mathcal{NT}_{X}/\mathfrak{J}_{\mathcal{I}}^{(\overline{\pi}_{X},\overline{t}_{X})}.\] We write \((\pi_{X}^{\mathcal{I}},t_{X}^{\mathcal{I}})\) for the _universal CNP-representation (of \(X\))_, i.e., \((\pi_{X}^{\mathcal{I}},t_{X}^{\mathcal{I}})=(Q\circ\overline{\pi}_{X},Q\circ\overline{t}_{X})\), where \(Q\colon\mathcal{NT}_{X}\to\mathcal{NO}_{X}\) is the canonical quotient map. Since \(\mathcal{NO}_{X}\) is an equivariant quotient of \(\mathcal{NT}_{X}\), the representation \((\pi_{X}^{\mathcal{I}},t_{X}^{\mathcal{I}})\) admits a gauge action. In [16] it is shown that \(\mathcal{NO}_{X}\) coincides with the Cuntz-Nica-Pimsner algebra of Sims and Yeend [51], and thus with the strong covariance algebra of Sehnem [48]. In particular, \((\pi_{X}^{\mathcal{I}},t_{X}^{\mathcal{I}})\) is injective by [51, Theorem 4.1], since \((\mathbb{Z}^{d},\mathbb{Z}_{+}^{d})\) satisfies [51, Condition (3.5)]. Moreover, \(\mathcal{NO}_{X}\) is co-universal with respect to the injective Nica-covariant representations of \(X\) that admit a gauge action [51].
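To illustrate, in the lowest non-trivial rank \(d=2\), writing the generators of \(\mathbb{Z}_{+}^{2}\) as \((1,0)\) and \((0,1)\), the CNP-relation for \(a\in\mathcal{I}_{\{1,2\}}\) unravels to \[\pi(a)q_{\{1,2\}}=\pi(a)-\psi_{(1,0)}(\phi_{(1,0)}(a))-\psi_{(0,1)}(\phi_{(0,1)}(a))+\psi_{(1,1)}(\phi_{(1,1)}(a))=0,\] while for \(a\in\mathcal{I}_{\{1\}}\) (resp. \(a\in\mathcal{I}_{\{2\}}\)) it reads \(\pi(a)=\psi_{(1,0)}(\phi_{(1,0)}(a))\) (resp. \(\pi(a)=\psi_{(0,1)}(\phi_{(0,1)}(a))\)), in analogy with the covariance relation for a single C*-correspondence. This is merely the defining identity written out term by term, and is recorded here only for orientation.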
The co-universal property of \(\mathcal{NO}_{X}\) has been verified in several works [10, 16, 17, 49] in more general contexts. Moreover, unitarily equivalent strong compactly aligned product systems \(X\) and \(Y\) satisfy \(\mathcal{NO}_{X}\cong\mathcal{NO}_{Y}\) canonically. **Proposition 2.5.12**.: _Let \(X\) and \(Y\) be unitarily equivalent strong compactly aligned product systems with coefficients in C*-algebras \(A\) and \(B\), respectively. Then the bijection of Proposition 2.3.4 preserves the CNP-representations._ Proof.: This is immediate by Propositions 2.4.3 and 2.5.6, and Remark 2.3.2. We finish the subsection with a proposition concerning canonical \(*\)-epimorphisms arising from injective Nica-covariant representations. This trick was used implicitly in [16, 31]. **Proposition 2.5.13**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \((\tilde{\pi},\tilde{t})\) and \((\pi,t)\) be Nica-covariant representations of \(X\) such that \((\pi,t)\) is injective, and suppose there exists a canonical \(*\)-epimorphism \(\Phi\colon\operatorname{C}^{*}(\tilde{\pi},\tilde{t})\to\operatorname{C}^{*}(\pi,t)\). Then the following are equivalent:_ * (i) \(\ker\Phi\cap B^{(\tilde{\pi},\tilde{t})}_{[\underline{0},\infty]}=\{0\}\)_;_ * (ii) \(\ker\Phi\cap B^{(\tilde{\pi},\tilde{t})}_{[\underline{0},\underline{m}]}=\{0\}\) _for all_ \(\underline{m}\in\mathbb{Z}_{+}^{d}\)_;_ * (iii) \(\ker\Phi\cap B^{(\tilde{\pi},\tilde{t})}_{[\underline{0},\underline{1}_{[d]}]}=\{0\}\)_;_ * (iv) \(\ker\Phi\cap B^{(\tilde{\pi},\tilde{t})}_{[\underline{0},\underline{1}_{F}]}=\{0\}\) _for all_ \(F\subseteq[d]\)_._ **Proof.** First note that \((\tilde{\pi},\tilde{t})\) is injective since \((\pi,t)\) is injective. By definition we see that \(B^{(\tilde{\pi},\tilde{t})}_{[\underline{0},\infty]}\) is the inductive limit of the cores \(B^{(\tilde{\pi},\tilde{t})}_{[\underline{0},\underline{m}]}\) for \(\underline{m}\in\mathbb{Z}_{+}^{d}\), and thus [13, Lemma III.4.1] yields the equivalence of item (i) with item (ii). If item (ii) holds, then item (iii) follows by taking \(\underline{m}=\underline{1}_{[d]}\). If item (iii) holds then it implies item (iv), since \(B^{(\tilde{\pi},\tilde{t})}_{[\underline{0},\underline{1}_{F}]}\subseteq B^{(\tilde{\pi},\tilde{t})}_{[\underline{0},\underline{1}_{[d]}]}\) for all \(F\subseteq[d]\). It now suffices to show that item (iv) implies item (ii), so fix \(\underline{m}\in\mathbb{Z}_{+}^{d}\). Without loss of generality, assume that \(\underline{m}\) has at least one entry greater than \(1\) (item (iv) already covers the case where \(\underline{m}\) has no entries greater than \(1\), since then \(\underline{m}=\underline{1}_{F}\) for \(F=\operatorname{supp}\underline{m}\)). To reach contradiction, assume that \(\ker\Phi\cap B^{(\tilde{\pi},\tilde{t})}_{[\underline{0},\underline{m}]}\neq\{0\}\). Take \(0\neq f\in\ker\Phi\cap B^{(\tilde{\pi},\tilde{t})}_{[\underline{0},\underline{m}]}\), so that we may write \[f=\tilde{\pi}(a)+\sum\{\tilde{\psi}_{\underline{n}}(k_{\underline{n}})\mid\underline{0}\neq\underline{n}\leq\underline{m}\},\] for some \(a\in A\) and \(k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}})\) for all \(\underline{0}\neq\underline{n}\leq\underline{m}\). Note that we can write \(\tilde{\pi}(a)=\tilde{\psi}_{\underline{0}}(k_{\underline{0}})\) for \(k_{\underline{0}}:=\phi_{\underline{0}}(a)\).
Without loss of generality, we may assume that \(f\) is written irreducibly, so that we may choose a minimal \(\underline{0}\leq\underline{r}\leq\underline{m}\) such that \(k_{\underline{r}}\neq 0\), and \(\tilde{\psi}_{\underline{r}}(k_{\underline{r}})\notin B^{(\tilde{\pi},\tilde{t})}_{(\underline{r},\underline{m}]}\). The minimality of \(\underline{r}\) means that if we have \(\underline{0}\leq\underline{n}\leq\underline{m}\) such that \(k_{\underline{n}}\neq 0\) and \(\underline{n}\leq\underline{r}\), then \(\underline{n}=\underline{r}\). If \(\underline{r}=\underline{m}\), then \(f=\tilde{\psi}_{\underline{m}}(k_{\underline{m}})\) and \(\Phi(f)=\psi_{\underline{m}}(k_{\underline{m}})=0\). Injectivity of \((\pi,t)\) then implies that \(k_{\underline{m}}=0\) and hence \(f=0\), a contradiction. So without loss of generality assume that \(\underline{r}<\underline{m}\). Fixing \(\xi_{\underline{r}},\eta_{\underline{r}}\in X_{\underline{r}}\), we have that \[\tilde{t}_{\underline{r}}(\xi_{\underline{r}})^{*}f\tilde{t}_{\underline{r}}(\eta_{\underline{r}})=\tilde{\pi}(b)+\sum\{\tilde{\psi}_{\underline{n}}(k^{\prime}_{\underline{n}})\mid\underline{0}\neq\underline{n}\leq\underline{m}-\underline{r}\},\] where \(k^{\prime}_{\underline{n}}\) is a suitably defined element of \(\mathcal{K}(X_{\underline{n}})\) for all \(\underline{0}\neq\underline{n}\leq\underline{m}-\underline{r}\) and \(b:=\left\langle\xi_{\underline{r}},k_{\underline{r}}\eta_{\underline{r}}\right\rangle\), due to the minimality of \(\underline{r}\). Applying \(\Phi\), we obtain that \[\pi(b)+\sum\{\psi_{\underline{n}}(k^{\prime}_{\underline{n}})\mid\underline{0}\neq\underline{n}\leq\underline{m}-\underline{r}\}=0,\] which in turn implies that \(b\in\pi^{-1}(B^{(\pi,t)}_{(\underline{0},\underline{m}-\underline{r}]})\). Let \(F:=\operatorname{supp}(\underline{m}-\underline{r})\), which is non-empty since \(\underline{m}\neq\underline{r}\). An application of (2.12) then yields that \(\pi(b)q_{F}=0\). Note also that \(\phi_{\underline{n}}(b)\in\mathcal{K}(X_{\underline{n}})\) for all \(\underline{0}\leq\underline{n}\leq\underline{1}_{[d]}\) using (2.13), which applies since \((\pi,t)\) is injective. Therefore we obtain that \[\tilde{\pi}(b)\tilde{q}_{F}=\tilde{\pi}(b)+\sum\{(-1)^{|\underline{n}|}\tilde{\psi}_{\underline{n}}(\phi_{\underline{n}}(b))\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}\in B^{(\tilde{\pi},\tilde{t})}_{[\underline{0},\underline{1}_{F}]}.\] It then follows that \[\Phi(\tilde{\pi}(b)\tilde{q}_{F})=\Phi(\tilde{\pi}(b)+\sum\{(-1)^{|\underline{n}|}\tilde{\psi}_{\underline{n}}(\phi_{\underline{n}}(b))\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\})=\pi(b)+\sum\{(-1)^{|\underline{n}|}\psi_{\underline{n}}(\phi_{\underline{n}}(b))\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}=\pi(b)q_{F}=0.\] Hence \(\tilde{\pi}(b)\tilde{q}_{F}\in\ker\Phi\cap B^{(\tilde{\pi},\tilde{t})}_{[\underline{0},\underline{1}_{F}]}\) and so \(\tilde{\pi}(b)\tilde{q}_{F}=0\) by assumption. So we have that \[\tilde{t}_{\underline{r}}(\xi_{\underline{r}})^{*}\tilde{\psi}_{\underline{r}}(k_{\underline{r}})\tilde{t}_{\underline{r}}(\eta_{\underline{r}})=\tilde{\pi}(b)\in B^{(\tilde{\pi},\tilde{t})}_{(\underline{0},\underline{1}_{F}]}\] for all \(\xi_{\underline{r}},\eta_{\underline{r}}\in X_{\underline{r}}\).
Consequently, we have that \[\tilde{\psi}_{\underline{r}}(\mathcal{K}(X_{\underline{r}}))\tilde{\psi}_{\underline{r}}(k_{\underline{r}})\tilde{\psi}_{\underline{r}}(\mathcal{K}(X_{\underline{r}}))\subseteq[\tilde{t}_{\underline{r}}(X_{\underline{r}})B^{(\tilde{\pi},\tilde{t})}_{(\underline{0},\underline{1}_{F}]}\tilde{t}_{\underline{r}}(X_{\underline{r}})^{*}]\subseteq[\tilde{t}_{\underline{r}}(X_{\underline{r}})B^{(\tilde{\pi},\tilde{t})}_{(\underline{0},\underline{m}-\underline{r}]}\tilde{t}_{\underline{r}}(X_{\underline{r}})^{*}]\subseteq B^{(\tilde{\pi},\tilde{t})}_{(\underline{r},\underline{m}]}.\] By using an approximate unit of \(\tilde{\psi}_{\underline{r}}(\mathcal{K}(X_{\underline{r}}))\) and the fact that \(B^{(\tilde{\pi},\tilde{t})}_{(\underline{r},\underline{m}]}\) is closed in \(\mathrm{C}^{*}(\tilde{\pi},\tilde{t})\), we deduce that \(\tilde{\psi}_{\underline{r}}(k_{\underline{r}})\in B^{(\tilde{\pi},\tilde{t})}_{(\underline{r},\underline{m}]}\), which contradicts the irreducibility assumption. Therefore we obtain that \(\ker\Phi\cap B^{(\tilde{\pi},\tilde{t})}_{[\underline{0},\underline{m}]}=\{0\}\) for all \(\underline{m}\in\mathbb{Z}_{+}^{d}\), as required. ### The \(IXI\) product system In this subsection we present a product system construction that will be useful in Section 5, and which extends the results of [34, Section 9]. Let \(X\) be a C*-correspondence over a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal. Consider \(IXI\subseteq XI\) and recall that \(XI\) is closed. An application of the Hewitt-Cohen Factorisation Theorem gives that \(IXI\) is a closed linear subspace of \(XI\), i.e., \(IXI=[IXI]\). It is clear that \(IXI\) is also closed under the right action of \(A\), and hence \(IXI\) carries the structure of a right Hilbert \(A\)-module. Next, we define a left action \(\phi_{IXI}\) of \(A\) on \(IXI\) by \[\phi_{IXI}\colon A\to\mathcal{L}(IXI);\phi_{IXI}(a)=\phi_{X}(a)|_{IXI}\text{ for all }a\in A.\] Hence \(IXI\) inherits the structure of a C*-correspondence over \(A\) from \(X\). By restricting \(\phi_{IXI}\) to \(I\), we may view \(IXI\) as a C*-correspondence over \(I\). As was the case with \(\mathcal{K}(XI)\), we have a natural embedding of \(\mathcal{K}(IXI)\) in \(\mathcal{K}(X)\). The proof of the following result follows an almost identical trajectory to that of [22, Lemma 2.6 (1)], and it is left to the reader. **Lemma 2.6.1**.: _Let \(X\) be a C*-correspondence over a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal. Then there exists an embedding \(\iota\colon\mathcal{K}(IXI)\to\mathcal{K}(X)\) such that_ \[\iota(\Theta^{IXI}_{\xi,\eta})=\Theta^{X}_{\xi,\eta}\text{ for all }\xi,\eta\in IXI.\] _Consequently \(\mathcal{K}(IXI)\) is \(*\)-isomorphic to \(\overline{\operatorname{span}}\{\Theta^{X}_{\xi,\eta}\mid\xi,\eta\in IXI\}\) via the map \(\iota\), with_ \[\iota^{-1}(k)=k|_{IXI}\text{ for all }k\in\overline{\operatorname{span}}\{\Theta^{X}_{\xi,\eta}\mid\xi,\eta\in IXI\}.\] Now let \(X\) be a product system over \(P\). Recall that if \(I\) is positively invariant for \(X\), then \(IX_{p}I=IX_{p}\) for all \(p\in P\) by Lemma 2.2.6. In this case we obtain a product system \(IXI\), with a construction that is compatible with the product system structure of \(X\). **Proposition 2.6.2**.: _Let \(P\) be a unital subsemigroup of a discrete group \(G\). Let \(X\) be a product system over \(P\) with coefficients in a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal that is positively invariant for \(X\).
Then \(IXI:=\{IX_{p}I\}_{p\in P}\) inherits from \(X\) a canonical structure as a product system over \(P\) with coefficients in \(I\), identifying each \(IX_{p}I\) as a sub-C*-correspondence of \(X_{p}\)._ **Proof.** We will denote the multiplication maps of \(X\) by \(\{u^{X}_{p,q}\}_{p,q\in P}\). The discussion preceding Lemma 2.6.1 gives that \(IX_{p}I\) is a C*-correspondence over \(I\) for all \(p\in P\). For \(p,q\in P\), let the unitary map \(\iota_{p,q}\) be induced by the canonical identifications \[IX_{p}I\otimes_{I}IX_{q}I\cong IX_{p}I\otimes_{A}IX_{q}I\cong IX_{p}\otimes_{A}X_{q}I,\] where we use Lemma 2.2.6 in the second identification and write (in an abuse of notation) \(IX_{p}I\otimes_{A}IX_{q}I\) for the closed linear space generated by \(IX_{p}I\odot_{A}IX_{q}I\) in \(X_{p}\otimes_{A}X_{q}\). We then define the multiplication map \(u^{IXI}_{p,q}\) for \(IXI\) by \(u^{IXI}_{p,q}:=u^{X}_{p,q}\circ\iota_{p,q}\). It is clear that \(u^{IXI}_{p,q}\) is a linear isometry and an \(I\)-bimodule map onto \(u^{X}_{p,q}(IX_{p}\otimes_{A}X_{q}I)=IX_{pq}I\). It follows that \(IXI\) together with the multiplication maps \(\{u^{IXI}_{p,q}\}_{p,q\in P}\) becomes a product system, using the product system axioms of \(X\). **Proposition 2.6.3**.: _Let \(P\) be a unital subsemigroup of a discrete group \(G\). Let \(X\) be a product system over \(P\) with coefficients in a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal that is positively invariant for \(X\). Let \(\{\iota^{pq}_{p}\}_{p,q\in P}\) denote the connecting \(*\)-homomorphisms of \(X\) and let \(\{j^{pq}_{p}\}_{p,q\in P}\) denote the connecting \(*\)-homomorphisms of \(IXI\). Then_ \[j^{pq}_{p}(k_{p}|_{IX_{p}I})=\iota^{pq}_{p}(k_{p})|_{IX_{pq}I}\text{ for all }p,q\in P\text{ and }k_{p}\in\overline{\operatorname{span}}\{\Theta^{X_{p}}_{\xi_{p},\eta_{p}}\mid\xi_{p},\eta_{p}\in IX_{p}I\}.\] **Proof.** Without loss of generality we may assume that \(p\neq e\) and \(q\neq e\), as the claim is straightforward otherwise. By Lemma 2.6.1 we have that \(k_{p}|_{IX_{p}I}\in\mathcal{K}(IX_{p}I)\). Recalling that \(IX_{pq}I\) is a closed linear subspace of \(X_{pq}\), it suffices to show that \[j^{pq}_{p}(k_{p}|_{IX_{p}I})u^{IXI}_{p,q}(\xi_{p}\otimes\xi_{q})=\iota^{pq}_{p}(k_{p})u^{IXI}_{p,q}(\xi_{p}\otimes\xi_{q}),\text{ for all }\xi_{p}\in IX_{p}I,\xi_{q}\in IX_{q}I.\] Indeed, fixing \(\xi_{p}\in IX_{p}I\) and \(\xi_{q}\in IX_{q}I\), we have that
To this end, let \(p,q\in P\setminus\{e\}\) be such that \(pP\cap qP=wP\) for some \(w\in P\), and fix \(\xi_{p},\eta_{p}\in IX_{p}I\) and \(\xi_{q},\eta_{q}\in IX_{q}I\). We must show that \[j_{p}^{w}(\Theta_{\xi_{p},\eta_{p}}^{IX_{p}I})j_{q}^{w}(\Theta_{\xi_{q},\eta_{ q}}^{IX_{q}I})\in\mathcal{K}(IX_{w}I).\] Recall that \[\Theta_{\xi_{p},\eta_{p}}^{IX_{p}I}=\Theta_{\xi_{p},\eta_{p}}^{X_{p}}|_{IX_{p} I}\quad\text{and}\quad\Theta_{\xi_{q},\eta_{q}}^{IX_{q}I}=\Theta_{\xi_{q},\eta_{q}}^{ X_{q}}|_{IX_{q}I},\] by Lemma 2.6.1. An application of Proposition 2.6.3 then gives that \[j_{p}^{w}(\Theta_{\xi_{p},\eta_{p}}^{IX_{p}I})j_{q}^{w}(\Theta_{\xi_{q},\eta_{ q}}^{IX_{q}I})=\iota_{p}^{w}(\Theta_{\xi_{p},\eta_{p}}^{X_{p}})|_{IX_{w}I}u_{q}^{w}( \Theta_{\xi_{q},\eta_{q}}^{X_{q}})|_{IX_{w}I}=[\iota_{p}^{w}(\Theta_{\xi_{p}, \eta_{p}}^{X_{p}})\iota_{q}^{w}(\Theta_{\xi_{q},\eta_{q}}^{X_{q}})]|_{IX_{w}I}. \tag{2.15}\] By Lemma 2.6.1, it suffices to show that \[\iota_{p}^{w}(\Theta_{\xi_{p},\eta_{p}}^{X_{p}})\iota_{q}^{w}(\Theta_{\xi_{q}, \eta_{q}}^{X_{q}})\in\overline{\operatorname{span}}\{\Theta_{\xi_{w},\eta_{w}} ^{X_{w}}\mid\xi_{w},\eta_{w}\in IX_{w}I\}.\] Compact alignment of \(X\) gives that \[\iota_{p}^{w}(\Theta_{\xi_{p},\eta_{p}}^{X_{p}})\iota_{q}^{w}(\Theta_{\xi_{q}, \eta_{q}}^{X_{q}})\in\mathcal{K}(X_{w}).\] Next, let \((u_{\lambda})_{\lambda\in\Lambda}\) denote an approximate unit of \(I\). For each \(\lambda\in\Lambda\), we have that \[\phi_{w}(u_{\lambda})\iota_{p}^{w}(\Theta_{\xi_{p},\eta_{p}}^{X_{p}})=\iota_{ p}^{w}(\phi_{p}(u_{\lambda})\Theta_{\xi_{p},\eta_{p}}^{X_{p}})=\iota_{p}^{w}( \Theta_{\phi_{p}(u_{\lambda})\xi_{p},\eta_{p}}^{X_{p}}).\] Using this and the fact that \(\left\|\cdot\right\|\cdot\lim_{\lambda}\phi_{p}(u_{\lambda})\xi_{p}=\xi_{p}\) (as \(\xi_{p}\in IX_{p}I\)), we obtain that \[\left\|\cdot\right\|\cdot\lim_{\lambda}\phi_{w}(u_{\lambda})\iota_{p}^{w}( \Theta_{\xi_{p},\eta_{p}}^{X_{p}})=\iota_{p}^{w}(\Theta_{\xi_{p},\eta_{p}}^{X_ {p}}).\] By analogous reasoning, we have that \[\left\|\cdot\right\|\cdot\lim_{\lambda}\iota_{q}^{w}(\Theta_{\xi_{q},\eta_{q}} ^{X_{q}})\phi_{w}(u_{\lambda})=\iota_{q}^{w}(\Theta_{\xi_{q},\eta_{q}}^{X_{q}}).\] Therefore, we obtain that \[\iota_{p}^{w}(\Theta_{\xi_{p},\eta_{p}}^{X_{p}})\iota_{q}^{w}(\Theta_{\xi_{q}, \eta_{q}}^{X_{q}})=\left\|\cdot\right\|\cdot\lim_{\lambda}\phi_{w}(u_{\lambda}) \iota_{p}^{w}(\Theta_{\xi_{p},\eta_{p}}^{X_{p}})\iota_{q}^{w}(\Theta_{\xi_{q}, \eta_{q}}^{X_{q}})\phi_{w}(u_{\lambda}).\] Consequently, we have that \(\iota_{p}^{w}(\Theta_{\xi_{p},\eta_{p}}^{X_{p}})\iota_{q}^{w}(\Theta_{\xi_{q}, \eta_{q}}^{X_{q}})\) is expressible as the norm-limit of a net that is contained in \(\phi_{w}(I)\mathcal{K}(X_{w})\phi_{w}(I)\). We also have that \[\phi_{w}(I)\mathcal{K}(X_{w})\phi_{w}(I)\subseteq\overline{\operatorname{ span}}\{\Theta_{\xi_{w},\eta_{w}}^{X_{w}}\mid\xi_{w},\eta_{w}\in IX_{w}I\}\] by Corollary 2.2.7. Thus \[\iota_{p}^{w}(\Theta_{\xi_{p},\eta_{p}}^{X_{p}})\iota_{q}^{w}(\Theta_{\xi_{q}, \eta_{q}}^{X_{q}})\in\overline{\operatorname{span}}\{\Theta_{\xi_{w},\eta_{w}}^{X_ {w}}\mid\xi_{w},\eta_{w}\in IX_{w}I\}, \tag{2.16}\] as required. **Proposition 2.6.5**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal that is positively invariant for \(X\). 
Then the product system \(IXI\) is strong compactly aligned._ **Proof.** Let \(\{\iota_{\underline{n}}^{\underline{n}+\underline{m}}\}_{\underline{n},\underline{m}\in\mathbb{Z}_{+}^{d}}\) (resp. \(\{j_{\underline{n}}^{\underline{n}+\underline{m}}\}_{\underline{n},\underline{m}\in\mathbb{Z}_{+}^{d}}\)) denote the connecting \(*\)-homomorphisms of \(X\) (resp. \(IXI\)). Proposition 2.6.4 asserts that \(IXI\) is compactly aligned, so it remains to check that \(IXI\) satisfies \[j_{\underline{n}}^{\underline{n}+\underline{i}}(\mathcal{K}(IX_{\underline{n}}I))\subseteq\mathcal{K}(IX_{\underline{n}+\underline{i}}I)\text{ whenever }\underline{n}\perp\underline{i}\text{, where }i\in[d],\underline{n}\in\mathbb{Z}_{+}^{d}\setminus\{\underline{0}\}.\] It suffices to show that this holds for rank-one operators. To this end, fix \(\xi_{\underline{n}},\eta_{\underline{n}}\in IX_{\underline{n}}I\). An application of Proposition 2.6.3 then yields that \[j_{\underline{n}}^{\underline{n}+\underline{i}}(\Theta^{IX_{\underline{n}}I}_{\xi_{\underline{n}},\eta_{\underline{n}}})=j_{\underline{n}}^{\underline{n}+\underline{i}}(\Theta^{X_{\underline{n}}}_{\xi_{\underline{n}},\eta_{\underline{n}}}|_{IX_{\underline{n}}I})=\iota_{\underline{n}}^{\underline{n}+\underline{i}}(\Theta^{X_{\underline{n}}}_{\xi_{\underline{n}},\eta_{\underline{n}}})|_{IX_{\underline{n}+\underline{i}}I}.\] By strong compact alignment of \(X\), we have that \(\iota_{\underline{n}}^{\underline{n}+\underline{i}}(\Theta^{X_{\underline{n}}}_{\xi_{\underline{n}},\eta_{\underline{n}}})\in\mathcal{K}(X_{\underline{n}+\underline{i}})\). By using an approximate unit \((u_{\lambda})_{\lambda\in\Lambda}\) of \(I\) and Corollary 2.2.7, we then obtain that \[\iota_{\underline{n}}^{\underline{n}+\underline{i}}(\Theta^{X_{\underline{n}}}_{\xi_{\underline{n}},\eta_{\underline{n}}})=\|\!\cdot\!\|\text{-}\lim_{\lambda}\phi_{\underline{n}+\underline{i}}(u_{\lambda})\iota_{\underline{n}}^{\underline{n}+\underline{i}}(\Theta^{X_{\underline{n}}}_{\xi_{\underline{n}},\eta_{\underline{n}}})\phi_{\underline{n}+\underline{i}}(u_{\lambda})\in[\phi_{\underline{n}+\underline{i}}(I)\mathcal{K}(X_{\underline{n}+\underline{i}})\phi_{\underline{n}+\underline{i}}(I)]\subseteq\overline{\mathrm{span}}\{\Theta^{X_{\underline{n}+\underline{i}}}_{\xi_{\underline{n}+\underline{i}},\eta_{\underline{n}+\underline{i}}}\mid\xi_{\underline{n}+\underline{i}},\eta_{\underline{n}+\underline{i}}\in IX_{\underline{n}+\underline{i}}I\}.\] An application of Lemma 2.6.1 finishes the proof. Next we study the representations of \(IXI\). Given a representation \((\pi,t)\) of \(X\), we write \((\pi|_{I},t|_{IXI})\) for the family \(\{(\pi|_{I},t_{p}|_{IX_{p}I})\}_{p\in P}\). It is routine to check that \((\pi|_{I},t|_{IXI})\) is a representation of \(IXI\). For each \(p\in P\), let \(\tilde{\psi}_{p}\colon\mathcal{K}(IX_{p}I)\to\mathrm{C}^{*}(\pi,t)\) be the \(*\)-homomorphism induced by \((\pi|_{I},t_{p}|_{IX_{p}I})\). Then for all \(\xi_{p},\eta_{p}\in IX_{p}I\), we have that \[\tilde{\psi}_{p}(\Theta^{X_{p}}_{\xi_{p},\eta_{p}}|_{IX_{p}I})=\tilde{\psi}_{p}(\Theta^{IX_{p}I}_{\xi_{p},\eta_{p}})=t_{p}|_{IX_{p}I}(\xi_{p})t_{p}|_{IX_{p}I}(\eta_{p})^{*}=t_{p}(\xi_{p})t_{p}(\eta_{p})^{*}=\psi_{p}(\Theta^{X_{p}}_{\xi_{p},\eta_{p}}),\] from which it follows that \[\tilde{\psi}_{p}(k_{p}|_{IX_{p}I})=\psi_{p}(k_{p})\text{ for all }k_{p}\in\overline{\mathrm{span}}\{\Theta^{X_{p}}_{\xi_{p},\eta_{p}}\mid\xi_{p},\eta_{p}\in IX_{p}I\}.
\tag{2.17}\] When \(P\) is a right LCM semigroup and \(X\) is compactly aligned (and thus \(IXI\) is compactly aligned by Proposition 2.6.4), Nica-covariance of \((\pi,t)\) is inherited by \((\pi|_{I},t|_{IXI})\). **Proposition 2.6.6**.: _Let \(P\) be a unital right LCM subsemigroup of a discrete group \(G\). Let \(X\) be a compactly aligned product system over \(P\) with coefficients in a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal that is positively invariant for \(X\). Let \((\pi,t)\) be a Nica-covariant representation of \(X\). Then \((\pi|_{I},t|_{IXI})\) is a Nica-covariant representation of \(IXI\), and_ \[\mathrm{C}^{*}(\pi|_{I},t|_{IXI})=\overline{\mathrm{span}}\{\pi(I)t_{p}(X_{p}) \pi(I)t_{q}(X_{q})^{*}\pi(I)\mid p,q\in P\}. \tag{2.18}\] **Proof.** Let \(\{t_{p}^{pq}\}_{p,q\in P}\) (resp. \(\{j_{p}^{pq}\}_{p,q\in P}\)) denote the connecting \(*\)-homomorphisms of \(X\) (resp. \(IXI\)). For each \(p\in P\), let \(\tilde{\psi}_{p}\colon\mathcal{K}(IX_{p}I)\to\mathrm{C}^{*}(\pi,t)\) be the \(*\)-homomorphism induced by \((\pi|_{I},t_{p}|_{IX_{p}I})\). Now fix \(p,q\in P\setminus\{e\},k_{p}\in\mathcal{K}(IX_{p}I)\) and \(k_{q}\in\mathcal{K}(IX_{q}I)\). We must show that \[\tilde{\psi}_{p}(k_{p})\tilde{\psi}_{q}(k_{q})=\begin{cases}\tilde{\psi}_{w}(j_{p} ^{w}(k_{p})j_{q}^{w}(k_{q}))&\text{if }pP\cap qP=wP\text{ for some }w\in P,\\ 0&\text{otherwise.}\end{cases}\] It suffices to show this for \(k_{p}=\Theta^{IX_{p}I}_{\xi_{p},\eta_{p}}\) and \(k_{q}=\Theta^{IX_{q}I}_{\xi_{q},\eta_{q}}\) for some \(\xi_{p},\eta_{p}\in IX_{p}I\) and \(\xi_{q},\eta_{q}\in IX_{q}I\). By (2.17) and Nica-covariance of \((\pi,t)\), we have that \[\tilde{\psi}_{p}(\Theta^{IX_{p}I}_{\xi_{p},\eta_{p}})\tilde{\psi}_{ q}(\Theta^{IX_{q}I}_{\xi_{q},\eta_{q}}) =\psi_{p}(\Theta^{X_{p}}_{\xi_{p},\eta_{p}})\psi_{q}(\Theta^{X_{q}}_{\xi _{q},\eta_{q}})\] \[=\begin{cases}\psi_{w}(\iota_{p}^{w}(\Theta^{X_{p}}_{\xi_{p},\eta_{ p}})\iota_{q}^{w}(\Theta^{X_{q}}_{\xi_{q},\eta_{q}}))&\text{if }pP\cap qP=wP\text{ for some }w\in P,\\ 0&\text{otherwise.}\end{cases}\] Suppose \(pP\cap qP=wP\) for some \(w\in P\). Since the assumptions of Proposition 2.6.4 are satisfied, by (2.15) we have that \[[\iota_{p}^{w}(\Theta^{X_{p}}_{\xi_{p},\eta_{p}})\iota_{q}^{w}(\Theta^{X_{q}}_{ \xi_{q},\eta_{q}})]|_{IX_{w}I}=j_{p}^{w}(\Theta^{IX_{p}I}_{\xi_{p},\eta_{p}})j_ {q}^{w}(\Theta^{IX_{q}I}_{\xi_{q},\eta_{q}}),\] and by (2.16) we have that \[\iota_{p}^{w}(\Theta^{X_{p}}_{\xi_{p},\eta_{p}})\iota_{q}^{w}(\Theta^{X_{q}}_{\xi_ {q},\eta_{q}})\in\overline{\mathrm{span}}\{\Theta^{X_{w}}_{\xi_{w},\eta_{w}}\mid \xi_{w},\eta_{w}\in IX_{w}I\}.\] Another application of (2.17) then yields that \[\tilde{\psi}_{p}(\Theta^{IX_{p}I}_{\xi_{p},\eta_{p}})\tilde{\psi}_{q}(\Theta^{ IX_{q}I}_{\xi_{q},\eta_{q}})=\begin{cases}\tilde{\psi}_{w}(j_{p}^{w}(\Theta^{ IX_{p}I}_{\xi_{p},\eta_{p}})j_{q}^{w}(\Theta^{IX_{q}I}_{\xi_{q},\eta_{q}}))&\text{if $pP\cap qP=wP$ for some $w\in P$},\\ 0&\text{otherwise},\end{cases}\] as required. The second claim is a direct consequence of Nica-covariance. The structure of \(\mathrm{C}^{*}(\pi|_{I},t|_{IXI})\) is linked with that of \(\langle\pi(I)\rangle\subseteq\mathrm{C}^{*}(\pi,t)\). **Proposition 2.6.7**.: _Let \(P\) be a unital right LCM subsemigroup of a discrete group \(G\). Let \(X\) be a compactly aligned product system over \(P\) with coefficients in a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal that is positively invariant for \(X\). Let \((\pi,t)\) be a Nica-covariant representation of \(X\). 
Then the following hold:_ * \(\langle\pi(I)\rangle=\overline{\mathrm{span}}\{t_{p}(X_{p})\pi(I)t_{q}(X_{q}) ^{*}\mid p,q\in P\}\)_;_ * \(\mathrm{C}^{*}(\pi|_{I},t|_{IXI})=[\pi(I)\,\langle\pi(I)\rangle\,\pi(I)]\)_, and thus_ \(\mathrm{C}^{*}(\pi|_{I},t|_{IXI})\) _is a hereditary, full C*-subalgebra of_ \(\langle\pi(I)\rangle\)_._ **Proof.** Item (i) follows by Nica-covariance of \((\pi,t)\). Item (ii) follows as a standard C*-result. Suppose now that \(P=\mathbb{Z}_{+}^{d}\) and \(X\) is strong compactly aligned. An application of the Gauge-Invariant Uniqueness Theorem permits the identification of \(\mathcal{NT}_{IXI}\) and \(\mathcal{NO}_{IXI}\) with C*-subalgebras of \(\mathcal{NT}_{X}\) and \(\mathcal{NO}_{X}\), respectively. We note that the Toeplitz-Nica-Pimsner case extends to right LCM semigroups using [31, Theorem 6.4] (but we will not require it here). **Proposition 2.6.8**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal that is positively invariant for \(X\). Then the inclusion \(IXI\hookrightarrow X\) induces a canonical embedding_ \[\mathcal{NT}_{IXI}\cong\mathrm{C}^{*}(\overline{\pi}_{X}|_{I},\overline{t}_{ X}|_{IXI})\subseteq\mathcal{NT}_{X},\] _and thus \(\mathcal{NT}_{IXI}\) is a hereditary, full C*-subalgebra of \(\langle\overline{\pi}_{X}(I)\rangle\)._ **Proof.** Note that \(IXI\) is strong compactly aligned by Proposition 2.6.5, and \((\overline{\pi}_{X}|_{I},\overline{t}_{X}|_{IXI})\) is a Nica-covariant representation of \(IXI\) by Proposition 2.6.6. Injectivity of \((\overline{\pi}_{X}|_{I},\overline{t}_{X}|_{IXI})\) is inherited from \((\overline{\pi}_{X},\overline{t}_{X})\), and \((\overline{\pi}_{X}|_{I},\overline{t}_{X}|_{IXI})\) admits a gauge action by restriction. Hence it suffices to show that \(\overline{\pi}_{X}|_{I}\times\overline{t}_{X}|_{IXI}\colon\mathcal{NT}_{IXI} \to\mathrm{C}^{*}(\overline{\pi}_{X}|_{I},\overline{t}_{X}|_{IXI})\) is injective on the fixed point algebra; equivalently on the \([\underline{0},\underline{1}_{[d]}]\)-core by Proposition 2.5.13. To this end, first note that we may take \((\overline{\pi}_{X},\overline{t}_{X})\) to be the Fock representation without loss of generality. Let \(\psi_{\underline{n}}\colon\mathcal{K}(IX_{\underline{n}}I)\to\mathcal{NT}_{X}\) be the \(*\)-homomorphism induced by \((\overline{\pi}_{X}|_{I},\overline{t}_{X,\underline{n}}|_{IX_{\underline{n}}I})\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\). Take \(f\in\ker\overline{\pi}_{X}|_{I}\times\overline{t}_{X}|_{IXI}\cap B^{(\overline {\pi}_{IXI},\overline{t}_{IXI})}_{[\underline{0},\underline{1}_{[d]}]}\), so that we may write \[f=\overline{\pi}_{IXI}(a)+\sum\{\overline{\psi}_{IXI},\underline{n}(k_{ \underline{n}})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{[d]}\},\] for some \(a\in I\) and \(k_{\underline{n}}\in\mathcal{K}(IX_{\underline{n}}I)\) for all \(\underline{0}\neq\underline{n}\leq\underline{1}_{[d]}\). Recall that we write \(\overline{\pi}_{IXI}(a)=\overline{\psi}_{IXI,\underline{0}}(k_{\underline{0}})\) for \(k_{\underline{0}}:=\phi_{IX_{\underline{0}}I}(a)\), and in turn \(\overline{\pi}_{X}|_{I}(a)=\psi_{\underline{0}}(k_{\underline{0}})\). Towards contradiction, suppose that \(f\neq 0\). Then we may choose \(\underline{0}\leq\underline{r}\leq\underline{1}_{[d]}\) minimal such that \(k_{\underline{r}}\neq 0\). Let \(P_{\underline{r}}\colon\mathcal{F}X\to X_{\underline{r}}\) be the canonical projection. 
Then \[\iota(k_{\underline{r}})=P_{\underline{r}}\left(\sum\{\psi_{\underline{n}}(k_{ \underline{n}})\mid\underline{0}\leq\underline{n}\leq\underline{1}_{[d]}\} \right)P_{\underline{r}}=0,\] where \(\iota\colon\mathcal{K}(IX_{\underline{r}}I)\to\mathcal{K}(X_{\underline{r}})\) is the embedding guaranteed by Lemma 2.6.1. Hence \(k_{\underline{r}}=0\), which is a contradiction. We conclude that \(\overline{\pi}_{X}|_{I}\times\overline{t}_{X}|_{IXI}\) is injective on the \([\underline{0},\underline{1}_{[d]}]\)-core, as required. Proposition 2.6.7 then completes the proof. To establish the corresponding result for \(\mathcal{NO}_{IXI}\), we first generalise [34, Proposition 9.2]. **Proposition 2.6.9**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal that is positively invariant for \(X\). Then, for all \(F\subseteq[d]\), we have that:_ * \(\mathcal{J}_{F}(IXI)=I\cap\mathcal{J}_{F}(X)\)_;_ * \(\mathcal{I}_{F}(IXI)=I\cap\mathcal{I}_{F}(X)\)_._ Proof.: (i) The claim holds trivially for \(F=\emptyset\), so assume that \(F\neq\emptyset\). Let \(a\in\mathcal{J}_{F}(IXI)\subseteq I\). It suffices to show that \(a\in\mathcal{J}_{F}(X)\). To this end, fix \(i\in[d]\). By definition, we have that \(a\in(\phi_{IX\underline{i}})^{-1}(\mathcal{K}(IX_{\underline{i}}I))\). Since \(I\) is positively invariant for \(X_{\underline{i}}\), we may apply Lemma 2.2.6 to deduce that \(IX_{\underline{i}}I=IX_{\underline{i}}\), and an application of [34, Lemma 9.1] then yields that \[(\phi_{IX_{\underline{i}}})^{-1}(\mathcal{K}(IX_{\underline{i}}I))=I\cap\phi _{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}})).\] Hence \(a\in\phi_{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}}))\) for all \(i\in[d]\), as required. Moreover, since \(\ker\phi_{\underline{i}}\cap I\subseteq\ker\phi_{IX_{\underline{i}}I}\) for every \(i\in[d]\), we have that \((\bigcap_{i\in F}\ker\phi_{IX_{\underline{i}}I})^{-1}\subseteq(\bigcap_{i\in F }\ker\phi_{\underline{i}})^{\perp}\cap I\). Hence \(a\in(\bigcap_{i\in F}\ker\phi_{\underline{i}})^{\perp}\) and thus \(a\in\mathcal{J}_{F}(X)\), as required. For the reverse inclusion, let \(a\in I\cap\mathcal{J}_{F}(X)\). For each \(i\in[d]\), we have that \[a\in I\cap\phi_{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}}))=(\phi_{IX _{\underline{i}}I})^{-1}(\mathcal{K}(IX_{\underline{i}}I))\] by Lemma 2.2.6 and [34, Lemma 9.1]. Therefore, it suffices to show that \(a\in(\bigcap_{i\in F}\ker\phi_{IX_{\underline{i}}I})^{\perp}\). To this end, fix \(b\in\bigcap_{i\in F}\ker\phi_{IX_{\underline{i}}I}\). Another application of [34, Lemma 9.1] gives that \[b\in I\cap(\bigcap_{i\in F}\ker\phi_{\underline{i}}).\] Since \(a\in(\bigcap_{i\in F}\ker\phi_{\underline{i}})^{\perp}\), we therefore have that \(ab=0\), as required. (ii) Once again, we may assume without loss of generality that \(F\neq\emptyset\). Take \(a\in\mathcal{I}_{F}(IXI)\subseteq I\) and fix \(\underline{n}\perp F\). We have that \[\big{\langle}X_{\underline{n}},aX_{\underline{n}}\big{\rangle} \subseteq[\big{\langle}X_{\underline{n}},IaIX_{\underline{n}}\big{\rangle} ]\subseteq[\big{\langle}IX_{\underline{n}},aIX_{\underline{n}}\big{\rangle}]\] \[\subseteq[\big{\langle}IX_{\underline{n}}I,\phi_{IX_{\underline{n }}I}(a)(IX_{\underline{n}}I)\big{\rangle}]\subseteq\mathcal{J}_{F}(IXI) \subseteq\mathcal{J}_{F}(X),\] by using Lemma 2.2.6 and item (i). Thus \(a\in I\cap\mathcal{I}_{F}(X)\), as required. 
For the reverse inclusion, fix \(a\in I\cap\mathcal{I}_{F}(X)\) and \(\underline{n}\perp F\). We have that \[\big{\langle}IX_{\underline{n}}I,\phi_{IX_{\underline{n}}I}(a)(IX_{\underline{n}}I)\big{\rangle}\subseteq\big{\langle}X_{\underline{n}},aX_{\underline{n}}\big{\rangle}\subseteq I\cap\mathcal{J}_{F}(X)=\mathcal{J}_{F}(IXI),\] using positive invariance of \(I\), the fact that \(a\in\mathcal{I}_{F}(X)\) and item (i). Thus \(a\in\mathcal{I}_{F}(IXI)\), completing the proof. **Proposition 2.6.10**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal that is positively invariant for \(X\). Then the inclusion \(IXI\hookrightarrow X\) induces a canonical embedding_ \[\mathcal{NO}_{IXI}\cong\mathrm{C}^{*}(\pi_{X}^{\mathcal{I}}|_{I},t_{X}^{\mathcal{I}}|_{IXI})\subseteq\mathcal{NO}_{X},\] _and thus \(\mathcal{NO}_{IXI}\) is a hereditary, full C*-subalgebra of \(\big{\langle}\pi_{X}^{\mathcal{I}}(I)\big{\rangle}\)._ Proof.: Being the restriction of an injective Nica-covariant representation of \(X\) that admits a gauge action, \((\pi_{X}^{\mathcal{I}}|_{I},t_{X}^{\mathcal{I}}|_{IXI})\) is an injective Nica-covariant representation of \(IXI\) that admits a gauge action. Thus it suffices to show that \((\pi_{X}^{\mathcal{I}}|_{I},t_{X}^{\mathcal{I}}|_{IXI})\) is a CNP-representation of \(IXI\) by [16, Theorem 4.2]. To this end, fix \(\emptyset\neq F\subseteq[d]\) and \(a\in\mathcal{I}_{F}(IXI)\). For each \(\underline{n}\in\mathbb{Z}_{+}^{d}\), let \(\tilde{\psi}_{\underline{n}}\colon\mathcal{K}(IX_{\underline{n}}I)\to\mathcal{NO}_{X}\) be the \(*\)-homomorphism induced by \((\pi_{X}^{\mathcal{I}}|_{I},t_{X,\underline{n}}^{\mathcal{I}}|_{IX_{\underline{n}}I})\). We must show that \[\pi_{X}^{\mathcal{I}}|_{I}(a)+\sum\{(-1)^{|\underline{n}|}\tilde{\psi}_{\underline{n}}(\phi_{IX_{\underline{n}}I}(a))\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}=0.\] Fix \(\underline{0}\neq\underline{n}\leq\underline{1}_{F}\). We claim that \[\phi_{\underline{n}}(a)\in\overline{\mathrm{span}}\{\Theta^{X_{\underline{n}}}_{\xi_{\underline{n}},\eta_{\underline{n}}}\mid\xi_{\underline{n}},\eta_{\underline{n}}\in IX_{\underline{n}}I\}.\] To see this, first note that \(a\in I\cap\mathcal{I}_{F}(X)\) by item (ii) of Proposition 2.6.9, and so \(\phi_{\underline{n}}(a)\in\mathcal{K}(X_{\underline{n}})\). Let \((u_{\lambda})_{\lambda\in\Lambda}\) be an approximate unit of \(I\). Then an application of Corollary 2.2.7 yields that \[\phi_{\underline{n}}(u_{\lambda})\phi_{\underline{n}}(a)\phi_{\underline{n}}(u_{\lambda})\in\overline{\operatorname{span}}\{\Theta^{X_{\underline{n}}}_{\xi_{\underline{n}},\eta_{\underline{n}}}\mid\xi_{\underline{n}},\eta_{\underline{n}}\in IX_{\underline{n}}I\}\text{ for all }\lambda\in\Lambda.\] Since \(a\in I\), we also have that \[\|\cdot\|\text{-}\lim_{\lambda}\phi_{\underline{n}}(u_{\lambda})\phi_{\underline{n}}(a)\phi_{\underline{n}}(u_{\lambda})=\phi_{\underline{n}}(a),\] and consequently \[\phi_{\underline{n}}(a)\in\overline{\operatorname{span}}\{\Theta^{X_{\underline{n}}}_{\xi_{\underline{n}},\eta_{\underline{n}}}\mid\xi_{\underline{n}},\eta_{\underline{n}}\in IX_{\underline{n}}I\},\] as claimed. Note also that \(\phi_{\underline{n}}(a)|_{IX_{\underline{n}}I}=\phi_{IX_{\underline{n}}I}(a)\) by definition.
An application of (2.17) then gives that \[\tilde{\psi}_{\underline{n}}(\phi_{IX_{\underline{n}}I}(a))=\psi^{\mathcal{I }}_{X,\underline{n}}(\phi_{\underline{n}}(a))\text{ for all }\underline{0}\neq \underline{n}\leq\underline{1}_{F}.\] From this we obtain that \[\pi^{\mathcal{I}}_{X}|_{I}(a)+\sum\{(-1)^{|\underline{n}|}\tilde{ \psi}_{\underline{n}}(\phi_{IX_{\underline{n}}I}(a))\mid\underline{0}\neq \underline{n}\leq\underline{1}_{F}\}=\] \[=\pi^{\mathcal{I}}_{X}(a)+\sum\{(-1)^{|\underline{n}|}\psi^{ \mathcal{I}}_{X,\underline{n}}(\phi_{\underline{n}}(a))\mid\underline{0}\neq \underline{n}\leq\underline{1}_{F}\}=0,\] where the final equality follows from the fact that \(a\in\mathcal{I}_{F}(X)\), together with the fact that \((\pi^{\mathcal{I}}_{X},\ell^{\mathcal{I}}_{X})\) is a CNP-representation of \(X\). Thus \((\pi^{\mathcal{I}}_{X}|_{I},\ell^{\mathcal{I}}_{X}|_{IXI})\) is a CNP-representation of \(IXI\), as required. Proposition 2.6.7 then completes the proof. ## 3. Relative \(2^{d}\)-tuples and relative Cuntz-Nica-Pimsner algebras Parametrising the gauge-invariant ideals of the Toeplitz-Pimsner algebra of a C*-correspondence is facilitated using relative Cuntz-Pimsner algebras. Hence our first goal is to suitably adapt the relative Cuntz-Pimsner algebra construction for a strong compactly aligned product system \(X\). We do this by mimicking the construction of \(\mathcal{NO}_{X}\), and exploiting strong compact alignment in order to define our relative Cuntz-Nica-Pimsner algebras in terms of simple algebraic relations induced by finitely many subsets of the coefficient algebra. As an intermediate step towards the parametrisation, we study further quotients in-between \(\mathcal{NT}_{X}\) and \(\mathcal{NO}_{X}\). ### Relative \(2^{d}\)-tuples and induced ideals We start by paring down the properties of the family \(\mathcal{I}:=\{\mathcal{I}_{F}\}_{F\subseteq[d]}\) and the CNP-relations of [16] covered in Subsection 2.5. **Definition 3.1.1**.: Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). A \(2^{d}\)_-tuple (of \(X\))_ is a family \(\mathcal{L}:=\{\mathcal{L}_{F}\}_{F\subseteq[d]}\) such that \(\mathcal{L}_{F}\) is a non-empty subset of \(A\) for all \(F\subseteq[d]\). A \(2^{d}\)-tuple \(\mathcal{L}\) of \(X\) is called _relative_ if \[\mathcal{L}_{F}\subseteq\bigcap\{\phi_{\underline{1}}^{-1}(\mathcal{K}(X_{ \underline{i}}))\mid i\in F\}\text{ for all }\emptyset\neq F\subseteq[d].\] **Remark 3.1.2**.: We stipulate that the sets \(\mathcal{L}_{F}\) are non-empty for convenience. More specifically, the sets \(\mathcal{L}_{F}\) are designed to generate certain "relative CNP-relations", and so \(\mathcal{L}_{F}=\{0\}\) plays the same role as \(\mathcal{L}_{F}=\emptyset\). In the former case the relative CNP-relations are satisfied trivially, and in the latter case they are satisfied vacuously. Functionally there is no difference, and it is more convenient to take \(\mathcal{L}_{F}=\{0\}\) as in this case \(\mathcal{L}_{F}\) is an ideal. We write \(\mathcal{L}\subseteq\mathcal{L}^{\prime}\) for \(2^{d}\)-tuples \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\) if and only if \(\mathcal{L}_{F}\subseteq\mathcal{L}^{\prime}_{F}\) for all \(F\subseteq[d]\). This defines a partial order on the set of \(2^{d}\)-tuples of \(X\). We say that \(\mathcal{L}=\mathcal{L}^{\prime}\) if and only if \(\mathcal{L}\subseteq\mathcal{L}^{\prime}\) and \(\mathcal{L}^{\prime}\subseteq\mathcal{L}\). Let \((\pi,t)\) be a Nica-covariant representation of \(X\). 
The crucial property of a relative \(2^{d}\)-tuple \(\mathcal{L}\) is that \[\pi(a)q_{F}=\pi(a)+\sum\{(-1)^{|\underline{n}|}\psi_{\underline{n}}(\phi_{ \underline{n}}(a))\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}\in \operatorname{C}^{*}(\pi,t)\text{ for all }a\in\mathcal{L}_{F},F\subseteq[d],\] using Proposition 2.5.10. This allows us to extend the ideas of [16] in a natural way. **Definition 3.1.3**.: Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be a relative \(2^{d}\)-tuple of \(X\) and let \((\pi,t)\) be a Nica-covariant representation of \(X\). We define the _ideal of the \(\mathcal{L}\)-relative CNP-relations with respect to \((\pi,t)\)_ to be \[\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}:=\sum\{\mathfrak{J}_{\mathcal{L},F}^{(\pi,t)}\mid F\subseteq[d]\}\subseteq\mathrm{C}^{*}(\pi,t),\;\text{where}\; \mathfrak{J}_{\mathcal{L},F}^{(\pi,t)}:=\langle\pi(\mathcal{L}_{F})q_{F} \rangle\;\;\text{for all}\;F\subseteq[d].\] We say that \(\mathcal{L}\)_induces_\(\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}\). Being an algebraic sum of ideals in \(\mathrm{C}^{*}(\pi,t)\), the space \(\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}\) is itself an ideal in \(\mathrm{C}^{*}(\pi,t)\). When \((\pi,t)\) admits a gauge action, the ideal \(\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}\) is gauge-invariant. By setting \(\mathcal{L}=\mathcal{I}\), we recover \(\mathfrak{J}_{\mathcal{I}}^{(\pi,t)}\) as defined in [16]. In many cases, we may assume that \(\mathcal{L}\) is a family of ideals without loss of generality. **Proposition 3.1.4**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be a relative \(2^{d}\)-tuple of \(X\) and let \((\pi,t)\) be a Nica-covariant representation of \(X\). Then \(\langle\mathcal{L}\rangle:=\{\langle\mathcal{L}_{F}\rangle\}_{F\subseteq[d]}\) is a relative \(2^{d}\)-tuple of \(X\) such that \(\mathcal{L}\subseteq\langle\mathcal{L}\rangle\) and \(\mathfrak{J}_{\mathcal{L},F}^{(\pi,t)}=\mathfrak{J}_{\langle\mathcal{L}\rangle,F}^{(\pi,t)}\) for all \(F\subseteq[d]\). In particular, we have that \(\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}=\mathfrak{J}_{\langle\mathcal{L}\rangle }^{(\pi,t)}\)._ **Proof.** It is immediate that \(\mathcal{L}\subseteq\langle\mathcal{L}\rangle\), and that \(\langle\mathcal{L}\rangle\) is a relative \(2^{d}\)-tuple of \(X\). It remains to see that \(\mathfrak{J}_{\mathcal{L},F}^{(\pi,t)}=\mathfrak{J}_{\langle\mathcal{L}\rangle,F}^{(\pi,t)}\) for all \(F\subseteq[d]\), as the final claim follows as an immediate consequence. To this end, fix \(F\subseteq[d]\). Since \(\mathcal{L}_{F}\subseteq\langle\mathcal{L}_{F}\rangle\), we have that \(\mathfrak{J}_{\mathcal{L},F}^{(\pi,t)}\subseteq\mathfrak{J}_{\langle\mathcal{ L}\rangle,F}^{(\pi,t)}\). For the reverse inclusion, for \(b,c\in A\) and \(a\in\mathcal{L}_{F}\), we have that \[\pi(bac)q_{F}=\pi(b)\pi(a)\pi(c)q_{F}=\pi(b)(\pi(a)q_{F})\pi(c)\in\mathfrak{J }_{\mathcal{L},F}^{(\pi,t)},\] using Proposition 2.5.9 in the second equality. It follows that \(\mathfrak{J}_{\langle\mathcal{L}\rangle,F}^{(\pi,t)}\subseteq\mathfrak{J}_{ \mathcal{L},F}^{(\pi,t)}\), as required. Let \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) be relative \(2^{d}\)-tuples of \(X\). It is straightforward to see that their sum \(\mathcal{L}_{1}+\mathcal{L}_{2}\), defined by \((\mathcal{L}_{1}+\mathcal{L}_{2})_{F}:=\mathcal{L}_{1,F}+\mathcal{L}_{2,F}\) for all \(F\subseteq[d]\), is also a relative \(2^{d}\)-tuple. 
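Indeed, here is a minimal verification of the last claim, recorded for convenience: for \(\emptyset\neq F\subseteq[d]\) we have that \[(\mathcal{L}_{1}+\mathcal{L}_{2})_{F}=\mathcal{L}_{1,F}+\mathcal{L}_{2,F}\subseteq\bigcap\{\phi_{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}}))\mid i\in F\},\] since each \(\phi_{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}}))\) is a linear subspace (indeed an ideal) of \(A\), being the preimage of \(\mathcal{K}(X_{\underline{i}})\trianglelefteq\mathcal{L}(X_{\underline{i}})\) under the \(*\)-homomorphism \(\phi_{\underline{i}}\); non-emptiness of each \((\mathcal{L}_{1}+\mathcal{L}_{2})_{F}\) is clear.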
When \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) consist of ideals, summing respects the induced ideals. **Proposition 3.1.5**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) be relative \(2^{d}\)-tuples of \(X\) that consist of ideals and let \((\pi,t)\) be a Nica-covariant representation of \(X\). Then_ \[\mathfrak{J}_{\mathcal{L}_{1}+\mathcal{L}_{2},F}^{(\pi,t)}=\mathfrak{J}_{\mathcal{L}_{1},F}^{(\pi,t)}+\mathfrak{J}_{\mathcal{L}_{2},F}^{(\pi,t)}\;\text{for all}\;F\subseteq[d],\] _and thus \(\mathfrak{J}_{\mathcal{L}_{1}+\mathcal{L}_{2}}^{(\pi,t)}=\mathfrak{J}_{\mathcal{L}_{1}}^{(\pi,t)}+\mathfrak{J}_{\mathcal{L}_{2}}^{(\pi,t)}\)._ **Proof.** This is immediate since \(0\in\mathcal{L}_{1,F},\mathcal{L}_{2,F}\) for all \(F\subseteq[d]\). A first approach towards the parametrisation of the gauge-invariant ideals of \(\mathcal{N}\mathcal{T}_{X}\) would be to establish a correspondence between the relative \(2^{d}\)-tuples of \(X\) and the gauge-invariant ideals of \(\mathcal{N}\mathcal{T}_{X}\) that they induce. However, this is insufficient as different relative \(2^{d}\)-tuples may induce the same gauge-invariant ideal of \(\mathcal{N}\mathcal{T}_{X}\). We provide an example to this effect. **Remark 3.1.6**.: Let \(\mathcal{L}\) be a relative \(2^{d}\)-tuple of \(X\) that consists of ideals and \((\pi,t)\) be a Nica-covariant representation. Then the relative \(2^{d}\)-tuple \(\mathcal{L}^{\prime}\) defined by \[\mathcal{L}^{\prime}_{F}:=\begin{cases}\mathcal{L}_{\emptyset}&\text{if }F=\emptyset,\\ \mathcal{L}_{F}+\mathcal{L}_{\emptyset}\cap(\bigcap\{\phi_{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}}))\mid i\in F\})&\text{if }\emptyset\neq F\subseteq[d],\end{cases}\] satisfies \(\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}=\mathfrak{J}_{\mathcal{L}^{\prime}}^{(\pi,t)}\). Indeed, on one hand we have that \(\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}\subseteq\mathfrak{J}_{\mathcal{L}^{\prime}}^{(\pi,t)}\) since \(\mathcal{L}\subseteq\mathcal{L}^{\prime}\). On the other hand, fix \(\emptyset\neq F\subseteq[d]\) and \(a\in\mathcal{L}_{\emptyset}\) satisfying \(\phi_{\underline{i}}(a)\in\mathcal{K}(X_{\underline{i}})\) for all \(i\in F\). We claim that \[\psi_{\underline{n}}(\phi_{\underline{n}}(a))\in\langle\pi(\mathcal{L}_{\emptyset})\rangle=\mathfrak{J}_{\mathcal{L},\emptyset}^{(\pi,t)}\;\text{for all}\;\underline{0}\neq\underline{n}\leq\underline{1}_{F}.\] We will prove this by induction on \(|\underline{n}|\). When \(|\underline{n}|=1\), we must have that \(\underline{n}=\underline{i}\) for some \(i\in F\). In turn, for an approximate unit \((k_{\underline{i},\lambda})_{\lambda\in\Lambda}\subseteq\mathcal{K}(X_{\underline{i}})\), we have that \[\psi_{\underline{i}}(\phi_{\underline{i}}(a))=\left\|\cdot\right\|\!-\lim_{\lambda}\psi_{\underline{i}}(\phi_{\underline{i}}(a)k_{\underline{i},\lambda})=\left\|\cdot\right\|\!-\lim_{\lambda}\pi(a)\psi_{\underline{i}}(k_{\underline{i},\lambda})\in\left\langle\pi(\mathcal{L}_{\emptyset})\right\rangle,\] as required. Now suppose that the claim holds for all \(\underline{n}\) such that \(\underline{0}\neq\underline{n}\leq\underline{1}_{F}\) and \(|\underline{n}|=N\) for some \(1\leq N<|F|\). Fix \(\underline{m}\) such that \(\underline{0}\neq\underline{m}\leq\underline{1}_{F}\) and \(|\underline{m}|=N+1\).
We may express \(\underline{m}\) in the form \(\underline{m}=\underline{n}+\underline{i}\) for some \(\underline{0}\neq\underline{n}\leq\underline{1}_{F}\) and \(i\in F\) such that \(\underline{n}\perp\underline{i}\) and \(|\underline{n}|=N\). We have that \[\psi_{\underline{m}}(\phi_{\underline{m}}(a))=\left\|\cdot\right\|\!-\lim_{\lambda}\psi_{\underline{m}}(\iota^{\underline{m}}_{\underline{n}}(\phi_{\underline{n}}(a))\iota^{\underline{m}}_{\underline{i}}(k_{\underline{i},\lambda}))=\left\|\cdot\right\|\!-\lim_{\lambda}\psi_{\underline{n}}(\phi_{\underline{n}}(a))\psi_{\underline{i}}(k_{\underline{i},\lambda})\in\left\langle\pi(\mathcal{L}_{\emptyset})\right\rangle,\] by the inductive hypothesis, Proposition 2.5.7, and Nica-covariance (noting that \(\underline{n}\vee\underline{i}=\underline{n}+\underline{i}=\underline{m}\) since \(\underline{n}\perp\underline{i}\)). This finishes the proof of the claim. Therefore, we have that \[\pi(a)q_{F}=\pi(a)+\sum\{(-1)^{|\underline{n}|}\psi_{\underline{n}}(\phi_{\underline{n}}(a))\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}\in\left\langle\pi(\mathcal{L}_{\emptyset})\right\rangle\subseteq\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}.\] Hence \[\pi\left(\mathcal{L}_{\emptyset}\cap\left(\bigcap\{\phi_{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}}))\mid i\in F\}\right)\right)q_{F}\subseteq\mathfrak{J}_{\mathcal{L}}^{(\pi,t)},\] from which it follows that \(\mathfrak{J}_{\mathcal{L}^{\prime}}^{(\pi,t)}\subseteq\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}\), as required. However, notice that \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\) may differ in general. For example, assume that the left actions of the fibres of \(X\) are by compact operators. Then any \(2^{d}\)-tuple of \(X\) is automatically relative. Let \(\mathcal{L}_{\emptyset}\) be a non-zero ideal and set \(\mathcal{L}_{F}=\{0\}\) for all \(\emptyset\neq F\subseteq[d]\). Then \(\mathcal{L}\neq\mathcal{L}^{\prime}\), as claimed. To remedy this issue, we will instead look for the largest relative \(2^{d}\)-tuple inducing a fixed gauge-invariant ideal of \(\mathcal{NT}_{X}\). **Definition 3.1.7**.: Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{M}\) be a relative \(2^{d}\)-tuple of \(X\) and let \((\pi,t)\) be a Nica-covariant representation of \(X\). We say that \(\mathcal{M}\) is a _maximal \(2^{d}\)-tuple (of \(X\)) with respect to \((\pi,t)\)_ if whenever \(\mathcal{L}\) is a relative \(2^{d}\)-tuple of \(X\) such that \(\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}=\mathfrak{J}_{\mathcal{M}}^{(\pi,t)}\) and \(\mathcal{M}\subseteq\mathcal{L}\), we have that \(\mathcal{L}=\mathcal{M}\). When we replace \((\pi,t)\) by \((\overline{\pi}_{X},\overline{t}_{X})\), we will refer to \(\mathcal{M}\) simply as a _maximal \(2^{d}\)-tuple (of \(X\))_. The following proposition establishes existence and uniqueness of maximal \(2^{d}\)-tuples. **Proposition 3.1.8**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be a relative \(2^{d}\)-tuple of \(X\) and let \((\pi,t)\) be a Nica-covariant representation of \(X\).
Then there exists a unique relative \(2^{d}\)-tuple \(\mathcal{M}\) of \(X\) such that:_ * \(\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}=\mathfrak{J}_{\mathcal{M}}^{(\pi,t)}\)_, and_ * \(\mathcal{L}^{\prime}\subseteq\mathcal{M}\) _for every relative_ \(2^{d}\)_-tuple_ \(\mathcal{L}^{\prime}\) _of_ \(X\) _satisfying_ \(\mathfrak{J}_{\mathcal{L}^{\prime}}^{(\pi,t)}=\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}\)_._ _In particular, \(\mathcal{M}\) is a maximal \(2^{d}\)-tuple of \(X\) with respect to \((\pi,t)\) which consists of ideals._ **Proof.** For each \(F\subseteq[d]\), define \[\mathcal{M}_{F}:=\bigcup\{\mathcal{L}^{\prime}_{F}\mid\mathcal{L}^{\prime}\text { is a relative $2^{d}$-tuple of $X$ such that }\mathfrak{J}_{\mathcal{L}^{\prime}}^{(\pi,t)}=\mathfrak{J}_{\mathcal{L}}^{(\pi,t) }\}.\] The union is well-defined, since it includes \(\mathcal{L}_{F}\) by assumption, and the index takes values in the set \(\mathcal{P}(A)^{2^{d}}\). Each \(\mathcal{M}_{F}\) is a non-empty subset of \(A\) by construction, so \(\mathcal{M}\) is a \(2^{d}\)-tuple of \(X\). Fix \(\emptyset\neq F\subseteq[d]\) and take \(a\in\mathcal{M}_{F}\). Then \(a\in\mathcal{L}^{\prime}_{F}\) for some relative \(2^{d}\)-tuple \(\mathcal{L}^{\prime}\), and hence \(\phi_{\underline{i}}(a)\in\mathcal{K}(X_{\underline{i}})\) for all \(i\in F\). Therefore, setting \(\mathcal{M}:=\{\mathcal{M}_{F}\}_{F\subseteq[d]}\), we have that \(\mathcal{M}\) is a relative \(2^{d}\)-tuple of \(X\). Because \(\mathcal{L}\subseteq\mathcal{M}\), we have that \(\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}\subseteq\mathfrak{J}_{\mathcal{M}}^{(\pi,t)}\) trivially. For the other inclusion, fix \(F\subseteq[d]\) and \(a\in\mathcal{M}_{F}\). It suffices to show that \(\pi(a)q_{F}\in\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}\). To this end, we have that \(a\in\mathcal{L}^{\prime}_{F}\) for some relative \(2^{d}\)-tuple \(\mathcal{L}^{\prime}\) with the property that \(\mathfrak{J}_{\mathcal{L}^{\prime}}^{(\pi,t)}=\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}\). By definition, we have that \(\pi(a)q_{F}\in\mathfrak{J}_{\mathcal{L}^{\prime}}^{(\pi,t)}=\mathfrak{J}_{ \mathcal{L}}^{(\pi,t)}\), as required. By construction, we have that \(\mathcal{L}^{\prime}\subseteq\mathcal{M}\) for every relative \(2^{d}\)-tuple \(\mathcal{L}^{\prime}\) satisfying \(\mathfrak{J}^{(\pi,t)}_{\mathcal{L}^{\prime}}=\mathfrak{J}^{(\pi,t)}_{\mathcal{ L}}\). It is immediate that \(\mathcal{M}\) is unique and maximal with respect to \((\pi,t)\), being the largest relative \(2^{d}\)-tuple inducing \(\mathfrak{J}^{(\pi,t)}_{\mathcal{L}}\). Finally, by Proposition 3.1.4 we have that \(\langle\mathcal{M}\rangle:=\{\langle\mathcal{M}_{F}\rangle\}_{F\subseteq[d]}\) is a relative \(2^{d}\)-tuple such that \(\mathcal{M}\subseteq\langle\mathcal{M}\rangle\) and \(\mathfrak{J}^{(\pi,t)}_{\mathcal{M}}=\mathfrak{J}^{(\pi,t)}_{\langle\mathcal{ M}\rangle}\). Applying maximality of \(\mathcal{M}\), we have that \(\mathcal{M}=\langle\mathcal{M}\rangle\). This shows that \(\mathcal{M}\) consists of ideals, finishing the proof. **Remark 3.1.9**.: The motivating example of a maximal relative \(2^{d}\)-tuple is the family \(\mathcal{I}\). To see this, first note that \(\mathcal{I}\) is a relative \(2^{d}\)-tuple by definition. Let \(\mathcal{M}\) be the maximal \(2^{d}\)-tuple inducing \(\mathfrak{J}^{(\overline{\pi}_{X},\overline{t}_{X})}_{\mathcal{I}}\), so that \(\mathcal{I}\subseteq\mathcal{M}\). 
If \(a\in\mathcal{M}_{F}\) for \(F\subseteq[d]\), then \[\overline{\pi}_{X}(a)\overline{q}_{X,F}\in\mathfrak{J}^{(\overline{\pi}_{X}, \overline{t}_{X})}_{\mathcal{M}}=\mathfrak{J}^{(\overline{\pi}_{X},\overline {t}_{X})}_{\mathcal{I}},\] and thus we have that \(a\in\mathcal{I}_{F}\) by [16, Proposition 3.4]. Therefore \(\mathcal{M}\subseteq\mathcal{I}\), showing that \(\mathcal{M}=\mathcal{I}\). We wish to ascertain the conditions under which a relative \(2^{d}\)-tuple is promoted to a maximal one. Using \(\mathcal{I}\) as a prototype, we abstract two of its properties. **Definition 3.1.10**.: Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be a \(2^{d}\)-tuple of \(X\). 1. We say that \(\mathcal{L}\) is \(X\)-_invariant_ if \(\big{[}\langle X_{\underline{n}},\mathcal{L}_{F}X_{\underline{n}}\rangle\big{]} \subseteq\mathcal{L}_{F}\), for all \(\underline{n}\perp F\), \(F\subseteq[d]\). 2. We say that \(\mathcal{L}\) is _partially ordered_ if \(\mathcal{L}_{F_{1}}\subseteq\mathcal{L}_{F_{2}}\) whenever \(F_{1}\subseteq F_{2}\subseteq[d]\). When the underlying product system \(X\) is clear from the context, we will abbreviate "\(X\)-invariant" as simply "invariant". Notice that when we take \(F=\emptyset\), condition (i) implies that \(\mathcal{L}_{\emptyset}\) is positively invariant for \(X\) (provided that \(\mathcal{L}_{\emptyset}\) is an ideal). If \(\mathcal{L}_{F}\) is an ideal, then we may drop the closed linear span in condition (i). If \(\mathcal{L}\) is a partially ordered relative \(2^{d}\)-tuple, then \[\mathcal{L}_{F}\subseteq\mathcal{L}_{[d]}\subseteq\bigcap\{\phi_{\underline{ i}}^{-1}(\mathcal{K}(X_{\underline{i}}))\mid i\in[d]\}\text{ for all }F\subseteq[d].\] In particular, we have that \(\pi(a)q_{F}\in\mathrm{C}^{*}(\pi,t)\) for all \(a\in\mathcal{L}_{D}\) and \(D,F\subseteq[d]\), where \((\pi,t)\) is a Nica-covariant representation. The \(2^{d}\)-tuple \(\mathcal{I}\) is invariant, partially ordered and consists of ideals. Since \(\mathcal{I}\) is maximal by Remark 3.1.9, one may be tempted to assert that invariance and partial ordering suffice to capture maximality. However, this is not true in general, as the following counterexample shows. **Example 3.1.11**.: Let \(B\) be a non-zero C*-algebra and let \(A=B\oplus\mathbb{C}\) be its unitisation. Consider the semigroup action \(\alpha\colon\mathbb{Z}_{+}^{2}\to\mathrm{End}(A)\) such that \[\alpha_{(1,0)}=\mathrm{id}_{A}\quad\text{and}\quad\alpha_{(0,1)}(b,\lambda)=(0, \lambda)\text{ for all }b\in B,\lambda\in\mathbb{C}.\] Applying the construction of Subsection 5.3 to the C*-dynamical system \((A,\alpha,\mathbb{Z}_{+}^{2})\), we obtain a strong compactly aligned product system \(X_{\alpha}\) over \(\mathbb{Z}_{+}^{2}\) with coefficients in \(A\). In particular, the left action of every fibre is by compacts, and so any \(2^{2}\)-tuple of \(X_{\alpha}\) is automatically a relative \(2^{2}\)-tuple. Next we define the relative \(2^{2}\)-tuples \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\) of \(X_{\alpha}\) by Notice that \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\) are invariant, partially ordered, consist of ideals and satisfy \(\mathcal{L}\subsetneq\mathcal{L}^{\prime}\). Let \((\pi,t)\) be a Nica-covariant representation. Since \(\mathcal{L}\subseteq\mathcal{L}^{\prime}\), it is clear that \(\mathfrak{J}^{(\pi,t)}_{\mathcal{L}}\subseteq\mathfrak{J}^{(\pi,t)}_{\mathcal{ L}^{\prime}}\). 
It is immediate that \(\pi(\mathcal{L}^{\prime}_{F})q_{F}\subseteq\mathfrak{J}^{(\pi,t)}_{\mathcal{L}}\) for all \(F\in\{\emptyset,\{2\},\{1,2\}\}\). Fix \((b,0)\in\mathcal{L}^{\prime}_{\{1\}}\), and observe that \[\mathfrak{J}^{(\pi,t)}_{\mathcal{L}}\ni\pi(b,0)q_{\{1,2\}}=\pi(b,0)-\psi_{(1,0)}(\phi_{(1,0)}(b,0))-\psi_{(0,1)}(\phi_{(0,1)}(b,0))+\psi_{(1,1)}(\phi_{(1,1)}(b,0))=\pi(b,0)-\psi_{(1,0)}(\phi_{(1,0)}(b,0))=\pi(b,0)q_{\{1\}},\] using the fact that \(\phi_{(0,1)}(b,0)=\phi_{(1,1)}(b,0)=0\) by definition. Therefore we conclude that \(\pi(\mathcal{L}^{\prime}_{\{1\}})q_{\{1\}}\subseteq\mathfrak{J}^{(\pi,t)}_{\mathcal{L}}\), and thus \(\mathfrak{J}^{(\pi,t)}_{\mathcal{L}^{\prime}}\subseteq\mathfrak{J}^{(\pi,t)}_{\mathcal{L}}\). Hence \(\mathcal{L}\) is not maximal with respect to \((\pi,t)\). To better understand maximality, we need to explore \(2^{d}\)-tuples induced by Nica-covariant representations, demonstrating the conditions of Definition 3.1.10 in action. **Definition 3.1.12**.: Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \((\pi,t)\) be a Nica-covariant representation of \(X\). We define \(\mathcal{L}^{(\pi,t)}\) to be the \(2^{d}\)-tuple of \(X\) given by \[\mathcal{L}^{(\pi,t)}_{\emptyset}:=\ker\pi\quad\text{and}\quad\mathcal{L}^{(\pi,t)}_{F}:=\pi^{-1}(B^{(\pi,t)}_{(\underline{0},\underline{1}_{F}]})\text{ for all }\emptyset\neq F\subseteq[d].\] **Proposition 3.1.13**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \((\pi,t)\) be a Nica-covariant representation of \(X\). Then \(\mathcal{L}^{(\pi,t)}\) is invariant, partially ordered and consists of ideals._ _If \((\pi,t)\) is in addition injective, then \(\mathcal{L}^{(\pi,t)}\subseteq\mathcal{I}\), and thus \(\mathcal{L}^{(\pi,t)}\) is relative._ **Proof.** It is clear that \(\mathcal{L}^{(\pi,t)}\) consists of ideals by definition. For invariance, fix \(F\subseteq[d]\), \(a\in\mathcal{L}^{(\pi,t)}_{F}\) and \(\underline{m}\perp F\). Due to Nica-covariance, we have that \[\pi(\big{\langle}X_{\underline{m}},aX_{\underline{m}}\big{\rangle})=t_{\underline{m}}(X_{\underline{m}})^{*}\pi(a)t_{\underline{m}}(X_{\underline{m}})\subseteq t_{\underline{m}}(X_{\underline{m}})^{*}B^{(\pi,t)}_{(\underline{0},\underline{1}_{F}]}t_{\underline{m}}(X_{\underline{m}})\subseteq B^{(\pi,t)}_{(\underline{0},\underline{1}_{F}]}.\] Hence \(\mathcal{L}^{(\pi,t)}\) is invariant, as required. To see that \(\mathcal{L}^{(\pi,t)}\) is partially ordered, first note that the inclusion \(\mathcal{L}^{(\pi,t)}_{\emptyset}\subseteq\mathcal{L}^{(\pi,t)}_{F}\) is immediate for all \(F\subseteq[d]\). Likewise, whenever \(\emptyset\neq F\subseteq D\subseteq[d]\), we have that \(B^{(\pi,t)}_{(\underline{0},\underline{1}_{F}]}\subseteq B^{(\pi,t)}_{(\underline{0},\underline{1}_{D}]}\) and therefore \(\mathcal{L}^{(\pi,t)}_{F}\subseteq\mathcal{L}^{(\pi,t)}_{D}\), as required. The final claim follows by [16, Proposition 3.4] (see Proposition 2.5.11). In order to make further progress with maximality, we use relative \(2^{d}\)-tuples to define relative Cuntz-Nica-Pimsner algebras. **Definition 3.1.14**.: Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be a relative \(2^{d}\)-tuple of \(X\) and let \((\pi,t)\) be a Nica-covariant representation of \(X\).
We say that \((\pi,t)\) is an \(\mathcal{L}\)_-relative CNP-representation (of \(X\))_ if it satisfies \[\pi(\mathcal{L}_{F})q_{F}=\{0\}\text{ for all }F\subseteq[d].\] The universal C*-algebra with respect to the \(\mathcal{L}\)-relative CNP-representations of \(X\) is denoted by \(\mathcal{NO}(\mathcal{L},X)\), and we refer to it as the \(\mathcal{L}\)_-relative Cuntz-Nica-Pimsner algebra (of \(X\))_. We have that \((\pi,t)\) is an \(\mathcal{L}\)-relative CNP-representation if and only if \(\mathfrak{J}^{(\pi,t)}_{\mathcal{L}}=\{0\}\); equivalently if \(\pi\times t\) factors through \(\mathcal{NO}(\mathcal{L},X)\). The following proposition is the analogue of Proposition 2.3.3 for relative Cuntz-Nica-Pimsner algebras, and shows that \(\mathcal{NO}(\mathcal{L},X)\) exists and admits a universal property. **Proposition 3.1.15**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \(\mathcal{L}\) be a relative \(2^{d}\)-tuple of \(X\). Then there exists a C*-algebra \(\mathcal{NO}(\mathcal{L},X)\) and an \(\mathcal{L}\)-relative CNP-representation \((\pi^{\mathcal{L}}_{X},t^{\mathcal{L}}_{X})\) of \(X\) on \(\mathcal{NO}(\mathcal{L},X)\) such that:_ * \(\mathcal{NO}(\mathcal{L},X)=\mathrm{C}^{*}(\pi^{\mathcal{L}}_{X},t^{\mathcal{L}}_{X})\)_;_ * _if_ \((\pi,t)\) _is an_ \(\mathcal{L}\)_-relative CNP-representation of_ \(X\)_, then there exists a (unique) canonical_ \(*\)_-epimorphism_ \(\Phi\colon\mathcal{NO}(\mathcal{L},X)\to\mathrm{C}^{*}(\pi,t)\)_._ _The pair \((\mathcal{NO}(\mathcal{L},X),(\pi^{\mathcal{L}}_{X},t^{\mathcal{L}}_{X}))\) is unique up to canonical \(*\)-isomorphism._ **Proof.** This follows immediately by considering the quotient of \(\mathcal{NT}_{X}\) by \(\mathfrak{J}^{(\overline{\pi}_{X},\overline{t}_{X})}_{\mathcal{L}}\). We will refer to the representation \((\pi^{\mathcal{L}}_{X},t^{\mathcal{L}}_{X})\) as the _universal \(\mathcal{L}\)-relative CNP-representation (of \(X\))_. Since \(\mathcal{NO}(\mathcal{L},X)\) is an equivariant quotient of \(\mathcal{NT}_{X}\), the representation \((\pi^{\mathcal{L}}_{X},t^{\mathcal{L}}_{X})\) admits a gauge action. Notice that \(\{\{0\}\}_{F\subseteq[d]}\) and \(\mathcal{I}\) both constitute relative \(2^{d}\)-tuples of \(X\) and in particular \[\mathcal{NO}(\{\{0\}\}_{F\subseteq[d]},X)=\mathcal{NT}_{X}\quad\text{and}\quad\mathcal{NO}(\mathcal{I},X)=\mathcal{NO}_{X}.\] Likewise, when \(X\) is a C*-correspondence we recover the relative Cuntz-Pimsner algebras. Note that \(\mathcal{NO}(\mathcal{L},X)=\mathcal{NO}(\left\langle\mathcal{L}\right\rangle,X)\) since the covariance is described C*-algebraically. If \((\pi,t)\) is an injective Nica-covariant representation, then \(\mathcal{L}^{(\pi,t)}\) is a relative \(2^{d}\)-tuple by Proposition 3.1.13, and thus we can make sense of \(\mathfrak{J}_{\mathcal{L}^{(\pi,t)}}^{(\overline{\pi}_{X},\overline{t}_{X})}\). The next proposition shows that \(\mathfrak{J}_{\mathcal{L}^{(\pi,t)}}^{(\overline{\pi}_{X},\overline{t}_{X})}\) arises as a kernel when \((\pi,t)\) admits a gauge action. **Proposition 3.1.16**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \((\pi,t)\) be an injective Nica-covariant representation of \(X\).
Then \((\pi,t)\) is an \(\mathcal{L}^{(\pi,t)}\)-relative CNP-representation of \(X\), and consequently there exists a (unique) canonical \(*\)-epimorphism_ \[\Phi\colon\mathcal{NO}(\mathcal{L}^{(\pi,t)},X)\to\mathrm{C}^{*}(\pi,t).\] _Moreover, \(\Phi\) is injective if and only if \((\pi,t)\) admits a gauge action, in which case_ \[\ker\pi\times t=\mathfrak{J}_{\mathcal{L}^{(\pi,t)}}^{(\overline{\pi}_{X},\overline{t}_{X})}.\] **Proof.** Fix \(F\subseteq[d]\) and \(a\in\mathcal{L}_{F}^{(\pi,t)}=\pi^{-1}(B_{(\underline{0},\underline{1}_{F}]}^{(\pi,t)})\). Then \(\pi(a)q_{F}=0\) by (2.12). This shows that \((\pi,t)\) is an \(\mathcal{L}^{(\pi,t)}\)-relative CNP-representation. Thus universality of \(\mathcal{NO}(\mathcal{L}^{(\pi,t)},X)\) guarantees the existence of \(\Phi\). For the second claim, we write \((\tilde{\pi},\tilde{t})\) for \((\pi_{X}^{\mathcal{L}^{(\pi,t)}},t_{X}^{\mathcal{L}^{(\pi,t)}})\), for notational convenience. If \(\Phi\) is injective, then \((\pi,t)\) inherits the gauge action \(\beta\) of \((\tilde{\pi},\tilde{t})\). Conversely, suppose that \((\pi,t)\) admits a gauge action \(\gamma\), and note that \(\Phi\) intertwines the gauge actions (i.e., \(\Phi\) is equivariant). Thus it suffices to show that \(\Phi\) is injective on the fixed point algebra, and by Proposition 2.5.13 this amounts to showing that \(\Phi\) is injective on the \([\underline{0},\underline{1}_{[d]}]\)-core. To reach a contradiction, suppose that there exists \(0\neq f\in\ker\Phi\cap B_{[\underline{0},\underline{1}_{[d]}]}^{(\tilde{\pi},\tilde{t})}\), so that we may write \[f=\tilde{\pi}(a)+\sum\{\tilde{\psi}_{\underline{n}}(k_{\underline{n}})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{[d]}\},\] for some \(a\in A\) and \(k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}})\) for all \(\underline{0}\neq\underline{n}\leq\underline{1}_{[d]}\). Recall that we write \(\tilde{\pi}(a)=\tilde{\psi}_{\underline{0}}(k_{\underline{0}})\) for \(k_{\underline{0}}:=\phi_{\underline{0}}(a)\). Without loss of generality, we may assume that \(f\) is written irreducibly, so that we may choose \(\underline{0}\leq\underline{r}\leq\underline{1}_{[d]}\) minimal such that \(k_{\underline{r}}\neq 0\), and \(\tilde{\psi}_{\underline{r}}(k_{\underline{r}})\notin B_{(\underline{r},\underline{1}_{[d]}]}^{(\tilde{\pi},\tilde{t})}\). If \(\underline{r}=\underline{1}_{[d]}\), then \(f=\tilde{\psi}_{\underline{1}_{[d]}}(k_{\underline{1}_{[d]}})\) and \(\Phi(f)=\psi_{\underline{1}_{[d]}}(k_{\underline{1}_{[d]}})=0\). Injectivity of \((\pi,t)\) then implies that \(k_{\underline{1}_{[d]}}=0\) and hence \(f=0\), a contradiction. So without loss of generality assume that \(\underline{r}<\underline{1}_{[d]}\). Then for every \(\xi_{\underline{r}},\eta_{\underline{r}}\in X_{\underline{r}}\), we have that \[0=t_{\underline{r}}(\xi_{\underline{r}})^{*}\Phi(f)t_{\underline{r}}(\eta_{\underline{r}})=\pi(\langle\xi_{\underline{r}},k_{\underline{r}}\eta_{\underline{r}}\rangle)+\sum\{\psi_{\underline{n}}(k_{\underline{n}}^{\prime})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{[d]}-\underline{r}\},\] where each \(k_{\underline{n}}^{\prime}\) is a suitably defined element of \(\mathcal{K}(X_{\underline{n}})\) for all \(\underline{0}\neq\underline{n}\leq\underline{1}_{[d]}-\underline{r}\).
Hence we have that \(\langle X_{\underline{r}},k_{\underline{r}}X_{\underline{r}}\rangle\subseteq\mathcal{L}_{F}^{(\pi,t)}\) for \(F:=\mathrm{supp}(\underline{1}_{[d]}-\underline{r})\) (which is non-empty since \(\underline{1}_{[d]}\neq\underline{r}\)), and thus we obtain that \(\tilde{\pi}(\langle X_{\underline{r}},k_{\underline{r}}X_{\underline{r}}\rangle)\subseteq B_{(\underline{0},\underline{1}_{[d]}-\underline{r}]}^{(\tilde{\pi},\tilde{t})}\) since \((\tilde{\pi},\tilde{t})\) is an \(\mathcal{L}^{(\pi,t)}\)-relative CNP-representation. In particular, we have that \[\tilde{\psi}_{\underline{r}}(\mathcal{K}(X_{\underline{r}}))\tilde{\psi}_{\underline{r}}(k_{\underline{r}})\tilde{\psi}_{\underline{r}}(\mathcal{K}(X_{\underline{r}}))\subseteq[\tilde{t}_{\underline{r}}(X_{\underline{r}})\tilde{\pi}(\langle X_{\underline{r}},k_{\underline{r}}X_{\underline{r}}\rangle)\tilde{t}_{\underline{r}}(X_{\underline{r}})^{*}]\subseteq[\tilde{t}_{\underline{r}}(X_{\underline{r}})B_{(\underline{0},\underline{1}_{[d]}-\underline{r}]}^{(\tilde{\pi},\tilde{t})}\tilde{t}_{\underline{r}}(X_{\underline{r}})^{*}]\subseteq B_{(\underline{r},\underline{1}_{[d]}]}^{(\tilde{\pi},\tilde{t})},\] and by taking an approximate unit of \(\tilde{\psi}_{\underline{r}}(\mathcal{K}(X_{\underline{r}}))\) we derive that \(\tilde{\psi}_{\underline{r}}(k_{\underline{r}})\in B_{(\underline{r},\underline{1}_{[d]}]}^{(\tilde{\pi},\tilde{t})}\), which is a contradiction. Thus \(\Phi\) is injective on the \([\underline{0},\underline{1}_{[d]}]\)-core, as required. For the last assertion, we have that \(\pi\times t=\Phi\circ Q\) and thus in particular \[\ker\pi\times t=\ker\Phi\circ Q=\ker Q=\mathfrak{J}_{\mathcal{L}^{(\pi,t)}}^{(\overline{\pi}_{X},\overline{t}_{X})}\] for the quotient map \(Q\colon\mathcal{NT}_{X}\to\mathcal{NO}(\mathcal{L}^{(\pi,t)},X)\), and the proof is complete. Injective Nica-covariant representations admitting a gauge action provide an ample supply of maximal relative \(2^{d}\)-tuples. **Proposition 3.1.17**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \((\pi,t)\) be an injective Nica-covariant representation of \(X\) that admits a gauge action. Then \(\mathcal{L}^{(\pi,t)}\) is a maximal \(2^{d}\)-tuple of \(X\) that is contained in \(\mathcal{I}\)._ **Proof.** By Proposition 3.1.13, we have that \(\mathcal{L}^{(\pi,t)}\subseteq\mathcal{I}\). For maximality, let \(\mathcal{L}\) be a relative \(2^{d}\)-tuple of \(X\) such that \(\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})}=\mathfrak{J}_{\mathcal{L}^{(\pi,t)}}^{(\overline{\pi}_{X},\overline{t}_{X})}\) and \(\mathcal{L}^{(\pi,t)}\subseteq\mathcal{L}\). It suffices to show that \(\mathcal{L}\subseteq\mathcal{L}^{(\pi,t)}\). Fix \(F\subseteq[d]\) and \(a\in\mathcal{L}_{F}\). Then \[\overline{\pi}_{X}(a)\overline{q}_{X,F}\in\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})}=\mathfrak{J}_{\mathcal{L}^{(\pi,t)}}^{(\overline{\pi}_{X},\overline{t}_{X})}\] by definition.
Since \(\mathfrak{J}_{\mathcal{L}^{(\pi,t)}}^{(\overline{\pi}_{X},\overline{t}_{X})}=\ker\pi\times t\) by Proposition 3.1.16, we obtain that \[\pi(a)q_{F}=\pi(a)+\sum\{(-1)^{|\underline{n}|}\psi_{\underline{n}}(\phi_{\underline{n}}(a))\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}\] \[=(\pi\times t)(\overline{\pi}_{X}(a)+\sum\{(-1)^{|\underline{n}|}\overline{\psi}_{X,\underline{n}}(\phi_{\underline{n}}(a))\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\})\] \[=(\pi\times t)(\overline{\pi}_{X}(a)\overline{q}_{X,F})=0,\] using Proposition 2.5.10 in the first and third equalities. Thus \(a\in\pi^{-1}(B_{(\underline{0},\underline{1}_{F}]}^{(\pi,t)})=\mathcal{L}_{F}^{(\pi,t)}\) and hence \(\mathcal{L}\subseteq\mathcal{L}^{(\pi,t)}\), as required.

### (E)-\(2^{d}\)-tuples

We are interested in a special class of \(2^{d}\)-tuples that describes the relative Cuntz-Nica-Pimsner algebras \(\mathcal{NO}(\mathcal{L},X)\) in-between \(\mathcal{NT}_{X}\) and \(\mathcal{NO}_{X}\). **Proposition 3.2.1**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be a relative \(2^{d}\)-tuple of \(X\) and let \((\pi,t)\) be an \(\mathcal{L}\)-relative CNP-representation of \(X\). If \((\pi,t)\) is injective, then \(\mathcal{L}\subseteq\mathcal{I}\). Conversely, if \(\mathcal{L}\subseteq\mathcal{I}\) then the universal \(\mathcal{L}\)-relative CNP-representation \((\pi_{X}^{\mathcal{L}},t_{X}^{\mathcal{L}})\) is injective._ **Proof.** First assume that \((\pi,t)\) is injective. Fix \(F\subseteq[d]\) and \(a\in\mathcal{L}_{F}\). Since \((\pi,t)\) is an \(\mathcal{L}\)-relative CNP-representation, we have that \(\pi(a)q_{F}=0\) and hence \(\pi(a)\in B_{(\underline{0},\underline{1}_{F}]}^{(\pi,t)}\). Since \((\pi,t)\) is assumed to be injective, an application of [16, Proposition 3.4] gives that \(a\in\mathcal{I}_{F}\). Hence \(\mathcal{L}\subseteq\mathcal{I}\). Conversely, assume that \(\mathcal{L}\subseteq\mathcal{I}\). Then the canonical quotient map \(\mathcal{NT}_{X}\to\mathcal{NO}_{X}\) (which is injective on \(X\)) factors through the canonical \(*\)-epimorphism \(\mathcal{NO}(\mathcal{L},X)\to\mathcal{NO}_{X}\). Hence \((\pi_{X}^{\mathcal{L}},t_{X}^{\mathcal{L}})\) is injective, finishing the proof. We see that the \(2^{d}\)-tuples \(\mathcal{L}\subseteq\mathcal{I}\) are special, and we give them their own name to reflect this. **Definition 3.2.2**.: Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). We say that a \(2^{d}\)-tuple \(\mathcal{L}\) of \(X\) is an _(E)-\(2^{d}\)-tuple (of \(X\))_ if \(\mathcal{L}\subseteq\mathcal{I}\). Every (E)-\(2^{d}\)-tuple is automatically a relative \(2^{d}\)-tuple with \(\mathcal{L}_{\emptyset}=\{0\}=\mathcal{I}_{\emptyset}\). The "E" of "(E)-\(2^{d}\)-tuple" stands for "Embedding", since \(X\hookrightarrow\mathcal{NO}(\mathcal{L},X)\) isometrically for a relative \(2^{d}\)-tuple \(\mathcal{L}\) if and only if \(\mathcal{L}\subseteq\mathcal{I}\), by Proposition 3.2.1. By Proposition 3.1.13, injective Nica-covariant representations provide the quintessential supply of (E)-\(2^{d}\)-tuples. It is straightforward to check that the sum of two (E)-\(2^{d}\)-tuples is an (E)-\(2^{d}\)-tuple. We also make the following observation. **Proposition 3.2.3**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be a \(2^{d}\)-tuple of \(X\).
If \(\mathcal{L}\) is invariant and satisfies \(\mathcal{L}\subseteq\mathcal{J}\), then \(\mathcal{L}\) is an (E)-\(2^{d}\)-tuple of \(X\)._ **Proof.** Fix \(F\subseteq[d]\) and \(a\in\mathcal{L}_{F}\). By assumption we have that \[\big{\langle}X_{\underline{n}},aX_{\underline{n}}\big{\rangle}\subseteq\big{[} \big{\langle}X_{\underline{n}},\mathcal{L}_{F}X_{\underline{n}}\big{\rangle} \big{]}\subseteq\mathcal{L}_{F}\subseteq\mathcal{J}_{F}\text{ for all }\underline{n} \perp F.\] By definition this means that \(a\in\mathcal{I}_{F}\) and hence \(\mathcal{L}\subseteq\mathcal{I}\), as required. Let \((\pi,t)\) be a Nica-covariant representation and \(\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}\) be the ideal induced by an (E)-\(2^{d}\)-tuple \(\mathcal{L}\). Our goal in the next propositions is to show that we can choose \(\mathcal{L}\) to be in addition invariant and partially ordered, and to consist of ideals. **Proposition 3.2.4**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be an (E)-\(2^{d}\)-tuple of \(X\) and let \((\pi,t)\) be a Nica-covariant representation of \(X\). Then the \(2^{d}\)-tuple \(\operatorname{Inv}(\mathcal{L})\) of \(X\) defined by_ \[\operatorname{Inv}(\mathcal{L})_{F}:=\overline{\operatorname{span}}\{X_{ \underline{n}}(\mathcal{L}_{F})\mid\underline{n}\perp F\}\text{ for all }F\subseteq[d]\] _is an invariant (E)-\(2^{d}\)-tuple consisting of ideals, and is such that_ \[\mathcal{L}\subseteq\operatorname{Inv}(\mathcal{L})\quad\text{and}\quad \mathfrak{J}_{\mathcal{L},F}^{(\pi,t)}=\mathfrak{J}_{\operatorname{Inv}( \mathcal{L}),F}^{(\pi,t)}\text{ for all }F\subseteq[d].\] _In particular, we have that \(\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}=\mathfrak{J}_{\operatorname{Inv}( \mathcal{L})}^{(\pi,t)}\)._ **Proof.** Recall that \(X_{\underline{n}}(\mathcal{L}_{F})\equiv\big{[}\big{\langle}X_{\underline{n}},\mathcal{L}_{F}X_{\underline{n}}\big{\rangle}\big{]}\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\) and \(F\subseteq[d]\). By considering \(\underline{n}=\underline{0}\), we see that \(\mathcal{L}\subseteq\operatorname{Inv}(\mathcal{L})\). Fix \(F\subseteq[d]\). Then \(X_{\underline{n}}(\mathcal{L}_{F})\) is an ideal of \(A\) for all \(\underline{n}\perp F\), and hence \(\operatorname{Inv}(\mathcal{L})_{F}\) is itself an ideal of \(A\). Since \(\mathcal{L}\) is an (E)-\(2^{d}\)-tuple, we have that \(\mathcal{L}_{F}\subseteq\mathcal{I}_{F}\) and thus \(X_{\underline{n}}(\mathcal{L}_{F})\subseteq\mathcal{I}_{F}\) for all \(\underline{n}\perp F\) because \(\mathcal{I}\) is invariant [16, Proposition 2.7]. Thus \(\operatorname{Inv}(\mathcal{L})_{F}\subseteq\mathcal{I}_{F}\), and so \(\operatorname{Inv}(\mathcal{L})\) is an (E)-\(2^{d}\)-tuple that consists of ideals. To see that \(\operatorname{Inv}(\mathcal{L})\) is invariant, fix \(F\subseteq[d],\underline{m}\perp F\) and \(a\in\operatorname{Inv}(\mathcal{L})_{F}\). Since \(\operatorname{Inv}(\mathcal{L})_{F}\) is an ideal, it suffices to show that \(\big{\langle}X_{\underline{m}},aX_{\underline{m}}\big{\rangle}\subseteq \operatorname{Inv}(\mathcal{L})_{F}\). Assume that \(a=\big{\langle}\xi_{\underline{n}},b\eta_{\underline{n}}\big{\rangle}\) for some \(\underline{n}\perp F,\xi_{\underline{n}},\eta_{\underline{n}}\in X_{ \underline{n}}\) and \(b\in\mathcal{L}_{F}\). 
Then we have that \[\big{\langle}X_{\underline{m}},aX_{\underline{m}}\big{\rangle}=\big{\langle}X_{\underline{m}},\big{\langle}\xi_{\underline{n}},b\eta_{\underline{n}}\big{\rangle}\,X_{\underline{m}}\big{\rangle}\subseteq\big{\langle}X_{\underline{n}+\underline{m}},bX_{\underline{n}+\underline{m}}\big{\rangle}\subseteq X_{\underline{n}+\underline{m}}(\mathcal{L}_{F})\subseteq\operatorname{Inv}(\mathcal{L})_{F},\] using that \(\underline{n}+\underline{m}\perp F\) in the final inclusion. It follows that \(\big{\langle}X_{\underline{m}},X_{\underline{n}}(\mathcal{L}_{F})X_{\underline{m}}\big{\rangle}\subseteq\operatorname{Inv}(\mathcal{L})_{F}\) for all \(\underline{n}\perp F\). We obtain that \(\big{\langle}X_{\underline{m}},\operatorname{Inv}(\mathcal{L})_{F}X_{\underline{m}}\big{\rangle}\subseteq\operatorname{Inv}(\mathcal{L})_{F}\) by linearity and continuity of the inner product, as required. Since \(\mathcal{L}\subseteq\operatorname{Inv}(\mathcal{L})\), we have that \(\mathfrak{J}_{\mathcal{L},F}^{(\pi,t)}\subseteq\mathfrak{J}_{\operatorname{Inv}(\mathcal{L}),F}^{(\pi,t)}\) for all \(F\subseteq[d]\). For the reverse inclusion, fix \(F\subseteq[d]\) and \(a\in\operatorname{Inv}(\mathcal{L})_{F}\). Assume that \(a=\big{\langle}\xi_{\underline{n}},b\eta_{\underline{n}}\big{\rangle}\) for some \(\underline{n}\perp F,\xi_{\underline{n}},\eta_{\underline{n}}\in X_{\underline{n}}\) and \(b\in\mathcal{L}_{F}\). We have that \[\pi(a)q_{F}=t_{\underline{n}}(\xi_{\underline{n}})^{*}\pi(b)t_{\underline{n}}(\eta_{\underline{n}})q_{F}=t_{\underline{n}}(\xi_{\underline{n}})^{*}(\pi(b)q_{F})t_{\underline{n}}(\eta_{\underline{n}})\in\mathfrak{J}_{\mathcal{L},F}^{(\pi,t)},\] where we have used Proposition 2.5.9 in the second equality. Therefore \(\pi(X_{\underline{n}}(\mathcal{L}_{F}))q_{F}\subseteq\mathfrak{J}_{\mathcal{L},F}^{(\pi,t)}\) for all \(\underline{n}\perp F\), from which it follows that \(\pi(\operatorname{Inv}(\mathcal{L})_{F})q_{F}\subseteq\mathfrak{J}_{\mathcal{L},F}^{(\pi,t)}\). In total we have that \(\mathfrak{J}_{\mathcal{L},F}^{(\pi,t)}=\mathfrak{J}_{\operatorname{Inv}(\mathcal{L}),F}^{(\pi,t)}\) for all \(F\subseteq[d]\), and thus in particular \(\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}=\mathfrak{J}_{\operatorname{Inv}(\mathcal{L})}^{(\pi,t)}\), finishing the proof. In order to choose \(\mathcal{L}\) to be partially ordered, we need the following auxiliary proposition. **Proposition 3.2.5**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be a relative \(2^{d}\)-tuple of \(X\) and let \((\pi,t)\) be a Nica-covariant representation of \(X\). Fix \(D\subseteq F\subseteq[d]\). If \(a\in\bigcap\{\phi_{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}}))\mid i\in F\}\) and \(\pi(a)q_{D}\in\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}\), then \(\pi(a)q_{F}\in\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}\)._ **Proof.** Without loss of generality, we may assume that \(D=[k]\) (with the convention that if \(k=0\) then \(D=\emptyset\)) and \(F=[\ell]\) for some \(0\leq k\leq\ell\leq d\). Note that we have that \[\pi(a)q_{D}=\pi(a)+\sum\{(-1)^{|\underline{n}|}\psi_{\underline{n}}(\phi_{\underline{n}}(a))\mid\underline{0}\neq\underline{n}\leq\underline{1}_{D}\}\] because \(a\in\bigcap\{\phi_{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}}))\mid i\in F\}\), and hence in particular \(a\in\bigcap\{\phi_{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}}))\mid i\in D\}\) since \(D\subseteq F\).
If \(k=\ell\), then there is nothing to show. If \(k<\ell\), then we have that \[\pi(a)q_{D}p_{\underline{k+1}}=\pi(a)p_{\underline{k+1}}+\sum\{(-1)^{|\underline{n}|}\psi_{\underline{n}}(\phi_{\underline{n}}(a))p_{\underline{k+1}}\mid\underline{0}\neq\underline{n}\leq\underline{1}_{D}\}=\left\|\cdot\right\|\!-\lim_{\lambda}\pi(a)p_{\underline{k+1},\lambda}+\sum\{(-1)^{|\underline{n}|}(\left\|\cdot\right\|\!-\lim_{\lambda}\psi_{\underline{n}}(\phi_{\underline{n}}(a))p_{\underline{k+1},\lambda})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{D}\}=\left\|\cdot\right\|\!-\lim_{\lambda}\pi(a)q_{D}p_{\underline{k+1},\lambda}.\] Each \(p_{\underline{k+1},\lambda}\) lies in \(\mathrm{C}^{*}(\pi,t)\), so \(\pi(a)q_{D}p_{\underline{k+1},\lambda}\in\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}\) for every \(\lambda\), and since \(\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}\) is a closed ideal we deduce that \(\pi(a)q_{D}p_{\underline{k+1}}\in\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}\). Consequently \[\pi(a)q_{[k+1]}=\pi(a)q_{D}-\pi(a)q_{D}p_{\underline{k+1}}\in\mathfrak{J}_{\mathcal{L}}^{(\pi,t)},\] and repeating the argument for the coordinates \(k+2,\dots,\ell\) gives that \(\pi(a)q_{F}\in\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}\), as required.
Letting \(\mathcal{L}\) be an (E)-\(2^{d}\)-tuple and \((\pi,t)\) be a Nica-covariant representation, Propositions 3.2.4 and 3.2.6 combine to give that \(\operatorname{PO}(\operatorname{Inv}(\mathcal{L}))\) is an (E)-\(2^{d}\)-tuple that is invariant, partially ordered, consists of ideals and satisfies \(\mathcal{L}\subseteq\operatorname{PO}(\operatorname{Inv}(\mathcal{L}))\) and \(\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}=\mathfrak{J}_{\operatorname{PO}(\operatorname{Inv}(\mathcal{L}))}^{(\pi,t)}\). Next we focus on the maximal (E)-\(2^{d}\)-tuples. **Definition 3.2.7**.: Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \((\pi,t)\) be a Nica-covariant representation of \(X\). An _(M)-\(2^{d}\)-tuple \(\mathcal{M}\) (of \(X\)) with respect to \((\pi,t)\)_ is a maximal \(2^{d}\)-tuple of \(X\) with respect to \((\pi,t)\) that is also an (E)-\(2^{d}\)-tuple of \(X\). When we replace \((\pi,t)\) by \((\overline{\pi}_{X},\overline{t}_{X})\), we will refer to \(\mathcal{M}\) simply as an _(M)-\(2^{d}\)-tuple (of \(X\))_. The "M" of "(M)-\(2^{d}\)-tuple" stands for "Maximal". Note that (M)-\(2^{d}\)-tuples (with respect to \((\pi,t)\)) maximalise collections of (E)-\(2^{d}\)-tuples, instead of just relative \(2^{d}\)-tuples. Indeed, if \(\mathcal{M}\) is an (M)-\(2^{d}\)-tuple with respect to \((\pi,t)\), then it contains every relative \(2^{d}\)-tuple \(\mathcal{L}\) of \(X\) satisfying \(\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}=\mathfrak{J}_{\mathcal{M}}^{(\pi,t)}\) by Proposition 3.1.8. Each such \(\mathcal{L}\) satisfies \(\mathcal{L}\subseteq\mathcal{M}\subseteq\mathcal{I}\) and is therefore an (E)-\(2^{d}\)-tuple, as claimed. **Remark 3.2.8**.: Both \(\mathcal{L}:=\{\{0\}\}_{F\subseteq[d]}\) and \(\mathcal{I}\) are (M)-\(2^{d}\)-tuples. For \(\mathcal{I}\) this is shown in Remark 3.1.9. On the other hand, \(\mathcal{L}\) is clearly an (E)-\(2^{d}\)-tuple.
To see that \(\mathcal{L}\) is maximal, let \(\mathcal{L}^{\prime}\) be a relative \(2^{d}\)-tuple such that \(\mathfrak{J}_{\mathcal{L}^{\prime}}^{(\overline{\pi}_{X},\overline{t}_{X})}=\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})}=\{0\}\) and \(\mathcal{L}\subseteq\mathcal{L}^{\prime}\). We may replace \((\overline{\pi}_{X},\overline{t}_{X})\) by the Fock representation \((\overline{\pi},\overline{t})\). For the projection \(P_{\underline{0}}\colon\mathcal{F}X\to X_{\underline{0}}\), we have that \[\phi_{\underline{0}}(\mathcal{L}^{\prime}_{F})=P_{\underline{0}}\big{(}\overline{\pi}(\mathcal{L}^{\prime}_{F})\overline{q}_{F}\big{)}P_{\underline{0}}\subseteq P_{\underline{0}}\big{(}\mathfrak{J}_{\mathcal{L}^{\prime}}^{(\overline{\pi},\overline{t})}\big{)}P_{\underline{0}}=\{0\},\] for all \(F\subseteq[d]\), and thus \(\mathcal{L}=\mathcal{L}^{\prime}\) as required. Moreover, in Proposition 3.1.17 we have shown that \(\mathcal{L}^{(\pi,t)}\) is an (M)-\(2^{d}\)-tuple whenever \((\pi,t)\) is injective and admits a gauge action. The (M)-\(2^{d}\)-tuples of \(X\) parametrise the gauge-invariant ideals \(\mathfrak{J}\) of \(\mathcal{NT}_{X}\) for which \(\mathcal{NT}_{X}/\mathfrak{J}\) lies between \(\mathcal{NT}_{X}\) and \(\mathcal{NO}_{X}\). To prove this, we will require the following proposition. **Proposition 3.2.9**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \(\mathfrak{J}\subseteq\mathcal{NT}_{X}\) be a gauge-invariant ideal. Then the following are equivalent:_ * \(\overline{\pi}_{X}^{-1}(\mathfrak{J})=\{0\}\)_;_ * _there exists a unique (M)-_\(2^{d}\)_-tuple of_ \(X\) _inducing_ \(\mathfrak{J}\)_;_ * _there exists an (E)-_\(2^{d}\)_-tuple of_ \(X\) _inducing_ \(\mathfrak{J}\)_._ **Proof.** First assume that \(\overline{\pi}_{X}^{-1}(\mathfrak{J})=\{0\}\). Let \(Q_{\mathfrak{J}}\colon\mathcal{NT}_{X}\to\mathcal{NT}_{X}/\mathfrak{J}\) denote the quotient map. Then the Nica-covariant representation \((Q_{\mathfrak{J}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}}\circ\overline{t}_{X})\) is injective and admits a gauge action. By Proposition 3.1.16, we have that \[\mathfrak{J}=\ker Q_{\mathfrak{J}}=\ker(Q_{\mathfrak{J}}\circ\overline{\pi}_{X})\times(Q_{\mathfrak{J}}\circ\overline{t}_{X})=\mathfrak{J}_{\mathcal{L}^{(Q_{\mathfrak{J}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}}\circ\overline{t}_{X})}}^{(\overline{\pi}_{X},\overline{t}_{X})},\] since \((Q_{\mathfrak{J}}\circ\overline{\pi}_{X})\times(Q_{\mathfrak{J}}\circ\overline{t}_{X})=Q_{\mathfrak{J}}\). Thus \(\mathcal{L}^{(Q_{\mathfrak{J}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}}\circ\overline{t}_{X})}\) induces \(\mathfrak{J}\), and by Proposition 3.1.17 it is an (M)-\(2^{d}\)-tuple of \(X\). By Proposition 3.1.8 it is also unique, proving that (i) implies (ii). Item (ii) implies item (iii) trivially. So assume that there exists an (E)-\(2^{d}\)-tuple \(\mathcal{L}\) such that \(\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})}=\mathfrak{J}\). In particular \((\pi_{X}^{\mathcal{L}},t_{X}^{\mathcal{L}})\) is injective by Proposition 3.2.1. Hence, letting \[Q_{\mathfrak{J}}\colon\mathcal{NT}_{X}\to\mathcal{NO}(\mathcal{L},X)=\mathcal{NT}_{X}/\mathfrak{J}\] denote the quotient map, we have that \[\{0\}=\ker\pi_{X}^{\mathcal{L}}=\ker Q_{\mathfrak{J}}\circ\overline{\pi}_{X}=\{a\in A\mid\overline{\pi}_{X}(a)\in\mathfrak{J}\}=\overline{\pi}_{X}^{-1}(\mathfrak{J}),\] showing that item (iii) implies item (i), finishing the proof.
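By way of illustration, consider the one-variable case \(d=1\), so that \(X\) is a C*-correspondence over \(A\). Then an (E)-\(2^{1}\)-tuple is determined by a subset \(\mathcal{L}_{\{1\}}\subseteq\mathcal{I}_{\{1\}}\) (together with \(\mathcal{L}_{\emptyset}=\{0\}\)), and Proposition 3.2.9 (combined with Proposition 3.1.4) says that the gauge-invariant ideals \(\mathfrak{J}\subseteq\mathcal{NT}_{X}\) with \(\overline{\pi}_{X}^{-1}(\mathfrak{J})=\{0\}\) are exactly those of the form \[\mathfrak{J}=\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})}=\langle\{\overline{\pi}_{X}(a)-\overline{\psi}_{X,\underline{1}}(\phi_{\underline{1}}(a))\mid a\in\mathcal{L}_{\{1\}}\}\rangle\] for an ideal \(\mathcal{L}_{\{1\}}\subseteq\mathcal{I}_{\{1\}}\), in accordance with the theory of relative Cuntz-Pimsner algebras of C*-correspondences.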
**Remark 3.2.10**.: Proposition 3.2.9 implies that the mapping \[\{\mathcal{M}\mid\mathcal{M}\text{ is an (M)-}2^{d}\text{-tuple of }X\}\to\{\mathfrak{J}\subseteq\mathcal{NT}_{X}\mid\mathfrak{J}\text{ is a gauge-invariant ideal, }\overline{\pi}_{X}^{-1}(\mathfrak{J})=\{0\}\}\] \[\mathcal{M}\mapsto\mathfrak{J}_{\mathcal{M}}^{(\overline{\pi}_{X},\overline{t}_{X})}\] is a bijection. More specifically, the mappings \[\mathcal{M}\mapsto\mathfrak{J}_{\mathcal{M}}^{(\overline{\pi}_{X},\overline{t}_{X})}\text{ for all (M)-$2^{d}$-tuples $\mathcal{M}$ of $X$},\] \[\mathfrak{J}\mapsto\mathcal{L}^{(Q_{\mathfrak{J}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}}\circ\overline{t}_{X})}\text{ for all gauge-invariant ideals $\mathfrak{J}\subseteq\mathcal{NT}_{X}$ satisfying $\overline{\pi}_{X}^{-1}(\mathfrak{J})=\{0\}$},\] are mutually inverse, where \(Q_{\mathfrak{J}}\colon\mathcal{NT}_{X}\to\mathcal{NT}_{X}/\mathfrak{J}\) is the quotient map. The following Gauge-Invariant Uniqueness Theorem recovers [34, Corollary 11.8] when \(d=1\), and [16, Theorem 4.2] when \(\mathcal{L}=\mathcal{I}\). **Theorem 3.2.11** (\(\mathbb{Z}_{+}^{d}\)-GIUT for (M)-\(2^{d}\)-tuples).: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be an (M)-\(2^{d}\)-tuple of \(X\) and \((\pi,t)\) be a Nica-covariant representation of \(X\). Then \(\mathcal{N}\mathcal{O}(\mathcal{L},X)\cong\mathrm{C}^{*}(\pi,t)\) via a (unique) canonical \(*\)-isomorphism if and only if \((\pi,t)\) admits a gauge action and \(\mathcal{L}^{(\pi,t)}=\mathcal{L}\)._ **Proof.** We start with the following remark. If \(\mathcal{N}\mathcal{O}(\mathcal{L},X)\cong\mathrm{C}^{*}(\pi,t)\) canonically, then \((\pi,t)\) is injective as \(X\) embeds isometrically in \(\mathcal{N}\mathcal{O}(\mathcal{L},X)\). On the other hand, if \(\mathcal{L}^{(\pi,t)}=\mathcal{L}\) then \(\ker\pi\equiv\mathcal{L}_{\emptyset}^{(\pi,t)}=\mathcal{L}_{\emptyset}=\{0\}\), as \(\mathcal{L}\) is in particular an (E)-\(2^{d}\)-tuple. Hence in both directions we may use that \((\pi,t)\) is injective. Assume that \(\mathcal{N}\mathcal{O}(\mathcal{L},X)\cong\mathrm{C}^{*}(\pi,t)\) via a canonical \(*\)-isomorphism \(\Phi\colon\mathcal{N}\mathcal{O}(\mathcal{L},X)\to\mathrm{C}^{*}(\pi,t)\). For notational convenience, we will abbreviate \((\pi_{X}^{\mathcal{L}},t_{X}^{\mathcal{L}})\) as \((\tilde{\pi},\tilde{t})\). Let \(\beta\) denote the gauge action of \((\tilde{\pi},\tilde{t})\). By defining \(\gamma_{\underline{z}}:=\Phi\circ\beta_{\underline{z}}\circ\Phi^{-1}\) for each \(\underline{z}\in\mathbb{T}^{d}\), we obtain a gauge action \(\gamma\) of \((\pi,t)\). Next we show that \(\mathcal{L}^{(\pi,t)}=\mathcal{L}\). To this end, it suffices to show that \(\mathfrak{J}_{\mathcal{L}^{(\pi,t)}}^{(\overline{\pi}_{X},\overline{t}_{X})}=\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})}\). Then we can apply maximality of \(\mathcal{L}^{(\pi,t)}\) (which holds by Proposition 3.1.17) and of \(\mathcal{L}\) (which holds by assumption), together with uniqueness from item (ii) of Proposition 3.2.9, to obtain the result. To this end, fix \(F\subseteq[d]\) and \(a\in\mathcal{L}_{F}\). Then \(\tilde{\pi}(a)\tilde{q}_{F}=0\) since \((\tilde{\pi},\tilde{t})\) is an \(\mathcal{L}\)-relative CNP-representation. Applying \(\Phi\), we obtain that \(\pi(a)q_{F}=0\), and therefore \(a\in\mathcal{L}_{F}^{(\pi,t)}\) by definition.
Hence \(\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})}\subseteq\mathfrak{J}_{\mathcal{L}^{(\pi,t)}}^{(\overline{\pi}_{X},\overline{t}_{X})}\). Next, fix \(F\subseteq[d]\) and \(a\in\mathcal{L}_{F}^{(\pi,t)}\). An application of (2.12) yields that \(\pi(a)q_{F}=0\). Recall that \(\tilde{\pi}=Q\circ\overline{\pi}_{X}\) and \(\tilde{t}_{\underline{n}}=Q\circ\overline{t}_{X,\underline{n}}\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\setminus\{\underline{0}\}\), where \(Q\colon\mathcal{NT}_{X}\to\mathcal{N}\mathcal{O}(\mathcal{L},X)\) is the quotient map. Hence, applying \(\Phi^{-1}\) to \(\pi(a)q_{F}=0\), we obtain that \(Q(\overline{\pi}_{X}(a)\overline{q}_{X,F})=0\), and therefore \[\overline{\pi}_{X}(a)\overline{q}_{X,F}\in\ker Q=\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})}.\] Hence we have that \(\mathfrak{J}_{\mathcal{L}^{(\pi,t)}}^{(\overline{\pi}_{X},\overline{t}_{X})}\subseteq\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})}\), and thus \(\mathfrak{J}_{\mathcal{L}^{(\pi,t)}}^{(\overline{\pi}_{X},\overline{t}_{X})}=\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})}\), as required. The converse is an immediate application of Proposition 3.1.16. Indeed, since \((\pi,t)\) is injective and admits a gauge action, we obtain that \(\mathcal{N}\mathcal{O}(\mathcal{L}^{(\pi,t)},X)\cong\mathrm{C}^{*}(\pi,t)\) canonically, and the fact that \(\mathcal{L}^{(\pi,t)}=\mathcal{L}\) then gives that \(\mathcal{N}\mathcal{O}(\mathcal{L}^{(\pi,t)},X)=\mathcal{N}\mathcal{O}(\mathcal{L},X)\), finishing the proof.

### Properties of ideals induced by relative \(2^{d}\)-tuples

The definition of (M)-\(2^{d}\)-tuples itself depends extensively on the associated gauge-invariant ideals of \(\mathcal{NT}_{X}\). Hence, to obtain a genuinely useful parametrisation, we must characterise (M)-\(2^{d}\)-tuples in a way that does not (explicitly) depend on any gauge-invariant ideal of \(\mathcal{NT}_{X}\). To this end, we need to study the properties of the gauge-invariant ideals arising from relative \(2^{d}\)-tuples. When a relative \(2^{d}\)-tuple \(\mathcal{L}\) consists of ideals and is invariant, the ideals \(\mathfrak{J}_{\mathcal{L},F}^{(\pi,t)}\) take on a particularly simple form akin to the one obtained in [16, Proposition 4.6]. **Proposition 3.3.1**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be a relative \(2^{d}\)-tuple of \(X\) and let \((\pi,t)\) be a Nica-covariant representation of \(X\). If \(\mathcal{L}\) is invariant and consists of ideals, then_ \[\mathfrak{J}_{\mathcal{L},F}^{(\pi,t)}=\overline{\mathrm{span}}\{t_{\underline{n}}(X_{\underline{n}})\pi(\mathcal{L}_{F})q_{F}t_{\underline{m}}(X_{\underline{m}})^{*}\mid\underline{n},\underline{m}\in\mathbb{Z}_{+}^{d}\}\text{ for all }F\subseteq[d]. \tag{3.1}\] **Proof.** The proof follows the arguments of [16, Proposition 4.6]. In short, fix \(F\subseteq[d]\) and denote the right hand side of (3.1) by \(\mathfrak{J}_{F}\). The fact that \(\mathfrak{J}_{F}\subseteq\mathfrak{J}_{\mathcal{L},F}^{(\pi,t)}\) is immediate by definition. To prove that \(\mathfrak{J}_{\mathcal{L},F}^{(\pi,t)}\subseteq\mathfrak{J}_{F}\), it suffices to show that \(\mathfrak{J}_{F}\) is an ideal of \(\mathrm{C}^{*}(\pi,t)\), as \(\mathfrak{J}_{F}\) contains the generators of \(\mathfrak{J}_{\mathcal{L},F}^{(\pi,t)}\).
Since \(\mathfrak{J}_{F}\) is selfadjoint, it suffices to show that \(\mathfrak{J}_{F}\mathrm{C}^{*}(\pi,t)\subseteq\mathfrak{J}_{F}\). It is clear that \(\mathfrak{J}_{F}t_{\underline{r}}(X_{\underline{r}})^{*}\subseteq\mathfrak{J} _{F}\) for all \(\underline{r}\in\mathbb{Z}_{+}^{d}\). Hence it remains to show that \(\mathfrak{J}_{F}t_{\underline{r}}(X_{\underline{r}})\subseteq\mathfrak{J}_{F}\) for all \(\underline{r}\in\mathbb{Z}_{+}^{d}\). For \(\underline{n},\underline{m},\underline{r}\in\mathbb{Z}_{+}^{d}\), we have that \[t_{\underline{n}}(X_{\underline{n}})\pi(\mathcal{L}_{F})q_{F}t_{\underline{m }}(X_{\underline{m}})^{*}t_{\underline{r}}(X_{\underline{r}})\subseteq\left[t_ {\underline{n}}(X_{\underline{n}})\pi(\mathcal{L}_{F})q_{F}t_{\underline{m}^{ \prime}}(X_{\underline{m}^{\prime}})t_{\underline{r}^{\prime}}(X_{\underline {r}^{\prime}})^{*}\right],\] where \(\underline{m}^{\prime}=-\underline{m}+\underline{m}\vee\underline{r}\) and \(\underline{r}^{\prime}=-\underline{r}+\underline{m}\vee\underline{r}\), due to Nica-covariance. If \(\underline{m}^{\prime}\not\perp F\), then \[t_{\underline{n}}(X_{\underline{n}})\pi(\mathcal{L}_{F})q_{F}t_{\underline{m}^ {\prime}}(X_{\underline{m}^{\prime}})t_{\underline{r}^{\prime}}(X_{ \underline{r}^{\prime}})^{*}=\{0\}\subseteq\mathfrak{J}_{F}\] by Proposition 2.5.9. If \(\underline{m}^{\prime}\perp F\), then positive invariance of \(\mathcal{L}_{F}\) for \(X_{\underline{m}^{\prime}}\) and Lemma 2.2.6 together yield that \[\pi(\mathcal{L}_{F})t_{\underline{m}^{\prime}}(X_{\underline{m}^{\prime}})=t_ {\underline{m}^{\prime}}(\mathcal{L}_{F}X_{\underline{m}^{\prime}})\subseteq t _{\underline{m}^{\prime}}(X_{\underline{m}^{\prime}}\mathcal{L}_{F})=t_{ \underline{m}^{\prime}}(X_{\underline{m}^{\prime}})\pi(\mathcal{L}_{F}).\] Then we have that \[\left[t_{\underline{n}}(X_{\underline{n}})\pi(\mathcal{L}_{F})q_{ F}t_{\underline{m}^{\prime}}(X_{\underline{m}^{\prime}})t_{\underline{r}^{ \prime}}(X_{\underline{r}^{\prime}})^{*}\right] \subseteq\left[t_{\underline{n}}(X_{\underline{n}})\pi(\mathcal{ L}_{F})t_{\underline{m}^{\prime}}(X_{\underline{m}^{\prime}})q_{F}t_{\underline{r}^{ \prime}}(X_{\underline{r}^{\prime}})^{*}\right]\] \[\subseteq\left[t_{\underline{n}+\underline{m}^{\prime}}(X_{ \underline{n}+\underline{m}^{\prime}})\pi(\mathcal{L}_{F})q_{F}t_{\underline{ r}^{\prime}}(X_{\underline{r}^{\prime}})^{*}\right]\subseteq\mathfrak{J}_{F},\] where we have used Proposition 2.5.9 in the first inclusion. Hence in all cases we have that \[t_{\underline{n}}(X_{\underline{n}})\pi(\mathcal{L}_{F})q_{F}t_{\underline{m} }(X_{\underline{m}})^{*}t_{\underline{r}}(X_{\underline{r}})\subseteq \mathfrak{J}_{F},\] and thus \(\mathfrak{J}_{F}t_{\underline{r}}(X_{\underline{r}})\subseteq\mathfrak{J}_{F}\) for all \(\underline{r}\in\mathbb{Z}_{+}^{d}\), as required. By requiring that \(\mathcal{L}\) is partially ordered, we can say more. **Proposition 3.3.2**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be a relative \(2^{d}\)-tuple of \(X\) that is invariant, partially ordered, and consists of ideals. Let \((\pi,t)\) be a Nica-covariant representation of \(X\). 
Then_ \[q_{F}\mathfrak{J}_{\mathcal{L},D}^{(\pi,t)}q_{F}\subseteq\overline{\mathrm{ span}}\{t_{\underline{n}}(X_{\underline{n}})\pi(\mathcal{L}_{F\cup D})q_{F\cup D}t_{ \underline{m}}(X_{\underline{m}})^{*}\mid\underline{n},\underline{m}\perp F \}\subseteq\mathfrak{J}_{\mathcal{L},F\cup D}^{(\pi,t)}\text{ for all }F,D\subseteq[d].\] _Moreover, we have that_ \[q_{F}\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}q_{F}\subseteq\sum\{\mathfrak{J}_{ \mathcal{L},D}^{(\pi,t)}\mid F\subseteq D\subseteq[d]\}\text{ for all }F\subseteq[d].\] **Proof.** For the first part, fix \(F,D\subseteq[d]\). Then we have that \[q_{F}\mathfrak{J}_{\mathcal{L},D}^{(\pi,t)}q_{F} \subseteq\overline{\mathrm{span}}\{q_{F}t_{\underline{n}}(X_{ \underline{n}})\pi(\mathcal{L}_{D})q_{D}t_{\underline{m}}(X_{\underline{m}})^{* }q_{F}\mid\underline{n},\underline{m}\in\mathbb{Z}_{+}^{d}\}\] \[=\overline{\mathrm{span}}\{t_{\underline{n}}(X_{\underline{n}})\pi( \mathcal{L}_{D})q_{F\cup D}t_{\underline{m}}(X_{\underline{m}})^{*}\mid \underline{n},\underline{m}\perp F\}\] \[\subseteq\overline{\mathrm{span}}\{t_{\underline{n}}(X_{ \underline{n}})\pi(\mathcal{L}_{F\cup D})q_{F\cup D}t_{\underline{m}}(X_{ \underline{m}})^{*}\mid\underline{n},\underline{m}\perp F\}\subseteq\mathfrak{J }_{\mathcal{L},F\cup D}^{(\pi,t)},\] where we have used Propositions 2.5.9 and 3.3.1, that \(q_{F}\) and \(q_{D}\) are commuting projections, and that \(\mathcal{L}\) is partially ordered (so \(\mathcal{L}_{D}\subseteq\mathcal{L}_{F\cup D}\)). For the second part, fix \(F\subseteq[d]\). Then applying the first part for all \(D\subseteq[d]\) yields that \[q_{F}\mathfrak{J}_{\mathcal{L}}^{(\pi,t)}q_{F}=\sum\{q_{F}\mathfrak{J}_{ \mathcal{L},D}^{(\pi,t)}q_{F}\mid D\subseteq[d]\}\subseteq\sum\{\mathfrak{J}_{ \mathcal{L},F\cup D}^{(\pi,t)}\mid D\subseteq[d]\}=\sum\{\mathfrak{J}_{ \mathcal{L},D}^{(\pi,t)}\mid F\subseteq D\subseteq[d]\},\] and the proof is complete. Let \(I\) be an ideal of \(A\) and fix \(F\subseteq[d]\). For \(a\in A\), we write \[\lim_{\underline{m}\perp F}\|\phi_{\underline{m}}(a)+\mathcal{K}(X_{\underline{m} }I)\|=0\] if for any \(\varepsilon>0\) there exists \(\underline{m}\perp F\) such that \[\|\phi_{\underline{n}}(a)+\mathcal{K}(X_{\underline{n}}I)\|<\varepsilon\text{ for all }\underline{n}\geq\underline{m}\text{ satisfying }\underline{n}\perp F.\] When working with a \(2^{d}\)-tuple \(\mathcal{L}\) satisfying certain compatibility relations, the limit condition can be recast in a simpler form. **Proposition 3.3.3**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be an invariant \(2^{d}\)-tuple of \(X\) which consists of ideals and satisfies_ \[\mathcal{L}_{F}\subseteq\bigcap\{\phi_{\underline{\mathrm{i}}}^{-1}(\mathcal{K} (X_{\underline{\mathrm{i}}}))\mid i\in[d]\}\text{ for all }F\subseteq[d].\] _Then, for each \(F\subseteq[d]\) and \(a\in A\), we have that \(\lim_{\underline{m}\perp F}\|\phi_{\underline{m}}(a)+\mathcal{K}(X_{ \underline{m}}\mathcal{L}_{F})\|=0\) if and only if for any \(\varepsilon>0\), there exists \(\underline{m}\perp F\) and \(k_{\underline{m}}\in\mathcal{K}(X_{\underline{m}}\mathcal{L}_{F})\) such that \(\|\phi_{\underline{m}}(a)+k_{\underline{m}}\|<\varepsilon\)._ **Proof.** The forward implication is immediate by definition. So assume that for any \(\varepsilon>0\) there exists \(\underline{m}\perp F\) and \(k_{\underline{m}}\in\mathcal{K}(X_{\underline{m}}\mathcal{L}_{F})\) such that \(\|\phi_{\underline{m}}(a)+k_{\underline{m}}\|<\varepsilon\). 
Fix \(\varepsilon>0\) and a corresponding \(\underline{m}\perp F\) and \(k_{\underline{m}}\in\mathcal{K}(X_{\underline{m}}\mathcal{L}_{F})\). If \(F=[d]\), then \(\underline{m}\perp F\) implies that \(\underline{m}=\underline{0}\), and there is nothing to show. So assume that \(F\subsetneq[d]\). Without loss of generality, we may assume that \(\underline{m}\neq\underline{0}\). Indeed, if \(\underline{m}=\underline{0}\) then \(k_{\underline{0}}\in\mathcal{K}(A\mathcal{L}_{F})\) and hence \(k_{\underline{0}}=\phi_{\underline{0}}(b)\) for some \(b\in\mathcal{L}_{F}\). Note that \[\|a+b\|=\|\phi_{\underline{0}}(a)+\phi_{\underline{0}}(b)\|=\|\phi_{\underline{0}}(a)+k_{\underline{0}}\|<\varepsilon.\] Fix \(i\in F^{c}\). Since \(b\in\mathcal{L}_{F}\), we have that \(\phi_{\underline{i}}(b)\in\mathcal{K}(X_{\underline{i}})\) by assumption, and \(\big\langle X_{\underline{i}},bX_{\underline{i}}\big\rangle\subseteq\mathcal{L}_{F}\) by invariance of \(\mathcal{L}\). Thus an application of (2.1) gives that \(\phi_{\underline{i}}(b)\in\mathcal{K}(X_{\underline{i}}\mathcal{L}_{F})\), and \[\|\phi_{\underline{i}}(a)+\phi_{\underline{i}}(b)\|=\|\phi_{\underline{i}}(a+b)\|\leq\|a+b\|<\varepsilon.\] So we may assume that \(\underline{m}\neq\underline{0}\). Next, take \(\underline{n}\geq\underline{m}\) with \(\underline{n}\perp F\). We will show that \(\|\phi_{\underline{n}}(a)+\mathcal{K}(X_{\underline{n}}\mathcal{L}_{F})\|<\varepsilon\). We write \(\underline{n}=\underline{m}+\underline{r}\) for some \(\underline{r}\perp F\), and without loss of generality we may assume that \(\underline{r}\neq\underline{0}\). By Proposition 2.5.2, we have that \(\phi_{\underline{r}}(\mathcal{L}_{F})\subseteq\mathcal{K}(X_{\underline{r}}\mathcal{L}_{F})\). An application of Corollary 2.2.5 then yields that \(k_{\underline{m}}\otimes\mathrm{id}_{X_{\underline{r}}}\in\mathcal{K}((X_{\underline{m}}\otimes_{A}X_{\underline{r}})\mathcal{L}_{F})\), and so \(\iota_{\underline{m}}^{\underline{n}}(k_{\underline{m}})\in\mathcal{K}(X_{\underline{n}}\mathcal{L}_{F})\). We then obtain that \[\|\phi_{\underline{n}}(a)+\mathcal{K}(X_{\underline{n}}\mathcal{L}_{F})\|\leq\|\phi_{\underline{n}}(a)+\iota_{\underline{m}}^{\underline{n}}(k_{\underline{m}})\|=\|\iota_{\underline{m}}^{\underline{n}}(\phi_{\underline{m}}(a)+k_{\underline{m}})\|\leq\|\phi_{\underline{m}}(a)+k_{\underline{m}}\|<\varepsilon,\] from which it follows that \(\lim_{\underline{m}\perp F}\|\phi_{\underline{m}}(a)+\mathcal{K}(X_{\underline{m}}\mathcal{L}_{F})\|=0\), as required.

The limit condition appears naturally in the study of ideals of \(\mathcal{NT}_{X}\) that are induced by invariant, partially ordered relative \(2^{d}\)-tuples that consist of ideals.

**Proposition 3.3.4**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be a relative \(2^{d}\)-tuple of \(X\) that is invariant, partially ordered and consists of ideals. Fix \(F\subsetneq[d]\) and let \(a\in\bigcap\{\phi_{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}}))\mid i\in[d]\}\). 
Then the following are equivalent:_ * \(\overline{\pi}_{X}(a)\overline{q}_{X,F}\in\mathfrak{J}_{\mathcal{L}}^{( \overline{\pi},\overline{\imath},\overline{\imath}_{X})}\)_;_ * \(\overline{\pi}_{X}(a)\overline{q}_{X,F}\in\sum\{\mathfrak{J}_{\mathcal{L},D}^{( \overline{\pi}_{X},\overline{\imath}_{X})}\mid F\subseteq D\subseteq[d]\}\)_._ _Furthermore, if one (and hence both) of (i) and (ii) holds, then_ \[\lim_{\underline{m}\perp F}\|\phi_{\underline{m}}(a)+\mathcal{K}(X_{\underline{ m}}\mathcal{L}_{F})\|=0.\] **Proof.** Without loss of generality, we may replace \((\overline{\pi}_{X},\overline{\imath}_{X})\) by the Fock representation \((\overline{\pi},\overline{\imath})\) and write \(\mathfrak{J}_{\mathcal{L}}\equiv\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}, \overline{\imath})}\), and \(\mathfrak{J}_{\mathcal{L},F}\equiv\mathfrak{J}_{\mathcal{L},F}^{(\overline{\pi}, \overline{\imath})}\) for all \(F\subseteq[d]\). The implication [(ii)\(\Rightarrow\)(i)] is immediate. Conversely, if \(\overline{\pi}(a)\overline{q}_{F}\in\mathfrak{J}_{\mathcal{L}}\), then we have that \[\overline{\pi}(a)\overline{q}_{F}=\overline{q}_{F}(\overline{\pi}(a)\overline{q} _{F})\overline{q}_{F}\in\overline{q}_{F}\mathfrak{J}_{\mathcal{L}}\overline{q}_{F} \subseteq\sum\{\mathfrak{J}_{\mathcal{L},D}\mid F\subseteq D\subseteq[d]\},\] using Proposition 2.5.9 (and the fact that \(\overline{q}_{F}\) is a projection) in the first equality, and Proposition 3.3.2 in the final inclusion. This establishes the equivalence of (i) and (ii). Now suppose that one (and hence both) of (i) and (ii) holds. Condition (ii) then yields that \[\overline{\pi}(a)\overline{q}_{F} \in\sum\{\overline{q}_{F}\mathfrak{J}_{\mathcal{L},D}\overline{q}_{F} \mid F\subseteq D\subseteq[d]\}\] \[\subseteq\overline{\mathrm{span}}\{\overline{\imath}_{\underline{n}}(X_{ \underline{n}})\overline{\pi}(\mathcal{L}_{D})\overline{q}_{D}\overline{ \imath}_{\underline{m}}(X_{\underline{m}})^{*}\mid\underline{n},\underline{m} \perp F,F\subseteq D\subseteq[d]\},\] using Proposition 3.3.2 for the containment. Let \(\gamma\) be the gauge action of \((\overline{\pi},\overline{\imath})\) and let \(E_{\gamma}\colon\mathrm{C}^{*}(\overline{\pi},\overline{\imath})\to\mathrm{C}^{*}( \overline{\pi},\overline{\imath})^{\gamma}\) be the associated faithful conditional expectation. Therefore, since \(\overline{\pi}(a)\overline{q}_{F}\in\mathrm{C}^{*}(\overline{\pi},\overline{ \imath})^{\gamma}\), applying \(E_{\gamma}\) yields that \[\overline{\pi}(a)\overline{q}_{F}\in\overline{\mathrm{span}}\{\overline{\imath}_{ \underline{n}}(X_{\underline{n}})\overline{\pi}(\mathcal{L}_{D})\overline{q}_{D} \overline{\imath}_{\underline{n}}(X_{\underline{n}})^{*}\mid\underline{n}\perp F,F \subseteq D\subseteq[d]\}. \tag{3.2}\] Fix \(\varepsilon>0\). By Proposition 3.3.3, it is sufficient to find \(\underline{m}\perp F\) and \(k_{\underline{m}}\in\mathcal{K}(X_{\underline{m}}\mathcal{L}_{F})\) such that \(\|\phi_{\underline{m}}(a)+k_{\underline{m}}\|<\varepsilon\). 
By (3.2), for each \(F\subseteq D\subseteq[d]\) there exists \[c_{D}\in\operatorname{span}\{\overline{t}_{\underline{n}}(X_{\underline{n}}) \overline{\pi}(\mathcal{L}_{D})\overline{q}_{D}\overline{t}_{\underline{n}}(X _{\underline{n}})^{*}\mid\underline{n}\perp F\}\] such that \[\|\overline{\pi}(a)\overline{q}_{F}-\sum\{c_{D}\mid F\subseteq D\subseteq[d] \}\|<\varepsilon.\] Moreover, we may assume that \[c_{D}=\sum\{\sum_{j=1}^{N}\overline{t}_{\underline{n}}(\xi_{\underline{n},D}^ {j})\overline{\pi}(b_{\underline{n},D}^{j})\overline{q}_{D}\overline{t}_{ \underline{n}}(\eta_{\underline{n},D}^{j})^{*}\mid\underline{0}\leq\underline {n}\leq m\cdot\underline{1}_{F^{c}}\},\] for some \(m,N\in\mathbb{N}\), \(\xi_{\underline{n},D}^{j},\eta_{\underline{n},D}^{j}\in X_{\underline{n}}\) and \(b_{\underline{n},D}^{j}\in\mathcal{L}_{D}\), for all \(j\in[N]\) and \(\underline{0}\leq\underline{n}\leq m\cdot\underline{1}_{F^{c}}\). We may use the same \(m\) and \(N\) for every \(c_{D}\) by padding with zeros, if necessary. Fix the projection \[P\colon\mathcal{F}X\to X_{(m+1)\cdot\underline{1}_{F^{c}}}.\] First we claim that \(c_{D}P=0\) whenever \(F\subsetneq D\subseteq[d]\), so that \[\|\overline{\pi}(a)\overline{q}_{F}P-c_{F}P\| =\|(\overline{\pi}(a)\overline{q}_{F}-\sum\{c_{D}\mid F\subseteq D \subseteq[d]\})P\|\] \[\leq\|\overline{\pi}(a)\overline{q}_{F}-\sum\{c_{D}\mid F\subseteq D \subseteq[d]\}\|<\varepsilon.\] Indeed, for each \(\underline{0}\leq\underline{n}\leq m\cdot\underline{1}_{F^{c}}\) and each \(j\in[N]\), we have that \(\overline{t}_{\underline{n}}(\eta_{\underline{n},D}^{j})^{*}P\) has image in \(X_{\underline{r}}\) for some \(\underline{r}\neq\underline{0}\) with \(\operatorname{supp}\underline{r}=F^{c}\), since \(m<m+1\). Note also that \(D\cap F^{c}\neq\emptyset\) since \(F\subsetneq D\), and thus \[\overline{q}_{D}\overline{t}_{\underline{n}}(\eta_{\underline{n},D}^{j})^{*}P( \mathcal{F}X)\subseteq\overline{q}_{D}\sum\{X_{\underline{r}}\mid\operatorname {supp}\underline{r}=F^{c}\}=\{0\}.\] Consequently \[\overline{t}_{\underline{n}}(\xi_{\underline{n},D}^{j})\overline{\pi}(b_{ \underline{n},D}^{j})\overline{q}_{D}\overline{t}_{\underline{n}}(\eta_{ \underline{n},D}^{j})^{*}P=0,\text{ for all }\underline{0}\leq\underline{n}\leq m\cdot \underline{1}_{F^{c}},j\in[N],\] and hence \(c_{D}P=0\). Next we examine the terms of \(c_{F}P\). To this end, fix \(\underline{0}\neq\underline{n}\leq m\cdot\underline{1}_{F^{c}}\) and \(j\in[N]\). 
Take \(\zeta_{(m+1)\cdot\underline{1}_{F^{c}}}\in X_{(m+1)\cdot\underline{1}_{F^{c}}}\) such that \[\zeta_{(m+1)\cdot\underline{1}_{F^{c}}}=\zeta_{\underline{n}}\zeta_{(m+1)\cdot\underline{1}_{F^{c}}-\underline{n}}\text{ for some }\zeta_{\underline{n}}\in X_{\underline{n}},\zeta_{(m+1)\cdot\underline{1}_{F^{c}}-\underline{n}}\in X_{(m+1)\cdot\underline{1}_{F^{c}}-\underline{n}}.\] Note that \(\operatorname{supp}((m+1)\cdot\underline{1}_{F^{c}}-\underline{n})=F^{c}\), and so we have that \[\overline{t}_{\underline{n}}(\xi_{\underline{n},F}^{j})\overline{\pi}(b_{\underline{n},F}^{j})\overline{q}_{F}\overline{t}_{\underline{n}}(\eta_{\underline{n},F}^{j})^{*}P\zeta_{(m+1)\cdot\underline{1}_{F^{c}}} =\overline{t}_{\underline{n}}(\xi_{\underline{n},F}^{j})\overline{\pi}(b_{\underline{n},F}^{j})\overline{q}_{F}\overline{t}_{\underline{n}}(\eta_{\underline{n},F}^{j})^{*}\zeta_{(m+1)\cdot\underline{1}_{F^{c}}}\] \[=\xi_{\underline{n},F}^{j}\big(\phi_{(m+1)\cdot\underline{1}_{F^{c}}-\underline{n}}(b_{\underline{n},F}^{j}\langle\eta_{\underline{n},F}^{j},\zeta_{\underline{n}}\rangle)\zeta_{(m+1)\cdot\underline{1}_{F^{c}}-\underline{n}}\big)\] \[=\big(\Theta_{\xi_{\underline{n},F}^{j}b_{\underline{n},F}^{j},\eta_{\underline{n},F}^{j}}(\zeta_{\underline{n}})\big)\zeta_{(m+1)\cdot\underline{1}_{F^{c}}-\underline{n}}\] \[=\iota_{\underline{n}}^{(m+1)\cdot\underline{1}_{F^{c}}}(\Theta_{\xi_{\underline{n},F}^{j}b_{\underline{n},F}^{j},\eta_{\underline{n},F}^{j}})\zeta_{(m+1)\cdot\underline{1}_{F^{c}}}.\] We deduce that \[\overline{t}_{\underline{n}}(\xi_{\underline{n},F}^{j})\overline{\pi}(b_{\underline{n},F}^{j})\overline{q}_{F}\overline{t}_{\underline{n}}(\eta_{\underline{n},F}^{j})^{*}P=\iota_{\underline{n}}^{(m+1)\cdot\underline{1}_{F^{c}}}(\Theta_{\xi_{\underline{n},F}^{j}b_{\underline{n},F}^{j},\eta_{\underline{n},F}^{j}}),\] where we view the right hand side as an element of \(\mathcal{L}(\mathcal{F}X)\) in the way described in Subsection 2.3. Arguing in a similar (and simpler) way when \(\underline{n}=\underline{0}\), we deduce that \[c_{F}P=\phi_{(m+1)\cdot\underline{1}_{F^{c}}}(b)+\sum\{\sum_{j=1}^{N}\iota_{\underline{n}}^{(m+1)\cdot\underline{1}_{F^{c}}}(\Theta_{\xi_{\underline{n},F}^{j}b_{\underline{n},F}^{j},\eta_{\underline{n},F}^{j}})\mid\underline{0}\neq\underline{n}\leq m\cdot\underline{1}_{F^{c}}\},\] for some \(b\in\mathcal{L}_{F}\). By Proposition 2.5.2, we have that \[\phi_{(m+1)\cdot\underline{1}_{F^{c}}}(b)\in\mathcal{K}(X_{(m+1)\cdot\underline{1}_{F^{c}}}\mathcal{L}_{F})\] and \[\phi_{(m+1)\cdot\underline{1}_{F^{c}}-\underline{n}}(b_{\underline{n},F}^{j})\in\mathcal{K}(X_{(m+1)\cdot\underline{1}_{F^{c}}-\underline{n}}\mathcal{L}_{F})\text{ for all }\underline{0}\neq\underline{n}\leq m\cdot\underline{1}_{F^{c}},j\in[N].\] An application of Lemma 2.2.4 gives that all of the summands of \(c_{F}P\) belong to \(\mathcal{K}(X_{(m+1)\cdot\underline{1}_{F^{c}}}\mathcal{L}_{F})\). Therefore we have that \(c_{F}P\in\mathcal{K}(X_{(m+1)\cdot\underline{1}_{F^{c}}}\mathcal{L}_{F})\hookrightarrow\mathcal{L}(\mathcal{F}X)\), which satisfies \[\|\phi_{(m+1)\cdot\underline{1}_{F^{c}}}(a)-c_{F}P\|=\|\overline{\pi}(a)\overline{q}_{F}P-c_{F}P\|<\varepsilon,\] where we use that \[\phi_{(m+1)\cdot\underline{1}_{F^{c}}}(a)=\overline{\pi}(a)P=\overline{\pi}(a)\overline{q}_{F}P\] in the identification \(\mathcal{L}(X_{(m+1)\cdot\underline{1}_{F^{c}}})\hookrightarrow\mathcal{L}(\mathcal{F}X)\). 
Hence we have found \(\underline{m}:=(m+1)\cdot\underline{1}_{F^{c}}\perp F\) and \(k_{\underline{m}}:=-c_{F}P\in\mathcal{K}(X_{\underline{m}}\mathcal{L}_{F})\) such that \(\|\phi_{\underline{m}}(a)+k_{\underline{m}}\|<\varepsilon\), as required. **Remark 3.3.5**.: Proposition 3.3.4 holds for \(F=[d]\) as well, though the second claim requires a different argument. Specifically, we compress \(\overline{\pi}(a)\overline{q}_{[d]}\in\mathfrak{J}_{\mathcal{L},[d]}\) at the \(\underline{0}\)-th entry to deduce that \(a\in\mathcal{L}_{[d]}\). Then \(\phi_{\underline{0}}(a)\in\mathcal{K}(A\mathcal{L}_{[d]})\), and so \(\lim_{\underline{m}\perp[d]}\|\phi_{\underline{m}}(a)+\mathcal{K}(X_{ \underline{m}}\mathcal{L}_{[d]})\|=0\), as required. ### Constructing (M)-\(2^{d}\)-tuples In this subsection we show how the maximal \(2^{d}\)-tuple inducing \(\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{I}_{X})}\), for \(\mathcal{L}\) an (E)-\(2^{d}\)-tuple, is constructed. The following definition is motivated in part by Proposition 3.3.4. **Definition 3.4.1**.: Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \(\mathcal{L}\) be a \(2^{d}\)-tuple of \(X\) that consists of ideals. Fixing \(\emptyset\neq F\subsetneq[d]\), we define \[\mathcal{L}_{\mathrm{inv},F}:=\bigcap_{\underline{m}\perp F}X_{\underline{m}}^{ -1}(\cap_{F\subsetneq D}\mathcal{L}_{D})\quad\text{and}\quad\mathcal{L}_{ \mathrm{lim},F}:=\{a\in A\mid\lim_{\underline{m}\perp F}\|\phi_{\underline{m}} (a)+\mathcal{K}(X_{\underline{m}}\mathcal{L}_{F})\|=0\}.\] If \(\mathcal{L}\) is in addition an (E)-\(2^{d}\)-tuple of \(X\), then we define the \(2^{d}\)-tuple \(\mathcal{L}^{(1)}\) of \(X\) by \[\mathcal{L}_{F}^{(1)}:=\begin{cases}\{0\}&\text{if $F=\emptyset$},\\ \mathcal{I}_{F}\cap\mathcal{L}_{\mathrm{inv},F}\cap\mathcal{L}_{\mathrm{lim},F }&\text{if $\emptyset\neq F\subsetneq[d]$},\\ \mathcal{L}_{[d]}&\text{if $F=[d]$}.\end{cases}\] Observe that each \(\mathcal{L}_{F}^{(1)}\) is non-empty, as \(0\in\mathcal{L}_{F}^{(1)}\). Note also that \(\mathcal{L}^{(1)}\) is an (E)-\(2^{d}\)-tuple by construction. We will show that \(\mathcal{L}^{(1)}\) consists of ideals, and so we can write \(\mathcal{L}^{(k+1)}:=(\mathcal{L}^{(k)})^{(1)}\) for \(k\in\mathbb{N}\) (by convention we set \(\mathcal{L}^{(0)}:=\mathcal{L}\)). To this end, first we note that \(\mathcal{L}_{\mathrm{inv},F}\) is an intersection of ideals and is thus an ideal itself. To address \(\mathcal{L}_{\mathrm{lim},F}\), we have the following proposition. **Proposition 3.4.2**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be a \(2^{d}\)-tuple of \(X\) that consists of ideals and fix \(\emptyset\neq F\subsetneq[d]\). Then we have that \(\mathcal{L}_{\mathrm{lim},F}\) is an ideal._ **Proof.** Firstly, we define \[B_{F}:=\bigoplus_{\underline{m}\perp F}\mathcal{L}(X_{\underline{m}})/\mathcal{ K}(X_{\underline{m}}\mathcal{L}_{F})\quad\text{and}\quad c_{0}(B_{F}):=\{(S_{ \underline{m}})_{\underline{m}\perp F}\in B_{F}\mid\lim_{\underline{m}\perp F} \|S_{\underline{m}}\|=0\},\] and observe that \(c_{0}(B_{F})\) is an ideal in \(B_{F}\). 
Consider the \(*\)-homomorphism \(\psi_{F}\) defined by \[\psi_{F}\colon A\to\bigoplus_{\underline{m}\perp F}\mathcal{L}(X_{\underline{m }})\to B_{F}\to B_{F}/c_{0}(B_{F});\] \[\psi_{F}\colon a\mapsto(\phi_{\underline{m}}(a))_{\underline{m}\perp F}\mapsto( \phi_{\underline{m}}(a)+\mathcal{K}(X_{\underline{m}}\mathcal{L}_{F}))_{ \underline{m}\perp F}\mapsto(\phi_{\underline{m}}(a)+\mathcal{K}(X_{\underline{m }}\mathcal{L}_{F}))_{\underline{m}\perp F}+c_{0}(B_{F}).\] It is now a standard C*-result that \[\ker\psi_{F}=\{a\in A\mid\lim_{\underline{m}\perp F}\|\phi_{\underline{m}}(a) +\mathcal{K}(X_{\underline{m}}\mathcal{L}_{F})\|=0\},\] and thus \(\mathcal{L}_{\mathrm{lim},F}=\ker\psi_{F}\) is an ideal, as required. **Proposition 3.4.3**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be an \((E)\)-\(2^{d}\)-tuple of \(X\) that consists of ideals. Then \(\mathcal{L}^{(1)}\) is an (E)-\(2^{d}\)-tuple of \(X\) that consists of ideals. If \(\mathcal{L}\) is invariant, then so is \(\mathcal{L}^{(1)}\)._ **Proof.** We have already remarked that \(\mathcal{L}^{(1)}\) is an (E)-\(2^{d}\)-tuple. The fact that \(\mathcal{L}^{(1)}\) consists of ideals follows from Proposition 3.4.2 and the discussion preceding it. Now assume that \(\mathcal{L}\) is invariant. The invariance condition for \(\mathcal{L}^{(1)}\) clearly holds when \(F=\emptyset\) or \(F=[d]\), so assume that \(\emptyset\neq F\subsetneq[d]\). Fix \(\underline{n}\perp F\), \(\xi_{\underline{n}},\eta_{\underline{n}}\in X_{\underline{n}}\), and \(a\in\mathcal{L}_{F}^{(1)}\). It suffices to show that \[\big{\langle}\xi_{\underline{n}},\phi_{\underline{n}}(a)\eta_{\underline{n}} \big{\rangle}\in\mathcal{L}_{F}^{(1)}=\mathcal{I}_{F}\cap\mathcal{L}_{\rm inv },F\cap\mathcal{L}_{\rm lim},F.\] If \(\underline{n}=\underline{0}\) there is nothing to show, so assume that \(\underline{n}\neq\underline{0}\). First we have that \(\big{\langle}\xi_{\underline{n}},\phi_{\underline{n}}(a)\eta_{\underline{n}} \big{\rangle}\in\mathcal{I}_{F}\) by [16, Proposition 2.7], since \(a\in\mathcal{I}_{F}\). Now fix \(\underline{m}\perp F\) and \(\xi_{\underline{m}},\eta_{\underline{m}}\in X_{\underline{m}}\). Then \[\big{\langle}\xi_{\underline{m}},\phi_{\underline{m}}(\langle\xi_{\underline{ n}},\phi_{\underline{n}}(a)\eta_{\underline{n}}\rangle)\eta_{\underline{m}} \big{\rangle}=\big{\langle}\xi_{\underline{n}}\xi_{\underline{m}},(\phi_{ \underline{n}}(a)\eta_{\underline{n}})\eta_{\underline{m}}\big{\rangle}= \big{\langle}\xi_{\underline{n}}\xi_{\underline{m}},\phi_{\underline{n}+ \underline{m}}(a)(\eta_{\underline{n}}\eta_{\underline{m}})\big{\rangle}\in \cap_{F\subsetneq D}\mathcal{L}_{D},\] since \(a\in\mathcal{L}_{F}^{(1)}\) and thus in particular \(a\in\mathcal{L}_{\rm inv},F\), noting that \(\underline{n}+\underline{m}\perp F\). This proves that \(\big{\langle}\xi_{\underline{n}},\phi_{\underline{n}}(a)\eta_{\underline{n}} \big{\rangle}\in\mathcal{L}_{\rm inv},F\). It remains to check that \(\lim_{\underline{m}\perp F}\|\phi_{\underline{m}}(\langle\xi_{\underline{n}}, \phi_{\underline{n}}(a)\eta_{\underline{n}}\rangle)+\mathcal{K}(X_{\underline {m}}\mathcal{L}_{F})\|=0\). To this end, fix \(\varepsilon>0\). 
Since \(a\in\mathcal{L}_{F}^{(1)}\subseteq\mathcal{L}_{\mathrm{lim},F}\), there exist \(\underline{m}\perp F\) and \(k_{\underline{m}}\in\mathcal{K}(X_{\underline{m}}\mathcal{L}_{F})\) such that \[\|\phi_{\underline{m}}(a)+k_{\underline{m}}\|<\frac{\varepsilon}{\|\xi_{\underline{n}}\|\cdot\|\eta_{\underline{n}}\|}.\] Without loss of generality, we may assume that \(\underline{m}\neq\underline{0}\) (if \(\underline{m}=\underline{0}\), then argue as in the proof of Proposition 3.3.3 to replace \(\underline{m}\) by \(\underline{i}\) for some \(i\in F^{c}\)). We have that \[\iota_{\underline{m}}^{\underline{m}+\underline{n}}(k_{\underline{m}})=u_{\underline{m},\underline{n}}(k_{\underline{m}}\otimes{\rm id}_{X_{\underline{n}}})u_{\underline{m},\underline{n}}^{*},\] and \(k_{\underline{m}}\otimes{\rm id}_{X_{\underline{n}}}\in\mathcal{K}((X_{\underline{m}}\otimes_{A}X_{\underline{n}})\mathcal{L}_{F})\) by Corollary 2.2.5, noting that \(\phi_{\underline{n}}(\mathcal{L}_{F})\subseteq\mathcal{K}(X_{\underline{n}}\mathcal{L}_{F})\) by Proposition 2.5.2. It follows that \[\iota_{\underline{m}}^{\underline{m}+\underline{n}}(k_{\underline{m}})\in\mathcal{K}(X_{\underline{m}+\underline{n}}\mathcal{L}_{F}).\] Next, define \(\tau(\xi_{\underline{n}})\in\mathcal{L}(X_{\underline{m}},X_{\underline{m}+\underline{n}})\) and \(\tau(\eta_{\underline{n}})\in\mathcal{L}(X_{\underline{m}},X_{\underline{m}+\underline{n}})\) by \[\tau(\xi_{\underline{n}})\xi_{\underline{m}}=\xi_{\underline{n}}\xi_{\underline{m}}\ \ \text{and}\ \ \tau(\eta_{\underline{n}})\xi_{\underline{m}}=\eta_{\underline{n}}\xi_{\underline{m}}\] for all \(\xi_{\underline{m}}\in X_{\underline{m}}\), and observe that \(\|\tau(\xi_{\underline{n}})\|\leq\|\xi_{\underline{n}}\|\) and \(\|\tau(\eta_{\underline{n}})\|\leq\|\eta_{\underline{n}}\|\). We then have that \[\tau(\xi_{\underline{n}})^{*}\iota_{\underline{m}}^{\underline{m}+\underline{n}}(k_{\underline{m}})\tau(\eta_{\underline{n}})\in\mathcal{K}(X_{\underline{m}}\mathcal{L}_{F}). \tag{3.3}\] Indeed, taking \(\zeta_{\underline{n}},\zeta_{\underline{n}}^{\prime}\in X_{\underline{n}},\zeta_{\underline{m}},\zeta_{\underline{m}}^{\prime}\in X_{\underline{m}}\) and \(b\in\mathcal{L}_{F}\), we compute \[\tau(\xi_{\underline{n}})^{*}\Theta^{X_{\underline{m}+\underline{n}}}_{\zeta_{\underline{n}}\zeta_{\underline{m}}b,\,\zeta_{\underline{n}}^{\prime}\zeta_{\underline{m}}^{\prime}}\tau(\eta_{\underline{n}})=\Theta^{X_{\underline{m}}}_{\phi_{\underline{m}}(\langle\xi_{\underline{n}},\zeta_{\underline{n}}\rangle)\zeta_{\underline{m}}b,\,\phi_{\underline{m}}(\langle\eta_{\underline{n}},\zeta_{\underline{n}}^{\prime}\rangle)\zeta_{\underline{m}}^{\prime}}\in\mathcal{K}(X_{\underline{m}}\mathcal{L}_{F}).\] By taking finite linear combinations and their norm-limits, we conclude that \[\tau(\xi_{\underline{n}})^{*}\mathcal{K}(X_{\underline{m}+\underline{n}}\mathcal{L}_{F})\tau(\eta_{\underline{n}})\subseteq\mathcal{K}(X_{\underline{m}}\mathcal{L}_{F}),\] which implies (3.3). 
Moreover, a direct computation on the elements of \(X_{\underline{m}}\) yields that \[\tau(\xi_{\underline{n}})^{*}\phi_{\underline{m}+\underline{n}}(a)\tau(\eta_{\underline{n}})=\phi_{\underline{m}}(\langle\xi_{\underline{n}},\phi_{\underline{n}}(a)\eta_{\underline{n}}\rangle).\] We then have that \[\|\phi_{\underline{m}}(\langle\xi_{\underline{n}},\phi_{\underline{n}}(a)\eta_{\underline{n}}\rangle)+\tau(\xi_{\underline{n}})^{*}\iota_{\underline{m}}^{\underline{m}+\underline{n}}(k_{\underline{m}})\tau(\eta_{\underline{n}})\| =\|\tau(\xi_{\underline{n}})^{*}(\phi_{\underline{m}+\underline{n}}(a)+\iota_{\underline{m}}^{\underline{m}+\underline{n}}(k_{\underline{m}}))\tau(\eta_{\underline{n}})\|\] \[\leq\|\xi_{\underline{n}}\|\cdot\|\iota_{\underline{m}}^{\underline{m}+\underline{n}}(\phi_{\underline{m}}(a))+\iota_{\underline{m}}^{\underline{m}+\underline{n}}(k_{\underline{m}})\|\cdot\|\eta_{\underline{n}}\|\] \[\leq\|\xi_{\underline{n}}\|\cdot\|\phi_{\underline{m}}(a)+k_{\underline{m}}\|\cdot\|\eta_{\underline{n}}\|<\varepsilon.\] This shows that \(\big\langle\xi_{\underline{n}},\phi_{\underline{n}}(a)\eta_{\underline{n}}\big\rangle\in\mathcal{L}_{\mathrm{lim},F}\) by (3.3) and Proposition 3.3.3 (since \(\mathcal{L}\) is assumed to be invariant), and the proof is complete.

Next we explore the interaction of the \(\mathcal{L}^{(1)}\) construction with the partial ordering property. To this end, we have the following auxiliary proposition.

**Proposition 3.4.4**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be an (E)-\(2^{d}\)-tuple of \(X\) that is invariant and consists of ideals. Then for each \(F\subseteq[d]\), we have that_ \[\overline{\pi}_{X}(\mathcal{L}_{F}^{(1)})\overline{q}_{X,F}\subseteq\sum\{\mathfrak{J}_{\mathcal{L},D}^{(\overline{\pi}_{X},\overline{t}_{X})}\mid F\subseteq D\subseteq[d]\}\subseteq\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})}.\]

**Proof.** It suffices to show the first inclusion of the statement, as the second holds by definition. Without loss of generality, we may replace \((\overline{\pi}_{X},\overline{t}_{X})\) by the Fock representation \((\overline{\pi},\overline{t})\), and write \(\mathfrak{J}_{\mathcal{L}}\equiv\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi},\overline{t})}\) and \(\mathfrak{J}_{\mathcal{L},F}\equiv\mathfrak{J}_{\mathcal{L},F}^{(\overline{\pi},\overline{t})}\) for all \(F\subseteq[d]\). The claim holds trivially when \(F=\emptyset\) or \(F=[d]\), so fix \(\emptyset\neq F\subsetneq[d]\) and \(a\in\mathcal{L}_{F}^{(1)}\). We must show that \(\overline{\pi}(a)\overline{q}_{F}\in\sum\{\mathfrak{J}_{\mathcal{L},D}\mid F\subseteq D\subseteq[d]\}\). To this end, fix \(\varepsilon>0\). Since \(a\in\mathcal{L}_{F}^{(1)}\subseteq\mathcal{L}_{\mathrm{lim},F}\), there exists \(\underline{m}\perp F\) such that \[\|\phi_{\underline{n}}(a)+\mathcal{K}(X_{\underline{n}}\mathcal{L}_{F})\|<\varepsilon\text{ for every }\underline{n}\geq\underline{m}\text{ with }\underline{n}\perp F.\] Consider a suitable \(m\in\mathbb{N}\) such that \[\underline{n}:=(m+1)\cdot\underline{1}_{F^{c}}\geq\underline{m},\] and take \(k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}}\mathcal{L}_{F})\) such that \(\|\phi_{\underline{n}}(a)+k_{\underline{n}}\|<\varepsilon\). 
Next, for each \(i\in F^{c}\) we define the projection \[W_{i}\colon\mathcal{F}X\to\sum\{X_{\underline{r}}\mid\underline{r}=(r_{1}, \ldots,r_{d})\in\mathbb{Z}_{+}^{d},r_{i}\leq m\},\] and consider the operator \[\overline{\pi}(a)\overline{q}_{F}\prod_{i\in F^{c}}(I-W_{i})=\overline{\pi}(a )\overline{q}_{F}+\sum_{\emptyset\neq D\subseteq F^{c}}(-1)^{|D|}\overline{ \pi}(a)\overline{q}_{F}\prod_{i\in D}W_{i}.\] The products can be taken in any order, since the projections \(W_{i}\) commute. For each element \(\underline{r}=(r_{1},\ldots,r_{d})\in\mathbb{Z}_{+}^{d}\) and \(\zeta_{\underline{r}}\in X_{\underline{r}}\), we have that \[\overline{\pi}(a)\overline{q}_{F}\prod_{i\in F^{c}}(I-W_{i})\zeta_{\underline{ r}}=\begin{cases}\phi_{\underline{r}}(a)\zeta_{\underline{r}}&\text{if }r_{i}\geq m+1\text{ for all }i\in F^{c}\text{ and }\underline{r}\perp F,\\ 0&\text{if }r_{i}\leq m\text{ for some }i\in F^{c}\text{ or }\underline{r}\not\perp F.\end{cases}\] It follows that \[\overline{\pi}(a)\overline{q}_{F}\prod_{i\in F^{c}}(I-W_{i})=\sum_{\underline{ r}>\underline{n},\underline{r}\perp F}\phi_{\underline{r}}(a).\] Then we have that \[\|\sum_{\underline{r}>\underline{n},\underline{r}\perp F}\phi_{ \underline{r}}(a)+\sum_{\underline{r}>\underline{n},\underline{r}\perp F}\iota _{\underline{n}}^{\underline{r}}(k_{\underline{n}})\| =\|\sum_{\underline{r}>\underline{n},\underline{r}\perp F}\iota _{\underline{n}}^{\underline{r}}(\phi_{\underline{n}}(a)+k_{\underline{n}})\|\] \[=\sup\{\|\iota_{\underline{n}}^{\underline{r}}(\phi_{\underline{n} }(a)+k_{\underline{n}})\|\mid\underline{r}\geq\underline{n},\underline{r} \perp F\}\] \[\leq\|\phi_{\underline{n}}(a)+k_{\underline{n}}\|<\varepsilon,\] using that each \(\iota_{\underline{n}}^{\underline{r}}\) is contractive. Therefore, we deduce that \[\|\overline{\pi}(a)\overline{q}_{F}\prod_{i\in F^{c}}(I-W_{i})+\sum_{\underline {r}>\underline{n},\underline{r}\perp F}\iota_{\underline{n}}^{\underline{r}} (k_{\underline{n}})\|<\varepsilon. \tag{3.4}\] We will prove two claims before proceeding to the completion of the proof. _Claim 1. The element \(x_{F}:=\sum_{\underline{r}\geq\underline{n},\underline{r}\perp F}\iota_{ \underline{n}}^{\underline{r}}(k_{\underline{n}})\) belongs to \(\mathfrak{J}_{\mathcal{L},F}\)._ _Proof of Claim 1._ First suppose \(k_{\underline{n}}=\|\!\cdot\!\|\!\cdot\!\lim_{n}k_{\underline{n},n}\), where each \(k_{\underline{n},n}\) is a finite sum of rank-one operators in \(\mathcal{K}(X_{\underline{n}}\mathcal{L}_{F})\). Then \(x_{F}\) is the norm-limit of the elements \(\sum_{\underline{r}\geq\underline{n},\underline{r}\perp F}\iota_{\underline{n}} ^{\underline{r}}(k_{\underline{n},n})\). Thus it suffices to show that each operator \(\sum_{\underline{r}\geq\underline{n},\underline{r}\perp F}\iota_{\underline{n} }^{\underline{r}}(k_{\underline{n},n})\) belongs to \(\mathfrak{J}_{\mathcal{L},F}\). In turn, it suffices to show that \[\sum_{\underline{r}\geq\underline{n},\underline{r}\perp F}\iota_{\underline{n} }^{\underline{r}}(\Theta_{\underline{\varsigma}_{\underline{n}},\eta_{\underline{n} }})=\overline{t}_{\underline{n}}(\xi_{\underline{n}})\overline{\pi}(b) \overline{q}_{F}\overline{t}_{\underline{n}}(\eta_{\underline{n}})^{*}, \tag{3.5}\] where \(\xi_{\underline{n}},\eta_{\underline{n}}\in X_{\underline{n}}\) and \(b\in\mathcal{L}_{F}\). 
First note that if \(\underline{s}\not\geq\underline{n}\) or \(\underline{s}\not\perp F\), then \[\overline{t}_{\underline{n}}(\xi_{\underline{n}})\overline{\pi}(b)\overline{q} _{F}\overline{t}_{\underline{n}}(\eta_{\underline{n}})^{*}\zeta_{\underline{s }}=0\text{ for all }\zeta_{\underline{s}}\in X_{\underline{s}}.\] Indeed, if \(\underline{s}\not\perp\underline{n}\), then \(\overline{t}_{\underline{n}}(\eta_{\underline{n}})^{*}\zeta_{\underline{s}}=0\) by definition. If \(\underline{s}\geq\underline{n}\) but \(\underline{s}\not\perp F\), then \(\operatorname{supp}(\underline{s}-\underline{n})\cap F\neq\emptyset\), and hence \(\overline{q}_{F}\overline{t}_{\underline{n}}(\eta_{\underline{n}})^{*}\zeta_ {\underline{s}}=0\). Thus both sides of (3.5) map \(X_{\underline{s}}\) to \(0\) when \(\underline{s}\not\geq\underline{n}\) or \(\underline{s}\not\perp F\). Next suppose that \(\underline{s}\geq\underline{n}\) and \(\underline{s}\perp F\), and let \(\zeta_{\underline{s}}=\zeta_{\underline{n}}\zeta_{\underline{s}-\underline{n}}\) for some \(\zeta_{\underline{n}}\in X_{\underline{n}}\) and \(\zeta_{\underline{s}-\underline{n}}\in X_{\underline{s}-\underline{n}}\). Then we have that \[\overline{t}_{\underline{n}}(\xi_{\underline{n}})\overline{\pi}(b )\overline{q}_{F}\overline{t}_{\underline{n}}(\eta_{\underline{n}})^{*}\zeta_ {\underline{s}} =\xi_{\underline{n}}(\phi_{\underline{s}-\underline{n}}(b\left< \eta_{\underline{n}},\zeta_{\underline{n}}\right>)\zeta_{\underline{s}- \underline{n}})=\left(\Theta_{\xi_{\underline{n}}b,\eta_{\underline{n}}}(\zeta _{\underline{n}})\right)\zeta_{\underline{s}-\underline{n}}\] \[=\iota_{\underline{n}}^{\underline{s}}(\Theta_{\xi_{\underline{n} }b,\eta_{\underline{n}}})(\zeta_{\underline{n}}\zeta_{\underline{s}-\underline{n }})=\sum_{\underline{r}\geq\underline{n},\underline{r}\perp F}\iota_{\underline{ n}}^{\underline{r}}(\Theta_{\xi_{\underline{n}}b,\eta_{\underline{n}}})\zeta_{ \underline{s}},\] noting that \(\underline{s}-\underline{n}\perp F\). It follows that (3.5) holds, finishing the proof of Claim 1. _Claim 2. The element \(x_{F\cup D}:=\overline{\pi}(a)\overline{q}_{F}\prod_{i\in D}W_{i}\) belongs to \(\mathfrak{J}_{\mathcal{L},F\cup D}\) for each \(\emptyset\neq D\subseteq F^{c}\)._ Proof of Claim 2.: Let us first consider the case where \(D=F^{c}\). For \(\underline{r}=(r_{1},\ldots,r_{d})\in\mathbb{Z}_{+}^{d}\) and \(\zeta_{\underline{r}}\in X_{\underline{r}}\), we have that \[\overline{\pi}(a)\overline{q}_{F}\prod_{i\in F^{c}}W_{i}\zeta_{\underline{r}} =\begin{cases}\phi_{\underline{r}}(a)\zeta_{\underline{r}}&\text{if } \underline{0}\leq\underline{r}\leq m\cdot\underline{1}_{F^{c}},\\ 0&\text{if }r_{i}\geq m+1\text{ for some }i\in F^{c}\text{ or }\underline{r}\not\perp F,\end{cases}\] from which it follows that \[\overline{\pi}(a)\overline{q}_{F}\prod_{i\in F^{c}}W_{i}=\sum_{\underline{0} \leq r\leq m\cdot\underline{1}_{F^{c}}}\phi_{\underline{r}}(a),\] noting that the sum is finite. Fix \(\underline{0}\leq\underline{r}\leq m\cdot\underline{1}_{F^{c}}\) and note that \(\underline{r}\perp F\). Since \(a\in\mathcal{L}_{F}^{(1)}\) and \(\mathcal{L}^{(1)}\) is an (E)-\(2^{d}\)-tuple that is invariant and consists of ideals by Proposition 3.4.3, an application of Proposition 2.5.2 gives that \(\phi_{\underline{r}}(a)\in\mathcal{K}(X_{\underline{r}}\mathcal{L}_{F}^{(1)})\). Additionally, \(\mathcal{L}_{F}^{(1)}\subseteq\mathcal{L}_{\operatorname{inv},F}\) by definition, and so in particular \(\mathcal{L}_{F}^{(1)}\subseteq\mathcal{L}_{[d]}\). 
Consequently \(\phi_{\underline{r}}(a)\in\mathcal{K}(X_{\underline{r}}\mathcal{L}_{[d]})\). We claim that \(\phi_{\underline{r}}(a)\in\mathfrak{J}_{\mathcal{L},[d]}\) when viewed as an operator in \(\mathcal{L}(\mathcal{F}X)\). First note that \[\phi_{\underline{0}}(a)=\overline{\pi}(a)\overline{q}_{[d]}\in\mathfrak{J}_{ \mathcal{L},[d]},\] as \(a\in\mathcal{L}_{F}^{(1)}\subseteq\mathcal{L}_{[d]}\), so we may assume that \(\underline{r}\neq\underline{0}\). We see that \[\mathcal{K}(X_{\underline{r}}\mathcal{L}_{[d]})\subseteq\mathfrak{J}_{ \mathcal{L},[d]}, \tag{3.6}\] when identifying \(\mathcal{K}(X_{\underline{r}}\mathcal{L}_{[d]})\) within \(\mathcal{L}(\mathcal{F}X)\). Indeed, it suffices to show this for rank-one operators. To this end, take \(k_{\underline{r}}=\Theta_{\xi_{\underline{r}}b,\eta_{\underline{r}}}\) for some \(\xi_{\underline{r}},\eta_{\underline{r}}\in X_{\underline{r}}\) and \(b\in\mathcal{L}_{[d]}\). We claim that \[\Theta_{\xi_{\underline{r}}b,\eta_{\underline{r}}}=\overline{t}_{\underline{r}}( \xi_{\underline{r}})\overline{\pi}(b)\overline{q}_{[d]}\overline{t}_{\underline{ r}}(\eta_{\underline{r}})^{*}, \tag{3.7}\] when we view \(\Theta_{\xi_{\underline{r}}b,\eta_{\underline{r}}}\) as an operator in \(\mathcal{L}(\mathcal{F}X)\). Notice that both sides of (3.7) map every summand \(X_{\underline{s}}\) for which \(\underline{s}\neq\underline{r}\) to \(0\). On the other hand, arguing as we did with \(x_{F}\), we deduce that the right hand side of (3.7) coincides with \(\Theta_{\xi_{\underline{r}}b,\eta_{\underline{r}}}\) on the \(X_{\underline{r}}\) summand. Hence (3.7) and in turn (3.6) hold, and applying for \(\phi_{\underline{r}}(a)\in\mathcal{K}(X_{\underline{r}}\mathcal{L}_{[d]})\) gives that \(\phi_{\underline{r}}(a)\in\mathfrak{J}_{\mathcal{L},[d]}\). We conclude that \[x_{[d]}=\overline{\pi}(a)\overline{q}_{F}\prod_{i\in F^{c}}W_{i}\in\mathfrak{J}_{ \mathcal{L},[d]},\] being a finite sum of elements of \(\mathfrak{J}_{\mathcal{L},[d]}\). Now take \(\emptyset\neq D\subsetneq F^{c}\). For \(\underline{r}=(r_{1},\ldots,r_{d})\in\mathbb{Z}_{+}^{d}\) and \(\zeta_{\underline{r}}\in X_{\underline{r}}\), we have that \[\overline{\pi}(a)\overline{q}_{F}\prod_{i\in D}W_{i}\zeta_{\underline{r}}= \begin{cases}\phi_{\underline{r}}(a)\zeta_{\underline{r}}&\text{if }r_{i}\leq m\text{ for all }i\in D\text{ and }\underline{r}\perp F,\\ 0&\text{if }r_{i}\geq m+1\text{ for some }i\in D\text{ or }\underline{r}\not\perp F.\end{cases}\] It follows that \[\overline{\pi}(a)\overline{q}_{F}\prod_{i\in D}W_{i}=\sum_{\underline{0}\leq r \leq m\perp_{D}}\bigg{(}\sum_{\underline{s}\perp F\cup D}\phi_{\underline{r}+ \underline{s}}(a)\bigg{)},\] noting that the sum over \(\underline{r}\) is finite. Fix \(\underline{0}\leq\underline{r}\leq m\cdot\underline{1}_{D}\) and consider \(\sum_{\underline{s}\perp F\cup D}\phi_{\underline{r}+\underline{s}}(a)\). Since \(a\in\mathcal{L}_{F}^{(1)}\) and \(\mathcal{L}^{(1)}\) is an (E)-\(2^{d}\)-tuple that is invariant and consists of ideals by Proposition 3.4.3, an application of Proposition 2.5.2 gives that \(\phi_{\underline{r}}(a)\in\mathcal{K}(X_{\underline{r}}\mathcal{L}_{F}^{(1)})\), noting that \(\underline{r}\perp F\). Additionally, by definition we have that \[\mathcal{L}_{F}^{(1)}\subseteq\mathcal{L}_{\operatorname{inv},F}\subseteq \cap_{F\subsetneq G}\mathcal{L}_{G}.\] In particular, notice that \(D\cap F^{c}\neq\emptyset\) and hence \(F\subsetneq F\cup D\). 
Thus \(\mathcal{L}_{F}^{(1)}\subseteq\mathcal{L}_{F\cup D}\), and consequently \(\phi_{\underline{r}}(a)\in\mathcal{K}(X_{\underline{r}}\mathcal{L}_{F\cup D})\). Notice that \[\sum_{\underline{s}\perp F\cup D}\phi_{\underline{r}+\underline{s}}(a)=\sum_{ \underline{s}\perp F\cup D}t_{\underline{r}}^{\underline{r}+\underline{s}}( \phi_{\underline{r}}(a)).\] We claim that \(\sum_{\underline{s}\perp F\cup D}t_{\underline{r}}^{\underline{r}+\underline{ s}}(\phi_{\underline{r}}(a))\in\mathfrak{J}_{\mathcal{L},F\cup D}\). First note that if \(\underline{r}=\underline{0}\), then \[\sum_{\underline{s}\perp F\cup D}t_{\underline{0}}^{\underline{s}}(\phi_{ \underline{0}}(a))=\sum_{\underline{s}\perp F\cup D}\phi_{\underline{s}}(a)= \overline{\pi}(a)\overline{q}_{F\cup D}\in\mathfrak{J}_{\mathcal{L},F\cup D},\] using that \(a\in\mathcal{L}_{F}^{(1)}\subseteq\mathcal{L}_{F\cup D}\) in the final membership. For \(\underline{r}\neq\underline{0}\), we see that \[\sum_{\underline{s}\perp F\cup D}t_{\underline{r}}^{\underline{r}+\underline{ s}}(k_{\underline{r}})\in\mathfrak{J}_{\mathcal{L},F\cup D}\text{ for all }k_{\underline{r}}\in\mathcal{K}(X_{\underline{r}}\mathcal{L}_{F\cup D}). \tag{3.8}\] Indeed, it suffices to show this for \(k_{\underline{r}}=\Theta_{\xi_{\underline{r}}b,\eta_{\underline{r}}}\) for some \(\xi_{\underline{r}},\eta_{\underline{r}}\in X_{\underline{r}}\) and \(b\in\mathcal{L}_{F\cup D}\). Arguing as in the proof of Claim 1 for (3.5), we have that \[\sum_{\underline{s}\perp F\cup D}t_{\underline{r}}^{\underline{r}+\underline{ s}}(\Theta_{\xi_{\underline{r}}b,\eta_{\underline{r}}})=\overline{t}_{\underline{r}}( \xi_{\underline{r}})\overline{\pi}(b)\overline{q}_{F\cup D}\overline{t}_{ \underline{r}}(\eta_{\underline{r}})^{*}\in\mathfrak{J}_{\mathcal{L},F\cup D}.\] Hence (3.8) holds, and applying for \(k_{\underline{r}}=\phi_{\underline{r}}(a)\in\mathcal{K}(X_{\underline{r}} \mathcal{L}_{F\cup D})\) yields that \[\sum_{\underline{s}\perp F\cup D}t_{\underline{r}}^{\underline{r}+\underline{ s}}(\phi_{\underline{r}}(a))\in\mathfrak{J}_{\mathcal{L},F\cup D}.\] Therefore, we conclude that \[x_{F\cup D}=\overline{\pi}(a)\overline{q}_{F}\prod_{i\in D}W_{i}\in\mathfrak{ J}_{\mathcal{L},F\cup D},\] being a finite sum of elements of \(\mathfrak{J}_{\mathcal{L},F\cup D}\). This finishes the proof of Claim 2. We now conclude the proof of the proposition. By Claim 2, we have that \[x:=\sum_{\emptyset\neq D\subseteq F^{c}}(-1)^{|D|}\overline{\pi}(a)\overline{ q}_{F}\prod_{i\in D}W_{i}=\sum_{\emptyset\neq D\subseteq F^{c}}(-1)^{|D|}x_{F \cup D}\in\sum_{F\subsetneq D\subseteq[d]}\mathfrak{J}_{\mathcal{L},D}.\] In total, we have that \[\|\overline{\pi}(a)\overline{q}_{F}+(x+x_{F})\|=\|\overline{\pi}(a)\overline{ q}_{F}\prod_{i\in F^{c}}(I-W_{i})+\sum_{\underline{r}\geq\underline{n},\underline{r} \perp F}t_{\underline{n}}^{\underline{r}}(k_{\underline{n}})\|<\varepsilon,\] using (3.4) in the final inequality. So, employing Claims 1 and 2 in tandem, we have shown that for every \(\varepsilon>0\) the element \(\overline{\pi}(a)\overline{q}_{F}\) is \(\varepsilon\)-close to an element of \(\sum_{F\subseteq D\subseteq[d]}\mathfrak{J}_{\mathcal{L},D}\). Hence \(\overline{\pi}(a)\overline{q}_{F}\in\sum_{F\subseteq D\subseteq[d]}\mathfrak{ J}_{\mathcal{L},D}\), as required. 
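It may be worth recording a small sanity check at this point; it is only an illustrative aside that follows directly from Definition 3.4.1 and is not used in the sequel. When \(d=1\) there is no set \(F\) with \(\emptyset\neq F\subsetneq[d]\), so for any (E)-\(2^{1}\)-tuple \(\mathcal{L}\) the construction simply gives \[\mathcal{L}_{\emptyset}^{(1)}=\{0\}=\mathcal{L}_{\emptyset}\quad\text{and}\quad\mathcal{L}_{[1]}^{(1)}=\mathcal{L}_{[1]},\] that is, \(\mathcal{L}^{(1)}=\mathcal{L}\). In particular, the ideals \(\mathcal{L}_{\mathrm{inv},F}\) and \(\mathcal{L}_{\mathrm{lim},F}\) genuinely enter the picture only when \(d\geq 2\).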
We are now ready to prove that when \(\mathcal{L}\) is in addition partially ordered, the (E)-\(2^{d}\)-tuple \(\mathcal{L}^{(1)}\) is partially ordered, contains \(\mathcal{L}\) and induces the same gauge-invariant ideal of \(\mathcal{N}\mathcal{T}_{X}\). **Proposition 3.4.5**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be an (E)-\(2^{d}\)-tuple of \(X\) that is invariant, partially ordered and consists of ideals. Then \(\mathcal{L}^{(1)}\) is an (E)-\(2^{d}\)-tuple of \(X\) that is invariant, partially ordered, consists of ideals, and is such that \(\mathcal{L}\subseteq\mathcal{L}^{(1)}\) and \(\mathfrak{J}_{\mathcal{L}}^{(\pi_{X},\mathfrak{I}_{X})}=\mathfrak{J}_{ \mathcal{L}^{(1)}}^{(\pi_{X},\mathfrak{I}_{X})}\)._ **Proof.** By Proposition 3.4.3, the family \(\mathcal{L}^{(1)}\) is an (E)-\(2^{d}\)-tuple that is invariant and consists of ideals. For the partial ordering property, it is immediate that \(\{0\}=\mathcal{L}_{\emptyset}^{(1)}\subseteq\mathcal{L}_{F}^{(1)}\) for all \(F\subseteq[d]\). Likewise, we have that \[\mathcal{L}_{F}^{(1)}\subseteq X_{\underline{0}}^{-1}(\cap_{F\subseteq D} \mathcal{L}_{D})=\cap_{F\subset D}\mathcal{L}_{D}\subseteq\mathcal{L}_{[d]}= \mathcal{L}_{[d]}^{(1)}\] for all \(\emptyset\neq F\subsetneq[d]\). Next fix \(\emptyset\neq F\subseteq F^{\prime}\subsetneq[d]\), and let \(a\in\mathcal{L}_{F}^{(1)}\). Notice that \(a\in\mathcal{L}_{F}^{(1)}\subseteq\mathcal{I}_{F}\subseteq\mathcal{I}_{F^{ \prime}}\) because \(\mathcal{I}\) is partially ordered. Next fix \(\underline{m}\perp F^{\prime}\). Since \(F\subseteq F^{\prime}\), we have that \(\underline{m}\perp F\) and thus \[\big{\langle}X_{\underline{m}},aX_{\underline{m}}\big{\rangle}\subseteq\cap_{ F\subsetneq D}\mathcal{L}_{D}\subseteq\cap_{F^{\prime}\subsetneq D}\mathcal{L}_{D},\] using that \(a\in\mathcal{L}_{\mathrm{inv},F}\). It follows that \(a\in\mathcal{L}_{\mathrm{inv},F^{\prime}}\). In order to prove that \(a\in\mathcal{L}_{\mathrm{lim},F^{\prime}}\), we resort to Proposition 3.3.4. More specifically, it suffices to show that \(\overline{\pi}_{X}(a)\overline{q}_{X,F^{\prime}}\in\mathfrak{J}_{\mathcal{L}} ^{(\overline{\pi}_{X},\mathfrak{I}_{X})}\). To this end, note that \(a\in\bigcap\{\phi_{\hat{\imath}}^{-1}(\mathcal{K}(X_{\underline{\imath}})) \mid i\in F^{\prime}\}\), since \(\mathcal{L}^{(1)}\) is an (E)-\(2^{d}\)-tuple, and \(\overline{\pi}_{X}(a)\overline{q}_{X,F}\in\mathfrak{J}_{\mathcal{L}}^{( \overline{\pi}_{X},\mathfrak{I}_{X})}\) by Proposition 3.4.4. An application of Proposition 3.2.5 then gives that \(\overline{\pi}_{X}(a)\overline{q}_{X,F^{\prime}}\in\mathfrak{J}_{\mathcal{L}} ^{(\overline{\pi}_{X},\mathfrak{I}_{X})}\), as required. Hence \(a\in\mathcal{L}_{F^{\prime}}^{(1)}\) and we conclude that \(\mathcal{L}^{(1)}\) is partially ordered. Next we prove that \(\mathcal{L}\subseteq\mathcal{L}^{(1)}\). Notice that \(\mathcal{L}_{\emptyset}\subseteq\mathcal{L}_{\emptyset}^{(1)}\) and \(\mathcal{L}_{[d]}\subseteq\mathcal{L}_{[d]}^{(1)}\) trivially, so fix \(\emptyset\neq F\subsetneq[d]\) and \(a\in\mathcal{L}_{F}\). Since \(\mathcal{L}\) is an (E)-\(2^{d}\)-tuple, we have that \(a\in\mathcal{I}_{F}\). Since \(\mathcal{L}\) is invariant and partially ordered, we have that \[\big{\langle}X_{\underline{m}},aX_{\underline{m}}\big{\rangle}\subseteq \mathcal{L}_{F}\subseteq\cap_{F\subsetneq D}\mathcal{L}_{D}\] for all \(\underline{m}\perp F\), and thus \(a\in\mathcal{L}_{\mathrm{inv},F}\). 
Note that \(\phi_{\underline{m}}(a)\in\mathcal{K}(X_{\underline{m}}\mathcal{L}_{F})\) for all \(\underline{m}\perp F\), by Proposition 2.5.2. Therefore, by choosing any \(\underline{m}\perp F\) and \(k_{\underline{m}}=-\phi_{\underline{m}}(a)\in\mathcal{K}(X_{\underline{m}} \mathcal{L}_{F})\), we obtain that \(\|\phi_{\underline{m}}(a)+k_{\underline{m}}\|=0\), and thus \(a\in\mathcal{L}_{\mathrm{lim},F}\) by Proposition 3.3.3. Hence \(\mathcal{L}_{F}\subseteq\mathcal{L}_{F}^{(1)}\), as required. Finally, we show that \(\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\mathfrak{I}_{X})}=\mathfrak{J }_{\mathcal{L}^{(1)}}^{(\pi_{X},\mathfrak{I}_{X})}\). The forward inclusion is immediate since \(\mathcal{L}\subseteq\mathcal{L}^{(1)}\). On the other hand, we have that \(\overline{\pi}_{X}(\mathcal{L}_{F}^{(1)})\overline{q}_{X,F}\subseteq\mathfrak{J }_{\mathcal{L}}^{(\overline{\pi}_{X},\mathfrak{I}_{X})}\) for all \(F\subseteq[d]\) by Proposition 3.4.4, giving the reverse inclusion and completing the proof. We are now ready to provide a full characterisation of (M)-\(2^{d}\)-tuples. **Theorem 3.4.6**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and suppose that \(\mathcal{L}\) is a \(2^{d}\)-tuple of \(X\). Then \(\mathcal{L}\) is an (M)-\(2^{d}\)-tuple of \(X\) if and only if \(\mathcal{L}\) satisfies the following four conditions:_ * \(\mathcal{L}\) _consists of ideals and_ \(\mathcal{L}\subseteq\mathcal{J}\)_,_ * \(\mathcal{L}\) _is invariant,_ * \(\mathcal{L}\) _is partially ordered,_ * \(\mathcal{L}^{(1)}\subseteq\mathcal{L}\)_._ **Proof.** Assume that \(\mathcal{L}\) is an (M)-\(2^{d}\)-tuple. First we use Propositions 3.2.4 and 3.2.6 together with maximality of \(\mathcal{L}\) to deduce that \(\mathcal{L}\) consists of ideals, that \(\mathcal{L}\subseteq\mathcal{I}\subseteq\mathcal{J}\), and that \(\mathcal{L}\) is invariant and partially ordered. Proposition 3.4.5 together with maximality of \(\mathcal{L}\) gives that \(\mathcal{L}^{(1)}\subseteq\mathcal{L}\), proving the forward implication. Now suppose that \(\mathcal{L}\) is a \(2^{d}\)-tuple that satisfies conditions (i)-(iv). Conditions (i) and (ii) imply that \(\mathcal{L}\) is an (E)-\(2^{d}\)-tuple by Proposition 3.2.3, and so we can consider \(\mathcal{L}^{(1)}\). It remains to check that \(\mathcal{L}\) is maximal. To this end, let \(\mathcal{M}\) be the (M)-\(2^{d}\)-tuple such that \(\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\mathfrak{I}_{X})}=\mathfrak{J}_{ \mathcal{M}}^{(\overline{\pi}_{X},\mathfrak{I}_{X})}\), as guaranteed by Proposition 3.2.9. Note also that \(\mathcal{L}\subseteq\mathcal{M}\) by Proposition 3.1.8. It suffices to show that \(\mathcal{M}\subseteq\mathcal{L}\). On one hand, we have that \(\mathcal{M}_{\emptyset}=\mathcal{L}_{\emptyset}=\{0\}\), since both \(\mathcal{M}\) and \(\mathcal{L}\) are (E)-\(2^{d}\)-tuples. In order to show that \(\mathcal{M}_{F}\subseteq\mathcal{L}_{F}\) for all \(\emptyset\neq F\subseteq[d]\), we apply strong (downward) induction on \(|F|\). For the base case, take \(a\in\mathcal{M}_{[d]}\). Then \(\overline{\pi}_{X}(a)\overline{q}_{X,[d]}\in\mathfrak{J}_{\mathcal{M}}^{( \overline{\pi}_{X},\overline{l}_{X})}=\mathfrak{J}_{\mathcal{L}}^{(\overline{ \pi}_{X},\overline{l}_{X})}\). 
Using the fact that \(\mathcal{NT}_{X}\cong\mathrm{C}^{*}(\overline{\pi},\overline{t})\) canonically for the Fock representation \((\overline{\pi},\overline{t})\), we have that \(\overline{\pi}(a)\overline{q}_{[d]}\in\mathfrak{J}_{\mathcal{L}}^{(\overline {\pi},\overline{t})}\). Conditions (i)-(iii) together with Propositions 3.3.1 and 3.3.2 give that \[\phi_{\underline{0}}(a)=\overline{q}_{[d]}\left(\overline{\pi}(a)\overline{q}_{ [d]}\right)\overline{q}_{[d]}\in\overline{q}_{[d]}^{2}\left(\mathfrak{J}_{ \mathcal{L}}^{(\overline{\pi},\overline{t})}\right)\overline{q}_{[d]}^{2} \subseteq\overline{q}_{[d]}\left(\mathfrak{J}_{\mathcal{L},[d]}^{(\overline {\pi},\overline{t})}\right)\overline{q}_{[d]}=\overline{q}_{[d]}\overline{\pi }(\mathcal{L}_{[d]})\overline{q}_{[d]}=\phi_{\underline{0}}(\mathcal{L}_{[d]}),\] and thus \(a\in\mathcal{L}_{[d]}\). This shows that \(\mathcal{M}_{[d]}\subseteq\mathcal{L}_{[d]}\). Next, suppose that \(\mathcal{M}_{F}\subseteq\mathcal{L}_{F}\) for all \(\emptyset\neq F\subseteq[d]\) with \(|F|\geq n+1\), where \(1\leq n\leq d-1\). Fix \(\emptyset\neq F\subsetneq[d]\) with \(|F|=n\). Since \(\mathcal{M}\) is an (E)-\(2^{d}\)-tuple, we have that \(\mathcal{M}_{F}\subseteq\mathcal{I}_{F}\). Note that \(\mathcal{M}\), being an (M)-\(2^{d}\)-tuple, satisfies conditions (i)-(iv) by the forward implication. In particular, we have that \(\mathcal{M}\) is invariant, so \[\left\langle X_{\underline{m}},\mathcal{M}_{F}X_{\underline{m}}\right\rangle \subseteq\mathcal{M}_{F}\text{ for all }\underline{m}\perp F.\] Likewise, we have that \(\mathcal{M}\) is partially ordered, so \(\left\langle X_{\underline{m}},\mathcal{M}_{F}X_{\underline{m}}\right\rangle \subseteq\cap_{F\subsetneq D}\mathcal{M}_{D}\) for all \(\underline{m}\perp F\). By the inductive hypothesis, we have that \(\mathcal{M}_{D}=\mathcal{L}_{D}\) whenever \(F\subsetneq D\), as \(|D|\geq n+1\). Hence \[\left\langle X_{\underline{m}},\mathcal{M}_{F}X_{\underline{m}}\right\rangle \subseteq\cap_{F\subsetneq D}\mathcal{L}_{D}\text{ for all }\underline{m}\perp F.\] Thus \(\mathcal{M}_{F}\subseteq\mathcal{L}_{\mathrm{inv},F}\). Moreover, by definition we have that \[\overline{\pi}_{X}(\mathcal{M}_{F})\overline{q}_{X,F}\subseteq\mathfrak{J}_{ \mathcal{M}}^{(\overline{\pi}_{X},\overline{l}_{X})}=\mathfrak{J}_{\mathcal{ L}}^{(\overline{\pi}_{X},\overline{l}_{X})}.\] Hence, by applying Proposition 3.3.4, we deduce that \(\mathcal{M}_{F}\subseteq\mathcal{L}_{\mathrm{lim},F}\). In total, we have that \[\mathcal{M}_{F}\subseteq\mathcal{I}_{F}\cap\mathcal{L}_{\mathrm{inv},F}\cap \mathcal{L}_{\mathrm{lim},F}=\mathcal{L}_{F}^{(1)},\] and thus \(\mathcal{M}_{F}\subseteq\mathcal{L}_{F}\) by condition (iv), as required. Induction then completes the proof. Iteration of the \(\mathcal{L}^{(1)}\) construction constitutes the final ingredient for attaining maximality. 
For if we start with an (E)-\(2^{d}\)-tuple \(\mathcal{L}\) that is invariant, partially ordered and consists of ideals, then iterative applications of Proposition 3.4.5 produce a sequence of (E)-\(2^{d}\)-tuples that are invariant, partially ordered, consist of ideals and satisfy \[\mathcal{L}\subseteq\mathcal{L}^{(1)}\subseteq\cdots\subseteq\mathcal{L}^{(k)} \subseteq\ldots\quad\text{and}\quad\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_ {X},\overline{l}_{X})}=\mathfrak{J}_{\mathcal{L}^{(k)}}^{(\overline{\pi}_{X}, \overline{l}_{X})}\text{ for all }k\in\mathbb{N}.\] Since each \(\mathcal{L}^{(k)}\) induces the same gauge-invariant ideal of \(\mathcal{NT}_{X}\), if the sequence eventually stabilises then it must stabilise to an (M)-\(2^{d}\)-tuple by Theorem 3.4.6. **Theorem 3.4.7**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be an (E)-\(2^{d}\)-tuple of \(X\) that is invariant, partially ordered, and consists of ideals. Fix \(0\leq k\leq d\). Then whenever \(F\subseteq[d]\) satisfies \(|F|=d-k\), we have that \(\mathcal{L}_{F}^{(m)}=\mathcal{L}_{F}^{(k)}\) for all \(m\geq k\). Consequently, \(\mathcal{L}^{(d-1)}\) is the (M)-\(2^{d}\)-tuple that induces \(\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{l}_{X})}\)._ **Proof.** We proceed by applying strong induction on \(k\). For the base case, if \(F=[d]\) then \(\mathcal{L}_{[d]}^{(m)}=\mathcal{L}_{[d]}=\mathcal{L}_{[d]}^{(0)}\) for all \(m\geq 0\) by construction. Next, fix \(0\leq N\leq d-1\) and suppose that whenever \(0\leq n\leq N\) and \(D\subseteq[d]\) satisfies \(|D|=d-n\), we have that \(\mathcal{L}_{D}^{(m)}=\mathcal{L}_{D}^{(n)}\) for all \(m\geq n\). Now we fix \(F\subseteq[d]\) such that \(|F|=d-(N+1)\). We must show that \(\mathcal{L}_{F}^{(m)}=\mathcal{L}_{F}^{(N+1)}\) for all \(m\geq N+1\). First note that this is clear if \(F=\emptyset\) (i.e., if \(N=d-1\)), as each \(\mathcal{L}_{\emptyset}^{(m)}=\{0\}\). So, without loss of generality, we may exclude the case where \(N=d-1\) and consider \(\emptyset\neq F\subsetneq[d]\). Fix \(m\geq N+1\). We have already argued that \(\mathcal{L}_{F}^{(N+1)}\subseteq\mathcal{L}_{F}^{(m)}\), so it remains to verify the reverse inclusion. Take \(a\in\mathcal{L}_{F}^{(m)}\). In particular, we have that \(a\in\mathcal{I}_{F}\). Fix \(\underline{m}\perp F\) and note that \(\left\langle X_{\underline{m}},aX_{\underline{m}}\right\rangle\subseteq\cap_{F \subsetneq D}\mathcal{L}_{D}^{(m-1)}\) by definition. Notice that whenever \(F\subsetneq D\), we must have that \(|D|=d-k\) for some \(0\leq k\leq N\). By the inductive hypothesis, we have that \(\mathcal{L}_{D}^{(k)}=\mathcal{L}_{D}^{(N)}=\mathcal{L}_{D}^{(m-1)}\) and hence \(\left\langle X_{\underline{m}},aX_{\underline{m}}\right\rangle\subseteq\cap_{F \subsetneq D}\mathcal{L}_{D}^{(N)}\). In turn, we have that \(a\in\mathcal{L}_{\mathrm{inv},F}^{(N)}\). Notice also that \(\overline{\pi}_{X}(a)\overline{q}_{X,F}\in\mathfrak{J}_{\mathcal{L}^{(m)}}^{( \overline{\pi}_{X},\overline{t}_{X})}=\mathfrak{J}_{\mathcal{L}^{(N)}}^{( \overline{\pi}_{X},\overline{t}_{X})}\), and hence an application of Proposition 3.3.4 yields that \(a\in\mathcal{L}_{\mathrm{lim},F}^{(N)}\). In total, we have that \(a\in\mathcal{I}_{F}\cap\mathcal{L}_{\mathrm{inv},F}^{(N)}\cap\mathcal{L}_{ \mathrm{lim},F}^{(N)}=\mathcal{L}_{F}^{(N+1)}\). This proves that \(\mathcal{L}_{F}^{(m)}=\mathcal{L}_{F}^{(N+1)}\) for all \(m\geq N+1\). 
By induction we obtain that \(\mathcal{L}_{F}^{(m)}=\mathcal{L}_{F}^{(k)}\) for all \(m\geq k\), as required. Finally, we have that \(\mathcal{L}^{(d-1)}=\mathcal{L}^{(d)}=(\mathcal{L}^{(d-1)})^{(1)}\) by the first claim. Hence, by Proposition 3.4.5 and Theorem 3.4.6, we conclude that \(\mathcal{L}^{(d-1)}\) is the (M)-\(2^{d}\)-tuple inducing \(\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})}\), finishing the proof. **Remark 3.4.8**.: Given any (E)-\(2^{d}\)-tuple \(\mathcal{L}\), we now have an algorithm for computing the (M)-\(2^{d}\)-tuple that induces \(\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})}\). Specifically, we apply Propositions 3.2.4 and 3.2.6 to pass from \(\mathcal{L}\) to the (E)-\(2^{d}\)-tuple \(\mathrm{PO}(\mathrm{Inv}(\mathcal{L}))\), which is invariant, partially ordered, consists of ideals and satisfies \[\mathcal{L}\subseteq\mathrm{PO}(\mathrm{Inv}(\mathcal{L}))\quad\text{and} \quad\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})}= \mathfrak{J}_{\mathrm{PO}(\mathrm{Inv}(\mathcal{L}))}^{(\overline{\pi}_{X}, \overline{t}_{X})}.\] We then apply the \((\mathrm{PO}(\mathrm{Inv}(\mathcal{L})))^{(1)}\) construction iteratively, and use Theorem 3.4.7 to deduce that \((\mathrm{PO}(\mathrm{Inv}(\mathcal{L})))^{(d-1)}\) is the (M)-\(2^{d}\)-tuple that induces \(\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})}\). Accordingly, we can modify Theorem 3.2.11 to account for all (E)-\(2^{d}\)-tuples. **Theorem 3.4.9** (\(\mathbb{Z}_{+}^{d}\)-GIUT for (E)-\(2^{d}\)-tuples).: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be an (E)-\(2^{d}\)-tuple of \(X\) and \((\pi,t)\) be a Nica-covariant representation of \(X\). Then \(\mathcal{N}\mathcal{O}(\mathcal{L},X)\cong\mathrm{C}^{*}(\pi,t)\) via a (unique) canonical \(*\)-isomorphism if and only if \((\pi,t)\) admits a gauge action and_ \[\mathcal{L}^{(\pi,t)}=\bigg{(}\,\mathrm{PO}\left(\mathrm{Inv}(\mathcal{L}) \right)\bigg{)}^{(d-1)}.\] Proof.: We have that \((\mathrm{PO}(\mathrm{Inv}(\mathcal{L})))^{(d-1)}\) is the (M)-\(2^{d}\)-tuple that induces \(\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})}\) by Remark 3.4.8. In particular, we have that \(\mathcal{N}\mathcal{O}((\mathrm{PO}(\mathrm{Inv}(\mathcal{L})))^{(d-1)},X)= \mathcal{N}\mathcal{O}(\mathcal{L},X)\), and Theorem 3.2.11 finishes the proof. ## 4. NT-\(2^{d}\)-tuples and gauge-invariant ideals of \(\mathcal{NT}_{X}\) The (M)-\(2^{d}\)-tuples of \(X\) parametrise the equivariant quotients that lie in-between \(\mathcal{NT}_{X}\) and \(\mathcal{N}\mathcal{O}_{X}\). We now pass to the parametrisation of the quotients that may not be injective on \(X\). We will circumvent this by "deleting the kernel", i.e., by utilising the quotient product system construction explored in Section 2. ### NT-\(2^{d}\)-tuples We begin by defining some auxiliary objects. **Definition 4.1.1**.: Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Fix \(\emptyset\neq F\subseteq[d]\) and let \(I\subseteq A\) be an ideal. 
We define the following subsets of \(A\): * \(X_{F}^{-1}(I):=\bigcap\{X_{\underline{n}}^{-1}(I)\mid\underline{0}\neq\underline {n}\leq\underline{1}_{F}\}=\{a\in A\mid\big{\langle}X_{\underline{n}},aX_{ \underline{n}}\big{\rangle}\subseteq I\text{ for all }\underline{0}\neq\underline{n}\leq \underline{1}_{F}\}\), * \(J_{F}(I,X):=\{a\in A\mid[\phi_{\underline{i}}(a)]_{I}\in\mathcal{K}([X_{ \underline{i}}]_{I})\text{ for all }i\in[d],aX_{F}^{-1}(I)\subseteq I\}\). Notice that both \(X_{F}^{-1}(I)\) and \(J_{F}(I,X)\) are ideals of \(A\). These objects will play similar roles to the ideals \(X^{-1}(I)\) and \(J(I,X)\) for a C*-correspondence \(X\) over \(A\). Let us collect some properties of \(X_{F}^{-1}(I)\) and \(J_{F}(I,X)\). **Proposition 4.1.2**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(I\subseteq A\) be an ideal that is positively invariant for \(X\). Fix \(\emptyset\neq F\subseteq[d]\) and \(a\in A\). Then the following are equivalent:_ * \([a]_{I}\in\bigcap\{\ker[\phi_{\underline{i}}]_{I}\mid i\in F\}\)_;_ * \(\big{\langle}X_{\underline{i}},aX_{\underline{i}}\big{\rangle}\subseteq I\text{ for all }i\in F\)_;_ * \(\big{\langle}X_{\underline{n}},aX_{\underline{n}}\big{\rangle}\subseteq I\text{ for all }\underline{n}\neq\underline{0}\text{ satisfying }\operatorname{supp}\underline{n}\subseteq F\)_._ _Consequently, we have that \(X_{F}^{-1}(I)=\bigcap\{X_{\underline{i}}^{-1}(I)\mid i\in F\}\)._ **Proof.** We will prove [(i)\(\Rightarrow\)(ii)\(\Rightarrow\)(iii)\(\Rightarrow\)(ii)\(\Rightarrow\)(i)]. First assume that \([a]_{I}\in\bigcap\{\ker[\phi_{\underline{i}}]_{I}\mid i\in F\}\). Fixing \(i\in F\), we have that \[[\phi_{\underline{i}}(a)\xi_{\underline{i}}]_{I}=[\phi_{\underline{i}}]_{I}([a ]_{I})[\xi_{\underline{i}}]_{I}=0\text{ for all }\xi_{\underline{i}}\in X_{ \underline{i}},\] which in turn yields that \(aX_{\underline{i}}\subseteq X_{\underline{i}}I\). An application of [34, Proposition 1.3] then gives that \(\big{\langle}X_{\underline{i}},aX_{\underline{i}}\big{\rangle}\subseteq I\), as required. Next, assume that \(\big{\langle}X_{\underline{i}},aX_{\underline{i}}\big{\rangle}\subseteq I\) for all \(i\in F\). We must prove that \(\big{\langle}X_{\underline{n}},aX_{\underline{n}}\big{\rangle}\subseteq I\) for all \(\underline{n}\neq\underline{0}\) satisfying \(\operatorname{supp}\underline{n}\subseteq F\). We proceed by induction on \(|\underline{n}|\). If \(|\underline{n}|=1\), then \(\underline{n}=\underline{i}\) for some \(i\in F\), in which case \(\big{\langle}X_{\underline{i}},aX_{\underline{i}}\big{\rangle}\subseteq I\) by assumption. Now suppose the claim holds for all \(\underline{n}\neq\underline{0}\) satisfying \(\operatorname{supp}\underline{n}\subseteq F\) and \(|\underline{n}|=N\) for some \(N\in\mathbb{N}\). Fix \(\underline{m}\neq\underline{0}\) satisfying \(\operatorname{supp}\underline{m}\subseteq F\) and \(|\underline{m}|=N+1\). We may write \(\underline{m}\) in the form \(\underline{m}=\underline{n}+\underline{i}\) for some \(i\in F\) and some \(\underline{n}\neq\underline{0}\) satisfying \(\operatorname{supp}\underline{n}\subseteq F\) and \(|\underline{n}|=N\). 
We then have that \[\big{\langle}X_{\underline{m}},aX_{\underline{m}}\big{\rangle}=\big{\langle}X_{\underline{n}}\otimes_{A}X_{\underline{i}},aX_{\underline{n}}\otimes_{A}X_{\underline{i}}\big{\rangle}\subseteq[\big{\langle}X_{\underline{i}},\phi_{\underline{i}}(\big{\langle}X_{\underline{n}},aX_{\underline{n}}\big{\rangle})X_{\underline{i}}\big{\rangle}]\subseteq[\big{\langle}X_{\underline{i}},IX_{\underline{i}}\big{\rangle}]\subseteq I,\] using the inductive hypothesis for \(\underline{n}\) and positive invariance of \(I\). Hence \(\big{\langle}X_{\underline{m}},aX_{\underline{m}}\big{\rangle}\subseteq I\), and by induction we are done. Finally, assume that \(\big{\langle}X_{\underline{n}},aX_{\underline{n}}\big{\rangle}\subseteq I\) for all \(\underline{n}\neq\underline{0}\) satisfying \(\operatorname{supp}\underline{n}\subseteq F\). In particular, we have that \(\big{\langle}X_{\underline{i}},aX_{\underline{i}}\big{\rangle}\subseteq I\) for all \(i\in F\). Fixing \(i\in F\), we have that \(aX_{\underline{i}}\subseteq X_{\underline{i}}I\) by [34, Proposition 1.3], and therefore \([\phi_{\underline{i}}]_{I}([a]_{I})[\xi_{\underline{i}}]_{I}=[\phi_{\underline{i}}(a)\xi_{\underline{i}}]_{I}=0\) for all \(\xi_{\underline{i}}\in X_{\underline{i}}\). Hence \([a]_{I}\in\ker[\phi_{\underline{i}}]_{I}\), from which it follows that \([a]_{I}\in\bigcap\{\ker[\phi_{\underline{i}}]_{I}\mid i\in F\}\), finishing the proof of the equivalences. The last claim follows from the equivalence of items (ii) and (iii). The next proposition relates the ideals \(X_{F}^{-1}(I)\) and \(J_{F}(I,X)\) to ideals of \([A]_{I}\), when \(I\) is positively invariant. This result is the higher-rank analogue of [34, Lemma 5.2]. **Proposition 4.1.3**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(I\subseteq A\) be an ideal that is positively invariant for \(X\). Then the following hold for all \(\emptyset\neq F\subseteq[d]\):_ * \(X_{F}^{-1}(I)=[\cdot]_{I}^{-1}(\bigcap\{\ker[\phi_{\underline{i}}]_{I}\mid i\in F\})\)_._ * \(J_{F}(I,X)=[\cdot]_{I}^{-1}(\mathcal{J}_{F}([X]_{I}))\)_._ * \(X_{F}^{-1}(I)\cap J_{F}(I,X)=I\)_._ **Proof.** Fix \(\emptyset\neq F\subseteq[d]\). Firstly, note that \(a\in[\cdot]_{I}^{-1}(\bigcap\{\ker[\phi_{\underline{i}}]_{I}\mid i\in F\})\) if and only if \([a]_{I}\in\bigcap\{\ker[\phi_{\underline{i}}]_{I}\mid i\in F\}\), which in turn is equivalent to having that \(a\in X_{F}^{-1}(I)\) by Proposition 4.1.2. Thus \(X_{F}^{-1}(I)=[\cdot]_{I}^{-1}(\bigcap\{\ker[\phi_{\underline{i}}]_{I}\mid i\in F\})\), proving item (i). Next, assume that \(a\in J_{F}(I,X)\). We must show that \[[a]_{I}\in\mathcal{J}_{F}([X]_{I})=(\bigcap_{i\in F}\ker[\phi_{\underline{i}}]_{I})^{\perp}\cap(\bigcap_{i\in[d]}[\phi_{\underline{i}}]_{I}^{-1}(\mathcal{K}([X_{\underline{i}}]_{I}))).\] By definition, we have that \([\phi_{\underline{i}}]_{I}[a]_{I}=[\phi_{\underline{i}}(a)]_{I}\in\mathcal{K}([X_{\underline{i}}]_{I})\) for all \(i\in[d]\) and \(aX_{F}^{-1}(I)\subseteq I\). In particular, we have that \([a]_{I}\in\bigcap_{i\in[d]}[\phi_{\underline{i}}]_{I}^{-1}(\mathcal{K}([X_{\underline{i}}]_{I}))\). Now take \([b]_{I}\in\bigcap_{i\in F}\ker[\phi_{\underline{i}}]_{I}\). By item (i), we have that \(b\in[\cdot]_{I}^{-1}(\bigcap_{i\in F}\ker[\phi_{\underline{i}}]_{I})=X_{F}^{-1}(I)\), and so \(ab\in I\). In particular, we have that \([a]_{I}[b]_{I}=[ab]_{I}=0\), which in turn implies that \([a]_{I}\in(\bigcap_{i\in F}\ker[\phi_{\underline{i}}]_{I})^{\perp}\), as required.
Now assume that \(a\in[\cdot]_{I}^{-1}(\mathcal{J}_{F}([X]_{I}))\). Then \([\phi_{\underline{i}}]_{I}[a]_{I}=[\phi_{\underline{i}}(a)]_{I}\in\mathcal{K}([ X_{\underline{i}}]_{I})\) for all \(i\in[d]\) by definition. Take \(b\in X_{F}^{-1}(I)\). By item (i), we have that \([b]_{I}\in\bigcap_{i\in F}\ker[\phi_{\underline{i}}]_{I}\). By definition, we have that \([a]_{I}\in(\bigcap_{i\in F}\ker[\phi_{\underline{i}}]_{I})^{\perp}\), so \([ab]_{I}=[a]_{I}[b]_{I}=0\) and hence \(ab\in I\). Thus \(aX_{F}^{-1}(I)\subseteq I\) and hence in total \(a\in J_{F}(I,X)\), proving item (ii). Using items (i) and (ii), and that \(\mathcal{J}_{F}([X]_{I})\subseteq(\bigcap_{i\in F}\ker[\phi_{\underline{i}}]_{I})^{\perp}\), we obtain that \[X_{F}^{-1}(I)\cap J_{F}(I,X)=[\cdot]_{I}^{-1}\bigg{(}(\bigcap_{i\in F}\ker[ \phi_{\underline{i}}]_{I})\cap\mathcal{J}_{F}([X]_{I})\bigg{)}=[\cdot]_{I}^{-1}( \{0\})=I,\] proving item (iii). We are now ready to introduce the objects that will implement the parametrisation of the gauge-invariant ideals of \(\mathcal{NT}_{X}\). **Definition 4.1.4**.: Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \(\mathcal{L}\) be a \(2^{d}\)-tuple of \(X\). We say that \(\mathcal{L}\) is an _NT-\(2^{d}\)-tuple (of \(X\))_ if the following four conditions hold: 1. \(\mathcal{L}\) consists of ideals and \(\mathcal{L}_{F}\subseteq J_{F}(\mathcal{L}_{\emptyset},X)\) for all \(\emptyset\neq F\subseteq[d]\), 2. \(\mathcal{L}\) is \(X\)-invariant, 3. \(\mathcal{L}\) is partially ordered, 4. \([\cdot]_{\mathcal{L}_{\emptyset}}^{-1}\big{(}[\mathcal{L}_{F}]_{\mathcal{L}_{ \emptyset}}^{(1)}\big{)}\subseteq\mathcal{L}_{F}\) for all \(F\subseteq[d]\), where \([\mathcal{L}_{F}]_{\mathcal{L}_{\emptyset}}=\mathcal{L}_{F}/\mathcal{L}_{ \emptyset}\subseteq[A]_{\mathcal{L}_{\emptyset}}\). To make sense of condition (iv), first note that conditions (i) and (ii) imply that \(\mathcal{L}_{\emptyset}\) is an ideal of \(A\) that is positively invariant for \(X\). Hence we can make sense of \([X]_{\mathcal{L}_{\emptyset}}\) as a strong compactly aligned product system with coefficients in \([A]_{\mathcal{L}_{\emptyset}}\) by Proposition 2.5.5. Condition (iii) implies in particular that \(\mathcal{L}_{\emptyset}\subseteq\mathcal{L}_{F}\) for all \(F\subseteq[d]\), and so by condition (i) we have that \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}:=\{[\mathcal{L}_{F}]_{\mathcal{L}_{ \emptyset}}\}_{F\subseteq[d]}\) is a \(2^{d}\)-tuple of \([X]_{\mathcal{L}_{\emptyset}}\) that consists of ideals. An application of Proposition 4.1.3 gives that \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}\subseteq\mathcal{J}([X]_{\mathcal{L}_ {\emptyset}})\), while item (ii) implies that \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}\) is \([X]_{\mathcal{L}_{\emptyset}}\)-invariant. Hence we have that \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}\) is an (E)-\(2^{d}\)-tuple by Proposition 3.2.3, and so we can consider the family \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}^{(1)}\). Note also that condition (iv) holds automatically for \(F=\emptyset\) and \(F=[d]\). When the left action of each fibre of \(X\) is by compacts, condition (iv) simplifies as follows. **Proposition 4.1.5**.: _Let \(X\) be a product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\) and suppose that \(\phi_{\underline{n}}(A)\subseteq\mathcal{K}(X_{\underline{n}})\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\). 
Then a \(2^{d}\)-tuple \(\mathcal{L}\) of \(X\) is an NT-\(2^{d}\)-tuple of \(X\) if and only if it satisfies conditions (i)-(iii) of Definition 4.1.4 and_ \[\bigg{(}\bigcap_{\underline{n}\perp F}X_{\underline{n}}^{-1}(J_{F}(\mathcal{ L}_{\emptyset},X))\bigg{)}\cap\mathcal{L}_{\mathrm{inv},F}\cap\mathcal{L}_{ \mathrm{lim},F}\subseteq\mathcal{L}_{F}\text{ for all }\emptyset\neq F \subsetneq[d].\] **Proof.** Note that \(X\) is automatically strong compactly aligned by Corollary 2.5.3. Without loss of generality, we may assume that \(\mathcal{L}\) satisfies conditions (i)-(iii) of Definition 4.1.4. Since condition (iv) of Definition 4.1.4 holds automatically for \(F=\emptyset\) and \(F=[d]\), and intersections are preserved under pre-images, it suffices to show that the following items hold for fixed \(\emptyset\neq F\subsetneq[d]\): 1. \([\cdot]_{\mathcal{L}_{\emptyset}}^{-1}(\mathcal{I}_{F}([X]_{\mathcal{L}_{ \emptyset}}))=\bigcap_{\underline{n}\perp F}X_{\underline{n}}^{-1}(J_{F}( \mathcal{L}_{\emptyset},X))\), 2. \([\cdot]_{\mathcal{L}_{\emptyset}}^{-1}([\mathcal{L}]_{\mathcal{L}_{\emptyset },\mathrm{inv},F})=\mathcal{L}_{\mathrm{inv},F}\), 3. \([\cdot]_{\mathcal{L}_{\emptyset}}^{-1}([\mathcal{L}]_{\mathcal{L}_{\emptyset },\mathrm{lim},F})=\mathcal{L}_{\mathrm{lim},F}\). For the first item, recall that \[\mathcal{I}_{F}([X]_{\mathcal{L}_{\emptyset}})=\bigcap_{\underline{n}\perp F}[ X_{\underline{n}}]_{\mathcal{L}_{\emptyset}}^{-1}(\mathcal{J}_{F}([X]_{ \mathcal{L}_{\emptyset}}))\quad\text{and}\quad\mathcal{J}_{F}([X]_{\mathcal{ L}_{\emptyset}})=[J_{F}(\mathcal{L}_{\emptyset},X)]_{\mathcal{L}_{\emptyset}},\] where the latter holds by Proposition 4.1.3. Note that \(\mathcal{L}_{\emptyset}\subseteq J_{F}(\mathcal{L}_{\emptyset},X)\) by positive invariance of \(\mathcal{L}_{\emptyset}\). Item (i) now follows as a consequence of Lemma 2.2.9, taking \(I:=\mathcal{L}_{\emptyset}\) and \(J:=J_{F}(\mathcal{L}_{\emptyset},X)\). For the second item, recall that \[[\mathcal{L}]_{\mathcal{L}_{\emptyset},\mathrm{inv},F}:=\bigcap_{\underline{m} \perp F}[X_{\underline{m}}]_{\mathcal{L}_{\emptyset}}^{-1}(\cap_{F\subsetneq D}[ \mathcal{L}_{D}]_{\mathcal{L}_{\emptyset}}).\] It is routine to check that \(\cap_{F\subset D}[\mathcal{L}_{D}]_{\mathcal{L}_{\emptyset}}=[\cap_{F\subset D }\mathcal{L}_{D}]_{\mathcal{L}_{\emptyset}}\). By the partial ordering property of \(\mathcal{L}\), we have that \(\mathcal{L}_{\emptyset}\subseteq\cap_{F\subset D}\mathcal{L}_{D}\). Consequently, item (ii) follows as a consequence of Lemma 2.2.9, taking \(I:=\mathcal{L}_{\emptyset}\) and \(J:=\cap_{F\subset D}\mathcal{L}_{D}\). Finally, item (iii) follows by a direct application of Lemma 2.2.8, which applies since \(\phi_{\underline{n}}(A)\subseteq\mathcal{K}(X_{\underline{n}})\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\). This completes the proof. **Remark 4.1.6**.: Let \(X\) be as in Proposition 4.1.5, and let \(\mathcal{L}\) be a \(2^{d}\)-tuple of \(X\) that satisfies conditions (i)-(iii) of Definition 4.1.4. Note that the assumption that \(\phi_{\underline{n}}(A)\subseteq\mathcal{K}(X_{\underline{n}})\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\) is only used to obtain that \([\cdot\,]_{\mathcal{L}_{\emptyset}}^{-1}([\mathcal{L}]_{\mathcal{L}_{\emptyset}, \mathrm{lim},F})=\mathcal{L}_{\mathrm{lim},F}\). 
In other words, it is true that \[[\cdot\,]_{\mathcal{L}_{\emptyset}}^{-1}(\mathcal{I}_{F}([X]_{\mathcal{L}_{ \emptyset}}))=\bigcap_{\underline{n}\perp F}X_{\underline{n}}^{-1}(J_{F}( \mathcal{L}_{\emptyset},X))\quad\text{and}\quad[\cdot\,]_{\mathcal{L}_{ \emptyset}}^{-1}([\mathcal{L}]_{\mathcal{L}_{\emptyset},\mathrm{inv},F})= \mathcal{L}_{\mathrm{inv},F},\] even when \(X\) is (just) strong compactly aligned. NT-\(2^{d}\)-tuples represent the higher-rank analogue of Katsura's T-pairs. **Proposition 4.1.7**.: _Let \(X=\{X_{n}\}_{n\in\mathbb{Z}_{+}}\) be a product system with coefficients in a C*-algebra \(A\). Then the NT-\(2\)-tuples of \(X\) are exactly the T-pairs of \(X_{1}\)._ Proof.: First let \(\mathcal{L}=\{\mathcal{L}_{\emptyset},\mathcal{L}_{\{1\}}\}\) be an NT-\(2\)-tuple of \(X\). Then \(\mathcal{L}\) consists of ideals. Since \(\mathcal{L}\) is partially ordered, we have that \(\mathcal{L}_{\emptyset}\subseteq\mathcal{L}_{\{1\}}\). Invariance of \(\mathcal{L}\) for \(X\) gives in particular that \([\langle X_{1},\mathcal{L}_{\emptyset}X_{1}\rangle]\subseteq\mathcal{L}_{\emptyset}\) and hence \(\mathcal{L}_{\emptyset}\) is positively invariant for \(X_{1}\). Lastly, we have that \[\mathcal{L}_{\{1\}}\subseteq J_{\{1\}}(\mathcal{L}_{\emptyset},X)\equiv J( \mathcal{L}_{\emptyset},X_{1}).\] We conclude that \(\mathcal{L}\) is a T-pair of \(X_{1}\), as required. Now suppose that \(\mathcal{L}=\{\mathcal{L}_{\emptyset},\mathcal{L}_{\{1\}}\}\) is a T-pair of \(X_{1}\). By definition, this means that \(\mathcal{L}\) consists of ideals, \(\mathcal{L}_{\emptyset}\) is positively invariant for \(X_{1}\) and \(\mathcal{L}_{\emptyset}\subseteq\mathcal{L}_{\{1\}}\subseteq J(\mathcal{L}_{ \emptyset},X_{1})\). From this it is clear that \(\mathcal{L}\) is partially ordered, so condition (iii) of Definition 4.1.4 holds. Likewise, we have that \[\mathcal{L}_{\{1\}}\subseteq J(\mathcal{L}_{\emptyset},X_{1})\equiv J_{\{1 \}}(\mathcal{L}_{\emptyset},X),\] so condition (i) of Definition 4.1.4 holds. To show that \(\mathcal{L}\) is \(X\)-invariant, it suffices to show that \(\langle X_{n},\mathcal{L}_{\emptyset}X_{n}\rangle\subseteq\mathcal{L}_{ \emptyset}\) for all \(n\in\mathbb{Z}_{+}\). This is immediate by inducting on \(n\). Lastly, note that condition (iv) of Definition 4.1.4 holds trivially, since there are no proper, non-empty subsets of \(\{1\}\). Thus \(\mathcal{L}\) is an NT-\(2\)-tuple, completing the proof. The following proposition links Definition 4.1.4 and Theorem 3.4.6. Moreover, it shows that (M)-\(2^{d}\)-tuples form a subclass of NT-\(2^{d}\)-tuples. **Proposition 4.1.8**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{L}\) be a \(2^{d}\)-tuple of \(X\) consisting of ideals satisfying \(\mathcal{L}_{\emptyset}\subseteq\mathcal{L}_{F}\) for all \(F\subseteq[d]\), and assume that \(\mathcal{L}_{\emptyset}\) is positively invariant for \(X\). Then \(\mathcal{L}\) is an NT-\(2^{d}\)-tuple of \(X\) if and only if \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}\) is an (M)-\(2^{d}\)-tuple of \([X]_{\mathcal{L}_{\emptyset}}\)._ Proof.: Since \(\mathcal{L}_{\emptyset}\) is an ideal that is positively invariant for \(X\), we can make sense of the strong compactly aligned product system \([X]_{\mathcal{L}_{\emptyset}}\). 
Moreover, the fact that \(\mathcal{L}\) consists of ideals satisfying \(\mathcal{L}_{\emptyset}\subseteq\mathcal{L}_{F}\) for all \(F\subseteq[d]\) gives that \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}\) is a \(2^{d}\)-tuple of \([X]_{\mathcal{L}_{\emptyset}}\) that consists of ideals. The forward implication follows by applying Proposition 4.1.3 and resorting to the characterisation of (M)-\(2^{d}\)-tuples obtained in Theorem 3.4.6. For the converse, assume that \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}\) is an (M)-\(2^{d}\)-tuple of \([X]_{\mathcal{L}_{\emptyset}}\). It suffices to show that \(\mathcal{L}\) satisfies the four conditions of Definition 4.1.4. Firstly, we have that \(\mathcal{L}\) consists of ideals by assumption. Fix \(\emptyset\neq F\subseteq[d]\). Since \(\mathcal{L}_{\emptyset}\) is positively invariant for \(X\), we have that \[J_{F}(\mathcal{L}_{\emptyset},X)=[\cdot\,]_{\mathcal{L}_{\emptyset}}^{-1}( \mathcal{J}_{F}([X]_{\mathcal{L}_{\emptyset}}))\] by Proposition 4.1.3. Since \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}\) is an (M)-\(2^{d}\)-tuple of \([X]_{\mathcal{L}_{\emptyset}}\), an application of Theorem 3.4.6 gives that \([\mathcal{L}_{F}]_{\mathcal{L}_{\emptyset}}\subseteq\mathcal{J}_{F}([X]_{ \mathcal{L}_{\emptyset}})\), from which it follows that \[\mathcal{L}_{F}\subseteq[\cdot\,]_{\mathcal{L}_{\emptyset}}^{-1}(\mathcal{J}_{ F}([X]_{\mathcal{L}_{\emptyset}}))=J_{F}(\mathcal{L}_{\emptyset},X),\] proving that condition (i) of Definition 4.1.4 holds. To see that \(\mathcal{L}\) is \(X\)-invariant, fix \(F\subseteq[d]\) and \(\underline{n}\perp F\). We must show that \(\big{\langle}X_{\underline{n}},\mathcal{L}_{F}X_{\underline{n}}\big{\rangle} \subseteq\mathcal{L}_{F}\). By Theorem 3.4.6, we have that \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}\) is \([X]_{\mathcal{L}_{\emptyset}}\)-invariant, and therefore \[[\langle X_{\underline{n}},\mathcal{L}_{F}X_{\underline{n}}\rangle]_{\mathcal{L}_ {\emptyset}}=\big{\langle}[X_{\underline{n}}]_{\mathcal{L}_{\emptyset}},[ \mathcal{L}_{F}]_{\mathcal{L}_{\emptyset}}[X_{\underline{n}}]_{\mathcal{L}_{ \emptyset}}\big{\rangle}\subseteq[\mathcal{L}_{F}]_{\mathcal{L}_{\emptyset}}.\] Hence \(\big{\langle}X_{\underline{n}},\mathcal{L}_{F}X_{\underline{n}}\big{\rangle} \subseteq\mathcal{L}_{F}+\mathcal{L}_{\emptyset}\). However, recall that \(\mathcal{L}_{F}\) is an ideal that contains \(\mathcal{L}_{\emptyset}\), so \(\mathcal{L}_{F}+\mathcal{L}_{\emptyset}=\mathcal{L}_{F}\) and therefore \(\big{\langle}X_{\underline{n}},\mathcal{L}_{F}X_{\underline{n}}\big{\rangle} \subseteq\mathcal{L}_{F}\). Thus condition (ii) of Definition 4.1.4 holds. Next we check that \(\mathcal{L}\) is partially ordered. To this end, fix \(F\subseteq D\subseteq[d]\). By Theorem 3.4.6, we have that \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}\) is partially ordered and therefore \([\mathcal{L}_{F}]_{\mathcal{L}_{\emptyset}}\subseteq[\mathcal{L}_{D}]_{ \mathcal{L}_{\emptyset}}\). Since \(\mathcal{L}_{\emptyset}\subseteq\mathcal{L}_{D}\), we obtain that \(\mathcal{L}_{F}\subseteq\mathcal{L}_{D}\), showing that condition (iii) of Definition 4.1.4 holds. Finally, condition (iv) of Definition 4.1.4 follows from condition (iv) of Theorem 3.4.6 applied to \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}\), and the proof is complete. The interplay between NT-\(2^{d}\)-tuples and (M)-\(2^{d}\)-tuples allows the transferal of properties of (M)-\(2^{d}\)-tuples to the general setting, towards the complete parametrisation of the gauge-invariant ideals of \(\mathcal{NT}_{X}\). 
To explore this further, we examine the interaction between NT-\(2^{d}\)-tuples and Nica-covariant representations. The following proposition extends [34, Lemma 5.10]. **Proposition 4.1.9**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \((\pi,t)\) be a Nica-covariant representation of \(X\) and let \(\mathcal{L}^{(\pi,t)}\) be the associated \(2^{d}\)-tuple of \(X\) (see Definition 3.1.12). Then the following hold:_ 1. \(\mathcal{L}_{\emptyset}^{(\pi,t)}\) _is positively invariant for_ \(X\)_._ 2. \(\ker t_{\underline{n}}=X_{\underline{n}}\mathcal{L}_{\emptyset}^{(\pi,t)}\) _for all_ \(\underline{n}\in\mathbb{Z}_{+}^{d}\)_._ 3. _There exists an injective Nica-covariant representation_ \((\dot{\pi},\dot{t})\) _of_ \([X]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}\) _on_ \(\mathrm{C}^{*}(\pi,t)\) _such that_ \(\pi=\dot{\pi}\circ[\,\cdot\,]_{\mathcal{L}_{\emptyset}^{(\pi,t)}},t_{ \underline{n}}=\dot{t}_{\underline{n}}\circ[\,\cdot\,]_{\mathcal{L}_{ \emptyset}^{(\pi,t)}}\) _and_ \(\psi_{\underline{n}}=\dot{\psi}_{\underline{n}}\circ[\,\cdot\,]_{\mathcal{L}_{ \emptyset}^{(\pi,t)}}|_{\mathcal{K}(X_{\underline{n}})}\) _for all_ \(\underline{n}\in\mathbb{Z}_{+}^{d}\)_, and therefore_ \(\mathrm{C}^{*}(\dot{\pi},\dot{t})=\mathrm{C}^{*}(\pi,t)\)_. If_ \((\pi,t)\) _admits a gauge action, then so does_ \((\dot{\pi},\dot{t})\)_._ 4. _For each_ \(\emptyset\neq F\subseteq[d]\)_, if_ \(a\in\mathcal{L}_{F}^{(\pi,t)}\) _then_ \([\phi_{\underline{i}}(a)]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}\in\mathcal{K}([X _{\underline{i}}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})\) _for all_ \(i\in[d]\)_, and_ \[\dot{\pi}([a]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})+\sum\{(-1)^{|\underline{n}| }\dot{\psi}_{\underline{n}}([\phi_{\underline{n}}(a)]_{\mathcal{L}_{ \emptyset}^{(\pi,t)}})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F} \}=0.\] 5. _For each_ \(\emptyset\neq F\subseteq[d]\)_, we have that_ \(a\in\mathcal{L}_{F}^{(\pi,t)}\) _if and only if for every_ \(\underline{0}\neq\underline{n}\leq\underline{1}_{F}\) _there exists_ \(k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}})\) _such that_ \([\phi_{\underline{n}}(a)]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}=[k_{\underline{n }}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}\)_, satisfying_ \[\pi(a)+\sum\{(-1)^{|\underline{n}|}\psi_{\underline{n}}(k_{\underline{n}})\mid \underline{0}\neq\underline{n}\leq\underline{1}_{F}\}=0.\] 6. _For each_ \(F\subseteq[d]\)_, we have that_ \([\mathcal{L}_{F}^{(\pi,t)}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}=\mathcal{L}_{F} ^{(\dot{\pi},\dot{t})}\)_._ **Proof.** (i) We have that \(\mathcal{L}^{(\pi,t)}\) is \(X\)-invariant by Proposition 3.1.13, and thus in particular \(\mathcal{L}_{\emptyset}^{(\pi,t)}\) is positively invariant for \(X\). (ii) This follows by [34, Lemma 5.10 (ii)], since \((\pi,t_{\underline{n}})\) is a representation of \(X_{\underline{n}}\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\). (iii) Existence and injectivity of \((\dot{\pi},\dot{t})\) follow by [34, Lemma 5.10 (iii)], applied to each \((\pi,t_{\underline{n}})\). By the form of \(\dot{t}_{\underline{n}}\) and Lemma 2.2.2, it also follows that \(\psi_{\underline{n}}=\dot{\psi}_{\underline{n}}\circ[\,\cdot\,]_{\mathcal{L}_{ \emptyset}^{(\pi,t)}}|_{\mathcal{K}(X_{\underline{n}})}\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\). To see that \((\dot{\pi},\dot{t})\) is Nica-covariant, fix \(\underline{n},\underline{m}\in\mathbb{Z}_{+}^{d}\setminus\{\underline{0}\}\). 
By Lemma 2.2.2, it suffices to check (2.5) for \([k_{\underline{n}}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}\) and \([k_{\underline{m}}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}\), where \(k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}})\) and \(k_{\underline{m}}\in\mathcal{K}(X_{\underline{m}})\). Letting \(\{i_{\underline{n}}^{\underline{n}+\underline{m}}\}_{\underline{n},\underline{m}\in\mathbb{Z}_{+}^{d}}\) denote the connecting \(*\)-homomorphisms of \(X\) and letting \(\{j_{\underline{n}}^{\underline{n}+\underline{m}}\}_{\underline{n},\underline{m}\in\mathbb{Z}_{+}^{d}}\) denote those of \([X]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}\), by Proposition 2.4.4 and Nica-covariance of \((\pi,t)\) we have that \[\dot{\psi}_{\underline{n}}([k_{\underline{n}}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})\dot{\psi}_{\underline{m}}([k_{\underline{m}}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})=\psi_{\underline{n}}(k_{\underline{n}})\psi_{\underline{m}}(k_{\underline{m}})\] \[=\dot{\psi}_{\underline{n}\vee\underline{m}}([i_{\underline{n}}^{\underline{n}\vee\underline{m}}(k_{\underline{n}})i_{\underline{m}}^{\underline{n}\vee\underline{m}}(k_{\underline{m}})]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})\] \[=\dot{\psi}_{\underline{n}\vee\underline{m}}(j_{\underline{n}}^{\underline{n}\vee\underline{m}}([k_{\underline{n}}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})j_{\underline{m}}^{\underline{n}\vee\underline{m}}([k_{\underline{m}}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})),\] as required. If \((\pi,t)\) admits a gauge action, then this is inherited by \((\dot{\pi},\dot{t})\) since \(\mathrm{C}^{*}(\dot{\pi},\dot{t})=\mathrm{C}^{*}(\pi,t)\), finishing the proof of item (iii). (iv) Fix \(\emptyset\neq F\subseteq[d]\) and \(a\in\mathcal{L}_{F}^{(\pi,t)}\). Then \(\pi(a)\in B_{(\underline{0},\underline{1}_{F}]}^{(\pi,t)}\) by definition, and thus \(\dot{\pi}([a]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})\in B_{(\underline{0},\underline{1}_{F}]}^{(\dot{\pi},\dot{t})}\) by item (iii). In turn, we have that \(\dot{\pi}([a]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})\dot{q}_{F}=0\) by (2.12). An application of (2.13) gives that \[[\phi_{\underline{i}}(a)]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}=[\phi_{\underline{i}}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}([a]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})\in\mathcal{K}([X_{\underline{i}}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})\text{ for all }i\in[d],\] using the fact that \((\dot{\pi},\dot{t})\) is injective and Nica-covariant by item (iii). Applying (2.14) for \((\dot{\pi},\dot{t})\), we obtain that \[\dot{\pi}([a]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})+\sum\{(-1)^{|\underline{n}|}\dot{\psi}_{\underline{n}}([\phi_{\underline{n}}(a)]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}=0,\] showing that item (iv) holds. (v) Fix \(\emptyset\neq F\subseteq[d]\). The reverse implication is immediate, so assume that \(a\in\mathcal{L}_{F}^{(\pi,t)}\). By item (iv), we have that \[\dot{\pi}([a]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})+\sum\{(-1)^{|\underline{n}|}\dot{\psi}_{\underline{n}}([\phi_{\underline{n}}(a)]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}=0.\] An application of Lemma 2.2.2 gives that \([\phi_{\underline{n}}(a)]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}=[k_{\underline{n}}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}\) for some \(k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}})\), for all \(\underline{0}\neq\underline{n}\leq\underline{1}_{F}\).
By item (iii), we have that \[0=\dot{\pi}([a]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})+\sum\{(-1)^{|\underline{n}|}\dot{\psi}_{\underline{n}}([\phi_{\underline{n}}(a)]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}\] \[=\dot{\pi}([a]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})+\sum\{(-1)^{|\underline{n}|}\dot{\psi}_{\underline{n}}([k_{\underline{n}}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}\] \[=\pi(a)+\sum\{(-1)^{|\underline{n}|}\psi_{\underline{n}}(k_{\underline{n}})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\},\] showing that item (v) holds. (vi) First recall that \(\mathcal{L}_{F}^{(\pi,t)}\) is an ideal satisfying \(\mathcal{L}_{\emptyset}^{(\pi,t)}\subseteq\mathcal{L}_{F}^{(\pi,t)}\) for all \(F\subseteq[d]\) by Proposition 3.1.13. Thus we can make sense of the \(2^{d}\)-tuple \([\mathcal{L}^{(\pi,t)}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}\) of \([X]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}\). The claim holds trivially when \(F=\emptyset\) (because \((\dot{\pi},\dot{t})\) is injective), so fix \(\emptyset\neq F\subseteq[d]\) and take \(a\in\mathcal{L}_{F}^{(\pi,t)}\). Then item (iv) yields that \([a]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}\in\mathcal{L}_{F}^{(\dot{\pi},\dot{t})}\). This shows that \([\mathcal{L}_{F}^{(\pi,t)}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}\subseteq\mathcal{L}_{F}^{(\dot{\pi},\dot{t})}\). Now take \([a]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}\in\mathcal{L}_{F}^{(\dot{\pi},\dot{t})}\). Then by definition and Lemma 2.2.2, for every \(\underline{0}\neq\underline{n}\leq\underline{1}_{F}\) there exists \(k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}})\) such that \[\dot{\pi}([a]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})=\sum\{\dot{\psi}_{\underline{n}}([k_{\underline{n}}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}.\] Using item (iii), we obtain that \[\pi(a)=\dot{\pi}([a]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})=\sum\{\dot{\psi}_{\underline{n}}([k_{\underline{n}}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}=\sum\{\psi_{\underline{n}}(k_{\underline{n}})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}.\] This shows that \(a\in\mathcal{L}_{F}^{(\pi,t)}\), and hence \([a]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}\in[\mathcal{L}_{F}^{(\pi,t)}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}\). Consequently \(\mathcal{L}_{F}^{(\dot{\pi},\dot{t})}\subseteq[\mathcal{L}_{F}^{(\pi,t)}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}\) and hence \([\mathcal{L}_{F}^{(\pi,t)}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}=\mathcal{L}_{F}^{(\dot{\pi},\dot{t})}\) for all \(F\subseteq[d]\), finishing the proof. **Remark 4.1.10**.: Let \(\mathcal{L}^{(\pi,t)}\) be the \(2^{d}\)-tuple of \(X\) associated with a Nica-covariant representation \((\pi,t)\) of \(X\), and let \((\dot{\pi},\dot{t})\) be the injective Nica-covariant representation of \([X]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}\) defined in item (iii) of Proposition 4.1.9. By Proposition 3.1.16, we have that \((\dot{\pi},\dot{t})\) is an \(\mathcal{L}^{(\dot{\pi},\dot{t})}\)-relative CNP-representation of \([X]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}\), giving the following commutative diagram of canonical \(*\)-epimorphisms.
On the other hand, since \(\mathcal{L}_{\emptyset}^{(\pi,t)}\) is positively invariant we obtain a canonical \(*\)-epimorphism \(\mathcal{N}\mathcal{T}_{X}\to\mathcal{N}\mathcal{T}_{[X]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}}\) that lifts the quotient map \(X\to[X]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}\). We have that \([\mathcal{L}^{(\pi,t)}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}=\mathcal{L}^{(\dot{\pi},\dot{t})}\) by item (vi) of Proposition 4.1.9, and we have that \(\mathrm{C}^{*}(\pi,t)=\mathrm{C}^{*}(\dot{\pi},\dot{t})\) by item (iii) of Proposition 4.1.9. Hence we obtain the following commutative diagram of canonical \(*\)-epimorphisms. By Proposition 3.1.17 and Theorem 3.2.11, (M)-\(2^{d}\)-tuples are exactly of the form \(\mathcal{L}^{(\pi,t)}\) for some injective Nica-covariant representation \((\pi,t)\) that admits a gauge action. This is extended to NT-\(2^{d}\)-tuples by allowing \((\pi,t)\) to be non-injective. We introduce the following notation. **Definition 4.1.11**.: Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). For an NT-\(2^{d}\)-tuple \(\mathcal{L}\) of \(X\), we define the maps \((\pi^{\mathcal{L}},t^{\mathcal{L}})\) by \[\pi^{\mathcal{L}}\colon A\to\mathcal{N}\mathcal{O}([\mathcal{L}]_{\mathcal{L}_{\emptyset}},[X]_{\mathcal{L}_{\emptyset}});\ \pi^{\mathcal{L}}(a)=\pi^{[\mathcal{L}]_{\mathcal{L}_{\emptyset}}}_{[X]_{\mathcal{L}_{\emptyset}}}([a]_{\mathcal{L}_{\emptyset}})\text{ for all }a\in A,\] \[t^{\mathcal{L}}_{\underline{n}}\colon X_{\underline{n}}\to\mathcal{N}\mathcal{O}([\mathcal{L}]_{\mathcal{L}_{\emptyset}},[X]_{\mathcal{L}_{\emptyset}});\ t^{\mathcal{L}}_{\underline{n}}(\xi_{\underline{n}})=t^{[\mathcal{L}]_{\mathcal{L}_{\emptyset}}}_{[X]_{\mathcal{L}_{\emptyset}},\underline{n}}([\xi_{\underline{n}}]_{\mathcal{L}_{\emptyset}})\text{ for all }\xi_{\underline{n}}\in X_{\underline{n}},\ \underline{n}\in\mathbb{Z}_{+}^{d}\setminus\{\underline{0}\},\] where \((\pi^{[\mathcal{L}]_{\mathcal{L}_{\emptyset}}}_{[X]_{\mathcal{L}_{\emptyset}}},t^{[\mathcal{L}]_{\mathcal{L}_{\emptyset}}}_{[X]_{\mathcal{L}_{\emptyset}}})\) denotes the universal \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}\)-relative CNP-representation of \([X]_{\mathcal{L}_{\emptyset}}\). Checking that \((\pi^{\mathcal{L}},t^{\mathcal{L}})\) is a Nica-covariant representation is routine, as it is obtained from the canonical \(*\)-epimorphism \[\mathcal{N}\mathcal{T}_{X}\to\mathcal{N}\mathcal{T}_{[X]_{\mathcal{L}_{\emptyset}}}\to\mathcal{N}\mathcal{O}([\mathcal{L}]_{\mathcal{L}_{\emptyset}},[X]_{\mathcal{L}_{\emptyset}}),\] where we use that \(\mathcal{L}_{\emptyset}\) is positively invariant for the existence of the first map, and that \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}\) is an (M)-\(2^{d}\)-tuple (and thus in particular relative) of \([X]_{\mathcal{L}_{\emptyset}}\) by Proposition 4.1.8 for the existence of the second map. Additionally, notice that \((\pi^{\mathcal{L}},t^{\mathcal{L}})\) admits a gauge action and that \[\mathrm{C}^{*}(\pi^{\mathcal{L}},t^{\mathcal{L}})=\mathrm{C}^{*}(\pi^{[\mathcal{L}]_{\mathcal{L}_{\emptyset}}}_{[X]_{\mathcal{L}_{\emptyset}}},t^{[\mathcal{L}]_{\mathcal{L}_{\emptyset}}}_{[X]_{\mathcal{L}_{\emptyset}}})=\mathcal{N}\mathcal{O}([\mathcal{L}]_{\mathcal{L}_{\emptyset}},[X]_{\mathcal{L}_{\emptyset}}).\] **Proposition 4.1.12**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). If \(\mathcal{L}\) is an NT-\(2^{d}\)-tuple of \(X\), then \(\mathcal{L}^{(\pi^{\mathcal{L}},t^{\mathcal{L}})}=\mathcal{L}\)._ _Consequently, a \(2^{d}\)-tuple \(\mathcal{L}\) of \(X\) is an NT-\(2^{d}\)-tuple if and only if \(\mathcal{L}=\mathcal{L}^{(\pi,t)}\) for some Nica-covariant representation \((\pi,t)\) of \(X\) that admits a gauge action._ **Proof.** For the first claim, we denote the universal \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}\)-relative CNP-representation of \([X]_{\mathcal{L}_{\emptyset}}\) by \((\tilde{\pi},\tilde{t})\).
First we show that \(\mathcal{L}_{\emptyset}^{(\pi^{\mathcal{L}},t^{\mathcal{L}})}=\mathcal{L}_{\emptyset}\). We have that \[a\in\mathcal{L}_{\emptyset}^{(\pi^{\mathcal{L}},t^{\mathcal{L}})}\iff\pi^{ \mathcal{L}}(a)=0\iff\tilde{\pi}([a]_{\mathcal{L}_{\emptyset}})=0\iff[a]_{ \mathcal{L}_{\emptyset}}=0\iff a\in\mathcal{L}_{\emptyset},\] using that \((\tilde{\pi},\tilde{t})\) is injective by Proposition 3.2.1, since \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}\) is an (M)-\(2^{d}\)-tuple of \([X]_{\mathcal{L}_{\emptyset}}\) by Proposition 4.1.8. Hence \(\mathcal{L}_{\emptyset}^{(\pi^{\mathcal{L}},t^{\mathcal{L}})}=\mathcal{L}_{\emptyset}\). Recall that \(\mathrm{C}^{*}(\tilde{\pi},\tilde{t})=\mathcal{N}\mathcal{O}([\mathcal{L}]_{ \mathcal{L}_{\emptyset}},[X]_{\mathcal{L}_{\emptyset}})\) by definition, and thus by applying Theorem 3.2.11 for the (M)-\(2^{d}\)-tuple \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}\) of \([X]_{\mathcal{L}_{\emptyset}}\) and the Nica-covariant representation \((\tilde{\pi},\tilde{t})\) of \([X]_{\mathcal{L}_{\emptyset}}\), we obtain that \(\mathcal{L}^{(\tilde{\pi},\tilde{t})}=[\mathcal{L}]_{\mathcal{L}_{\emptyset}}\). Hence, for \(\emptyset\neq F\subseteq[d]\), we have that \[a\in\mathcal{L}_{F}^{(\pi^{\mathcal{L}},t^{\mathcal{L}})} \iff\pi^{\mathcal{L}}(a)=\sum\{\psi_{\underline{n}}^{\mathcal{L}}(k_ {\underline{n}})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}\text { for some }k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}})\] \[\iff\tilde{\pi}([a]_{\mathcal{L}_{\emptyset}})=\sum\{\tilde{\psi }_{\underline{n}}([k_{\underline{n}}]_{\mathcal{L}_{\emptyset}})\mid \underline{0}\neq\underline{n}\leq\underline{1}_{F}\}\text{ for some }k_{\underline{n}}\in\mathcal{K}(X_{ \underline{n}})\] \[\iff[a]_{\mathcal{L}_{\emptyset}}\in\mathcal{L}_{F}^{(\tilde{\pi},\tilde{t})}=[\mathcal{L}_{F}]_{\mathcal{L}_{\emptyset}}\] \[\iff a\in\mathcal{L}_{F}+\mathcal{L}_{\emptyset}=\mathcal{L}_{F},\] and so \(\mathcal{L}_{F}^{(\pi^{\mathcal{L}},t^{\mathcal{L}})}=\mathcal{L}_{F}\), completing the proof of the first part. For the second part, if \(\mathcal{L}\) is an NT-\(2^{d}\)-tuple of \(X\) then \(\mathcal{L}=\mathcal{L}^{(\pi,t)}\) for \((\pi,t):=(\pi^{\mathcal{L}},t^{\mathcal{L}})\). Conversely, if \(\mathcal{L}=\mathcal{L}^{(\pi,t)}\) for some Nica-covariant representation \((\pi,t)\) of \(X\) that admits a gauge action, let \((\hat{\pi},\hat{t})\) be the injective Nica-covariant representation of \([X]_{\mathcal{L}_{\emptyset}}\) guaranteed by item (iii) of Proposition 4.1.9. Then \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}=[\mathcal{L}^{(\pi,t)}]_{\mathcal{L}_ {\emptyset}^{(\pi,t)}}=\mathcal{L}^{(\hat{\pi},\hat{t})}\) by item (vi) of Proposition 4.1.9. However, we have that \(\mathcal{L}^{(\hat{\pi},\hat{t})}\) is an (M)-\(2^{d}\)-tuple of \([X]_{\mathcal{L}_{\emptyset}}\) by Proposition 3.1.17, and thus \(\mathcal{L}\) is an NT-\(2^{d}\)-tuple of \(X\) by Proposition 4.1.8. Consequently, we have an extension of Proposition 3.1.16 for describing the kernels of induced \(*\)-representations that may not be injective on \(X\). **Proposition 4.1.13**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \((\pi,t)\) be a Nica-covariant representation of \(X\) that admits a gauge action, and let \(\mathcal{L}^{(\pi,t)}\) be the associated NT-\(2^{d}\)-tuple of \(X\). 
Then_ \[\ker\pi\times t=\langle\overline{\pi}_{X}(a)+\sum_{\underline{0}\neq\underline{n}\leq\underline{1}_{F}}(-1)^{|\underline{n}|}\overline{\psi}_{X,\underline{n}}(k_{\underline{n}})\mid F\subseteq[d],a\in\mathcal{L}_{F}^{(\pi,t)},k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}}),\] \[[\phi_{\underline{n}}(a)]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}=[k_{\underline{n}}]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}\text{ for all }\underline{0}\neq\underline{n}\leq\underline{1}_{F}\rangle.\] **Proof.** We denote the ideal on the right hand side by \(\mathfrak{J}\). For notational convenience, we drop the superscript \((\pi,t)\) and write \(\mathcal{L}:=\mathcal{L}^{(\pi,t)}\). We begin by proving that \(\mathfrak{J}\subseteq\ker\pi\times t\). To this end, it suffices to show that \(\ker\pi\times t\) contains the generators of \(\mathfrak{J}\). The generators of \(\mathfrak{J}\) that are indexed by \(F=\emptyset\) have the form \(\overline{\pi}_{X}(a)\) for some \(a\in\mathcal{L}_{\emptyset}\equiv\ker\pi\). In this case we have that \((\pi\times t)(\overline{\pi}_{X}(a))=\pi(a)=0\) trivially, so \(\overline{\pi}_{X}(a)\in\ker\pi\times t\), as required. Next, fix \(\emptyset\neq F\subseteq[d],a\in\mathcal{L}_{F}\) and \(k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}})\) such that \([\phi_{\underline{n}}(a)]_{\mathcal{L}_{\emptyset}}=[k_{\underline{n}}]_{\mathcal{L}_{\emptyset}}\) for all \(\underline{0}\neq\underline{n}\leq\underline{1}_{F}\). Let \((\dot{\pi},\dot{t})\) be the injective Nica-covariant representation of \([X]_{\mathcal{L}_{\emptyset}}\) admitting a gauge action, as guaranteed by item (iii) of Proposition 4.1.9. We then have that \[(\pi\times t)(\overline{\pi}_{X}(a)+\sum\{(-1)^{|\underline{n}|}\overline{\psi}_{X,\underline{n}}(k_{\underline{n}})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\})=\] \[=\pi(a)+\sum\{(-1)^{|\underline{n}|}\psi_{\underline{n}}(k_{\underline{n}})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}\] \[=\dot{\pi}([a]_{\mathcal{L}_{\emptyset}})+\sum\{(-1)^{|\underline{n}|}\dot{\psi}_{\underline{n}}([k_{\underline{n}}]_{\mathcal{L}_{\emptyset}})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}=0,\] where we have used item (iv) of Proposition 4.1.9 to obtain the final equality. This shows that \(\mathfrak{J}\subseteq\ker\pi\times t\). We have that \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}=\mathcal{L}^{(\dot{\pi},\dot{t})}\) by item (vi) of Proposition 4.1.9, and thus by applying Proposition 3.1.16 for \((\dot{\pi},\dot{t})\) we obtain a canonical \(*\)-isomorphism \[\Phi\colon\mathcal{N}\mathcal{O}([\mathcal{L}]_{\mathcal{L}_{\emptyset}},[X]_{\mathcal{L}_{\emptyset}})\to\mathrm{C}^{*}(\dot{\pi},\dot{t})=\mathrm{C}^{*}(\pi,t).\] By considering the representation \((\pi^{\mathcal{L}},t^{\mathcal{L}})\) of Definition 4.1.11, and the canonical quotient map \(Q_{\mathfrak{J}}\colon\mathcal{NT}_{X}\to\mathcal{NT}_{X}/\mathfrak{J}\), we obtain the following commutative diagram of \(*\)-epimorphisms, in which \(\Psi\colon\mathcal{NT}_{X}/\mathfrak{J}\to\mathcal{N}\mathcal{O}([\mathcal{L}]_{\mathcal{L}_{\emptyset}},[X]_{\mathcal{L}_{\emptyset}})\) denotes the induced map satisfying \(\Psi\circ Q_{\mathfrak{J}}=\pi^{\mathcal{L}}\times t^{\mathcal{L}}\). Note that \(\Psi\) exists because \[\mathfrak{J}\subseteq\ker\pi\times t=\ker\Phi\circ(\pi^{\mathcal{L}}\times t^{\mathcal{L}})=\ker\pi^{\mathcal{L}}\times t^{\mathcal{L}}.\] Therefore, to prove that \(\ker\pi\times t\subseteq\mathfrak{J}\), it suffices to show that \(\Psi\) is injective.
To this end, we define maps \(\tilde{\pi}\) and \(\tilde{t}_{\underline{n}}\) by \[\tilde{\pi}\colon[A]_{\mathcal{L}_{\emptyset}}\to\mathcal{NT}_{X}/\mathfrak{J};\tilde{\pi}([a]_{\mathcal{L}_{\emptyset}})=Q_{\mathfrak{J}}(\overline{\pi}_{X}(a)),\] \[\tilde{t}_{\underline{n}}\colon[X_{\underline{n}}]_{\mathcal{L}_{\emptyset}}\to\mathcal{NT}_{X}/\mathfrak{J};\tilde{t}_{\underline{n}}([\xi_{\underline{n}}]_{\mathcal{L}_{\emptyset}})=Q_{\mathfrak{J}}(\overline{t}_{X,\underline{n}}(\xi_{\underline{n}})),\] for all \(a\in A,\xi_{\underline{n}}\in X_{\underline{n}}\) and \(\underline{n}\in\mathbb{Z}_{+}^{d}\setminus\{\underline{0}\}\). These maps are well-defined because \(\overline{\pi}_{X}(\mathcal{L}_{\emptyset})\subseteq\mathfrak{J}\) and \(\overline{t}_{X,\underline{n}}(X_{\underline{n}}\mathcal{L}_{\emptyset})\subseteq\mathfrak{J}\). It is routine to check that \((\tilde{\pi},\tilde{t})\) is a Nica-covariant representation of \([X]_{\mathcal{L}_{\emptyset}}\), since \(\tilde{\psi}_{\underline{n}}\circ[\,\cdot\,]_{\mathcal{L}_{\emptyset}}|_{\mathcal{K}(X_{\underline{n}})}=Q_{\mathfrak{J}}\circ\overline{\psi}_{X,\underline{n}}\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\) by definition of \((\tilde{\pi},\tilde{t})\). Additionally, we have that \(\mathrm{C}^{*}(\tilde{\pi},\tilde{t})=\mathcal{NT}_{X}/\mathfrak{J}\). We claim that \((\tilde{\pi},\tilde{t})\) is an \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}\)-relative CNP-representation. To see this, fix \(\emptyset\neq F\subseteq[d]\) and \(a\in\mathcal{L}_{F}\). For each \(\underline{0}\neq\underline{n}\leq\underline{1}_{F}\), there exists \(k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}})\) such that \([\phi_{\underline{n}}(a)]_{\mathcal{L}_{\emptyset}}=[k_{\underline{n}}]_{\mathcal{L}_{\emptyset}}\) by item (iv) of Proposition 4.1.9 and Lemma 2.2.2. Hence we obtain that \[\tilde{\pi}([a]_{\mathcal{L}_{\emptyset}})\tilde{q}_{F}=\tilde{\pi}([a]_{\mathcal{L}_{\emptyset}})+\sum\{(-1)^{|\underline{n}|}\tilde{\psi}_{\underline{n}}([\phi_{\underline{n}}(a)]_{\mathcal{L}_{\emptyset}})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}\] \[=Q_{\mathfrak{J}}(\overline{\pi}_{X}(a))+\sum\{(-1)^{|\underline{n}|}Q_{\mathfrak{J}}(\overline{\psi}_{X,\underline{n}}(k_{\underline{n}}))\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}\] \[=Q_{\mathfrak{J}}(\overline{\pi}_{X}(a)+\sum\{(-1)^{|\underline{n}|}\overline{\psi}_{X,\underline{n}}(k_{\underline{n}})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\})=0,\] using that \(\overline{\pi}_{X}(a)+\sum\{(-1)^{|\underline{n}|}\overline{\psi}_{X,\underline{n}}(k_{\underline{n}})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}\in\mathfrak{J}\) in the final equality. This shows that \((\tilde{\pi},\tilde{t})\) is an \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}\)-relative CNP-representation, and hence universality of \(\mathcal{NO}([\mathcal{L}]_{\mathcal{L}_{\emptyset}},[X]_{\mathcal{L}_{\emptyset}})\) guarantees a (unique) canonical \(*\)-epimorphism \[\tilde{\Phi}\colon\mathcal{NO}([\mathcal{L}]_{\mathcal{L}_{\emptyset}},[X]_{\mathcal{L}_{\emptyset}})\to\mathcal{NT}_{X}/\mathfrak{J}=\mathrm{C}^{*}(\tilde{\pi},\tilde{t}).\] It is routine to check that \(\tilde{\Phi}\circ\Psi=\mathrm{id}_{\mathcal{NT}_{X}/\mathfrak{J}}\) and thus \(\Psi\) is injective, as required.
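We record, purely for orientation, the special case \(d=1\) of Proposition 4.1.13; it is obtained by specialising the formula above to \(F\in\{\emptyset,\{1\}\}\) and is not needed in the sequel. For a product system \(X=\{X_{n}\}_{n\in\mathbb{Z}_{+}}\) and a Nica-covariant representation \((\pi,t)\) admitting a gauge action, the description reads \[\ker\pi\times t=\langle\,\overline{\pi}_{X}(b),\ \overline{\pi}_{X}(a)-\overline{\psi}_{X,1}(k)\mid b\in\mathcal{L}_{\emptyset}^{(\pi,t)},\ a\in\mathcal{L}_{\{1\}}^{(\pi,t)},\ k\in\mathcal{K}(X_{1}),\ [\phi_{1}(a)]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}=[k]_{\mathcal{L}_{\emptyset}^{(\pi,t)}}\,\rangle,\] where \((\mathcal{L}_{\emptyset}^{(\pi,t)},\mathcal{L}_{\{1\}}^{(\pi,t)})\) is a T-pair of \(X_{1}\) by Proposition 4.1.7.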
### Parametrising the gauge-invariant ideals of \(\mathcal{NT}_{X}\)

We can now pass to the parametrisation of gauge-invariant ideals by NT-\(2^{d}\)-tuples. For an NT-\(2^{d}\)-tuple \(\mathcal{L}\) of \(X\), we write \[\mathfrak{J}^{\mathcal{L}}:=\ker\pi^{\mathcal{L}}\times t^{\mathcal{L}},\text{ for }\pi^{\mathcal{L}}\times t^{\mathcal{L}}\colon\mathcal{NT}_{X}\to\mathcal{NO}([\mathcal{L}]_{\mathcal{L}_{\emptyset}},[X]_{\mathcal{L}_{\emptyset}}).\] It is clear that \(\mathfrak{J}^{\mathcal{L}}\) is a gauge-invariant ideal of \(\mathcal{NT}_{X}\), since \((\pi^{\mathcal{L}},t^{\mathcal{L}})\) admits a gauge action and \(\pi^{\mathcal{L}}\times t^{\mathcal{L}}\) is equivariant. On the other hand, for a gauge-invariant ideal \(\mathfrak{J}\) of \(\mathcal{NT}_{X}\), we write \[\mathcal{L}^{\mathfrak{J}}:=\mathcal{L}^{(Q_{\mathfrak{J}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}}\circ\overline{t}_{X})},\text{ for }Q_{\mathfrak{J}}\colon\mathcal{NT}_{X}\to\mathcal{NT}_{X}/\mathfrak{J}.\] The \(2^{d}\)-tuple \(\mathcal{L}^{\mathfrak{J}}\) is an NT-\(2^{d}\)-tuple of \(X\) by Proposition 4.1.12. The following proposition shows that these correspondences are mutually inverse. **Proposition 4.2.1**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Then the following hold:_ * _If_ \(\mathcal{L}\) _is an NT-_\(2^{d}\)_-tuple of_ \(X\)_, then_ \[\mathcal{NT}_{X}/\mathfrak{J}^{\mathcal{L}}\cong\mathcal{NO}([\mathcal{L}]_{\mathcal{L}_{\emptyset}},[X]_{\mathcal{L}_{\emptyset}})\quad\text{and}\quad\mathcal{L}^{\mathfrak{J}^{\mathcal{L}}}=\mathcal{L}.\] * _If_ \(\mathfrak{J}\subseteq\mathcal{NT}_{X}\) _is a gauge-invariant ideal, then_ \[\mathcal{NT}_{X}/\mathfrak{J}\cong\mathcal{NO}([\mathcal{L}^{\mathfrak{J}}]_{\mathcal{L}^{\mathfrak{J}}_{\emptyset}},[X]_{\mathcal{L}^{\mathfrak{J}}_{\emptyset}})\quad\text{and}\quad\mathfrak{J}^{\mathcal{L}^{\mathfrak{J}}}=\mathfrak{J}.\] **Proof.** (i) Since \(\mathfrak{J}^{\mathcal{L}}\equiv\ker\pi^{\mathcal{L}}\times t^{\mathcal{L}}\), we have that \(\mathcal{NT}_{X}/\mathfrak{J}^{\mathcal{L}}\cong\mathcal{NO}([\mathcal{L}]_{\mathcal{L}_{\emptyset}},[X]_{\mathcal{L}_{\emptyset}})\) by a canonical \(*\)-isomorphism \(\Phi\). We then have that \[\mathcal{L}^{\mathfrak{J}^{\mathcal{L}}}\equiv\mathcal{L}^{(Q_{\mathfrak{J}^{\mathcal{L}}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}^{\mathcal{L}}}\circ\overline{t}_{X})}=\mathcal{L}^{(\Phi^{-1}\circ\pi^{\mathcal{L}},\Phi^{-1}\circ t^{\mathcal{L}})}=\mathcal{L}^{(\pi^{\mathcal{L}},t^{\mathcal{L}})}=\mathcal{L},\] using Proposition 4.1.12 in the last equality. (ii) Note that \((Q_{\mathfrak{J}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}}\circ\overline{t}_{X})\) is a Nica-covariant representation of \(X\) that admits a gauge action and satisfies \(\mathrm{C}^{*}(Q_{\mathfrak{J}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}}\circ\overline{t}_{X})=\mathcal{NT}_{X}/\mathfrak{J}\). Hence, applying Remark 4.1.10 for the representation \((\pi,t):=(Q_{\mathfrak{J}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}}\circ\overline{t}_{X})\), we obtain the following commutative diagram of canonical \(*\)-epimorphisms. We have that \(\dot{\pi}\times\dot{t}=\Psi\circ Q\), where \((\dot{\pi},\dot{t})\) is defined as in item (iii) of Proposition 4.1.9, again taking \((\pi,t):=(Q_{\mathfrak{J}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}}\circ\overline{t}_{X})\), and where \(Q\colon\mathcal{NT}_{[X]_{\mathcal{L}^{\mathfrak{J}}_{\emptyset}}}\to\mathcal{NO}(\mathcal{L}^{(\dot{\pi},\dot{t})},[X]_{\mathcal{L}^{\mathfrak{J}}_{\emptyset}})\) and \(\Psi\colon\mathcal{NO}(\mathcal{L}^{(\dot{\pi},\dot{t})},[X]_{\mathcal{L}^{\mathfrak{J}}_{\emptyset}})\to\mathcal{NT}_{X}/\mathfrak{J}=\mathrm{C}^{*}(\dot{\pi},\dot{t})\) denote the canonical \(*\)-epimorphisms of that diagram.
Since \(\mathcal{L}^{\mathfrak{J}}\) is an NT-\(2^{d}\)-tuple of \(X\), we have that \([\mathcal{L}^{\mathfrak{J}}]_{\mathcal{L}^{\mathfrak{J}}_{\emptyset}}\) is an (M)-\(2^{d}\)-tuple of \([X]_{\mathcal{L}^{\mathfrak{J}}_{\emptyset}}\) by Proposition 4.1.8. We have that \([\mathcal{L}^{\mathfrak{J}}]_{\mathcal{L}^{\mathfrak{J}}_{\emptyset}}=\mathcal{L}^{(\dot{\pi},\dot{t})}\) by item (vi) of Proposition 4.1.9, and hence an application of Theorem 3.2.11 yields that \(\Psi\) is a \(*\)-isomorphism. By definition we have that \(\mathfrak{J}^{\mathcal{L}^{\mathfrak{J}}}\equiv\ker\pi^{\mathcal{L}^{\mathfrak{J}}}\times t^{\mathcal{L}^{\mathfrak{J}}}\). Therefore, we obtain that \[\mathfrak{J}=\ker Q_{\mathfrak{J}}=\ker\Psi\circ(\pi^{\mathcal{L}^{\mathfrak{J}}}\times t^{\mathcal{L}^{\mathfrak{J}}})=\ker\pi^{\mathcal{L}^{\mathfrak{J}}}\times t^{\mathcal{L}^{\mathfrak{J}}}\equiv\mathfrak{J}^{\mathcal{L}^{\mathfrak{J}}},\] and the proof is complete. Using Propositions 4.1.12 and 4.1.13, we arrive at a concrete description of \(\mathfrak{J}^{\mathcal{L}}\). **Proposition 4.2.2**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). If \(\mathcal{L}\) is an NT-\(2^{d}\)-tuple of \(X\), then we have that_ \[\mathfrak{J}^{\mathcal{L}}=\langle\overline{\pi}_{X}(a)+\sum_{\underline{0}\neq\underline{n}\leq\underline{1}_{F}}(-1)^{|\underline{n}|}\overline{\psi}_{X,\underline{n}}(k_{\underline{n}})\mid F\subseteq[d],a\in\mathcal{L}_{F},k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}}),\] \[[\phi_{\underline{n}}(a)]_{\mathcal{L}_{\emptyset}}=[k_{\underline{n}}]_{\mathcal{L}_{\emptyset}}\text{ for all }\underline{0}\neq\underline{n}\leq\underline{1}_{F}\rangle.\] _If in addition \(\mathcal{L}\) is a relative \(2^{d}\)-tuple of \(X\), then \(\mathfrak{J}^{\mathcal{L}}=\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})}\)._ **Proof.** For the first part we apply Proposition 4.1.13 for \((\pi,t):=(\pi^{\mathcal{L}},t^{\mathcal{L}})\), noting that \(\mathcal{L}^{(\pi^{\mathcal{L}},t^{\mathcal{L}})}=\mathcal{L}\) by Proposition 4.1.12. Now assume that \(\mathcal{L}\) is in addition a relative \(2^{d}\)-tuple of \(X\), and recall that \[\mathfrak{J}^{(\overline{\pi}_{X},\overline{t}_{X})}_{\mathcal{L}}=\langle\overline{\pi}_{X}(a)+\sum_{\underline{0}\neq\underline{n}\leq\underline{1}_{F}}(-1)^{|\underline{n}|}\overline{\psi}_{X,\underline{n}}(\phi_{\underline{n}}(a))\mid a\in\mathcal{L}_{F},F\subseteq[d]\rangle.\] It is clear that \(\mathfrak{J}^{(\overline{\pi}_{X},\overline{t}_{X})}_{\mathcal{L}}\subseteq\mathfrak{J}^{\mathcal{L}}\) by the first part. For the reverse inclusion, first note that \[\overline{\pi}_{X}(\mathcal{L}_{\emptyset})\subseteq\mathfrak{J}^{(\overline{\pi}_{X},\overline{t}_{X})}_{\mathcal{L}}\] by definition. Next fix \(\emptyset\neq F\subseteq[d],a\in\mathcal{L}_{F}\) and \(k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}})\) satisfying \([\phi_{\underline{n}}(a)]_{\mathcal{L}_{\emptyset}}=[k_{\underline{n}}]_{\mathcal{L}_{\emptyset}}\) for all \(\underline{0}\neq\underline{n}\leq\underline{1}_{F}\). By the first part, it suffices to show that \[\overline{\pi}_{X}(a)+\sum_{\underline{0}\neq\underline{n}\leq\underline{1}_{F}}(-1)^{|\underline{n}|}\overline{\psi}_{X,\underline{n}}(k_{\underline{n}})\in\mathfrak{J}^{(\overline{\pi}_{X},\overline{t}_{X})}_{\mathcal{L}}.\] Fixing \(\underline{0}\neq\underline{n}\leq\underline{1}_{F}\), we have that \([\phi_{\underline{n}}(a)-k_{\underline{n}}]_{\mathcal{L}_{\emptyset}}=0\) by assumption.
Since \(\mathcal{L}\) is a relative \(2^{d}\)-tuple of \(X\), we also have that \(\phi_{\underline{n}}(a)\in\mathcal{K}(X_{\underline{n}})\) and hence \(\phi_{\underline{n}}(a)-k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}})\). By Lemma 2.2.2 we have that \[\ker\{[\cdot]_{\mathcal{L}_{\emptyset}}:\mathcal{K}(X_{\underline{n}})\to\mathcal{K}([X_{\underline{n}}]_{\mathcal{L}_{\emptyset}})\}=\mathcal{K}(X_{\underline{n}}\mathcal{L}_{\emptyset}),\] and therefore \(\phi_{\underline{n}}(a)-k_{\underline{n}}=k^{\prime}_{\underline{n}}\) for some \(k^{\prime}_{\underline{n}}\in\mathcal{K}(X_{\underline{n}}\mathcal{L}_{\emptyset})\). Notice that \[\overline{\psi}_{X,\underline{n}}(\mathcal{K}(X_{\underline{n}}\mathcal{L}_{\emptyset}))\subseteq\langle\overline{\pi}_{X}(\mathcal{L}_{\emptyset})\rangle\subseteq\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})}\] for all \(\underline{0}\neq\underline{n}\leq\underline{1}_{F}\). In total, we have that \[\overline{\pi}_{X}(a)+\sum_{\underline{0}\neq\underline{n}\leq\underline{1}_{F}}(-1)^{|\underline{n}|}\overline{\psi}_{X,\underline{n}}(k_{\underline{n}})=\] \[\qquad\qquad=\left(\overline{\pi}_{X}(a)+\sum_{\underline{0}\neq\underline{n}\leq\underline{1}_{F}}(-1)^{|\underline{n}|}\overline{\psi}_{X,\underline{n}}(\phi_{\underline{n}}(a))\right)-\sum_{\underline{0}\neq\underline{n}\leq\underline{1}_{F}}(-1)^{|\underline{n}|}\overline{\psi}_{X,\underline{n}}(k^{\prime}_{\underline{n}})\] \[\qquad\qquad\in\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})}+\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})}=\mathfrak{J}_{\mathcal{L}}^{(\overline{\pi}_{X},\overline{t}_{X})},\] as required. We now present the main theorem. **Theorem 4.2.3**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Then the set of NT-\(2^{d}\)-tuples of \(X\) corresponds bijectively to the set of gauge-invariant ideals of \(\mathcal{NT}_{X}\) by the mutually inverse maps \(\mathcal{L}\mapsto\mathfrak{J}^{\mathcal{L}}\) and \(\mathfrak{J}\mapsto\mathcal{L}^{\mathfrak{J}}\), for all NT-\(2^{d}\)-tuples \(\mathcal{L}\) of \(X\) and all gauge-invariant ideals \(\mathfrak{J}\) of \(\mathcal{NT}_{X}\). Moreover, these maps respect inclusions._ **Proof.** The fact that the maps are well-defined follows from the discussion preceding Proposition 4.2.1. The fact that the maps are mutual inverses is guaranteed by Proposition 4.2.1. It remains to see that the maps preserve inclusions. To this end, first take NT-\(2^{d}\)-tuples \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) of \(X\) and suppose that \(\mathcal{L}_{1}\subseteq\mathcal{L}_{2}\). We must show that \(\mathfrak{J}^{\mathcal{L}_{1}}\subseteq\mathfrak{J}^{\mathcal{L}_{2}}\). It suffices to show that \(\mathfrak{J}^{\mathcal{L}_{2}}\) contains the generators of \(\mathfrak{J}^{\mathcal{L}_{1}}\), recalling their form from Proposition 4.2.2. Firstly, we have that \[\overline{\pi}_{X}(\mathcal{L}_{1,\emptyset})\subseteq\overline{\pi}_{X}(\mathcal{L}_{2,\emptyset})\subseteq\mathfrak{J}^{\mathcal{L}_{2}}\] by definition. Next, fix \(\emptyset\neq F\subseteq[d],a\in\mathcal{L}_{1,F}\) and \(k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}})\) such that \([k_{\underline{n}}]_{\mathcal{L}_{1,\emptyset}}=[\phi_{\underline{n}}(a)]_{\mathcal{L}_{1,\emptyset}}\) for all \(\underline{0}\neq\underline{n}\leq\underline{1}_{F}\).
For each \(\underline{0}\neq\underline{n}\leq\underline{1}_{F}\), we make the identification \[\mathcal{L}([[X_{\underline{n}}]_{\mathcal{L}_{1,\emptyset}}]_{\mathcal{L}_{2,\emptyset}/\mathcal{L}_{1,\emptyset}})\cong\mathcal{L}([X_{\underline{n}}]_{\mathcal{L}_{2,\emptyset}}),\] so that \[[\cdot]_{\mathcal{L}_{2,\emptyset}}=[\cdot]_{\mathcal{L}_{2,\emptyset}/\mathcal{L}_{1,\emptyset}}\circ[\cdot]_{\mathcal{L}_{1,\emptyset}},\] e.g., [34, p. 112]. Under this identification, we obtain that \[[k_{\underline{n}}]_{\mathcal{L}_{2,\emptyset}}=[\phi_{\underline{n}}(a)]_{\mathcal{L}_{2,\emptyset}}\text{ for all }\underline{0}\neq\underline{n}\leq\underline{1}_{F}.\] Since \(a\in\mathcal{L}_{1,F}\subseteq\mathcal{L}_{2,F}\), it then follows that \[\overline{\pi}_{X}(a)+\sum_{\underline{0}\neq\underline{n}\leq\underline{1}_{F}}(-1)^{|\underline{n}|}\overline{\psi}_{X,\underline{n}}(k_{\underline{n}})\in\mathfrak{J}^{\mathcal{L}_{2}},\] as required. Finally, fix gauge-invariant ideals \(\mathfrak{J}_{1}\) and \(\mathfrak{J}_{2}\) of \(\mathcal{NT}_{X}\) such that \(\mathfrak{J}_{1}\subseteq\mathfrak{J}_{2}\). It then follows that \[\mathcal{L}_{F}^{\mathfrak{J}_{1}}\equiv\mathcal{L}_{F}^{(Q_{\mathfrak{J}_{1}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}_{1}}\circ\overline{t}_{X})}\subseteq\mathcal{L}_{F}^{(Q_{\mathfrak{J}_{2}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}_{2}}\circ\overline{t}_{X})}\equiv\mathcal{L}_{F}^{\mathfrak{J}_{2}}\text{ for all }F\subseteq[d],\] since \(Q_{\mathfrak{J}_{1}}\) and \(Q_{\mathfrak{J}_{2}}\) preserve the indices of the cores and \(Q_{\mathfrak{J}_{2}}\) factors through \(Q_{\mathfrak{J}_{1}}\). **Remark 4.2.4**.: To summarise, we have that the mappings \[\mathcal{L}\mapsto\ker\pi^{\mathcal{L}}\times t^{\mathcal{L}}\text{ for all NT-$2^{d}$-tuples $\mathcal{L}$ of $X$},\] \[\mathfrak{J}\mapsto\mathcal{L}^{(Q_{\mathfrak{J}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}}\circ\overline{t}_{X})}\text{ for all gauge-invariant ideals $\mathfrak{J}\subseteq\mathcal{NT}_{X}$},\] are mutual inverses and respect inclusions, where \(Q_{\mathfrak{J}}\colon\mathcal{NT}_{X}\to\mathcal{NT}_{X}/\mathfrak{J}\) is the quotient map. The gauge-invariant ideal \(\ker\pi^{\mathcal{L}}\times t^{\mathcal{L}}\) of \(\mathcal{NT}_{X}\) is given by \[\ker\pi^{\mathcal{L}}\times t^{\mathcal{L}}=\langle\overline{\pi}_{X}(a)+\sum_{\underline{0}\neq\underline{n}\leq\underline{1}_{F}}(-1)^{|\underline{n}|}\overline{\psi}_{X,\underline{n}}(k_{\underline{n}})\mid F\subseteq[d],a\in\mathcal{L}_{F},k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}}),\] \[[\phi_{\underline{n}}(a)]_{\mathcal{L}_{\emptyset}}=[k_{\underline{n}}]_{\mathcal{L}_{\emptyset}}\ \text{for all}\ \underline{0}\neq\underline{n}\leq\underline{1}_{F}\rangle,\] by Proposition 4.2.2, where \(\mathcal{L}\) is an NT-\(2^{d}\)-tuple of \(X\). The NT-\(2^{d}\)-tuple \(\mathcal{L}^{(Q_{\mathfrak{J}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}}\circ\overline{t}_{X})}\) of \(X\) is given by \[\mathcal{L}_{F}^{(Q_{\mathfrak{J}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}}\circ\overline{t}_{X})}=\begin{cases}\ker Q_{\mathfrak{J}}\circ\overline{\pi}_{X}&\text{if}\ F=\emptyset,\\ (Q_{\mathfrak{J}}\circ\overline{\pi}_{X})^{-1}(B^{(Q_{\mathfrak{J}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}}\circ\overline{t}_{X})}_{(\underline{0},\underline{1}_{F}]})&\text{if}\ \emptyset\neq F\subseteq[d],\end{cases}\] where \(\mathfrak{J}\) is a gauge-invariant ideal of \(\mathcal{NT}_{X}\).
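Before turning to the lattice structure, we note how the parametrisation reads in the one-dimensional case; this is an immediate combination of Theorem 4.2.3 with Proposition 4.1.7, recorded only as a consistency check with the rank-one setting and not used in what follows. For a product system \(X=\{X_{n}\}_{n\in\mathbb{Z}_{+}}\), the mappings of Remark 4.2.4 give mutually inverse, inclusion-preserving bijections \[\{\text{T-pairs of }X_{1}\}\longleftrightarrow\{\text{gauge-invariant ideals of }\mathcal{NT}_{X}\},\qquad\mathcal{L}\mapsto\mathfrak{J}^{\mathcal{L}},\quad\mathfrak{J}\mapsto\mathcal{L}^{\mathfrak{J}}.\]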
Note that the set of gauge-invariant ideals of \(\mathcal{NT}_{X}\) carries a canonical lattice structure, determined by the operations \[\mathfrak{J}_{1}\vee\mathfrak{J}_{2}:=\mathfrak{J}_{1}+\mathfrak{J}_{2}\quad\text{and}\quad\mathfrak{J}_{1}\wedge\mathfrak{J}_{2}:=\mathfrak{J}_{1}\cap\mathfrak{J}_{2},\] for all gauge-invariant ideals \(\mathfrak{J}_{1},\mathfrak{J}_{2}\subseteq\mathcal{NT}_{X}\). This, in tandem with Theorem 4.2.3, allows us to impose a canonical lattice structure on the set of NT-\(2^{d}\)-tuples of \(X\), promoting the bijection to a lattice isomorphism. **Definition 4.2.5**.: Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). We equip the set of NT-\(2^{d}\)-tuples of \(X\) with the lattice structure determined by the operations \[\mathcal{L}_{1}\vee\mathcal{L}_{2}:=\mathcal{L}^{\mathfrak{J}^{\mathcal{L}_{1}}+\mathfrak{J}^{\mathcal{L}_{2}}}\quad\text{and}\quad\mathcal{L}_{1}\wedge\mathcal{L}_{2}:=\mathcal{L}^{\mathfrak{J}^{\mathcal{L}_{1}}\cap\mathfrak{J}^{\mathcal{L}_{2}}},\] for all NT-\(2^{d}\)-tuples \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) of \(X\). Next we describe the operations \(\wedge\) and \(\vee\) on the set of NT-\(2^{d}\)-tuples of \(X\). The operation \(\wedge\) is intersection, in accordance with [34, Proposition 5.8]. **Proposition 4.2.6**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) be NT-\(2^{d}\)-tuples of \(X\). Then_ \[(\mathcal{L}_{1}\wedge\mathcal{L}_{2})_{F}=\mathcal{L}_{1,F}\cap\mathcal{L}_{2,F}\ \text{for all}\ F\subseteq[d].\] **Proof.** For notational convenience we set \(\mathfrak{J}:=\mathfrak{J}^{\mathcal{L}_{1}}\cap\mathfrak{J}^{\mathcal{L}_{2}}\), so that \(\mathcal{L}^{\mathfrak{J}}\equiv\mathcal{L}_{1}\wedge\mathcal{L}_{2}\). For \(F=\emptyset\) and \(a\in A\), we have that \[a\in\mathcal{L}_{\emptyset}^{\mathfrak{J}}\iff\overline{\pi}_{X}(a)\in\mathfrak{J}\equiv\mathfrak{J}^{\mathcal{L}_{1}}\cap\mathfrak{J}^{\mathcal{L}_{2}}\iff a\in\mathcal{L}_{\emptyset}^{\mathfrak{J}^{\mathcal{L}_{1}}}\cap\mathcal{L}_{\emptyset}^{\mathfrak{J}^{\mathcal{L}_{2}}}=\mathcal{L}_{1,\emptyset}\cap\mathcal{L}_{2,\emptyset},\] using item (i) of Proposition 4.2.1 in the final equality. Hence \((\mathcal{L}_{1}\wedge\mathcal{L}_{2})_{\emptyset}=\mathcal{L}_{1,\emptyset}\cap\mathcal{L}_{2,\emptyset}\). Next, fix \(\emptyset\neq F\subseteq[d]\). Since the parametrisation of Theorem 4.2.3 preserves inclusions, we have that \((\mathcal{L}_{1}\wedge\mathcal{L}_{2})_{F}\subseteq\mathcal{L}_{1,F}\cap\mathcal{L}_{2,F}\). For the reverse inclusion, take \(a\in\mathcal{L}_{1,F}\cap\mathcal{L}_{2,F}\). Since \((\mathcal{L}_{1}\wedge\mathcal{L}_{2})_{F}\) is an ideal, it suffices to show that \(aa^{*}\in(\mathcal{L}_{1}\wedge\mathcal{L}_{2})_{F}\). 
Since \(a\in\mathcal{L}_{1,F}\cap\mathcal{L}_{2,F}\), there exist \(f,g\in B^{(\overline{\pi}_{X},\overline{t}_{X})}_{(\underline{0},\underline{1}_ {F}]}\) such that \[\overline{\pi}_{X}(a)+f\in\mathfrak{J}^{\mathcal{L}_{1}}\quad\text{and}\quad \overline{\pi}_{X}(a)+g\in\mathfrak{J}^{\mathcal{L}_{2}}.\] Consider the element \[h:=\overline{\pi}_{X}(a)g^{*}+f\overline{\pi}_{X}(a)^{*}+fg^{*}.\] Note that \[\overline{\pi}_{X}(A)B^{(\overline{\pi}_{X},\overline{t}_{X})}_{(\underline{0},\underline{1}_{F}]}\subseteq B^{(\overline{\pi}_{X},\overline{t}_{X})}_{( \underline{0},\underline{1}_{F}]}\quad\text{and}\quad B^{(\overline{\pi}_{X}, \overline{t}_{X})}_{(\underline{0},\underline{1}_{F}]}\overline{\pi}_{X}(A) \subseteq B^{(\overline{\pi}_{X},\overline{t}_{X})}_{(\underline{0},\underline{1} _{F}]},\] and recall that \(B^{(\overline{\pi}_{X},\overline{t}_{X})}_{(\underline{0},\underline{1}_{F}]}\) is a C*-algebra. Hence \(h\in B^{(\overline{\pi}_{X},\overline{t}_{X})}_{(\underline{0},\underline{1}_ {F}]}\), and we obtain that \[\overline{\pi}_{X}(aa^{*})+h=(\overline{\pi}_{X}(a)+f)(\overline{\pi}_{X}(a)+g)^{ *}\in\mathfrak{J}^{\mathcal{L}_{1}}\cap\mathfrak{J}^{\mathcal{L}_{2}}\equiv \mathfrak{J}.\] By definition this means that \(aa^{*}\in\mathcal{L}_{F}^{\mathfrak{J}}\equiv(\mathcal{L}_{1}\wedge\mathcal{L}_{2})_ {F}\), as required. We have the following characterisation of the operation \(\vee\). **Proposition 4.2.7**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) be NT-\(2^{d}\)-tuples of \(X\). Then_ \[(\mathcal{L}_{1}\vee\mathcal{L}_{2})_{F}=\begin{cases}\overline{\pi}_{X}^{-1}( \mathfrak{J}^{\mathcal{L}_{1}}+\mathfrak{J}^{\mathcal{L}_{2}})&\text{ if }F=\emptyset,\\ \left[\cdot\right]^{-1}_{(\mathcal{L}_{1}\vee\mathcal{L}_{2})_{\emptyset}} \left[\big{(}\left(\mathcal{L}_{1,F}+\mathcal{L}_{2,F}+(\mathcal{L}_{1}\vee \mathcal{L}_{2})_{\emptyset}\right)/(\mathcal{L}_{1}\vee\mathcal{L}_{2})_{ \emptyset}\big{)}^{(d-1)}\right]&\text{ if }\emptyset\neq F\subseteq[d].\end{cases}\] **Proof.** For notational convenience, we set \(\mathcal{L}:=\mathcal{L}_{1}\vee\mathcal{L}_{2}\) and \(\mathfrak{J}:=\mathfrak{J}^{\mathcal{L}_{1}}+\mathfrak{J}^{\mathcal{L}_{2}}\), so that \(\mathcal{L}\equiv\mathcal{L}^{3}\). For \(F=\emptyset\) and \(a\in A\), we have that \[a\in\mathcal{L}_{\emptyset}\iff Q_{\mathfrak{J}}(\overline{\pi}_{X}(a))=0\iff \overline{\pi}_{X}(a)\in\mathfrak{J}\iff a\in\overline{\pi}_{X}^{-1}(\mathfrak{ J})=\overline{\pi}_{X}^{-1}(\mathfrak{J}^{\mathcal{L}_{1}}+\mathfrak{J}^{ \mathcal{L}_{2}}).\] Consequently, we obtain that \(\mathcal{L}_{\emptyset}=\overline{\pi}_{X}^{-1}(\mathfrak{J}^{\mathcal{L}_{1}} +\mathfrak{J}^{\mathcal{L}_{2}})\), as required. Next, in a slight abuse of notation, we denote by \(\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{\emptyset}\) the \(2^{d}\)-tuple of \(X\) defined by \[(\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{\emptyset})_{F}:=\mathcal{L}_{1, F}+\mathcal{L}_{2,F}+\mathcal{L}_{\emptyset}\text{ for all }F\subseteq[d].\] Note that \(\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{\emptyset}\) consists of ideals and that \(\mathcal{L}_{\emptyset}\subseteq(\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{ \emptyset})_{F}\) for all \(F\subseteq[d]\), so we can make sense of the \(2^{d}\)-tuple \([\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{\emptyset}]_{\mathcal{L}_{ \emptyset}}\) of \([X]_{\mathcal{L}_{\emptyset}}\). 
First we check that the family of ideals \([\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{\emptyset}]_{\mathcal{L}_{\emptyset}}\) is an (E)-\(2^{d}\)-tuple of \([X]_{\mathcal{L}_{\emptyset}}\) that is invariant and partially ordered. The fact that \([\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{\emptyset}]_{\mathcal{L}_{\emptyset}}\) is invariant and partially ordered follows from the corresponding properties of the NT-\(2^{d}\)-tuples \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\), as well as the fact that \(\mathcal{L}_{\emptyset}\) is positively invariant for \(X\). Next recall that \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}\) is an (M)-\(2^{d}\)-tuple, and thus it is contained in \(\mathcal{I}([X]_{\mathcal{L}_{\emptyset}})\). Hence, in order to show that \([\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{\emptyset}]_{\mathcal{L}_{\emptyset}}\) is an (E)-\(2^{d}\)-tuple, it suffices to show that \[[\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{\emptyset}]_{\mathcal{L}_{\emptyset}}\subseteq[\mathcal{L}]_{\mathcal{L}_{\emptyset}}.\] In turn, it suffices to show that \[\mathcal{L}_{1,F}+\mathcal{L}_{2,F}+\mathcal{L}_{\emptyset}\subseteq\mathcal{L}_{F}\text{ for all }F\subseteq[d]. \tag{4.1}\] This is immediate, since \(\mathfrak{J}^{\mathcal{L}_{1}},\mathfrak{J}^{\mathcal{L}_{2}}\subseteq\mathfrak{J}\) and the parametrisation of Theorem 4.2.3 respects inclusions. Thus \(\mathcal{L}_{1},\mathcal{L}_{2}\subseteq\mathcal{L}\), and by definition \(\mathcal{L}_{\emptyset}\subseteq\mathcal{L}_{F}\), showing that (4.1) holds. Hence we may consider the \((d-1)\)-iteration \([\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{\emptyset}]^{(d-1)}_{\mathcal{L}_{\emptyset}}\). For notational convenience, let \(\mathcal{L}^{\prime}\) be the \(2^{d}\)-tuple of \(X\) defined by \[\mathcal{L}^{\prime}_{F}:=\begin{cases}\mathcal{L}_{\emptyset}&\text{if }F=\emptyset,\\ \left[\cdot\right]^{-1}_{\mathcal{L}_{\emptyset}}\big{(}[\mathcal{L}_{1,F}+\mathcal{L}_{2,F}+\mathcal{L}_{\emptyset}]^{(d-1)}_{\mathcal{L}_{\emptyset}}\big{)}&\text{if }\emptyset\neq F\subseteq[d],\end{cases}\] as per the statement of the proposition. Note that \(\mathcal{L}^{\prime}\) consists of ideals and satisfies \(\mathcal{L}^{\prime}_{\emptyset}\subseteq\mathcal{L}^{\prime}_{F}\) for all \(F\subseteq[d]\). Moreover, we have that \([\mathcal{L}^{\prime}]_{\mathcal{L}_{\emptyset}}=[\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{\emptyset}]^{(d-1)}_{\mathcal{L}_{\emptyset}}\) is the (M)-\(2^{d}\)-tuple of \([X]_{\mathcal{L}_{\emptyset}}\) that induces \(\mathfrak{J}_{[\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{\emptyset}]_{\mathcal{L}_{\emptyset}}}\) by Theorem 3.4.7. In particular, \(\mathcal{L}^{\prime}\) is an NT-\(2^{d}\)-tuple of \(X\) by an application of Proposition 4.1.8. It now suffices to show that \(\mathfrak{J}=\mathfrak{J}^{\mathcal{L}^{\prime}}\), as the parametrisation of Theorem 4.2.3 then yields that \[\mathcal{L}\equiv\mathcal{L}^{\mathfrak{J}}=\mathcal{L}^{\mathfrak{J}^{\mathcal{L}^{\prime}}}=\mathcal{L}^{\prime},\] as required. To this end, we construct the following commutative diagram of \(*\)-epimorphisms, where \(Q\) is the usual lift of \(X\to[X]_{\mathcal{L}_{\emptyset}}\). The maps \[Q_{\mathfrak{J}}\equiv(Q_{\mathfrak{J}}\circ\overline{\pi}_{X})\times(Q_{\mathfrak{J}}\circ\overline{t}_{X})\quad\text{and}\quad Q_{\mathfrak{J}^{\mathcal{L}^{\prime}}}\equiv(Q_{\mathfrak{J}^{\mathcal{L}^{\prime}}}\circ\overline{\pi}_{X})\times(Q_{\mathfrak{J}^{\mathcal{L}^{\prime}}}\circ\overline{t}_{X})\] are the canonical quotient maps. 
The map \(\Phi_{\mathfrak{J}}\) is induced by the injective Nica-covariant representation of \([X]_{\mathcal{L}_{\emptyset}}\) obtained by applying item (iii) of Proposition 4.1.9 for \((\pi,t):=(Q_{\mathfrak{J}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}}\circ\overline{t}_{X})\). The map \(\Phi_{\mathfrak{J}^{\mathcal{L}^{\prime}}}\) is obtained analogously, using \((Q_{\mathfrak{J}^{\mathcal{L}^{\prime}}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}^{\mathcal{L}^{\prime}}}\circ\overline{t}_{X})\) in place of \((Q_{\mathfrak{J}}\circ\overline{\pi}_{X},Q_{\mathfrak{J}}\circ\overline{t}_{X})\) and the fact that \(\mathcal{L}^{\prime}_{\emptyset}=\mathcal{L}_{\emptyset}\). Since each map is canonical, it suffices to show that the kernel \(\ker\Phi_{\mathfrak{J}}=Q(\mathfrak{J})\) coincides with the kernel \(\ker\Phi_{\mathfrak{J}^{\mathcal{L}^{\prime}}}=Q(\mathfrak{J}^{\mathcal{L}^{\prime}})\). Then \(\mathcal{NT}_{X}/\mathfrak{J}\cong\mathcal{NT}_{X}/\mathfrak{J}^{\mathcal{L}^{\prime}}\) by the map \(f+\mathfrak{J}\mapsto f+\mathfrak{J}^{\mathcal{L}^{\prime}}\) for all \(f\in\mathcal{NT}_{X}\). To this end, we have the following three claims. _Claim 1. With the aforementioned notation, we have that_ \[Q(\mathfrak{J}^{\mathcal{L}^{\prime}})=\mathfrak{J}_{[\mathcal{L}^{\prime}]_{\mathcal{L}_{\emptyset}}}^{(\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}},\overline{t}_{[X]_{\mathcal{L}_{\emptyset}}})}.\] Proof of Claim 1.: By an application of item (i) of Proposition 4.2.1, we have a canonical \(*\)-isomorphism \[\mathcal{NT}_{X}/\mathfrak{J}^{\mathcal{L}^{\prime}}\cong\mathcal{NO}([\mathcal{L}^{\prime}]_{\mathcal{L}^{\prime}_{\emptyset}},[X]_{\mathcal{L}^{\prime}_{\emptyset}})=\mathcal{NT}_{[X]_{\mathcal{L}_{\emptyset}}}/\mathfrak{J}_{[\mathcal{L}^{\prime}]_{\mathcal{L}_{\emptyset}}}^{(\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}},\overline{t}_{[X]_{\mathcal{L}_{\emptyset}}})},\] using that \(\mathcal{L}^{\prime}\) is an NT-\(2^{d}\)-tuple and that \(\mathcal{L}^{\prime}_{\emptyset}=\mathcal{L}_{\emptyset}\). This \(*\)-isomorphism is induced by \(Q\) in the sense that it maps \(f+\mathfrak{J}^{\mathcal{L}^{\prime}}\) to \(Q(f)+\mathfrak{J}_{[\mathcal{L}^{\prime}]_{\mathcal{L}_{\emptyset}}}^{(\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}},\overline{t}_{[X]_{\mathcal{L}_{\emptyset}}})}\) for all \(f\in\mathcal{NT}_{X}\). It follows that \(Q(\mathfrak{J}^{\mathcal{L}^{\prime}})=\mathfrak{J}_{[\mathcal{L}^{\prime}]_{\mathcal{L}_{\emptyset}}}^{(\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}},\overline{t}_{[X]_{\mathcal{L}_{\emptyset}}})}\), as required. _Claim 2. Let \(\mathcal{L}^{\prime}_{1}\) and \(\mathcal{L}^{\prime}_{2}\) be the \(2^{d}\)-tuples of \(X\) defined by \(\mathcal{L}^{\prime}_{1,F}:=\mathcal{L}_{1,F}+\mathcal{L}_{\emptyset}\) and \(\mathcal{L}^{\prime}_{2,F}:=\mathcal{L}_{2,F}+\mathcal{L}_{\emptyset}\) for all \(F\subseteq[d]\). 
Then \([\mathcal{L}^{\prime}_{1}]_{\mathcal{L}_{\emptyset}}\) and \([\mathcal{L}^{\prime}_{2}]_{\mathcal{L}_{\emptyset}}\) are (E)-\(2^{d}\)-tuples of \([X]_{\mathcal{L}_{\emptyset}}\) that consist of ideals, and_ \[\mathfrak{J}_{[\mathcal{L}^{\prime}]_{\mathcal{L}_{\emptyset}}}^{(\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}},\overline{t}_{[X]_{\mathcal{L}_{\emptyset}}})}=\mathfrak{J}_{[\mathcal{L}^{\prime}_{1}]_{\mathcal{L}_{\emptyset}}}^{(\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}},\overline{t}_{[X]_{\mathcal{L}_{\emptyset}}})}+\mathfrak{J}_{[\mathcal{L}^{\prime}_{2}]_{\mathcal{L}_{\emptyset}}}^{(\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}},\overline{t}_{[X]_{\mathcal{L}_{\emptyset}}})}.\] Proof of Claim 2.: Both \([\mathcal{L}^{\prime}_{1}]_{\mathcal{L}_{\emptyset}}\) and \([\mathcal{L}^{\prime}_{2}]_{\mathcal{L}_{\emptyset}}\) consist of ideals of \([A]_{\mathcal{L}_{\emptyset}}\) and are (E)-\(2^{d}\)-tuples of \([X]_{\mathcal{L}_{\emptyset}}\), since both are contained in the (E)-\(2^{d}\)-tuple \([\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{\emptyset}]_{\mathcal{L}_{\emptyset}}\) of \([X]_{\mathcal{L}_{\emptyset}}\). Moreover, it is routine to check that \([\mathcal{L}^{\prime}_{1}]_{\mathcal{L}_{\emptyset}}+[\mathcal{L}^{\prime}_{2}]_{\mathcal{L}_{\emptyset}}=[\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{\emptyset}]_{\mathcal{L}_{\emptyset}}\). Hence we obtain that \[\mathfrak{J}_{[\mathcal{L}^{\prime}]_{\mathcal{L}_{\emptyset}}}^{(\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}},\overline{t}_{[X]_{\mathcal{L}_{\emptyset}}})}=\mathfrak{J}_{[\mathcal{L}_{1}+\mathcal{L}_{2}+\mathcal{L}_{\emptyset}]_{\mathcal{L}_{\emptyset}}}^{(\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}},\overline{t}_{[X]_{\mathcal{L}_{\emptyset}}})}=\mathfrak{J}_{[\mathcal{L}^{\prime}_{1}]_{\mathcal{L}_{\emptyset}}+[\mathcal{L}^{\prime}_{2}]_{\mathcal{L}_{\emptyset}}}^{(\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}},\overline{t}_{[X]_{\mathcal{L}_{\emptyset}}})}=\mathfrak{J}_{[\mathcal{L}^{\prime}_{1}]_{\mathcal{L}_{\emptyset}}}^{(\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}},\overline{t}_{[X]_{\mathcal{L}_{\emptyset}}})}+\mathfrak{J}_{[\mathcal{L}^{\prime}_{2}]_{\mathcal{L}_{\emptyset}}}^{(\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}},\overline{t}_{[X]_{\mathcal{L}_{\emptyset}}})},\] using Proposition 3.1.5 in the final equality. _Claim 3. With the aforementioned notation, we have that_ \[Q(\mathfrak{J})=\mathfrak{J}_{[\mathcal{L}^{\prime}_{1}]_{\mathcal{L}_{\emptyset}}}^{(\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}},\overline{t}_{[X]_{\mathcal{L}_{\emptyset}}})}+\mathfrak{J}_{[\mathcal{L}^{\prime}_{2}]_{\mathcal{L}_{\emptyset}}}^{(\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}},\overline{t}_{[X]_{\mathcal{L}_{\emptyset}}})}.\] Proof of Claim 3.: For notational convenience, we will denote the right hand side by \(\mathfrak{J}^{\prime}\). For the forward inclusion, we show that \(Q(f)\in\mathfrak{J}^{\prime}\) for all generators \(f\) of \(\mathfrak{J}^{\mathcal{L}_{1}}\). The same holds for \(\mathfrak{J}^{\mathcal{L}_{2}}\) by symmetry, giving that \(Q(\mathfrak{J})\subseteq\mathfrak{J}^{\prime}\). To this end, we resort to Proposition 4.2.2. 
Note that \[Q(\overline{\pi}_{X}(\mathcal{L}_{1,\emptyset}))=\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}}([\mathcal{L}_{1,\emptyset}]_{\mathcal{L}_{\emptyset}})=\{0\}\subseteq\mathfrak{J}^{\prime},\] since \(\mathcal{L}_{1,\emptyset}\subseteq\mathcal{L}_{\emptyset}\) by (4.1). Next, fix \(\emptyset\neq F\subseteq[d]\), \(a\in\mathcal{L}_{1,F}\) and \(k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}})\) such that \([\phi_{\underline{n}}(a)]_{\mathcal{L}_{1,\emptyset}}=[k_{\underline{n}}]_{\mathcal{L}_{1,\emptyset}}\) for all \(\underline{0}\neq\underline{n}\leq\underline{1}_{F}\). Since \(\mathcal{L}_{1,\emptyset}\subseteq\mathcal{L}_{\emptyset}\), we also have that \([\phi_{\underline{n}}(a)]_{\mathcal{L}_{\emptyset}}=[k_{\underline{n}}]_{\mathcal{L}_{\emptyset}}\) for all \(\underline{0}\neq\underline{n}\leq\underline{1}_{F}\). Consequently, we obtain that \[Q\big{(}\overline{\pi}_{X}(a)+\sum_{\underline{0}\neq\underline{n}\leq\underline{1}_{F}}(-1)^{|\underline{n}|}\overline{\psi}_{X,\underline{n}}(k_{\underline{n}})\big{)}=\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}}([a]_{\mathcal{L}_{\emptyset}})+\sum_{\underline{0}\neq\underline{n}\leq\underline{1}_{F}}(-1)^{|\underline{n}|}\overline{\psi}_{[X]_{\mathcal{L}_{\emptyset}},\underline{n}}([k_{\underline{n}}]_{\mathcal{L}_{\emptyset}})\] \[=\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}}([a]_{\mathcal{L}_{\emptyset}})+\sum_{\underline{0}\neq\underline{n}\leq\underline{1}_{F}}(-1)^{|\underline{n}|}\overline{\psi}_{[X]_{\mathcal{L}_{\emptyset}},\underline{n}}([\phi_{\underline{n}}(a)]_{\mathcal{L}_{\emptyset}})\] \[=\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}}([a]_{\mathcal{L}_{\emptyset}})\overline{q}_{[X]_{\mathcal{L}_{\emptyset}},F}\in\mathfrak{J}_{[\mathcal{L}_{1}^{\prime}]_{\mathcal{L}_{\emptyset}}}^{(\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}},\overline{t}_{[X]_{\mathcal{L}_{\emptyset}}})}\subseteq\mathfrak{J}^{\prime},\] using that \(a\in\mathcal{L}_{1,F}\subseteq\mathcal{L}_{1,F}^{\prime}\). For the reverse inclusion, it suffices to show that \(Q(\mathfrak{J})\) contains the generators of \(\mathfrak{J}_{[\mathcal{L}_{1}^{\prime}]_{\mathcal{L}_{\emptyset}}}^{(\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}},\overline{t}_{[X]_{\mathcal{L}_{\emptyset}}})}\). The same holds for \(\mathfrak{J}_{[\mathcal{L}_{2}^{\prime}]_{\mathcal{L}_{\emptyset}}}^{(\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}},\overline{t}_{[X]_{\mathcal{L}_{\emptyset}}})}\) by symmetry, concluding the proof of the claim. Fix \(\emptyset\neq F\subseteq[d]\) and \(a\in\mathcal{L}_{1,F}^{\prime}\equiv\mathcal{L}_{1,F}+\mathcal{L}_{\emptyset}\). Then in particular \(a\in\mathcal{L}_{F}\) by (4.1). By item (v) of Proposition 4.1.9, for each \(\underline{0}\neq\underline{n}\leq\underline{1}_{F}\) there exists \(k_{\underline{n}}\in\mathcal{K}(X_{\underline{n}})\) such that \([\phi_{\underline{n}}(a)]_{\mathcal{L}_{\emptyset}}=[k_{\underline{n}}]_{\mathcal{L}_{\emptyset}}\) and \[\overline{\pi}_{X}(a)+\sum_{\underline{0}\neq\underline{n}\leq\underline{1}_{F}}(-1)^{|\underline{n}|}\overline{\psi}_{X,\underline{n}}(k_{\underline{n}})\in\mathfrak{J}\equiv\mathfrak{J}^{\mathcal{L}}.\] Thus we have that \[\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}}([a]_{\mathcal{L}_{\emptyset}})\overline{q}_{[X]_{\mathcal{L}_{\emptyset}},F}=\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}}([a]_{\mathcal{L}_{\emptyset}})+\sum_{\underline{0}\neq\underline{n}\leq\underline{1}_{F}}(-1)^{|\underline{n}|}\overline{\psi}_{[X]_{\mathcal{L}_{\emptyset}},\underline{n}}([\phi_{\underline{n}}(a)]_{\mathcal{L}_{\emptyset}})=Q\big{(}\overline{\pi}_{X}(a)+\sum_{\underline{0}\neq\underline{n}\leq\underline{1}_{F}}(-1)^{|\underline{n}|}\overline{\psi}_{X,\underline{n}}(k_{\underline{n}})\big{)}\in Q(\mathfrak{J}),\] as required. 
Using Claims 1, 2 and 3 (and adopting the nomenclature therein), we conclude that \[\ker\Phi_{\mathfrak{J}^{\mathcal{L}^{\prime}}}=Q(\mathfrak{J}^{\mathcal{L}^{\prime}})=\mathfrak{J}_{[\mathcal{L}^{\prime}]_{\mathcal{L}_{\emptyset}}}^{(\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}},\overline{t}_{[X]_{\mathcal{L}_{\emptyset}}})}=\mathfrak{J}_{[\mathcal{L}_{1}^{\prime}]_{\mathcal{L}_{\emptyset}}}^{(\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}},\overline{t}_{[X]_{\mathcal{L}_{\emptyset}}})}+\mathfrak{J}_{[\mathcal{L}_{2}^{\prime}]_{\mathcal{L}_{\emptyset}}}^{(\overline{\pi}_{[X]_{\mathcal{L}_{\emptyset}}},\overline{t}_{[X]_{\mathcal{L}_{\emptyset}}})}=Q(\mathfrak{J})=\ker\Phi_{\mathfrak{J}},\] and the proof is complete. By making minor changes to Theorem 4.2.3, we can parametrise the gauge-invariant ideals of \(\mathcal{NO}(\mathcal{K},X)\) for any relative \(2^{d}\)-tuple \(\mathcal{K}\) of \(X\). In particular, we can parametrise the gauge-invariant ideals of \(\mathcal{NO}_{X}\). We begin with a definition. **Definition 4.2.8**.: Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{K}\) be a relative \(2^{d}\)-tuple of \(X\) and let \(\mathcal{L}\) be a \(2^{d}\)-tuple of \(X\). We say that \(\mathcal{L}\) is a _\(\mathcal{K}\)-relative NO-\(2^{d}\)-tuple (of \(X\))_ if \(\mathcal{L}\) is an NT-\(2^{d}\)-tuple of \(X\) and \(\mathcal{K}\subseteq\mathcal{L}\). We refer to the \(\mathcal{I}\)-relative NO-\(2^{d}\)-tuples of \(X\) simply as _NO-\(2^{d}\)-tuples (of \(X\))._ It will follow from Theorem 4.2.11 that the set of \(\mathcal{K}\)-relative NO-\(2^{d}\)-tuples of \(X\) is non-empty. As we have seen in Proposition 4.1.7, NT-\(2^{d}\)-tuples constitute the higher-rank analogue of Katsura's T-pairs, and the same holds for NO-\(2^{d}\)-tuples and Katsura's O-pairs. **Proposition 4.2.9**.: _Let \(X=\{X_{n}\}_{n\in\mathbb{Z}_{+}}\) be a product system with coefficients in a C*-algebra \(A\). Then the NO-\(2\)-tuples of \(X\) are exactly the O-pairs of \(X_{1}\)._ **Proof.** This is immediate by Proposition 4.1.7, since \(\mathcal{I}_{\{1\}}=\mathcal{J}_{\{1\}}=J_{X_{1}}\). The lattice operations of Definition 4.2.5 restrict to the set of \(\mathcal{K}\)-relative NO-\(2^{d}\)-tuples of \(X\). **Proposition 4.2.10**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(\mathcal{K}\) be a relative \(2^{d}\)-tuple of \(X\) and let \(\mathcal{L}_{1}\) and \(\mathcal{L}_{2}\) be \(\mathcal{K}\)-relative NO-\(2^{d}\)-tuples of \(X\). Then \(\mathcal{L}_{1}\vee\mathcal{L}_{2}\) and \(\mathcal{L}_{1}\wedge\mathcal{L}_{2}\) are \(\mathcal{K}\)-relative NO-\(2^{d}\)-tuples of \(X\)._ **Proof.** For \(F\subseteq[d]\), we have that \(\mathcal{K}_{F}\subseteq\mathcal{L}_{1,F}\) and \(\mathcal{K}_{F}\subseteq\mathcal{L}_{2,F}\) by definition. Hence \[\mathcal{K}_{F}\subseteq\mathcal{L}_{1,F}\cap\mathcal{L}_{2,F}=(\mathcal{L}_{1}\wedge\mathcal{L}_{2})_{F},\] using Proposition 4.2.6 in the equality. Hence \(\mathcal{L}_{1}\wedge\mathcal{L}_{2}\) is a \(\mathcal{K}\)-relative NO-\(2^{d}\)-tuple of \(X\). Next, note that \[\mathcal{K}_{\emptyset}\subseteq\mathcal{L}_{1,\emptyset}\subseteq\overline{\pi}_{X}^{-1}(\mathfrak{J}^{\mathcal{L}_{1}}+\mathfrak{J}^{\mathcal{L}_{2}})=(\mathcal{L}_{1}\vee\mathcal{L}_{2})_{\emptyset}\] by Proposition 4.2.7. Now fix \(\emptyset\neq F\subseteq[d]\). 
Since \(\mathcal{K}_{F}\subseteq\mathcal{L}_{1,F}+\mathcal{L}_{2,F}+(\mathcal{L}_{1} \vee\mathcal{L}_{2})_{\emptyset}\), we obtain that \[[\mathcal{K}_{F}]_{(\mathcal{L}_{1}\vee\mathcal{L}_{2})_{\emptyset}}\subseteq[ \mathcal{L}_{1,F}+\mathcal{L}_{2,F}+(\mathcal{L}_{1}\vee\mathcal{L}_{2})_{ \emptyset}]_{(\mathcal{L}_{1}\vee\mathcal{L}_{2})_{\emptyset}}\subseteq[ \mathcal{L}_{1,F}+\mathcal{L}_{2,F}+(\mathcal{L}_{1}\vee\mathcal{L}_{2})_{ \emptyset}]_{(\mathcal{L}_{1}\vee\mathcal{L}_{2})_{\emptyset}}^{(d-1)},\] and by Proposition 4.2.7 we have that \(\mathcal{K}_{F}\subseteq(\mathcal{L}_{1}\vee\mathcal{L}_{2})_{F}\), completing the proof. For a relative \(2^{d}\)-tuple \(\mathcal{K}\) of \(X\), we write \(Q_{\mathcal{K}}\colon\mathcal{NT}_{X}\to\mathcal{NO}(\mathcal{K},X)\) for the canonical quotient map. Equivariance of \(Q_{\mathcal{K}}\) gives that \(\mathfrak{J}\) is a gauge-invariant ideal of \(\mathcal{NO}(\mathcal{K},X)\) if and only if \(\mathfrak{J}=Q_{\mathcal{K}}(\mathfrak{J}^{\mathcal{I}})\) for the gauge-invariant ideal \(\mathfrak{J}^{\prime}:=Q_{\mathcal{K}}^{-1}(\mathfrak{J})\) of \(\mathcal{NT}_{X}\). With this we can adapt the parametrisation of Theorem 4.2.3 to account for \(\mathcal{NO}(\mathcal{K},X)\). **Theorem 4.2.11**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \(\mathcal{K}\) be a relative \(2^{d}\)-tuple of \(X\). Equip the set of \(\mathcal{K}\)-relative NO-\(2^{d}\)-tuples of \(X\) with the lattice structure of Definition 4.2.5 (suitably restricted) and equip the set of gauge-invariant ideals of \(\mathcal{NO}(\mathcal{K},X)\) with the usual lattice structure. Then these sets are isomorphic as lattices via the map_ \[\mathcal{L}\mapsto Q_{\mathcal{K}}(\mathfrak{J}^{\mathcal{L}}),\text{ for the canonical quotient map }Q_{\mathcal{K}}\colon\mathcal{NT}_{X}\to\mathcal{NO}(\mathcal{K},X), \tag{4.2}\] _for all \(\mathcal{K}\)-relative NO-\(2^{d}\)-tuples \(\mathcal{L}\) of \(X\). Moreover, this map preserves inclusions._ **Proof.** First note that the proposed lattice structure on the set of \(\mathcal{K}\)-relative NO-\(2^{d}\)-tuples of \(X\) is well-defined by Proposition 4.2.10. The comments preceding the statement of the theorem show that (4.2) constitutes a well-defined map. Next we check that the mapping is injective and surjective. To this end, first we show that \(\mathfrak{J}_{\mathcal{K}}^{(\overline{\pi}_{X},\overline{\mathfrak{J}}_{X}) }\subseteq\mathfrak{J}^{\mathcal{L}}\) whenever \(\mathcal{L}\) is a \(\mathcal{K}\)-relative NO-\(2^{d}\)-tuple of \(X\). Fix \(F\subseteq[d]\) and \(a\in\mathcal{K}_{F}\). It suffices to show that \(\overline{\pi}_{X}(a)\overline{q}_{X,F}\in\mathfrak{J}^{\mathcal{L}}\). Since \(\mathcal{L}\) is a \(\mathcal{K}\)-relative NO-\(2^{d}\)-tuple of \(X\), we have that \(a\in\mathcal{K}_{F}\subseteq\mathcal{L}_{F}\). Likewise, because \(\phi_{\underline{n}}(a)\in\mathcal{K}(X_{\underline{n}})\) for all \(\underline{0}\neq\underline{n}\leq\underline{1}_{F}\), by Proposition 4.2.2 we obtain that \[\overline{\pi}_{X}(a)\overline{q}_{X,F}=\overline{\pi}_{X}(a)+\sum\{(-1)^{| \underline{n}|}\overline{\psi}_{X,\underline{n}}(\phi_{\underline{n}}(a))\mid \underline{0}\neq\underline{n}\leq\underline{1}_{F}\}\in\mathfrak{J}^{ \mathcal{L}},\] as required. Consequently, we have the following commutative diagram of canonical \(*\)-epimorphisms, so that \(\ker\Phi=Q_{\mathcal{K}}(\mathfrak{J}^{\mathcal{L}})\). 
Thus we obtain a \(*\)-isomorphism \[\tilde{\Phi}\colon\mathcal{NO}(\mathcal{K},X)/Q_{\mathcal{K}}(\mathfrak{J}^{ \mathcal{L}})\to\mathcal{NT}_{X}/\mathfrak{J}^{\mathcal{L}};\tilde{\Phi}(f+Q_{ \mathcal{K}}(\mathfrak{J}^{\mathcal{L}}))=\Phi(f)\text{ for all }f\in\mathcal{NO}( \mathcal{K},X). \tag{4.3}\] For injectivity of the map (4.2), suppose we have \(\mathcal{K}\)-relative NO-\(2^{d}\)-tuples \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\) of \(X\) such that \(Q_{\mathcal{K}}(\mathfrak{J}^{\mathcal{L}})=Q_{\mathcal{K}}(\mathfrak{J}^{ \mathcal{L}^{\prime}})\). Applying (4.3) for \(\mathcal{L}\) and \(\mathcal{L}^{\prime}\), we obtain a \(*\)-isomorphism \[\mathcal{NT}_{X}/\mathfrak{J}^{\mathcal{L}}\to\mathcal{NT}_{X}/\mathfrak{J}^{ \mathcal{L}^{\prime}};Q_{\mathfrak{J}^{\mathcal{L}}}(f)\mapsto Q_{\mathfrak{J}^{ \mathcal{L}^{\prime}}}(f)\text{ for all }f\in\mathcal{NT}_{X}.\] In turn, it follows that \(\mathfrak{J}^{\mathcal{L}}=\mathfrak{J}^{\mathcal{L}^{\prime}}\) and hence \(\mathcal{L}=\mathcal{L}^{\prime}\) by Theorem 4.2.3. For surjectivity of the map (4.2), let \(\mathfrak{J}\) be a gauge-invariant ideal of \(\mathcal{NO}(\mathcal{K},X)\). Then \(Q_{\mathcal{K}}^{-1}(\mathfrak{J})\) is a gauge-invariant ideal of \(\mathcal{NT}_{X}\) and thus \(Q_{\mathcal{K}}^{-1}(\mathfrak{J})=\mathfrak{J}^{\mathcal{L}}\) for a unique NT-\(2^{d}\)-tuple \(\mathcal{L}\) of \(X\) by Theorem 4.2.3. It suffices to show that \(\mathcal{K}\subseteq\mathcal{L}\). To this end, let \(\mathcal{L}^{\prime}\) be the NT-\(2^{d}\)-tuple so that \[\mathfrak{J}^{\mathcal{L}^{\prime}}=\mathfrak{J}_{\mathcal{K}}^{(\overline{\pi}_{ X},\overline{\mathfrak{J}}_{X})}.\] We have that \(\mathcal{K}\subseteq\mathcal{L}^{\prime}\) by definition of \(\mathcal{L}^{\prime}\). Moreover, we have that \[\mathfrak{J}^{\mathcal{L}^{\prime}}=\mathfrak{J}_{\mathcal{K}}^{(\pi_{X},\vec{t }_{X})}\subseteq Q_{\mathcal{K}}^{-1}(\mathfrak{J})=\mathfrak{J}^{\mathcal{L}},\] and so \(\mathcal{L}^{\prime}\subseteq\mathcal{L}\) since the parametrisation of Theorem 4.2.3 respects inclusions. Thus \(\mathcal{K}\subseteq\mathcal{L}^{\prime}\subseteq\mathcal{L}\), as required. The map (4.2) respects inclusions and the lattice structure since it is a restriction of the first parametrisation map of Theorem 4.2.3 (followed by the \(*\)-homomorphism \(Q_{\mathcal{K}}\)), which satisfies these properties. A direct consequence of Theorem 4.2.11 is that, if \(\mathfrak{J}\) is a gauge-invariant ideal of \(\mathcal{NT}_{X}\), then \(\mathcal{L}^{3}\) is a \(\mathcal{K}\)-relative NO-\(2^{d}\)-tuple if and only if the quotient map \(Q_{\mathfrak{J}}\colon\mathcal{NT}_{X}\to\mathcal{NT}_{X}/\mathfrak{J}\) factors through the quotient map \(Q_{\mathcal{K}}\colon\mathcal{NT}_{X}\to\mathcal{NO}(\mathcal{K},X)\). Applying Theorem 4.2.11 for \(\mathcal{K}=\mathcal{I}\) provides the parametrisation of the gauge-invariant ideals of \(\mathcal{NO}_{X}\). **Corollary 4.2.12**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Equip the set of NO-\(2^{d}\)-tuples of \(X\) with the lattice structure of Definition 4.2.5 (suitably restricted) and equip the set of gauge-invariant ideals of \(\mathcal{NO}_{X}\) with the usual lattice structure. Then these sets are isomorphic as lattices via the map_ \[\mathcal{L}\mapsto Q_{\mathcal{I}}(\mathfrak{J}^{\mathcal{L}}),\text{ for the canonical quotient map }Q_{\mathcal{I}}\colon\mathcal{NT}_{X}\to\mathcal{NO}_{X}, \tag{4.4}\] _for all NO-\(2^{d}\)-tuples \(\mathcal{L}\) of \(X\). 
Moreover, this map respects inclusions._ Theorem 4.2.3 recaptures the parametrisation of Katsura [34], as presented in Theorem 2.2.10. More generally, Theorem 4.2.11 recaptures [34, Proposition 11.9]. ## 5. Applications The applications that we consider in this section pertain to product systems over \(\mathbb{Z}_{+}^{d}\) that are regular, or arise from C*-dynamical systems, or from strong finitely aligned higher-rank graphs, or whose fibres (apart from the coefficient algebra) admit finite frames. We begin by exploring the situation where an ideal can be placed as the \(\mathcal{L}_{\emptyset}\)-member of an NO-\(2^{d}\)-tuple \(\mathcal{L}\), which will be helpful for further examples. ### Participation in an NO-\(2^{d}\)-tuple If \(I\subseteq A\) is an ideal that is positively invariant for \(X\), then the quotient map \(X\to[X]_{I}\) induces a canonical \(*\)-epimorphism \[[\,\cdot\,]_{I}\colon\mathcal{NT}_{X}\to\mathcal{NT}_{[X]_{I}},\] due to the universality of \(\mathcal{NT}_{X}\). It is well known that this map does not in general descend to a canonical \(*\)-epimorphism \(\mathcal{NO}_{X}\to\mathcal{NO}_{[X]_{I}}\), even for \(d=1\) (see Example 5.3.6 for a counterexample). However, using the NO-\(2^{d}\)-tuple machinery, we can determine precisely when this occurs. To this end, we introduce the following definition, modelled after [34, Definition 4.8]. **Definition 5.1.1**.: Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). We say that an ideal \(I\subseteq A\) is _negatively invariant (for \(X\))_ if \[\mathcal{I}_{F}\cap X_{F}^{-1}(I)\subseteq I\text{ for all }\emptyset\neq F \subseteq[d].\] Definition 5.1.1 leads to the following natural extension of [34, Proposition 5.3]. **Proposition 5.1.2**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal. Then \(I\) is negatively invariant for \(X\) if and only if \(\mathcal{I}_{F}\subseteq J_{F}(I,X)\) for all \(\emptyset\neq F\subseteq[d]\)._ **Proof.** Assume that \(I\) is negatively invariant for \(X\). Fix \(\emptyset\neq F\subseteq[d]\) and take \(a\in\mathcal{I}_{F}\). Then \(\phi_{\underline{i}}(a)\in\mathcal{K}(X_{\underline{i}})\) for all \(i\in[d]\) and so \([\phi_{\underline{i}}(a)]_{I}\in\mathcal{K}([X_{\underline{i}}]_{I})\) for all \(i\in[d]\) by Lemma 2.2.2. Moreover, we have that \[aX_{F}^{-1}(I)\subseteq\mathcal{I}_{F}\cap X_{F}^{-1}(I)\subseteq I.\] Hence \(a\in J_{F}(I,X)\) and we conclude that \(\mathcal{I}_{F}\subseteq J_{F}(I,X)\) for all \(\emptyset\neq F\subseteq[d]\), as required. Now assume that \(\mathcal{I}_{F}\subseteq J_{F}(I,X)\) for all \(\emptyset\neq F\subseteq[d]\). Fix \(\emptyset\neq F\subseteq[d]\) and take an element \(a\in\mathcal{I}_{F}\cap X_{F}^{-1}(I)\). We have that \(a\in J_{F}(I,X)\) by assumption, so \(aX_{F}^{-1}(I)\subseteq I\). Since \(a\in X_{F}^{-1}(I)\), we obtain that \(a\in I\). It follows that \(I\) is negatively invariant, completing the proof. In order to motivate further the importance of the \(\mathcal{L}_{\emptyset}\)-member of an NO-\(2^{d}\)-tuple \(\mathcal{L}\), we give the following proposition. **Proposition 5.1.3**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \(\mathcal{L}\) be an NO-\(2^{d}\)-tuple of \(X\). 
If \(\mathcal{L}\) satisfies_ \[\mathcal{I}_{F}([X]_{\mathcal{L}_{\emptyset}})\subseteq(\mathcal{I}_{F}(X)+ \mathcal{L}_{\emptyset})/\mathcal{L}_{\emptyset}\text{ for all }F\subseteq[d],\] _then \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}=\mathcal{I}([X]_{\mathcal{L}_{ \emptyset}})\) and thus \(\mathcal{N}\mathcal{O}([\mathcal{L}]_{\mathcal{L}_{\emptyset}},[X]_{\mathcal{ L}_{\emptyset}})=\mathcal{N}\mathcal{O}_{[X]_{\mathcal{L}_{\emptyset}}}\)._ **Proof.** By assumption, for all \(F\subseteq[d]\) we have that \[\mathcal{I}_{F}([X]_{\mathcal{L}_{\emptyset}})\subseteq(\mathcal{I}_{F}(X)+ \mathcal{L}_{\emptyset})/\mathcal{L}_{\emptyset}\subseteq(\mathcal{L}_{F}+ \mathcal{L}_{\emptyset})/\mathcal{L}_{\emptyset}=[\mathcal{L}_{F}]_{\mathcal{ L}_{\emptyset}}\subseteq\mathcal{I}_{F}([X]_{\mathcal{L}_{\emptyset}}),\] using that \(\mathcal{L}\) is an NO-\(2^{d}\)-tuple in the second inclusion, and that \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}\) is an (M)-\(2^{d}\)-tuple of \([X]_{\mathcal{L}_{\emptyset}}\) by Proposition 4.1.8 in the final inclusion. Hence \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}=\mathcal{I}([X]_{\mathcal{L}_{ \emptyset}})\) and thus by definition \(\mathcal{N}\mathcal{O}([\mathcal{L}]_{\mathcal{L}_{\emptyset}},[X]_{\mathcal{ L}_{\emptyset}})=\mathcal{N}\mathcal{O}_{[X]_{\mathcal{L}_{\emptyset}}}\), as required. **Definition 5.1.4**.: Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal. We say that \(I\)_participates in an NO-\(2^{d}\)-tuple (of \(X\))_ if there exists an NO-\(2^{d}\)-tuple \(\mathcal{L}\) of \(X\) such that \(\mathcal{L}_{\emptyset}=I\). If \(I\) is a positively invariant ideal for \(X\), then negative invariance of \(I\) is necessary and sufficient for the quotient map \(X\to[X]_{I}\) to induce a canonical \(*\)-epimorphism between the corresponding Cuntz-Nica-Pimsner algebras. **Proposition 5.1.5**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal. Then the following are equivalent:_ * \(I\) _participates in an NO-\(2^{d}\)-tuple of_ \(X\)_;_ * \(I\) _is positively and negatively invariant for_ \(X\)_;_ * \(I\) _is positively invariant for_ \(X\) _and the quotient map_ \(X\to[X]_{I}\) _lifts to a (unique) canonical_ \(*\)_-epimorphism_ \(\mathcal{N}\mathcal{O}_{X}\to\mathcal{N}\mathcal{O}_{[X]_{I}}\)_._ **Proof.** [(i)\(\Rightarrow\)(ii)]: Assume that \(I\) participates in an NO-\(2^{d}\)-tuple \(\mathcal{L}\) of \(X\). Then in particular \(I=\mathcal{L}_{\emptyset}\) is positively invariant for \(X\). On the other hand, since \(\mathcal{I}(X)\subseteq\mathcal{L}\) we have that \[\mathcal{I}_{F}(X)\subseteq\mathcal{L}_{F}\subseteq J_{F}(\mathcal{L}_{ \emptyset},X)=J_{F}(I,X)\text{ for all }\emptyset\neq F\subseteq[d].\] Proposition 5.1.2 then gives that \(I\) is negatively invariant for \(X\). [(ii)\(\Rightarrow\)(iii)]: Assume that \(I\) is positively and negatively invariant for \(X\) and let \(\mathcal{L}\) be the \(2^{d}\)-tuple of \(X\) (consisting of ideals) defined by \[\mathcal{L}_{F}:=\mathcal{I}_{F}(X)+I\text{ for all }F\subseteq[d].\] Since \(\mathcal{I}(X)\) is invariant and \(I\) is positively invariant for \(X\), we have that \(\mathcal{L}\) is invariant. Since \(\mathcal{I}(X)\) is partially ordered, we also have that \(\mathcal{L}\) is partially ordered. 
Moreover, by Proposition 5.1.2 we have that \[I\subseteq\mathcal{L}_{F}\equiv\mathcal{I}_{F}(X)+I\subseteq J_{F}(I,X)\text{ for all }\emptyset\neq F\subseteq[d],\] where we also use that \(I\) is positively invariant and so \(I\subseteq J_{F}(I,X)\). Therefore, the family \([\mathcal{L}]_{I}\) is an invariant and partially ordered \(2^{d}\)-tuple of \([X]_{I}\) that consists of ideals and satisfies \([\mathcal{L}]_{I}\subseteq\mathcal{J}([X]_{I})\) by Proposition 4.1.3. Hence \([\mathcal{L}]_{I}\) is an (E)-\(2^{d}\)-tuple of \([X]_{I}\) by Proposition 3.2.3. Consequently we have the canonical \(*\)-epimorphisms \[\mathcal{N}\mathcal{T}_{X}\overset{[\cdot]_{I}}{\longrightarrow}\mathcal{N} \mathcal{T}_{[X]_{I}}\overset{Q}{\longrightarrow}\mathcal{N}\mathcal{O}([ \mathcal{L}]_{I},[X]_{I})\longrightarrow\mathcal{N}\mathcal{O}_{[X]_{I}},\] where the final \(*\)-epimorphism follows from the co-universal property of \(\mathcal{N}\mathcal{O}_{[X]_{I}}\). In order to deduce the required \(*\)-epimorphism, it suffices to close the following diagram by a canonical \(*\)-epimorphism. Hence it suffices to show that the kernel \(\ker Q_{\mathcal{I}(X)}=\mathfrak{J}_{\mathcal{I}(X)}^{(\overline{\pi}_{X}, \overline{\mathfrak{I}}_{X})}\) is contained in the kernel \(\ker Q\circ[\cdot]_{I}\). To this end, recall that the generators of \(\ker Q_{\mathcal{I}(X)}=\mathfrak{J}_{\mathcal{I}(X)}^{(\overline{\pi}_{X}, \overline{\mathfrak{I}}_{X})}\) are of the form \(\overline{\pi}_{X}(a)\overline{q}_{X,F}\), where \(a\in\mathcal{I}_{F}(X)\) and \(F\subseteq[d]\). Fix \(F\subseteq[d]\) and \(a\in\mathcal{I}_{F}(X)\subseteq\mathcal{L}_{F}\), so that \([a]_{I}\in[\mathcal{L}_{F}]_{I}\). Then by definition we have that \[[\overline{\pi}_{X}(a)\overline{q}_{X,F}]_{I}=\overline{\pi}_{[X]_{I}}([a]_{I })\overline{q}_{[X]_{I},F}\in\mathfrak{J}_{[\mathcal{L}]_{I}}^{(\overline{\pi }_{[X]_{I}},\overline{\mathfrak{I}}_{[X]_{I}})}=\ker Q,\] and thus \(\overline{\pi}_{X}(a)\overline{q}_{X,F}\in\ker Q\circ[\cdot]_{I}\). It follows that \(\ker Q_{\mathcal{I}(X)}\subseteq\ker Q\circ[\cdot]_{I}\), as required. [(iii)\(\Rightarrow\)(i)]: Assume that \(I\) is positively invariant for \(X\) and that the quotient map \(X\to[X]_{I}\) lifts to a (unique) canonical \(*\)-epimorphism \(\Phi\colon\mathcal{NO}_{X}\to\mathcal{NO}_{[X]_{I}}\). Then \(\ker\Phi\subseteq\mathcal{NO}_{X}\) is a gauge-invariant ideal, and we can consider the gauge-invariant ideal \[\mathfrak{J}:=Q_{\mathcal{I}(X)}^{-1}(\ker\Phi)\subseteq\mathcal{NT}_{X}.\] In particular we have that \(\mathfrak{J}^{\mathcal{I}(X)}\subseteq\mathfrak{J}\), and so \(\mathcal{L}^{\mathfrak{J}}\) is an NO-\(2^{d}\)-tuple of \(X\) since the parametrisation of Theorem 4.2.3 respects inclusions. We obtain a sequence of canonical \(*\)-isomorphisms \[\mathcal{NO}([\mathcal{L}^{\mathfrak{J}}]_{\mathcal{L}_{0}^{\mathfrak{J}}} ^{\mathfrak{J}},[X]_{\mathcal{L}_{0}^{\mathfrak{J}}})\cong\mathcal{NT}_{X}/ \mathfrak{J}\cong\mathcal{NO}_{X}/Q_{\mathcal{I}(X)}(\mathfrak{J})\equiv \mathcal{NO}_{X}/\ker\Phi\cong\mathcal{NO}_{[X]_{I}},\] where the first \(*\)-isomorphism is given by item (ii) of Proposition 4.2.1. By restricting to the coefficient algebras, we have that \([A]_{\mathcal{L}_{0}^{\mathfrak{J}}}\cong[A]_{I}\) by the map \(a+\mathcal{L}_{0}^{\mathfrak{J}}\mapsto a+I\) for all \(a\in A\). Thus \(\mathcal{L}_{0}^{\mathfrak{J}}=I\), as required. We now collect in one place the main results of this subsection and their consequences. 
**Corollary 5.1.6**.: _Let \(X\) be a strong compactly aligned product system with coefficients in a C*-algebra \(A\). Let \(I\subseteq A\) be an ideal that is positively invariant for \(X\) and satisfies_ \[\mathcal{I}_{F}([X]_{I})=(\mathcal{I}_{F}(X)+I)/I\text{ for all }F\subseteq[d].\] _Let the \(2^{d}\)-tuple \(\mathcal{L}\) of \(X\) be defined by \(\mathcal{L}_{F}:=\mathcal{I}_{F}(X)+I\) for all \(F\subseteq[d]\). Then the following hold:_ * _The_ \(2^{d}\)_-tuple_ \(\mathcal{L}\) _is an NO-_\(2^{d}\)_-tuple of_ \(X\)_._ * \(\mathcal{NO}_{X}/Q_{\mathcal{I}}(\mathfrak{J}^{\mathcal{L}})\cong\mathcal{NO} _{[X]_{I}}\) _canonically, where_ \(Q_{\mathcal{I}}\colon\mathcal{NT}_{X}\to\mathcal{NO}_{X}\) _is the quotient map._ * _If_ \(I\subseteq\bigcap\{\phi_{\underline{\imath}}^{-1}(\mathcal{K}(X_{\underline{ \imath}}))\mid i\in[d]\}\)_, then_ \[Q_{\mathcal{I}}(\mathfrak{J}^{\mathcal{L}})=\langle Q_{\mathcal{I}}(\overline {\pi}_{X}(I))\rangle=\overline{\operatorname{span}}\{t_{X,\underline{\imath }\underline{\imath}}^{\mathcal{I}}(X_{\underline{\imath}\underline{\imath}}) \pi_{X}^{\mathcal{I}}(I)t_{X,\underline{\imath}\underline{\imath}}^{\mathcal{I }}(X_{\underline{\imath}\underline{\imath}})^{*}\mid\underline{\imath}, \underline{\imath}\in\mathbb{Z}_{+}^{d}\},\] _and thus_ \(\mathcal{NO}_{IXI}\) _is a hereditary, full C*-subalgebra of_ \(Q_{\mathcal{I}}(\mathfrak{J}^{\mathcal{L}})\)_._ **Proof.** (i) By assumption we have that \([\mathcal{L}]_{I}=\mathcal{I}([X]_{I})\) and that \(\mathcal{L}_{0}=I\). Thus \([\mathcal{L}]_{I}\) is an (M)-\(2^{d}\)-tuple of \([X]_{I}\), and in turn \(\mathcal{L}\) is an NT-\(2^{d}\)-tuple of \(X\) by Proposition 4.1.8. Moreover, \(\mathcal{I}(X)\subseteq\mathcal{L}\) by definition and thus \(\mathcal{L}\) is an NO-\(2^{d}\)-tuple, proving item (i). (ii) By item (i) of Proposition 4.2.1, we have the canonical \(*\)-isomorphisms \[\mathcal{NO}_{X}/Q_{\mathcal{I}}(\mathfrak{J}^{\mathcal{L}})\cong\mathcal{NT}_{X }/\mathfrak{J}^{\mathcal{L}}\cong\mathcal{NO}([\mathcal{L}]_{I},[X]_{I})= \mathcal{NO}_{[X]_{I}},\] proving item (ii). (iii) Assume that \(I\subseteq\bigcap\{\phi_{\underline{i}}^{-1}(\mathcal{K}(X_{\underline{i}}))\mid i \in[d]\}\). This ensures that \(\mathcal{L}\) is both an NT-\(2^{d}\)-tuple and a relative \(2^{d}\)-tuple of \(X\). Hence \(\mathfrak{J}^{\mathcal{L}}=\mathfrak{J}_{\mathcal{L}}^{\left(\overline{\pi}_{X },\overline{t}_{X}\right)}\) by Proposition 4.2.2. Since \(\pi_{X}^{\mathcal{I}}(\mathcal{I}_{F}(X))q_{X,F}^{\mathcal{I}}=\{0\}\) for all \(F\subseteq[d]\), we have that \[Q_{\mathcal{I}}(\mathfrak{J}^{\mathcal{L}}) =Q_{\mathcal{I}}(\left\langle\overline{\pi}_{X}(\mathcal{L}_{F}) \overline{q}_{X,F}\mid F\subseteq[d]\right\rangle)\] \[=\left\langle\pi_{X}^{\mathcal{I}}(\mathcal{L}_{F})q_{X,F}^{ \mathcal{I}}\mid F\subseteq[d]\right\rangle\] \[=\left\langle\pi_{X}^{\mathcal{I}}(I),\pi_{X}^{\mathcal{I}}(I)q_ {X,F}^{\mathcal{I}}\mid\emptyset\neq F\subseteq[d]\right\rangle.\] To see that it suffices to show that the right hand side contains the generators of the left hand side. To this end, fix \(a\in I\) and \(\emptyset\neq F\subseteq[d]\). 
An application of Proposition 2.5.10 gives that \[\pi_{X}^{\mathcal{I}}(a)q_{X,F}^{\mathcal{I}}=\pi_{X}^{\mathcal{I}}(a)+\sum\{ (-1)^{|\underline{n}|}\psi_{X,\underline{n}}^{\mathcal{I}}(\phi_{\underline{ n}}(a))\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}.\] Fixing \(\underline{0}\neq\underline{n}\leq\underline{1}_{F}\), we have that \(\underline{n}=\underline{1}_{D}\) for some \(\emptyset\neq D\subseteq F\). Hence we obtain that \[\psi_{X,\underline{n}}^{\mathcal{I}}(\phi_{\underline{n}}(a))=\left\|\cdot \right\|\!-\!\lim_{\lambda}\psi_{X,\underline{n}}^{\mathcal{I}}(\phi_{ \underline{n}}(a)e_{D,\lambda})=\left\|\cdot\right\|\!-\!\lim_{\lambda}\pi_{X }^{\mathcal{I}}(a)\psi_{X,\underline{n}}^{\mathcal{I}}(e_{D,\lambda})\in \left\langle Q_{\mathcal{I}}(\overline{\pi}_{X}(I))\right\rangle,\] where we use the nomenclature of Proposition 2.5.7 in conjunction with [16, (2.4)]. Hence \(\psi_{X,\underline{n}}^{\mathcal{I}}(\phi_{\underline{n}}(a))\in\left\langle Q _{\mathcal{I}}(\overline{\pi}_{X}(I))\right\rangle\) for all \(\underline{0}\neq\underline{n}\leq\underline{1}_{F}\), and therefore \[\pi_{X}^{\mathcal{I}}(a)q_{X,F}^{\mathcal{I}}\in\left\langle Q_{\mathcal{I}}( \overline{\pi}_{X}(I))\right\rangle,\] as required. In total, we conclude that \(Q_{\mathcal{I}}(\mathfrak{J}^{\mathcal{L}})=\left\langle Q_{\mathcal{I}}( \overline{\pi}_{X}(I))\right\rangle\). For the final equality, we have that \[\left\langle Q_{\mathcal{I}}(\overline{\pi}_{X}(I))\right\rangle=\left\langle \pi_{X}^{\mathcal{I}}(I)\right\rangle=\overline{\operatorname{span}}\{t_{X, \underline{n}}^{\mathcal{I}}(X_{\underline{n}})\pi_{X}^{\mathcal{I}}(I)t_{X, \underline{m}}^{\mathcal{I}}(X_{\underline{m}})^{*}\mid\underline{n}, \underline{m}\in\mathbb{Z}_{+}^{d}\},\] using item (i) of Proposition 2.6.7 in the second equality. The last assertion of item (iii) follows from Proposition 2.6.10, finishing the proof. ### Regular product systems Let \(X\) be a regular product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\). Recall that \(X\) is automatically strong compactly aligned by Corollary 2.5.3. Also, an ideal \(I\subseteq A\) is negatively invariant for \(X\) if and only if \(X_{F}^{-1}(I)\subseteq I\) for all \(\emptyset\neq F\subseteq[d]\), since \(\mathcal{I}_{F}=A\) for all \(\emptyset\neq F\subseteq[d]\). The gauge-invariant ideals of \(\mathcal{NO}_{X}\) can be parametrised particularly succinctly. We start with some auxiliary results. **Proposition 5.2.1**.: _Let \(X\) be a regular product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal. Then the following hold:_ * _If_ \(I\) _is negatively invariant for_ \(X\)_, then_ \(J_{F}(I,X)=A\) _for all_ \(\emptyset\neq F\subseteq[d]\)_._ * _If_ \(I\) _is positively and negatively invariant for_ \(X\)_, then_ \([X]_{I}\) _is regular._ **Proof.** For item (i), fix \(\emptyset\neq F\subseteq[d]\). We have that \(\mathcal{I}_{F}(X)=A\) by regularity. Since \(I\) is negatively invariant, Proposition 5.1.2 yields that \(A=\mathcal{I}_{F}(X)\subseteq J_{F}(I,X)\), as required. For item (ii), Lemma 2.2.2 yields that \([\phi_{\underline{n}}]_{I}([A]_{I})\subseteq\mathcal{K}([X_{\underline{n}}]_{I})\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\), using regularity of \(X\). For injectivity, it suffices to show that \([\phi_{\underline{i}}]_{I}\) is injective for all \(i\in[d]\) by Proposition 2.5.1. Accordingly, fix \(i\in[d]\) and \([a]_{I}\in\ker[\phi_{\underline{i}}]_{I}\). 
Then we have that \(\left\langle X_{\underline{i}},aX_{\underline{i}}\right\rangle\subseteq I\) by Proposition 4.1.2, and so \(a\in X_{\{i\}}^{-1}(I)\subseteq I\), since \(I\) is negatively invariant and \(X\) is regular. Consequently \([a]_{I}=0\), showing that \([\phi_{\underline{i}}]_{I}\) is injective. In total, we have that \([X]_{I}\) is regular, as required. **Corollary 5.2.2**.: _Let \(X\) be a regular product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\) and let \(\mathcal{L}\) be a \(2^{d}\)-tuple of \(X\). Then \(\mathcal{L}\) is an NO-\(2^{d}\)-tuple of \(X\) if and only if \(\mathcal{L}_{\emptyset}\) is a positively and negatively invariant ideal for \(X\) and \(\mathcal{L}_{F}=A\) for all \(\emptyset\neq F\subseteq[d]\)._ **Proof.** Assume that \({\mathcal{L}}\) is an NO-\(2^{d}\)-tuple of \(X\), so that \({\mathcal{I}}(X)\subseteq{\mathcal{L}}\). We have that \({\mathcal{L}}_{\emptyset}\) is a positively and negatively invariant ideal for \(X\) by Proposition 5.1.5. Next fix \(\emptyset\neq F\subseteq[d]\). By regularity of \(X\), we have that \(A={\mathcal{I}}_{F}(X)\subseteq{\mathcal{L}}_{F}\). Thus \({\mathcal{L}}_{F}=A\) for all \(\emptyset\neq F\subseteq[d]\), proving the forward implication. Conversely, assume that \({\mathcal{L}}_{\emptyset}\) is a positively and negatively invariant ideal for \(X\) and \({\mathcal{L}}_{F}=A\) for all \(\emptyset\neq F\subseteq[d]\). We start by showing that \({\mathcal{L}}\) is an NT-\(2^{d}\)-tuple of \(X\). Note that \({\mathcal{L}}\) consists of ideals satisfying \({\mathcal{L}}_{\emptyset}\subseteq{\mathcal{L}}_{F}\) for all \(F\subseteq[d]\), and \({\mathcal{L}}_{\emptyset}\) is in particular positively invariant for \(X\). Hence it suffices to show that \([{\mathcal{L}}]_{{\mathcal{L}}_{\emptyset}}\) is an (M)-\(2^{d}\)-tuple of \([X]_{{\mathcal{L}}_{\emptyset}}\) by Proposition 4.1.8. By item (ii) of Proposition 5.2.1, we have that \([X]_{{\mathcal{L}}_{\emptyset}}\) is regular and thus \[{\mathcal{I}}_{\emptyset}([X]_{{\mathcal{L}}_{\emptyset}})=\{0\}=[{\mathcal{L} }_{\emptyset}]_{{\mathcal{L}}_{\emptyset}}\quad\text{and}\quad{\mathcal{I}}_{ F}([X]_{{\mathcal{L}}_{\emptyset}})=[A]_{{\mathcal{L}}_{\emptyset}}=[{\mathcal{L}}_{F}]_{{ \mathcal{L}}_{\emptyset}}\text{ for all }\emptyset\neq F\subseteq[d].\] Thus \({\mathcal{I}}([X]_{{\mathcal{L}}_{\emptyset}})=[{\mathcal{L}}]_{{\mathcal{L}}_ {\emptyset}}\) and we conclude that \([{\mathcal{L}}]_{{\mathcal{L}}_{\emptyset}}\) is an (M)-\(2^{d}\)-tuple of \([X]_{{\mathcal{L}}_{\emptyset}}\) by Remark 3.2.8. Note also that \[{\mathcal{I}}_{\emptyset}(X)=\{0\}\subseteq{\mathcal{L}}_{\emptyset}\quad \text{and}\quad{\mathcal{I}}_{F}(X)=A\subseteq{\mathcal{L}}_{F}\text{ for all }\emptyset\neq F\subseteq[d]\] by regularity of \(X\). Thus \({\mathcal{L}}\) is an NO-\(2^{d}\)-tuple, finishing the proof. **Corollary 5.2.3**.: _Let \(X\) be a regular product system over \({\mathbb{Z}}_{+}^{d}\) with coefficients in a C*-algebra \(A\). Then the association_ \[I\mapsto{\mathcal{L}}_{I},\text{ where }{\mathcal{L}}_{I,\emptyset}:=I\text{ and }{ \mathcal{L}}_{I,F}:=A\text{ for all }\emptyset\neq F\subseteq[d], \tag{5.1}\] _defines a bijection between the set of ideals of \(A\) that are positively and negatively invariant for \(X\) and the set of NO-\(2^{d}\)-tuples of \(X\). 
Hence the set of gauge-invariant ideals of \({\mathcal{NO}}_{X}\) corresponds bijectively to the set of ideals of \(A\) that are positively and negatively invariant for \(X\)._ **Proof.** By Corollary 5.2.2, the map (5.1) is well-defined and a bijection. The last assertion follows from Corollary 4.2.12. We obtain the following consequence of Corollary 5.2.3 when \(A\) is simple. **Corollary 5.2.4**.: _Let \(X\) be a regular product system over \({\mathbb{Z}}_{+}^{d}\) with coefficients in a non-zero simple C*-algebra \(A\). Then \({\mathcal{NO}}_{X}\) contains no non-trivial gauge-invariant ideals._ **Proof.** By simplicity, the only ideals of \(A\) are \(\{0\}\) and \(A\), both of which are positively and negatively invariant for \(X\). Note that negative invariance of \(\{0\}\) follows from the fact that \({\mathcal{I}}_{F}\cap X_{F}^{-1}(\{0\})=\{0\}\) for all \(\emptyset\neq F\subseteq[d]\). An application of Corollary 5.2.3 finishes the proof. Accounting for the gauge-invariant ideals of \({\mathcal{NT}}_{X}\) when \(X\) is regular and \(A\) is simple reduces to a combinatorial argument. To this end, we consider \({\mathcal{P}}([d])\) with the usual partial order of inclusion. Fixing notation, for a family \({\mathcal{S}}\subseteq{\mathcal{P}}([d])\) we write \[\min({\mathcal{S}}):=\{F\in{\mathcal{S}}\mid\text{ $F$ is minimal in ${\mathcal{S}}$}\},\] and \[\langle{\mathcal{S}}\rangle:=\{F\subseteq[d]\mid\text{ there exists $F^{\prime}\in{ \mathcal{S}}$ such that $F^{\prime}\subseteq F$}\}.\] It follows that \(\min({\mathcal{S}})\) consists of pairwise incomparable elements of \({\mathcal{S}}\) and that \(\langle\min({\mathcal{S}})\rangle=\langle{\mathcal{S}}\rangle\). Moreover, a family \({\mathcal{S}}\subseteq{\mathcal{P}}([d])\) consists of pairwise incomparable elements of \({\mathcal{P}}([d])\) if and only if \({\mathcal{S}}=\min({\mathcal{S}})\). Note that if \({\mathcal{S}}\subseteq{\mathcal{P}}([d])\) is empty or a singleton, then it is a family of pairwise incomparable subsets vacuously. **Proposition 5.2.5**.: _Let \(X\) be a regular product system over \({\mathbb{Z}}_{+}^{d}\) with coefficients in a non-zero simple C*-algebra \(A\). The following are equivalent for a \(2^{d}\)-tuple \({\mathcal{L}}\) of \(X\):_ * \({\mathcal{L}}\) _is an NT-_\(2^{d}\)_-tuple of_ \(X\)_;_ * \({\mathcal{L}}\) _is an (M)-_\(2^{d}\)_-tuple of_ \(X\) _or_ \({\mathcal{L}}_{F}=A\) _for all_ \(F\subseteq[d]\)_;_ * \({\mathcal{L}}\) _is partially ordered and consists of ideals, each of which is either_ \(A\) _or_ \(\{0\}\)_._ _If any (and thus all) of the items (i)-(iii) hold, then \({\mathcal{NO}}({\mathcal{L}},X)\) either factors through the quotient map \({\mathcal{NT}}_{X}\to{\mathcal{NO}}_{X}\) or \({\mathcal{NO}}({\mathcal{L}},X)=\{0\}\)._ **Proof.** To see that items (i)-(iii) are equivalent, first suppose that \(\mathcal{L}\) is an NT-\(2^{d}\)-tuple of \(X\). If \(\mathcal{L}_{\emptyset}=\{0\}\), then \(\mathcal{L}\) is an (M)-\(2^{d}\)-tuple of \(X\) by Proposition 4.1.8. If \(\mathcal{L}_{\emptyset}\neq\{0\}\), then by simplicity of \(A\) we have that \(\mathcal{L}_{\emptyset}=A\). It follows that \(\mathcal{L}_{F}=A\) for all \(F\subseteq[d]\) by the partial ordering property of \(\mathcal{L}\). This shows that item (i) implies item (ii). Conversely, if \(\mathcal{L}\) is an (M)-\(2^{d}\)-tuple of \(X\) then it is an NT-\(2^{d}\)-tuple of \(X\) by Proposition 4.1.8. 
On the other hand, if \(\mathcal{L}_{F}=A\) for all \(F\subseteq[d]\), then \([\mathcal{L}_{F}]_{\mathcal{L}_{\emptyset}}=\{0\}\) for all \(F\subseteq[d]\). Thus \([\mathcal{L}]_{\mathcal{L}_{\emptyset}}\) is an (M)-\(2^{d}\)-tuple of \([X]_{\mathcal{L}_{\emptyset}}\) by Remark 3.2.8, and an application of Proposition 4.1.8 then yields that \(\mathcal{L}\) is an NT-\(2^{d}\)-tuple of \(X\). This shows that item (ii) implies item (i). Clearly item (ii) implies item (iii) by Theorem 3.4.6 (and simplicity of \(A\)). Finally, suppose that \(\mathcal{L}\) is partially ordered and consists of ideals, each of which is either \(A\) or \(\{0\}\). If \(\mathcal{L}_{\emptyset}=A\), then \(\mathcal{L}_{F}=A\) for all \(F\subseteq[d]\) by the partial ordering property of \(\mathcal{L}\). On the other hand, if \(\mathcal{L}_{\emptyset}=\{0\}\) then \(\mathcal{L}\subseteq\mathcal{J}(X)\) since \(\mathcal{J}_{F}(X)=A\) for all \(\emptyset\neq F\subseteq[d]\) by regularity of \(X\), and thus \(\mathcal{L}\) satisfies condition (i) of Theorem 3.4.6. Condition (ii) of the latter is satisfied trivially, and condition (iii) is satisfied by assumption. Moreover, by definition we have that \(\mathcal{L}_{F}^{(1)}=\mathcal{L}_{F}\) when \(F=\emptyset\) or \(F=[d]\). For \(\emptyset\neq F\subsetneq[d]\), we have that either \(\mathcal{L}_{F}=A\) or \(\mathcal{L}_{F}=\{0\}\). If \(\mathcal{L}_{F}=A\), then \(\mathcal{L}_{F}^{(1)}\subseteq\mathcal{L}_{F}\) trivially. If \(\mathcal{L}_{F}=\{0\}\), then \(\mathcal{L}_{\lim,F}=\{0\}\) due to injectivity of \(X\), and thus \(\mathcal{L}_{F}^{(1)}=\{0\}=\mathcal{L}_{F}\). Therefore \(\mathcal{L}\) satisfies condition (iv) of Theorem 3.4.6 and so is an (M)-\(2^{d}\)-tuple of \(X\) by the latter, finishing the proof of the equivalences. For the second claim, if \(\mathcal{L}\) satisfies item (ii) then we have that either \(\mathcal{NO}(\mathcal{L},X)\) factors through \(\mathcal{NT}_{X}\to\mathcal{NO}_{X}\) (if \(\mathcal{L}\) is an (M)-\(2^{d}\)-tuple of \(X\)) or \(\mathcal{NO}(\mathcal{L},X)=\{0\}\) (if \(\mathcal{L}_{F}=A\) for all \(F\subseteq[d]\)), and the proof is complete. **Corollary 5.2.6**.: _Let \(X\) be a regular product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a non-zero simple C*-algebra \(A\). Then the association_ \[\mathcal{S}\mapsto\mathcal{L}_{\mathcal{S}},\text{ where }\mathcal{L}_{ \mathcal{S},F}:=\begin{cases}A&\text{if }F\in\langle\mathcal{S}\rangle,\\ \{0\}&\text{otherwise},\end{cases}\text{ for all }F\subseteq[d], \tag{5.2}\] _defines a bijection between the set of families of pairwise incomparable subsets of \([d]\) and the set of NT-\(2^{d}\)-tuples of \(X\). Hence the set of gauge-invariant ideals of \(\mathcal{NT}_{X}\) corresponds bijectively to the set of families of pairwise incomparable subsets of \([d]\)._ **Proof.** Item (iii) of Proposition 5.2.5 implies that the map (5.2) is well-defined. For bijectivity, it suffices to construct an inverse map. To this end, fixing an NT-\(2^{d}\)-tuple \(\mathcal{L}\) of \(X\), we set \[\mathcal{S}_{\mathcal{L}}:=\{F\subseteq[d]\mid\mathcal{L}_{F}=A\}\subseteq \mathcal{P}([d]).\] Note that \(\mathcal{S}_{\mathcal{L}}=\langle\mathcal{S}_{\mathcal{L}}\rangle\) by the partial ordering property of \(\mathcal{L}\). It follows that the map \[\mathcal{L}\mapsto\min(\mathcal{S}_{\mathcal{L}}),\] where \(\mathcal{L}\) is an NT-\(2^{d}\)-tuple of \(X\), is the inverse of the map (5.2), as required. The last assertion follows from Theorem 4.2.3, and the proof is complete. 
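By way of illustration, suppose that \(d=2\) and that, as in Corollary 5.2.6, \(X\) is regular and \(A\) is non-zero and simple. The families of pairwise incomparable subsets of \([2]\) are exactly \[\emptyset,\quad\{\emptyset\},\quad\{\{1\}\},\quad\{\{2\}\},\quad\{\{1,2\}\},\quad\{\{1\},\{2\}\},\] and so \(\mathcal{NT}_{X}\) has exactly six gauge-invariant ideals. Under (5.2), the empty family corresponds to the zero ideal, the family \(\{\emptyset\}\) corresponds to \(\mathcal{NT}_{X}\) itself, and the family \(\{\{1\},\{2\}\}\) corresponds to \(\mathcal{L}_{\mathcal{S}}=\mathcal{I}(X)\), whose associated ideal is the kernel of the quotient map \(\mathcal{NT}_{X}\to\mathcal{NO}_{X}\). 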
### C*-dynamical systems In this subsection we interpret the parametrisation of Theorem 4.2.3 in the case of C*-dynamical systems. As a corollary we recover the well-known parametrisation for crossed products. We use this class of product systems to produce an example for which the quotient map \(X\to[X]_{I}\) does not lift to a canonical \(*\)-epimorphism \(\mathcal{NO}_{X}\to\mathcal{NO}_{[X]_{I}}\) for a positively invariant ideal \(I\subseteq A\). The structure of the Nica-Pimsner and Cuntz-Nica-Pimsner algebras of a dynamical system were studied in [14, 29]. In particular, the form of the CNP-relations that arose in this setting was one point of motivation for looking further into strong compactly aligned product systems in [16]. We start by establishing notation and terminology from [14]. **Definition 5.3.1**.: A _C*-dynamical system_\((A,\alpha,\mathbb{Z}_{+}^{d})\) consists of a C*-algebra \(A\) and a unital semigroup homomorphism \(\alpha\colon\mathbb{Z}_{+}^{d}\to\operatorname{End}(A)\). The system will be called _injective_ (resp. _automorphic_) if \(\alpha_{\underline{n}}\) is injective (resp. a \(*\)-automorphism) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\). The system will be called _unital_ if \(A\) is unital and \(\alpha_{\underline{n}}(1_{A})=1_{A}\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\). Let \((A,\alpha,\mathbb{Z}_{+}^{d})\) be a C*-dynamical system. We associate a product system \(X_{\alpha}\coloneqq\{X_{\alpha,\underline{n}}\}_{\underline{n}\in\mathbb{Z}_ {+}^{d}}\) with \((A,\alpha,\mathbb{Z}_{+}^{d})\) by setting \[X_{\alpha,\underline{n}}:=[\alpha_{\underline{n}}(A)A]\text{ for all } \underline{n}\in\mathbb{Z}_{+}^{d},\] where each \(X_{\alpha,\underline{n}}\) becomes a C*-correspondence over \(A\) via the left action \[\phi_{\underline{n}}\colon A\to\mathcal{L}(X_{\alpha,\underline{n}});\phi_{ \underline{n}}(a)\xi_{\underline{n}}=\alpha_{\underline{n}}(a)\xi_{ \underline{n}}\text{ for all }a\in A,\xi_{\underline{n}}\in X_{\alpha, \underline{n}}.\] Notice that \(X_{\alpha,\underline{0}}=A\) and that \(\alpha_{\underline{n}}(A)\subseteq X_{\alpha,\underline{n}}\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\). The product system structure is implemented by the multiplication maps \[\xi_{\underline{n}}\otimes\xi_{\underline{m}}\mapsto\alpha_{\underline{m}}( \xi_{\underline{n}})\xi_{\underline{m}}\text{ for all }\xi_{\underline{n}}\in X_{\alpha, \underline{n}},\xi_{\underline{m}}\in X_{\alpha,\underline{m}},\underline{n}, \underline{m}\in\mathbb{Z}_{+}^{d}. \tag{5.3}\] All left actions are by compacts since \(\phi_{\underline{n}}(ab^{*})=\Theta_{\alpha_{\underline{n}}(a),\alpha_{ \underline{n}}(b)}\) for all \(a,b\in A\) and \(\underline{n}\in\mathbb{Z}_{+}^{d}\). Thus \(X_{\alpha}\) is strong compactly aligned by Corollary 2.5.3. For each \(\underline{n}\in\mathbb{Z}_{+}^{d}\), we have that \(\ker\alpha_{\underline{n}}=\ker\phi_{\underline{n}}\); thus \((A,\alpha,\mathbb{Z}_{+}^{d})\) is injective if and only if \(X_{\alpha}\) is injective. For an ideal \(I\subseteq A\), we have that \[X_{\alpha,\underline{n}}^{-1}(I)=\alpha_{\underline{n}}^{-1}(I)\text{ for all } \underline{n}\in\mathbb{Z}_{+}^{d}.\] **Definition 5.3.2**.: Let \((A,\alpha,\mathbb{Z}_{+}^{d})\) be a C*-dynamical system. 
A _covariant pair_ for \((A,\alpha,\mathbb{Z}_{+}^{d})\) is a pair \((\pi,V)\) acting on a Hilbert space \(H\) such that \(\pi\colon A\to\mathcal{B}(H)\) is a \(*\)-homomorphism, \(V\colon\mathbb{Z}_{+}^{d}\to\mathcal{B}(H)\) is a semigroup homomorphism, and \[\pi(a)V_{\underline{n}}=V_{\underline{n}}\pi(\alpha_{\underline{n}}(a))\text{ for all }\underline{n}\in\mathbb{Z}_{+}^{d},a\in A. \tag{5.4}\] We say that \(V\) is _contractive/isometric_ if \(V_{\underline{n}}\) is contractive/isometric for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\). We say that \(V\) is _Nica-covariant_ if it is contractive and \(V_{\underline{n}}^{*}V_{\underline{m}}=V_{\underline{m}}V_{\underline{n}}^{*}\) for all \(\underline{n},\underline{m}\in\mathbb{Z}_{+}^{d}\) satisfying \(\underline{n}\wedge\underline{m}=\underline{0}\). We say that \((\pi,V)\) is _contractive/isometric/Nica-covariant_ if \(V\) is contractive/isometric/Nica-covariant. An isometric Nica-covariant pair \((\pi,V)\) for \((A,\alpha,\mathbb{Z}_{+}^{d})\) is called _Cuntz-Nica-Pimsner covariant_ (abbrev. _CNP-covariant_) if \[\pi(a)\prod_{i\in F}(I-V_{\underline{i}}V_{\underline{i}}^{*})=0\text{ for all }a\in\bigcap_{\underline{n}\perp F}\alpha_{\underline{n}}^{-1}((\bigcap_{i\in F}\ker\alpha_{\underline{i}})^{\perp})\text{ and }\emptyset\neq F\subseteq[d]. \tag{5.5}\] We write \(\operatorname{C}^{*}(\pi,V)\) for the C*-subalgebra of \(\mathcal{B}(H)\) generated by the images of \(\pi\) and \(V\). We write \(\mathcal{NT}(A,\alpha)\) for the C*-algebra that is universal with respect to the isometric Nica-covariant pairs for \((A,\alpha,\mathbb{Z}_{+}^{d})\), and \(\mathcal{NO}(A,\alpha)\) for the C*-algebra that is universal with respect to the CNP-covariant pairs for \((A,\alpha,\mathbb{Z}_{+}^{d})\), both of which are generated by a copy of \(A\) and \(\mathbb{Z}_{+}^{d}\). The connection of (5.5) with the CNP-representations of \(X_{\alpha}\) was established in [14], where it is shown that \[\mathcal{J}_{F}=(\bigcap_{i\in F}\ker\alpha_{\underline{i}})^{\perp}\quad\text{and}\quad\mathcal{I}_{F}=\bigcap_{\underline{n}\perp F}\alpha_{\underline{n}}^{-1}((\bigcap_{i\in F}\ker\alpha_{\underline{i}})^{\perp})\] for all \(\emptyset\neq F\subseteq[d]\). The isometric Nica-covariant (resp. CNP-covariant) pairs of \((A,\alpha,\mathbb{Z}_{+}^{d})\) induce Nica-covariant representations (resp. CNP-representations) of \(X_{\alpha}\) by the association \((\pi,V)\mapsto(\pi,t)\), where \(t_{\underline{n}}(\alpha_{\underline{n}}(a)b)=V_{\underline{n}}\pi(\alpha_{\underline{n}}(a)b)\) for all \(a,b\in A\) and \(\underline{n}\in\mathbb{Z}_{+}^{d}\setminus\{\underline{0}\}\). In general, we have that \[\mathcal{NT}_{X_{\alpha}}\hookrightarrow\mathcal{NT}(A,\alpha)\quad\text{and}\quad\mathcal{NO}_{X_{\alpha}}\hookrightarrow\mathcal{NO}(A,\alpha).\] The embeddings are surjective when each \(\alpha_{\underline{n}}\) is non-degenerate, so that \(X_{\alpha,\underline{n}}\equiv[\alpha_{\underline{n}}(A)A]=A\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\).

**Proposition 5.3.3**.: _[_14_]_ _Let \((A,\alpha,\mathbb{Z}_{+}^{d})\) be a unital C*-dynamical system. Then there exists a bijective correspondence between the set of Nica-covariant (resp. CNP-) representations \((\pi,t)\) of \(X_{\alpha}\) with \(\pi\) unital, and the set of isometric Nica-covariant (resp. CNP-covariant) pairs \((\pi,V)\) for \((A,\alpha,\mathbb{Z}_{+}^{d})\) with \(\pi\) unital. 
Consequently,_ \[\mathcal{NT}_{X_{\alpha}}\cong\mathcal{NT}(A,\alpha)\quad\text{and}\quad \mathcal{NO}_{X_{\alpha}}\cong\mathcal{NO}(A,\alpha)\] _by canonical \(*\)-isomorphisms._ **Proof.** The association \((\pi,t)\mapsto(\pi,V)\) is defined by \(V_{\underline{n}}:=t_{\underline{n}}(1_{\underline{n}})\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\). Notice that given any Nica-covariant representation \((\pi,t)\) of \(X_{\alpha}\) or any isometric Nica-covariant pair \((\pi,V)\) of \((A,\alpha,\mathbb{Z}_{+}^{d})\), we have that \(\pi(1_{A})\) is a unit for \(\mathrm{C}^{*}(\pi,t)\) and \(\mathrm{C}^{*}(\pi,V)\). Hence, by suitably restricting the codomains of \(\pi,V\) and each \(t_{\underline{n}}\), we deduce the canonical \(*\)-isomorphisms. Injectivity of a unital system has an equivalent reformulation in terms of the semigroup representation in the Cuntz-Nica-Pimsner algebra. This is a standard result in the theory of semicrossed products, but we include a proof here. **Proposition 5.3.4**.: _Let \((A,\alpha,\mathbb{Z}_{+}^{d})\) be a unital C*-dynamical system. Let \(\mathcal{NO}_{X_{\alpha}}=\mathrm{C}^{*}(\pi,V)\), where \((\pi,V)\) is a CNP-covariant pair of \((A,\alpha,\mathbb{Z}_{+}^{d})\) such that \(\pi\) is unital and injective. Then \(\alpha_{\underline{i}}\) is injective for all \(i\in[d]\) if and only if \(V_{\underline{i}}\) is a unitary for all \(i\in[d]\)._ **Proof.** First assume that \(\alpha_{\underline{i}}\) is injective for all \(i\in[d]\), and so \(X_{\alpha}\) is regular by Proposition 2.5.1. Hence we have that \(\mathcal{I}_{F}=A\) for all \(\emptyset\neq F\subseteq[d]\), and the CNP-covariance of \((\pi,V)\) gives that \[\pi(a)(I-V_{\underline{i}}V_{\underline{i}}^{*})=0\text{ for all }a\in\mathcal{I} _{\{i\}}=A,i\in[d].\] Applying for \(a=1_{A}\) gives that \(I-V_{\underline{i}}V_{\underline{i}}^{*}=0\), and so \(V_{\underline{i}}\) is a unitary for all \(i\in[d]\). Now assume that \(V_{\underline{i}}\) is a unitary for all \(i\in[d]\). Fixing \(a\in A\) and \(i\in[d]\), we have that \[\pi(a)=\pi(a)V_{\underline{i}}V_{\underline{i}}^{*}=V_{\underline{i}}\pi( \alpha_{\underline{i}}(a))V_{\underline{i}}^{*},\] using (5.4). Thus it follows that if \(a\in\ker\alpha_{\underline{i}}\), then \(a\in\ker\pi=\{0\}\) and hence \(\alpha_{\underline{i}}\) is injective, finishing the proof. Next we translate the key concepts of Sections 3 and 4 into the language of C*-dynamical systems. Firstly, note that all \(2^{d}\)-tuples of \(X_{\alpha}\) are relative since all left actions are by compacts. Let \(\mathcal{L}\) be a \(2^{d}\)-tuple of \(X_{\alpha}\) consisting of ideals. 
By construction of \(X_{\alpha}\), we see that \(\mathcal{L}\) is \(X_{\alpha}\)-invariant if and only if \[\mathcal{L}_{F}\subseteq\bigcap_{\underline{n}\perp F}\alpha_{\underline{n}}^ {-1}(\mathcal{L}_{F})\text{ for all }F\subseteq[d].\] In particular, an ideal \(I\subseteq A\) is positively invariant for \(X_{\alpha}\) if and only if \[I\subseteq\bigcap_{\underline{n}\in\mathbb{Z}_{+}^{d}}\alpha_{\underline{n}}^ {-1}(I).\] Also, \(I\) is negatively invariant for \(X_{\alpha}\) if and only if \[\mathcal{I}_{F}\cap(\bigcap\{\alpha_{\underline{n}}^{-1}(I)\mid\underline{0} \neq\underline{n}\leq\underline{1}_{F}\})\subseteq I\text{ for all }\emptyset\neq F\subseteq[d].\] Moreover, we have a \(*\)-isomorphism \[\mathcal{K}(X_{\alpha,\underline{n}})\to[\alpha_{\underline{n}}(A)A\alpha_{ \underline{n}}(A)];\Theta_{\xi_{\underline{n}},\eta_{\underline{n}}}\mapsto \xi_{\underline{n}}\eta_{\underline{n}}^{*}\text{ for all }\xi_{\underline{n}},\eta_{\underline{n}}\in X_{\alpha,\underline{n}}, \underline{n}\in\mathbb{Z}_{+}^{d}.\] This follows by a double application of [7, Lemma 4.6.1] in the same way one obtains that \(A\cong\mathcal{K}(A)\). By restriction we obtain a \(*\)-isomorphism \[\mathcal{K}(X_{\alpha,\underline{n}}I)\to[\alpha_{\underline{n}}(A)I\alpha_{ \underline{n}}(A)];\Theta_{\xi_{\underline{n}},\eta_{\underline{n}}}\mapsto \xi_{\underline{n}}\eta_{\underline{n}}^{*}\text{ for all }\xi_{\underline{n}},\eta_{\underline{n}}\in X_{\alpha,\underline{n}}I, \underline{n}\in\mathbb{Z}_{+}^{d},\] for an ideal \(I\subseteq A\). Therefore, for each \(F\subseteq[d]\) and \(\underline{m}\in\mathbb{Z}_{+}^{d}\), we have that \[\|\phi_{\underline{m}}(a)+\mathcal{K}(X_{\alpha,\underline{m}}I)\|=\|\alpha_{ \underline{m}}(a)+[\alpha_{\underline{m}}(A)I\alpha_{\underline{m}}(A)]\|\text{ for all }a\in A.\] Additionally, for each \(\emptyset\neq F\subseteq[d]\), we have that \[J_{F}(I,X_{\alpha})=\{a\in A\mid a(\bigcap\{\alpha_{\underline{n}}^{-1}(I)\mid \underline{0}\neq\underline{n}\leq\underline{1}_{F}\})\subseteq I\}.\] When \(I\) is positively invariant for \(X_{\alpha}\), we define the C*-dynamical system \(([A]_{I},[\alpha]_{I},\mathbb{Z}_{+}^{d})\) by \[[\alpha_{\underline{n}}]_{I}[a]_{I}=[\alpha_{\underline{n}}(a)]_{I}\text{ for all }a\in A,\underline{n}\in\mathbb{Z}_{+}^{d}.\] Hence we obtain the product system \(X_{[\alpha]_{I}}\) over \(\mathbb{Z}_{+}^{d}\) with coefficients in \([A]_{I}\). Note that \[\ker[\phi_{\underline{n}}]_{I}=\ker[\alpha_{\underline{n}}]_{I}=[\alpha_{ \underline{n}}^{-1}(I)]_{I}\text{ for all }\underline{n}\in\mathbb{Z}_{+}^{d}.\] Moreover, we have that \([X_{\alpha}]_{I}\cong X_{[\alpha]_{I}}\) when each \(X_{\alpha,\underline{n}}\) is non-degenerate. If \((A,\alpha,\mathbb{Z}_{+}^{d})\) is unital, then so is \(([A]_{I},[\alpha]_{I},\mathbb{Z}_{+}^{d})\). **Corollary 5.3.5**.: _Let \((A,\alpha,\mathbb{Z}_{+}^{d})\) be a C*-dynamical system. Let \(\mathcal{K}\) and \(\mathcal{L}\) be \(2^{d}\)-tuples of \(X_{\alpha}\). 
Then \(\mathcal{L}\) is a \(\mathcal{K}\)-relative NO-\(2^{d}\)-tuple of \(X_{\alpha}\) if and only if \(\mathcal{K}\subseteq\mathcal{L}\) and the following hold:_ * (i) \(\mathcal{L}\) _consists of ideals and_ \(\mathcal{L}_{F}\cap(\bigcap_{i\in F}\alpha_{\underline{i}}^{-1}(\mathcal{L}_{\emptyset}))\subseteq\mathcal{L}_{\emptyset}\) _for all_ \(\emptyset\neq F\subseteq[d]\)_,_ * (ii) \(\mathcal{L}_{F}\subseteq\bigcap_{\underline{n}\perp F}\alpha_{\underline{n}}^{-1}(\mathcal{L}_{F})\) _for all_ \(F\subseteq[d]\)_,_ * (iii) \(\mathcal{L}\) _is partially ordered,_ * (iv) \(I_{1,F}\cap I_{2,F}\cap I_{3,F}\subseteq\mathcal{L}_{F}\) _for all_ \(\emptyset\neq F\subsetneq[d]\)_, where_ * \(I_{1,F}:=\bigcap_{\underline{n}\perp F}\alpha_{\underline{n}}^{-1}(\{a\in A\mid a(\bigcap_{i\in F}\alpha_{\underline{i}}^{-1}(\mathcal{L}_{\emptyset}))\subseteq\mathcal{L}_{\emptyset}\})\)_,_ * \(I_{2,F}:=\bigcap_{\underline{m}\perp F}\alpha_{\underline{m}}^{-1}(\cap_{F\subsetneq D}\mathcal{L}_{D})\)_,_ * \(I_{3,F}:=\{a\in A\mid\lim_{\underline{m}\perp F}\|\alpha_{\underline{m}}(a)+[\alpha_{\underline{m}}(A)\mathcal{L}_{F}\alpha_{\underline{m}}(A)]\|=0\}\)_._

**Proof.** The result follows immediately by the remarks preceding the statement, applied in conjunction with Definitions 4.1.4 and 4.2.8, as well as Proposition 4.1.5. Note that the latter applies since the left action of each fibre of \(X_{\alpha}\) is by compacts. For item (i) and the definition of \(I_{1,F}\), we also use that \[\bigcap\{\alpha_{\underline{n}}^{-1}(\mathcal{L}_{\emptyset})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{F}\}=\bigcap_{i\in F}\alpha_{\underline{i}}^{-1}(\mathcal{L}_{\emptyset})\text{ for all }\emptyset\neq F\subseteq[d],\] which follows from Proposition 4.1.2 since \(\mathcal{L}_{\emptyset}\subseteq A\) is positively invariant.

Next we turn our attention to showing that, in general, we do not have a canonical \(*\)-epimorphism \(\mathcal{NO}_{X}\to\mathcal{NO}_{[X]_{I}}\) for a positively invariant ideal \(I\subseteq A\). In particular, injectivity of \((A,\alpha,\mathbb{Z}_{+}^{d})\) need not imply injectivity of \(([A]_{I},[\alpha]_{I},\mathbb{Z}_{+}^{d})\).

**Example 5.3.6**.: Let \((A,\alpha,\mathbb{Z}_{+})\) be a unital and injective C*-dynamical system. Suppose \(I\subseteq A\) is an ideal that is positively invariant for \(X_{\alpha}\) (i.e., \(I\subseteq\alpha^{-1}(I)\)), and so \([X_{\alpha}]_{I}\cong X_{[\alpha]_{I}}\). We claim that we do not have a (unique) canonical \(*\)-epimorphism \(\Phi\colon\mathcal{NO}_{X_{\alpha}}\to\mathcal{NO}_{X_{[\alpha]_{I}}}\), in general. To reach a contradiction, assume that such a map \(\Phi\) exists. Write \(\mathcal{NO}_{X_{\alpha}}=\mathrm{C}^{*}(\pi,V)\) and \(\mathcal{NO}_{X_{[\alpha]_{I}}}=\mathrm{C}^{*}(\sigma,W)\) for CNP-covariant pairs \((\pi,V)\) and \((\sigma,W)\) of \((A,\alpha,\mathbb{Z}_{+})\) and \(([A]_{I},[\alpha]_{I},\mathbb{Z}_{+})\) (respectively), where \(\pi\) and \(\sigma\) are unital and injective. Since \(\alpha\) is injective, we have that \(V_{1}\) is a unitary by Proposition 5.3.4, and thus \(W_{1}=\Phi(V_{1})\) is also a unitary. Another application of Proposition 5.3.4 gives that \([\alpha]_{I}\) is injective. However, this leads to a contradiction when we choose \(A,\alpha\) and \(I\) so that \([\alpha]_{I}\) is not injective. For such an example, let \(B\) be a unital C*-algebra and \(\beta\in\mathrm{End}(B)\) be unital and non-injective. 
Set \(A=\bigoplus_{n\in\mathbb{N}}B\) and define \(\alpha\in\mathrm{End}(A)\) by \(\alpha((b_{n})_{n\in\mathbb{N}})=(\beta(b_{1}),b_{1},b_{2},\dots)\) for all \((b_{n})_{n\in\mathbb{N}}\in A\), noting that \(\alpha\) is unital and injective. Set \[I:=\{(b_{n})_{n\in\mathbb{N}}\in A\mid b_{1}=0\},\] which is an ideal that is positively invariant for \(X_{\alpha}\). Then \([\alpha]_{I}\in\mathrm{End}([A]_{I})\) is conjugate to \(\beta\in\mathrm{End}(B)\), and thus \([\alpha]_{I}\) is not injective by the choice of \(\beta\). Notice that this does _not_ contradict Proposition 5.1.5 because \(I\) does not participate in any NO-2-tuple of \(X_{\alpha}\). Indeed, suppose that \(I\) participates in an NO-2-tuple \(\mathcal{L}\) of \(X_{\alpha}\), i.e., \(\{I,\mathcal{L}_{\{1\}}\}\) is an O-pair of \(X_{\alpha,1}\) by Proposition 4.2.9. Since \(\alpha\) is injective, we have that \(J_{X_{\alpha,1}}=A\) and thus \(\mathcal{L}_{\{1\}}=A\). Hence \(\alpha^{-1}(I)=A\cap\alpha^{-1}(I)\subseteq I\) by item (i) of Corollary 5.3.5. However, \(\beta\) is not injective and so we can choose \(0\neq b\in\ker\beta\). Then \(\alpha(b,0,0,\dots)=(0,b,0,\dots)\in I\) but \((b,0,0,\dots)\notin I\), giving the contradiction that \(\alpha^{-1}(I)\not\subseteq I\). Next we turn to injective C*-dynamical systems \((A,\alpha,\mathbb{Z}_{+}^{d})\), so that \(X_{\alpha}\) is regular. Hence the gauge-invariant ideal structure of \(\mathcal{NO}_{X_{\alpha}}\) falls under the purview of Corollary 5.2.3. **Corollary 5.3.7**.: _Let \((A,\alpha,\mathbb{Z}_{+}^{d})\) be an injective C*-dynamical system. Then the association_ \[I\mapsto\mathcal{L}_{I},\text{ where }\mathcal{L}_{I,\emptyset}:=I\text{ and } \mathcal{L}_{I,F}:=A\text{ for all }\emptyset\neq F\subseteq[d], \tag{5.6}\] _defines a bijection between the set of ideals \(I\subseteq A\) satisfying \(\alpha_{\underline{n}}(I)\subseteq I\) and \(\alpha_{\underline{n}}^{-1}(I)\subseteq I\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\) and the set of NO-\(2^{d}\)-tuples of \(X_{\alpha}\), which in turn induces a bijection with the set of gauge-invariant ideals of \(\mathcal{NO}_{X_{\alpha}}\)._ **Proof.** Note that the set of ideals of \(A\) that are positively and negatively invariant for \(X_{\alpha}\) is exactly the set of ideals \(I\) of \(A\) which satisfy \[I\subseteq\bigcap_{\underline{n}\in\mathbb{Z}_{+}^{d}}\alpha_{\underline{n}}^ {-1}(I)\quad\text{and}\quad\bigcup_{i\in[d]}\alpha_{\underline{i}}^{-1}(I) \subseteq I. \tag{5.7}\] Notice that \(\bigcup_{i\in[d]}\alpha_{\underline{i}}^{-1}(I)\subseteq I\) holds if and only if \(\alpha_{\underline{n}}^{-1}(I)\subseteq I\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\). Consequently, \(I\) satisfies (5.7) if and only if \(\alpha_{\underline{n}}(I)\subseteq I\) and \(\alpha_{\underline{n}}^{-1}(I)\subseteq I\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\). Since \(X_{\alpha}\) is regular, the claim now follows from Corollary 5.2.3. Now let \((A,\alpha,\mathbb{Z}_{+}^{d})\) be an automorphic C*-dynamical system. Then we can uniquely extend \(\alpha\) to a group action \(\alpha\colon\mathbb{Z}^{d}\to\operatorname{Aut}(A)\) since \(\mathbb{Z}_{+}^{d}\subseteq\mathbb{Z}^{d}\) is a spanning cone, and consider the crossed product C*-algebra \(A\rtimes_{\alpha}\mathbb{Z}^{d}\). We have that \(\mathcal{NO}_{X_{\alpha}}\cong A\rtimes_{\alpha}\mathbb{Z}^{d}\) by an equivariant \(*\)-isomorphism, which thus preserves the lattice structure of the gauge-invariant ideals. 
Accordingly, Corollary 5.3.7 recovers the following well-known result from the theory of crossed products.

**Corollary 5.3.8**.: _Let \((A,\alpha,\mathbb{Z}_{+}^{d})\) be an automorphic C*-dynamical system. Then the bijection of Corollary 5.3.7 induces a lattice isomorphism between the set of \(\alpha\)-invariant ideals of \(A\) and the set of gauge-invariant ideals of \(A\rtimes_{\alpha}\mathbb{Z}^{d}\)._

**Proof.** Notice that in the automorphic case, the ideals \(I\subseteq A\) satisfying \(\alpha_{\underline{n}}(I)\subseteq I\) and \(\alpha_{\underline{n}}^{-1}(I)\subseteq I\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\) are exactly the ideals \(I\subseteq A\) satisfying \(\alpha_{\underline{n}}(I)=I\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\). The set of all such ideals carries the usual lattice structure. Consider the mapping \[I\mapsto\mathcal{L}_{I},\text{ where }\mathcal{L}_{I,\emptyset}:=I\text{ and }\mathcal{L}_{I,F}:=A\text{ for all }\emptyset\neq F\subseteq[d],\] from Corollary 5.3.7. In view of Corollary 4.2.12 and the comments preceding the statement, it suffices to show that if \(I,J\subseteq A\) are \(\alpha\)-invariant ideals, then \[\mathcal{L}_{I\cap J}=\mathcal{L}_{I}\wedge\mathcal{L}_{J}\quad\text{and}\quad\mathcal{L}_{I+J}=\mathcal{L}_{I}\vee\mathcal{L}_{J}.\] For the operation \(\wedge\), this follows by Proposition 4.2.6. For the operation \(\vee\), this amounts to showing that \(\mathcal{L}_{I+J,F}=(\mathcal{L}_{I}\vee\mathcal{L}_{J})_{F}\) for all \(F\subseteq[d]\). To this end, first fix \(\emptyset\neq F\subseteq[d]\). Since \(\mathcal{L}_{I}\vee\mathcal{L}_{J}\) is an NO-\(2^{d}\)-tuple of the regular product system \(X_{\alpha}\), we have that \[\mathcal{I}_{F}=A\subseteq(\mathcal{L}_{I}\vee\mathcal{L}_{J})_{F}\subseteq A.\] By definition of \(\mathcal{L}_{I+J}\), we obtain that \[\mathcal{L}_{I+J,F}=A=(\mathcal{L}_{I}\vee\mathcal{L}_{J})_{F},\] as required. It remains to check that \(\mathcal{L}_{I+J,\emptyset}=(\mathcal{L}_{I}\vee\mathcal{L}_{J})_{\emptyset}\). By Proposition 4.2.7, this amounts to showing that \[I+J=\overline{\pi}_{X_{\alpha}}^{-1}(\mathfrak{J}^{\mathcal{L}_{I}}+\mathfrak{J}^{\mathcal{L}_{J}}).\] The forward inclusion is immediate. For the reverse inclusion, we identify \(\mathcal{NO}_{X_{\alpha}}\) with \(\mathrm{C}^{*}(\pi,U)\) for a CNP-covariant pair \((\pi,U)\) where \(\pi\) is injective and \(U\) is a unitary representation, and let \(Q\colon\mathcal{NT}_{X_{\alpha}}\to\mathcal{NO}_{X_{\alpha}}\) be the quotient map. Since \([X_{\alpha}]_{I}\) is regular by item (ii) of Proposition 5.2.1, an application of item (iii) of Corollary 5.1.6 gives that \[Q(\mathfrak{J}^{\mathcal{L}_{I}})=\overline{\mathrm{span}}\{U_{\underline{n}}\pi(I)U_{\underline{m}}^{*}\mid\underline{n},\underline{m}\in\mathbb{Z}_{+}^{d}\},\] and likewise for \(Q(\mathfrak{J}^{\mathcal{L}_{J}})\). Applying the faithful conditional expectation \(E_{\beta}\colon\mathcal{NO}_{X_{\alpha}}\to\mathcal{NO}_{X_{\alpha}}^{\beta}\) then gives that \[Q(\mathfrak{J}^{\mathcal{L}_{I}}+\mathfrak{J}^{\mathcal{L}_{J}})^{\beta}=\overline{\mathrm{span}}\{U_{\underline{n}}\pi(I+J)U_{\underline{n}}^{*}\mid\underline{n}\in\mathbb{Z}_{+}^{d}\}=\overline{\mathrm{span}}\{\pi\circ\alpha_{\underline{n}}^{-1}(I+J)\mid\underline{n}\in\mathbb{Z}_{+}^{d}\}=\pi(I+J),\] using \(\alpha\)-invariance of \(I\) and \(J\) together with (5.4). 
Therefore, if \(a\in\overline{\pi}_{X_{\alpha}}^{-1}(\mathfrak{J}^{\mathcal{L}_{I}}+\mathfrak{ J}^{\mathcal{L}_{J}})\) then \[\pi(a)=Q(\overline{\pi}_{X_{\alpha}}(a))\in Q(\mathfrak{J}^{\mathcal{L}_{I}}+ \mathfrak{J}^{\mathcal{L}_{J}})^{\beta}=\pi(I+J).\] Injectivity of \(\pi\) then gives that \(a\in I+J\), as required. In the automorphic case, we also have that \[(A\rtimes_{\alpha}\mathbb{Z}^{d})/(I\rtimes_{\alpha|_{I}}\mathbb{Z}^{d}) \cong A/I\rtimes_{[\alpha]_{I}}\mathbb{Z}^{d}\] whenever \(I\subseteq A\) is an ideal satisfying \(\alpha_{\underline{n}}(I)=I\) for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\). This is in accordance with Corollary 5.1.6, i.e., \[\mathcal{NO}_{X_{\alpha}}/\left<Q_{\mathcal{I}}(\overline{\pi}_{X_{\alpha}}(I ))\right>\cong\mathcal{NO}_{X_{[\alpha]_{I}}}.\] ### Higher-rank graphs In this subsection we interpret the parametrisation in the case of strong finitely aligned higher-rank graphs, which also accounts for row-finite higher-rank graphs. The parametrisation we offer is in terms of vertex sets. As a corollary, we recover the parametrisation of Raeburn, Sims and Yeend [45, Theorem 5.2] for locally convex row-finite higher-rank graphs. The concepts from the theory of higher-rank graphs that we utilise are taken from [44, 45, 46, 50]. For this subsection, we will reserve \(d\) for the degree map of a graph \((\Lambda,d)\) of rank \(k\). Fix \(k\in\mathbb{N}\). A _\(k\)-graph_\((\Lambda,d)\) consists of a countable small category \(\Lambda=(\mathrm{Obj}(\Lambda),\mathrm{Mor}(\Lambda),r,s)\) together with a functor \(d\colon\Lambda\to\mathbb{Z}_{+}^{k}\), called the _degree map_, satisfying the _factorisation property_: For all \(\lambda\in\mathrm{Mor}(\Lambda)\) and \(\underline{m},\underline{n}\in\mathbb{Z}_{+}^{k}\) such that \(d(\lambda)=\underline{m}+\underline{n}\), there exist unique \(\mu,\nu\in\mathrm{Mor}(\Lambda)\) such that \(d(\mu)=\underline{m},d(\nu)=\underline{n}\) and \(\lambda=\mu\nu\). Here we view \(\mathbb{Z}_{+}^{k}\) as a category consisting of a single object, and whose morphisms are exactly the elements of \(\mathbb{Z}_{+}^{k}\) (when viewed as a set). Composition in this category is given by entrywise addition, and the identity morphism is \(\underline{0}\). Therefore, \(d\) being a functor means that \(d(\lambda\mu)=d(\lambda)+d(\mu)\) and \(d(\mathrm{id}_{v})=\underline{0}\) for all \(\lambda,\mu\in\mathrm{Mor}(\Lambda)\) satisfying \(r(\mu)=s(\lambda)\) and \(v\in\mathrm{Obj}(\Lambda)\). We view \(k\)-graphs as generalised graphs, and therefore refer to the elements of \(\mathrm{Obj}(\Lambda)\) as _vertices_ and the elements of \(\mathrm{Mor}(\Lambda)\) as _paths_. Fixing \(\lambda\in\mathrm{Mor}(\Lambda)\), the factorisation property guarantees that \(d(\lambda)=\underline{0}\) if and only if \(\lambda=\mathrm{id}_{s(\lambda)}\). Hence we may identify \(\mathrm{Obj}(\Lambda)\) with \(\{\lambda\in\mathrm{Mor}(\Lambda)\mid d(\lambda)=\underline{0}\}\), and consequently we may write \(\lambda\in\Lambda\) instead of \(\lambda\in\mathrm{Mor}(\Lambda)\) without any ambiguity. Fix a \(k\)-graph \((\Lambda,d)\). 
Given \(\lambda\in\Lambda\) and \(E\subseteq\Lambda\), we define \[\lambda E:=\{\lambda\mu\in\Lambda\mid\mu\in E,r(\mu)=s(\lambda)\}\quad\text{and}\quad E\lambda:=\{\mu\lambda\in\Lambda\mid\mu\in E,r(\lambda)=s(\mu)\}.\] In particular, we may replace \(\lambda\) by a vertex \(v\in\Lambda\) and write \[vE:=\{\lambda\in E\mid r(\lambda)=v\}\quad\text{and}\quad Ev:=\{\lambda\in E\mid s(\lambda)=v\}.\] Analogously, given \(E,F\subseteq\Lambda\), we define \[EF:=\{\lambda\mu\in\Lambda\mid\lambda\in E,\mu\in F,r(\mu)=s(\lambda)\}.\] Fixing \(\underline{n}\in\mathbb{Z}_{+}^{k}\), we set \[\Lambda^{\underline{n}}:=\{\lambda\in\Lambda\mid d(\lambda)=\underline{n}\}\ \text{ and }\ \Lambda^{\leq\underline{n}}:=\{\lambda\in\Lambda\mid d(\lambda)\leq\underline{n}\text{ and }s(\lambda)\Lambda^{\underline{i}}=\emptyset\text{ if }d(\lambda)+\underline{i}\leq\underline{n}\}.\] We will refer to the elements of \(\Lambda^{\underline{i}}\) as _edges_ for all \(i\in[k]\). Given any \(v\in\Lambda^{\underline{0}}\), the set \(v\Lambda^{\leq\underline{n}}\) is non-empty (indeed, an element can be constructed inductively). Suppose that we have \(\underline{\ell},\underline{m},\underline{n}\in\mathbb{Z}_{+}^{k}\) satisfying \(\underline{\ell}\leq\underline{m}\leq\underline{n}\), and \(\lambda\in\Lambda\) satisfying \(d(\lambda)=\underline{n}\). By the factorisation property, there exist unique paths \(\lambda(\underline{0},\underline{\ell}),\lambda(\underline{\ell},\underline{m}),\lambda(\underline{m},\underline{n})\in\Lambda\) such that \[d(\lambda(\underline{0},\underline{\ell}))=\underline{\ell},d(\lambda(\underline{\ell},\underline{m}))=\underline{m}-\underline{\ell},d(\lambda(\underline{m},\underline{n}))=\underline{n}-\underline{m},\ \text{and}\ \lambda=\lambda(\underline{0},\underline{\ell})\lambda(\underline{\ell},\underline{m})\lambda(\underline{m},\underline{n}).\] Fixing \(\mu,\nu\in\Lambda\), we define the set of _minimal common extensions_ of \(\mu\) and \(\nu\) by \[\operatorname{MCE}(\mu,\nu):=\{\lambda\in\Lambda\mid d(\lambda)=d(\mu)\lor d(\nu),\lambda(\underline{0},d(\mu))=\mu,\lambda(\underline{0},d(\nu))=\nu\}.\] Notice that \(\operatorname{MCE}(\mu,\nu)\) may be empty, e.g., when \(r(\mu)\neq r(\nu)\). Intrinsically related to \(\operatorname{MCE}(\mu,\nu)\) is the set \[\Lambda^{\min}(\mu,\nu) :=\{(\alpha,\beta)\in\Lambda\times\Lambda\mid\mu\alpha=\nu\beta\in\operatorname{MCE}(\mu,\nu)\}\] \[=\{(\alpha,\beta)\in\Lambda\times\Lambda\mid\mu\alpha=\nu\beta,d(\mu\alpha)=d(\mu)\lor d(\nu)=d(\nu\beta)\}.\] Note that if \(\mu,\nu\in\Lambda\) satisfy \(d(\mu)=d(\nu)\), then \(\Lambda^{\min}(\mu,\nu)\) being non-empty implies that \(\mu=\nu\). Given a vertex \(v\in\Lambda^{\underline{0}}\), a subset \(E\subseteq v\Lambda\) is called _exhaustive_ if for every \(\lambda\in v\Lambda\) there exists \(\mu\in E\) such that \(\Lambda^{\min}(\lambda,\mu)\neq\emptyset\). A \(k\)-graph \((\Lambda,d)\) is said to be _finitely aligned_ if \(|\operatorname{MCE}(\mu,\nu)|<\infty\) for all \(\mu,\nu\in\Lambda\); equivalently, if \(|\Lambda^{\min}(\mu,\nu)|<\infty\) for all \(\mu,\nu\in\Lambda\). We say that \((\Lambda,d)\) is _strong finitely aligned_ if \((\Lambda,d)\) is finitely aligned and for every \(\lambda\in\Lambda\) and \(i\in[k]\) satisfying \(d(\lambda)\perp\underline{i}\), there are at most finitely many edges \(e\in\Lambda^{\underline{i}}\) such that \(\Lambda^{\min}(\lambda,e)\neq\emptyset\), see [16, Definition 7.2]. 
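For orientation (an illustrative aside, not used in what follows), in the rank-one case a \(1\)-graph is precisely the path category of a directed graph, with \(d(\lambda)\) the length of the path \(\lambda\). In that case, for \(\mu,\nu\in\Lambda\) we have
\[\operatorname{MCE}(\mu,\nu)=\begin{cases}\{\nu\}&\text{if }\nu\in\mu\Lambda,\\ \{\mu\}&\text{if }\mu\in\nu\Lambda\setminus\mu\Lambda,\\ \emptyset&\text{otherwise},\end{cases}\]
i.e., \(\operatorname{MCE}(\mu,\nu)\) is empty unless one of the two paths extends the other, in which case it is the singleton consisting of the longer path. In particular \(|\operatorname{MCE}(\mu,\nu)|\leq 1\), so every \(1\)-graph is finitely aligned, and finite alignment becomes a genuine restriction only for \(k\geq 2\).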
A \(k\)-graph \((\Lambda,d)\) is said to be _row-finite_ if \(|v\Lambda^{\underline{n}}|<\infty\) for all \(v\in\Lambda^{\underline{0}}\) and \(\underline{n}\in\mathbb{Z}_{+}^{k}\). Row-finite \(k\)-graphs are in particular strong finitely aligned. We say that \((\Lambda,d)\) is _locally convex_ if, for all \(v\in\Lambda^{\underline{0}},i,j\in[k]\) satisfying \(i\neq j,\lambda\in v\Lambda^{\underline{i}}\) and \(\mu\in v\Lambda^{\underline{j}}\), we have that \(s(\lambda)\Lambda^{\underline{j}}\) and \(s(\mu)\Lambda^{\underline{i}}\) are non-empty. Finally, \((\Lambda,d)\) is said to be _sourceless_ if \(v\Lambda^{\underline{n}}\neq\emptyset\) for all \(v\in\Lambda^{\underline{0}}\) and \(\underline{n}\in\mathbb{Z}_{+}^{k}\). Any sourceless \(k\)-graph is automatically locally convex. Let \((\Lambda,d)\) be a finitely aligned \(k\)-graph. A set of partial isometries \(\{T_{\lambda}\}_{\lambda\in\Lambda}\) in a C*-algebra is called a _Toeplitz-Cuntz-Krieger \(\Lambda\)-family_ if the following hold: * (TCK1) \(\{T_{v}\}_{v\in\Lambda^{\underline{0}}}\) is a collection of pairwise orthogonal projections; * (TCK2) \(T_{\lambda}T_{\mu}=\delta_{s(\lambda),r(\mu)}T_{\lambda\mu}\) for all \(\lambda,\mu\in\Lambda\); * (TCK3) \(T_{\lambda}^{*}T_{\mu}=\sum_{(\alpha,\beta)\in\Lambda^{\min}(\lambda,\mu)}T_{\alpha}T_{\beta}^{*}\) for all \(\lambda,\mu\in\Lambda\). A Toeplitz-Cuntz-Krieger \(\Lambda\)-family \(\{T_{\lambda}\}_{\lambda\in\Lambda}\) is called a _Cuntz-Krieger \(\Lambda\)-family_ if it satisfies: * (CK) \(\prod_{\lambda\in E}(T_{v}-T_{\lambda}T_{\lambda}^{*})=0\) for every \(v\in\Lambda^{\underline{0}}\) and all non-empty finite exhaustive sets \(E\subseteq v\Lambda\). Note that \(T_{\lambda}^{*}T_{\mu}=0\) whenever \(\lambda,\mu\in\Lambda\) satisfy \(\Lambda^{\min}(\lambda,\mu)=\emptyset\) by (TCK3). Likewise, (TCK3) implies that \(T_{\lambda}^{*}T_{\lambda}=T_{s(\lambda)}\) for all \(\lambda\in\Lambda\). The C*-algebra \(\mathrm{C}^{*}(\Lambda)\) is the universal one with respect to the Cuntz-Krieger \(\Lambda\)-families, and satisfies a Gauge-Invariant Uniqueness Theorem [46, Theorem 4.2]. Every \(k\)-graph \((\Lambda,d)\) is canonically associated with a product system \(X(\Lambda):=\{X_{\underline{n}}(\Lambda)\}_{\underline{n}\in\mathbb{Z}_{+}^{k}}\) with coefficients in the C*-algebra \(c_{0}(\Lambda^{\underline{0}})\), where we view \(\Lambda^{\underline{0}}\) as a discrete space. Firstly, set \(X_{\underline{0}}(\Lambda):=c_{0}(\Lambda^{\underline{0}})\), which we view as a C*-correspondence over itself in the usual way. For each \(v\in\Lambda^{\underline{0}}\), we write \(\delta_{v}\in c_{0}(\Lambda^{\underline{0}})\) for the projection on \(\{v\}\). For every \(\underline{0}\neq\underline{n}\in\mathbb{Z}_{+}^{k}\), we consider the linear space \(c_{00}(\Lambda^{\underline{n}})\) and write \(\delta_{\lambda}\) for its generators. A right pre-Hilbert \(c_{0}(\Lambda^{\underline{0}})\)-module structure on \(c_{00}(\Lambda^{\underline{n}})\) is given by \[\left\langle\xi_{\underline{n}},\eta_{\underline{n}}\right\rangle(v):=\sum_{s(\lambda)=v}\overline{\xi_{\underline{n}}(\lambda)}\eta_{\underline{n}}(\lambda)\quad\text{and}\quad(\xi_{\underline{n}}a)(\lambda):=\xi_{\underline{n}}(\lambda)a(s(\lambda)),\] for all \(\xi_{\underline{n}},\eta_{\underline{n}}\in c_{00}(\Lambda^{\underline{n}}),a\in c_{0}(\Lambda^{\underline{0}}),v\in\Lambda^{\underline{0}}\) and \(\lambda\in\Lambda^{\underline{n}}\). 
We write \(X_{\underline{n}}(\Lambda)\) for the right Hilbert C*-module completion of \(c_{00}(\Lambda^{\underline{n}})\). A left action \(\phi_{\underline{n}}\) of \(c_{0}(\Lambda^{\underline{0}})\) on \(X_{\underline{n}}(\Lambda)\) is induced by \[\phi_{\underline{n}}(a)\colon c_{00}(\Lambda^{\underline{n}})\to c_{00}(\Lambda^{\underline{n}});(\phi_{\underline{n}}(a)\xi_{\underline{n}})(\lambda)=a(r(\lambda))\xi_{\underline{n}}(\lambda)\text{ for all }a\in c_{0}(\Lambda^{\underline{0}}),\xi_{\underline{n}}\in c_{00}(\Lambda^{\underline{n}}),\lambda\in\Lambda^{\underline{n}},\] thereby imposing a C*-correspondence structure on \(X_{\underline{n}}(\Lambda)\). Fixing \(\underline{n},\underline{m}\in\mathbb{Z}_{+}^{k}\), we define a multiplication map \(u_{\underline{n},\underline{m}}\) of \(X(\Lambda)\) by \[u_{\underline{n},\underline{m}}\colon X_{\underline{n}}(\Lambda)\otimes_{c_{0}(\Lambda^{\underline{0}})}X_{\underline{m}}(\Lambda)\to X_{\underline{n}+\underline{m}}(\Lambda);\ \delta_{\lambda}\otimes\delta_{\mu}\mapsto\begin{cases}\delta_{\lambda\mu}&\text{if }s(\lambda)=r(\mu),\\ 0&\text{otherwise},\end{cases}\] for all \(\lambda\in\Lambda^{\underline{n}}\) and \(\mu\in\Lambda^{\underline{m}}\). An ideal \(I\subseteq c_{0}(\Lambda^{\underline{0}})\) is determined by the vertex set \(H_{I}:=\{v\in\Lambda^{\underline{0}}\mid\delta_{v}\in I\}\), and conversely a subset \(H\subseteq\Lambda^{\underline{0}}\) determines the ideal \(I_{H}:=\overline{\operatorname{span}}\{\delta_{v}\mid v\in H\}\); these associations are mutually inverse and preserve inclusions and intersections. Recall that a vertex \(v\in\Lambda^{\underline{0}}\) is an \(F\)-source if \(v\Lambda^{\underline{i}}=\emptyset\) for all \(i\in F\). In this notation we have that \[H_{\mathcal{J}_{F}}=\{v\in\Lambda^{\underline{0}}\mid|v\Lambda^{\underline{i}}|<\infty\text{ for all }i\in[k]\text{ and }v\text{ is not an }F\text{-source}\},\] and that \[H_{\mathcal{I}_{F}}=\{v\in\Lambda^{\underline{0}}\mid v\text{ is $F$-tracing}\}.\] In [16, Theorem 7.6], it is shown that a Toeplitz-Cuntz-Krieger \(\Lambda\)-family \(\{T_{\lambda}\}_{\lambda\in\Lambda}\) is a Cuntz-Krieger \(\Lambda\)-family if and only if it satisfies \((\text{CK}^{\prime})\ \prod\{T_{v}-T_{\lambda}T_{\lambda}^{*}\mid\lambda\in v\Lambda^{\underline{i}},i\in F\}=0\), for every \(F\)-tracing vertex \(v\) and every \(\emptyset\neq F\subseteq[k]\). Let \(\mathcal{L}\) be a \(2^{k}\)-tuple of \(X(\Lambda)\) that consists of ideals. For notational convenience, we set \(H_{\mathcal{L},F}:=H_{\mathcal{L}_{F}}\) for all \(F\subseteq[k]\) and \(H_{\mathcal{L}}:=\{H_{\mathcal{L},F}\}_{F\subseteq[k]}\). For an ideal \(I\subseteq c_{0}(\Lambda^{\underline{0}})\), we have that \[H_{X_{\underline{n}}(\Lambda)^{-1}(I)}=\{v\in\Lambda^{\underline{0}}\mid s(v\Lambda^{\underline{n}})\subseteq H_{I}\}\text{ for all }\underline{n}\in\mathbb{Z}_{+}^{k}. \tag{5.8}\] From this we obtain that \[H_{\mathcal{L}_{\text{inv},F}}=\bigcap_{\underline{m}\perp F}\{v\in\Lambda^{\underline{0}}\mid s(v\Lambda^{\underline{m}})\subseteq\cap_{F\subsetneq D}H_{\mathcal{L},D}\}\text{ for all }\emptyset\neq F\subsetneq[k]. \tag{5.9}\] To address each \(\mathcal{L}_{\text{lim},F}\), we have the following proposition.

**Proposition 5.4.4**.: _Let \((\Lambda,d)\) be a strong finitely aligned \(k\)-graph. 
Let \(\mathcal{L}\) be a \(2^{k}\)-tuple of \(X(\Lambda)\) that consists of ideals and let \(H_{\mathcal{L}}\) be the corresponding family of sets of vertices of \(\Lambda\). Then, fixing \(\emptyset\neq F\subsetneq[k]\), a vertex \(v\in\Lambda^{\underline{0}}\) belongs to \(H_{\mathcal{L}_{\text{lim},F}}\) if and only if there exists \(\underline{m}\perp F\) such that whenever \(\underline{n}\perp F\) and \(\underline{n}\geq\underline{m}\), we have that \(s(v\Lambda^{\underline{n}})\subseteq H_{\mathcal{L},F}\) and \(|v\Lambda^{\underline{n}}|<\infty\)._ **Proof.** For the forward implication, take \(v\in H_{\mathcal{L}_{\text{lim},F}}\). Then we have that \[\lim_{\underline{m}\perp F}\|\phi_{\underline{m}}(\delta_{v})+\mathcal{K}(X_{ \underline{m}}(\Lambda)\mathcal{L}_{F})\|=0.\] By definition there exists \(\underline{m}\perp F\) such that whenever \(\underline{n}\perp F\) and \(\underline{n}\geq\underline{m}\), we have that \[\|\phi_{\underline{n}}(\delta_{v})+\mathcal{K}(X_{\underline{n}}(\Lambda) \mathcal{L}_{F})\|<1/2.\] Fixing \(\underline{n}\perp F\) such that \(\underline{n}\geq\underline{m}\), the fact that \(\phi_{\underline{n}}(\delta_{v})+\mathcal{K}(X_{\underline{n}}(\Lambda) \mathcal{L}_{F})\) is a projection then implies that \(\phi_{\underline{n}}(\delta_{v})\in\mathcal{K}(X_{\underline{n}}(\Lambda) \mathcal{L}_{F})\). By (2.1), we have that \(\phi_{\underline{n}}(\delta_{v})\in\mathcal{K}(X_{\underline{n}}(\Lambda))\) and that \(\left\langle X_{\underline{n}}(\Lambda),\phi_{\underline{n}}(\delta_{v})X_{ \underline{n}}(\Lambda)\right\rangle\subseteq\mathcal{L}_{F}\). The former implies that \(|v\Lambda^{\underline{n}}|<\infty\) by item (ii) of Proposition 5.4.1, and the latter implies that \(s(v\Lambda^{\underline{n}})\subseteq H_{\mathcal{L},F}\) by (5.8). Hence the forward implication holds. Now assume that there exists \(\underline{m}\perp F\) such that whenever \(\underline{n}\perp F\) and \(\underline{n}\geq\underline{m}\), we have that \(s(v\Lambda^{\underline{n}})\subseteq H_{\mathcal{L},F}\) and \(|v\Lambda^{\underline{n}}|<\infty\). Fix \(\varepsilon>0\) and \(\underline{n}\perp F\) satisfying \(\underline{n}\geq\underline{m}\). By assumption we have that \(|v\Lambda^{\underline{n}}|<\infty\), and so an application of item (ii) of Proposition 5.4.1 gives that \(\phi_{\underline{n}}(\delta_{v})\in\mathcal{K}(X_{\underline{n}}(\Lambda))\). Likewise, the assumption that \(s(v\Lambda^{\underline{n}})\subseteq H_{\mathcal{L},F}\) and (5.8) imply that \(\left\langle X_{\underline{n}}(\Lambda),\phi_{\underline{n}}(\delta_{v})X_{ \underline{n}}(\Lambda)\right\rangle\subseteq\mathcal{L}_{F}\). An application of (2.1) then gives that \(\phi_{\underline{n}}(\delta_{v})\in\mathcal{K}(X_{\underline{n}}(\Lambda) \mathcal{L}_{F})\), and hence \[\|\phi_{\underline{n}}(\delta_{v})+\mathcal{K}(X_{\underline{n}}(\Lambda) \mathcal{L}_{F})\|=0<\varepsilon.\] It follows that \(\lim_{\underline{m}\perp F}\|\phi_{\underline{m}}(\delta_{v})+\mathcal{K}(X_{ \underline{m}}(\Lambda)\mathcal{L}_{F})\|=0\) and thus \(v\in H_{\mathcal{L}_{\text{lim},F}}\), finishing the proof. Next we present a construction of [45] for an arbitrary \(k\)-graph \((\Lambda,d)\). A subset \(H\) of \(\Lambda^{\underline{0}}\) is called _hereditary (in \(\Lambda\))_ if whenever \(v\in H\) and \(v\Lambda w\neq\emptyset\) (where \(w\in\Lambda^{\underline{0}}\)), we have that \(w\in H\). Due to duality, hereditarity is captured by positive invariance. 
**Proposition 5.4.5**.: _Let \((\Lambda,d)\) be a \(k\)-graph and let \(H\) be a subset of \(\Lambda^{\underline{0}}\). Then \(H\) is hereditary if and only if \(I_{H}\) is positively invariant for \(X(\Lambda)\)._ For a hereditary subset \(H\) of \(\Lambda^{\underline{0}}\), we form a new countable small category \(\Gamma(\Lambda\setminus H)\) by \[\Gamma(\Lambda\setminus H):=(\Lambda^{\underline{0}}\setminus H,\{\lambda\in \Lambda\mid s(\lambda)\not\in H\},r,s),\] i.e., the objects of \(\Gamma(\Lambda\setminus H)\) are the vertices of \(\Lambda\) that do not belong to \(H\), and the morphisms are those paths in \(\Lambda\) whose source does not belong to \(H\). Notice that the range and source maps are inherited from \(\Lambda\). As \(\Gamma(\Lambda\setminus H)\) is a subcategory of \(\Lambda\) and \(H\) is hereditary, \(\Gamma(\Lambda\setminus H)\) inherits a \(k\)-graph structure from \(\Lambda\) (e.g., see the proof of [45, Theorem 5.2]). Hence we can form the product systems \(X(\Gamma(\Lambda\setminus H))\) and \([X(\Lambda)]_{I_{H}}\), using Propositions 2.3.6 and 5.4.5 for the latter. To avoid confusion, we will denote the corresponding generators by \(\delta_{\lambda,\Lambda}\), for \(\lambda\in\Lambda\), and by \(\delta_{\mu,\Gamma}\), for \(\mu\in\Gamma(\Lambda\setminus H)\). **Proposition 5.4.6**.: _Let \((\Lambda,d)\) be a \(k\)-graph and let \(H\subseteq\Lambda^{\underline{0}}\) be hereditary. Then \(X(\Gamma(\Lambda\setminus H))\) and \([X(\Lambda)]_{I_{H}}\) are unitarily equivalent by the family of maps \(\{W_{\underline{n}}\}_{\underline{n}\in\mathbb{Z}_{+}^{k}}\) defined by_ \[W_{\underline{n}}\colon X_{\underline{n}}(\Gamma(\Lambda\setminus H))\to[X_{ \underline{n}}(\Lambda)]_{I_{H}};W_{\underline{n}}(\delta_{\lambda,\Gamma})=[ \delta_{\lambda,\Lambda}]_{I_{H}}\text{ for all }\lambda\in\Gamma(\Lambda \setminus H)^{\underline{n}},\underline{n}\in\mathbb{Z}_{+}^{k}.\] **Proof.** Since \(\Gamma(\Lambda\setminus H)\) is a sub-\(k\)-graph of \(\Lambda\), each map \(W_{\underline{n}}\) is realised as the composition of canonical maps so that \[W_{\underline{n}}\colon X_{\underline{n}}(\Gamma(\Lambda\setminus H)) \hookrightarrow X_{\underline{n}}(\Lambda)\to[X_{\underline{n}}(\Lambda)]_{I_ {H}},\] where the first map is isometric and the second is the quotient map. This map is an isometry because of the duality \(H\mapsto I_{H}\). Moreover, \(\delta_{\lambda,\Lambda}\in X_{\underline{n}}(\Lambda)I_{H}\) if and only if \(s(\lambda)\in H\), giving that \(W_{\underline{n}}\) is invertible. It is straightforward to check that the family \(\{W_{\underline{n}}\}_{\underline{n}\in\mathbb{Z}_{+}^{k}}\) satisfies the conditions of Definition 2.3.1, thus providing the unitary equivalence. **Corollary 5.4.7**.: _Let \((\Lambda,d)\) be a \(k\)-graph and let \(H\subseteq\Lambda^{\underline{0}}\) be hereditary. If \((\Lambda,d)\) is (strong) finitely aligned, then so is \(\Gamma(\Lambda\setminus H)\)._ **Proof.** Since \(X(\Gamma(\Lambda\setminus H))\cong[X(\Lambda)]_{I_{H}}\) by Proposition 5.4.6, the result follows by combining Proposition 2.4.4 (resp. Proposition 2.5.5) with Proposition 2.4.2 (resp. Proposition 2.5.4). We now turn our attention to relative NO-\(2^{k}\)-tuples in the case of a strong finitely aligned \(k\)-graph \((\Lambda,d)\). Our first aim is to describe the NT-\(2^{k}\)-tuples of \(X(\Lambda)\) from a graph theoretic perspective, by translating items (i)-(iv) of Definition 4.1.4 into properties on vertices. 
We obtain the following proposition towards the characterisation of item (i) of Definition 4.1.4. **Proposition 5.4.8**.: _Let \((\Lambda,d)\) be a strong finitely aligned \(k\)-graph and let \(H\subseteq\Lambda^{\underline{0}}\) be hereditary in \(\Lambda\). Then, fixing \(\emptyset\neq F\subseteq[k]\), the vertex set associated with \(J_{F}(I_{H},X(\Lambda))\) is given by_ \[H_{J_{F}(I_{H},X(\Lambda))}=H\cup\{v\notin H\mid|v\Gamma(\Lambda\setminus H)^{ \underline{i}}|<\infty\;\forall i\in[k]\text{ and }v\text{ is not an }F\text{-source in }\Gamma(\Lambda\setminus H)\}.\] **Proof.** Fix \(v\in\Lambda^{\underline{0}}\) and recall that \(v\in H_{J_{F}(I_{H},X(\Lambda))}\) if and only if \(\delta_{v,\Lambda}\in J_{F}(I_{H},X(\Lambda))\). Combining Propositions 4.1.3 and 5.4.5, we have that \[J_{F}(I_{H},X(\Lambda))=[\cdot]_{I_{H}}^{-1}(\mathcal{J}_{F}([X(\Lambda)]_{I_ {H}})).\] Thus \(\delta_{v,\Lambda}\in J_{F}(I_{H},X(\Lambda))\) if and only if \([\delta_{v,\Lambda}]_{I_{H}}\in\mathcal{J}_{F}([X(\Lambda)]_{I_{H}})\). Consider the unitary equivalence \(\{W_{\underline{n}}\colon X_{\underline{n}}(\Gamma(\Lambda\setminus H))\to[X _{\underline{n}}(\Lambda)]_{I_{H}}\}_{\underline{n}\in\mathbb{Z}_{+}^{k}}\) of Proposition 5.4.6. Then we have that \[[\delta_{v,\Lambda}]_{I_{H}}\in\mathcal{J}_{F}([X(\Lambda)]_{I_{H}})\] if and only if \[W_{\underline{0}}^{-1}([\delta_{v,\Lambda}]_{I_{H}})\in W_{\underline{0}}^{-1} (\mathcal{J}_{F}([X(\Lambda)]_{I_{H}}))=\mathcal{J}_{F}(X(\Gamma(\Lambda \setminus H))),\] using Proposition 2.5.6 in the final equality. Notice that if \(v\not\in H\), then \(W_{\underline{0}}^{-1}([\delta_{v,\Lambda}]_{I_{H}})=\delta_{v,\Gamma}\) by definition. In this case, we have that \(\delta_{v,\Gamma}\in\mathcal{J}_{F}(X(\Gamma(\Lambda\setminus H)))\) if and only if \(|v\Gamma(\Lambda\setminus H)^{\underline{i}}|<\infty\) for all \(i\in[k]\) and \(v\) is not an \(F\)-source in \(\Gamma(\Lambda\setminus H)\), by applying the comments succeeding Proposition 5.4.3 to \(\Gamma(\Lambda\setminus H)\). Thus \(v\in H_{J_{F}(I_{H},X(\Lambda))}\) if and only if either \(v\in H\) or \(v\not\in H\) but \(|v\Gamma(\Lambda\setminus H)^{\underline{i}}|<\infty\) for all \(i\in[k]\) and \(v\) is not an \(F\)-source in \(\Gamma(\Lambda\setminus H)\), proving the result. Next we translate items (ii) and (iii) of Definition 4.1.4. **Definition 5.4.9**.: Let \((\Lambda,d)\) be a \(k\)-graph. 1. Given \(F\subseteq[k]\), we say that \(H\subseteq\Lambda^{\underline{0}}\) is \(F^{\perp}\)_-hereditary (in \(\Lambda\))_ if \(s(H\Lambda^{\underline{n}})\subseteq H\) for all \(\underline{n}\perp F\). 2. We say that a family \(H:=\{H_{F}\}_{F\subseteq[k]}\) of subsets of \(\Lambda^{\underline{0}}\) is _hereditary (in \(\Lambda\))_ if \(H_{F}\) is \(F^{\perp}\)-hereditary (in \(\Lambda\)) for all \(F\subseteq[k]\). Notice that \(H\subseteq\Lambda^{\underline{0}}\) is \(\emptyset^{\perp}\)-hereditary if and only if \(H\) is hereditary in the usual sense. For a strong finitely aligned \(k\)-graph \(\Lambda\), a \(2^{k}\)-tuple \(\mathcal{L}\) of \(X(\Lambda)\) that consists of ideals is \(X(\Lambda)\)-invariant if and only if the associated family \(H_{\mathcal{L}}\) of sets of vertices is hereditary. **Definition 5.4.10**.: Let \((\Lambda,d)\) be a \(k\)-graph and let \(H:=\{H_{F}\}_{F\subseteq[k]}\) be a family of subsets of \(\Lambda^{\underline{0}}\). We say that \(H\) is _partially ordered_ if \(H_{F_{1}}\subseteq H_{F_{2}}\) whenever \(F_{1}\subseteq F_{2}\subseteq[k]\). 
If \(\Lambda\) is a strong finitely aligned \(k\)-graph, then a \(2^{k}\)-tuple \(\mathcal{L}\) of \(X(\Lambda)\) that consists of ideals is partially ordered if and only if the associated family \(H_{\mathcal{L}}\) of sets of vertices is partially ordered. Finally, to translate item (iv) of Definition 4.1.4, we will need the following definition. **Definition 5.4.11**.: Let \((\Lambda,d)\) be a strong finitely aligned \(k\)-graph. Let \(H:=\{H_{F}\}_{F\subseteq[k]}\) be a family of subsets of \(\Lambda^{\underline{0}}\). We say that \(H\) is _absorbent (in \(\Lambda\))_ if the following holds for every \(\emptyset\neq F\subsetneq[k]\): a vertex \(v\in\Lambda^{\underline{0}}\) belongs to \(H_{F}\) whenever it satisfies 1. \(v\) is \(F\)-tracing, 2. \(s(v\Lambda^{\underline{m}})\subseteq\cap_{F\subsetneq D}H_{D}\) for all \(\underline{m}\perp F\), and 3. there exists \(\underline{m}\perp F\) such that whenever \(\underline{n}\perp F\) and \(\underline{n}\geq\underline{m}\), we have that \(s(v\Lambda^{\underline{n}})\subseteq H_{F}\) and \(|v\Lambda^{\underline{n}}|<\infty\). **Proposition 5.4.12**.: _Let \((\Lambda,d)\) be a strong finitely aligned \(k\)-graph. Let \(\mathcal{L}\) be a \(2^{k}\)-tuple of \(X(\Lambda)\) that consists of ideals and let \(H_{\mathcal{L}}\) be the corresponding family of sets of vertices of \(\Lambda\). Then \(\mathcal{L}\) is an NT-\(2^{k}\)-tuple of \(X(\Lambda)\) if and only if the following four conditions hold:_ 1. _for each_ \(\emptyset\neq F\subseteq[k]\)_, we have that_ \[H_{\mathcal{L},F}\subseteq H_{\mathcal{L},\emptyset}\cup\{v\notin H_{\mathcal{ L},\emptyset}\mid|v\Gamma(\Lambda\setminus H_{\mathcal{L},\emptyset})^{ \underline{i}}|<\infty\ \forall i\in[k]\ \text{and}\ v\ \text{is not an}\ F\text{-source in}\ \Gamma(\Lambda\setminus H_{\mathcal{L}, \emptyset})\},\] 2. \(H_{\mathcal{L}}\) _is hereditary in_ \(\Lambda\)_,_ 3. \(H_{\mathcal{L}}\) _is partially ordered,_ 4. \(H_{\mathcal{L}}\setminus H_{\mathcal{L},\emptyset}:=\{H_{\mathcal{L},F} \setminus H_{\mathcal{L},\emptyset}\}_{F\subseteq[k]}\) _is absorbent in_ \(\Gamma(\Lambda\setminus H_{\mathcal{L},\emptyset})\)_._ **Proof.** We have already commented on the equivalence of invariance of \(\mathcal{L}\) with item (ii), and of the partial ordering of \(\mathcal{L}\) with item (iii). By Proposition 5.4.5, we have that \(\mathcal{L}_{\emptyset}\) is positively invariant for \(X(\Lambda)\) if and only if \(H_{\mathcal{L},\emptyset}\) is hereditary. In turn, we obtain the equivalence of item (i) with item (i) of Definition 4.1.4 by Proposition 5.4.8. To complete the proof, let \(\mathcal{L}\) be a \(2^{k}\)-tuple of \(X(\Lambda)\) that satisfies items (i)-(iii) of Definition 4.1.4. We will show that item (iv) of Definition 4.1.4 is equivalent to item (iv) of the statement. Since \(H_{\mathcal{L},\emptyset}\) is hereditary, we can form the product system \([X(\Lambda)]_{\mathcal{L}_{\emptyset}}\) and the \(k\)-graph \(\Gamma:=\Gamma(\Lambda\setminus H_{\mathcal{L},\emptyset})\). We identify \(X(\Gamma)\) with \([X(\Lambda)]_{\mathcal{L}_{\emptyset}}\) as in Proposition 5.4.6, and for \(F\subseteq[k]\) we write \[\mathcal{L}_{F}^{\Gamma}:=\overline{\operatorname{span}}\{\delta_{v,\Gamma} \mid v\in H_{\mathcal{L},F}\setminus H_{\mathcal{L},\emptyset}\}\subseteq c_{ 0}(\Gamma^{\underline{0}}),\] which corresponds to \([\mathcal{L}_{F}]_{\mathcal{L}_{\emptyset}}\) under the identification \(X(\Gamma)\cong[X(\Lambda)]_{\mathcal{L}_{\emptyset}}\). 
Now notice that \[[\,\cdot\,]_{\mathcal{L}_{\emptyset}}^{-1}([\mathcal{L}_{F}]_{\mathcal{L}_{\emptyset}}^{(1)})\subseteq\mathcal{L}_{F}\ \text{for all}\ F\subseteq[k]\] (which holds automatically for \(F=\emptyset\) and \(F=[k]\)) if and only if \[(\mathcal{L}_{F}^{\Gamma})^{(1)}\subseteq\mathcal{L}_{F}^{\Gamma}\ \text{for all}\ \emptyset\neq F\subsetneq[k],\] which is in turn equivalent to having that \[\mathcal{I}_{F}(X(\Gamma))\cap\mathcal{L}_{\operatorname{inv},F}^{\Gamma}\cap\mathcal{L}_{\operatorname{lim},F}^{\Gamma}\subseteq\mathcal{L}_{F}^{\Gamma}\ \text{for all}\ \emptyset\neq F\subsetneq[k], \tag{5.10}\] by definition. We will show that (5.10) holds exactly when \(H_{\mathcal{L}}\setminus H_{\mathcal{L},\emptyset}\) is absorbent in \(\Gamma\), thereby completing the proof. To this end, fix \(\emptyset\neq F\subsetneq[k]\) and \(v\in\Gamma^{\underline{0}}\). We have that \(\delta_{v,\Gamma}\in\mathcal{I}_{F}(X(\Gamma))\) if and only if \(v\) is \(F\)-tracing in \(\Gamma\). Likewise, by (5.9) we have that \(\delta_{v,\Gamma}\in\mathcal{L}_{\operatorname{inv},F}^{\Gamma}\) if and only if \[s(v\Gamma^{\underline{m}})\subseteq\cap_{F\subsetneq D}(H_{\mathcal{L},D}\setminus H_{\mathcal{L},\emptyset})\ \text{for all}\ \underline{m}\perp F.\] Finally, we have that \(\delta_{v,\Gamma}\in\mathcal{L}_{\operatorname{lim},F}^{\Gamma}\) if and only if there exists \(\underline{m}\perp F\) such that whenever \(\underline{n}\perp F\) and \(\underline{n}\geq\underline{m}\), we have that \(s(v\Gamma^{\underline{n}})\subseteq H_{\mathcal{L},F}\setminus H_{\mathcal{L},\emptyset}\) and \(|v\Gamma^{\underline{n}}|<\infty\) by Proposition 5.4.4. It follows that (5.10) holds if and only if \(H_{\mathcal{L}}\setminus H_{\mathcal{L},\emptyset}\) is absorbent in \(\Gamma\), as required.

In the row-finite case, the characterisation of Proposition 5.4.12 simplifies as follows.

**Proposition 5.4.13**.: _Let \((\Lambda,d)\) be a row-finite \(k\)-graph. Let \(\mathcal{L}\) be a \(2^{k}\)-tuple of \(X(\Lambda)\) that consists of ideals and let \(H_{\mathcal{L}}\) be the corresponding family of sets of vertices of \(\Lambda\). Then \(\mathcal{L}\) is an NT-\(2^{k}\)-tuple of \(X(\Lambda)\) if and only if the following four conditions hold:_ * (i) _for each_ \(\emptyset\neq F\subseteq[k]\)_, we have that_ \[H_{\mathcal{L},F}\subseteq H_{\mathcal{L},\emptyset}\cup\{v\notin H_{\mathcal{L},\emptyset}\mid v\text{ is not an $F$-source in }\Gamma:=\Gamma(\Lambda\setminus H_{\mathcal{L},\emptyset})\},\] * (ii) \(H_{\mathcal{L}}\) _is hereditary in_ \(\Lambda\)_,_ * (iii) \(H_{\mathcal{L}}\) _is partially ordered,_ * (iv) \(H_{1,F}\cap H_{2,F}\cap H_{3,F}\subseteq H_{\mathcal{L},F}\) _for all_ \(\emptyset\neq F\subsetneq[k]\)_, where_ * \(H_{1,F}:=\bigcap_{\underline{n}\perp F}\{v\in\Lambda^{\underline{0}}\mid s(v\Lambda^{\underline{n}})\subseteq H_{\mathcal{L},\emptyset}\cup\{w\notin H_{\mathcal{L},\emptyset}\mid w\text{ is not an $F$-source in }\Gamma\}\}\)_,_ * \(H_{2,F}:=\bigcap_{\underline{m}\perp F}\{v\in\Lambda^{\underline{0}}\mid s(v\Lambda^{\underline{m}})\subseteq\cap_{F\subsetneq D}H_{\mathcal{L},D}\}\)_,_ * \(H_{3,F}\) _is the set of all_ \(v\in\Lambda^{\underline{0}}\) _for which there exists_ \(\underline{m}\perp F\) _such that whenever_ \(\underline{n}\perp F\) _and_ \(\underline{n}\geq\underline{m}\)_, we have that_ \(s(v\Lambda^{\underline{n}})\subseteq H_{\mathcal{L},F}\)_._

**Proof.** Firstly, note that \(\Lambda\) is in particular strong finitely aligned, so we are free to use Proposition 5.4.12. 
Next, assuming that \(H_{\mathcal{L},\emptyset}\) is hereditary, we have that \(\Gamma\) inherits row-finiteness from \(\Lambda\) and therefore \(|v\Gamma^{\underline{i}}|<\infty\) for all \(v\in\Gamma^{\underline{0}}\) and \(i\in[k]\) automatically. Consequently, items (i)-(iii) of the statement coincide with items (i)-(iii) of Proposition 5.4.12, which are in turn equivalent to items (i)-(iii) of Definition 4.1.4. Thus, without loss of generality, we may assume that \(\mathcal{L}\) satisfies items (i)-(iii) of Definition 4.1.4. Since \(\phi_{\underline{n}}(c_{0}(\Lambda^{\underline{0}}))\subseteq\mathcal{K}(X_{ \underline{n}}(\Lambda))\) for all \(\underline{n}\in\mathbb{Z}_{+}^{k}\) by item (i) of Proposition 5.4.2, by Proposition 4.1.5 it suffices to show that item (iv) of the statement is equivalent to the following condition: \[\bigg{(}\bigcap_{\underline{n}\perp F}X_{\underline{n}}(\Lambda)^{-1}(J_{F}( \mathcal{L}_{\emptyset},X(\Lambda)))\bigg{)}\cap\mathcal{L}_{\text{inv},F} \cap\mathcal{L}_{\text{lim},F}\subseteq\mathcal{L}_{F}\text{ for all }\emptyset\neq F \subsetneq[k].\] To this end, fix \(\emptyset\neq F\subsetneq[k]\). The vertex set associated with \(\bigcap_{\underline{n}\perp F}X_{\underline{n}}(\Lambda)^{-1}(J_{F}(\mathcal{ L}_{\emptyset},X(\Lambda)))\) is nothing but \(H_{1,F}\), which can be seen by combining (5.8) and Proposition 5.4.8. Likewise, we have that \(H_{\mathcal{L}_{\text{inv},F}}=H_{2,F}\) by (5.9). Finally, we have that \(H_{\mathcal{L}_{\text{lim},F}}=H_{3,F}\) by Proposition 5.4.4, noting that the stipulation that \(|v\Lambda^{\underline{n}}|<\infty\) can be dropped by row-finiteness of \(\Lambda\). The result now follows from the fact that the duality between ideals of \(c_{0}(\Lambda^{\underline{0}})\) and subsets of \(\Lambda^{\underline{0}}\) preserves inclusions and intersections. The characterisation of relative NO-\(2^{k}\)-tuples in the case of strong finite alignment (resp. row-finiteness) follows directly from Proposition 5.4.12 (resp. Proposition 5.4.13), as inclusion of ideals corresponds to inclusion of their associated vertex sets. **Corollary 5.4.14**.: _Let \((\Lambda,d)\) be a strong finitely aligned (resp. row-finite) \(k\)-graph. Let \(\mathcal{K}\) be a relative \(2^{k}\)-tuple of \(X(\Lambda)\) that consists of ideals and let \(H_{\mathcal{K}}\) be the corresponding family of sets of vertices of \(\Lambda\). Let \(\mathcal{L}\) be a \(2^{k}\)-tuple of \(X(\Lambda)\) that consists of ideals and let \(H_{\mathcal{L}}\) be the corresponding family of sets of vertices of \(\Lambda\). Then the following are equivalent:_ * \(\mathcal{L}\) _is a_ \(\mathcal{K}\)_-relative NO-_\(2^{k}\)_-tuple of_ \(X(\Lambda)\)_;_ * \(H_{\mathcal{L}}\) _satisfies_ (i)-(iv) _of Proposition_ 5.4.12 _(resp. Proposition_ 5.4.13_) and_ \(H_{\mathcal{K},F}\subseteq H_{\mathcal{L},F}\) _for all_ \(F\subseteq[k]\)_._ _In particular, the following are equivalent:_ * \(\mathcal{L}\) _is an NO-_\(2^{k}\)_-tuple of_ \(X(\Lambda)\)_;_ * \(H_{\mathcal{L}}\) _satisfies_ (i)-(iv) _of Proposition_ 5.4.12 _(resp. Proposition_ 5.4.13_) and every_ \(F\)_-tracing vertex of_ \(\Lambda\) _belongs to_ \(H_{\mathcal{L},F}\) _for all_ \(\emptyset\neq F\subseteq[k]\)_._ Finally, we turn our attention to the case of a locally convex row-finite \(k\)-graph \((\Lambda,d)\). 
In accordance with [45], a subset \(H\) of \(\Lambda^{\underline{0}}\) is called _saturated_ if whenever a vertex \(v\in\Lambda^{\underline{0}}\) satisfies \(s(v\Lambda^{\leq\underline{i}})\subseteq H\) for some \(i\in[k]\), we have that \(v\in H\). Note that \(\emptyset\) is vacuously saturated. When \(H\subseteq\Lambda^{\underline{0}}\) is both hereditary and saturated, the row-finite \(k\)-graph \(\Gamma(\Lambda\setminus H)\) is also locally convex [45, Theorem 5.2]. The _saturation_ \(\overline{H}^{s}\) of \(H\subseteq\Lambda^{\underline{0}}\) is the smallest saturated subset of \(\Lambda^{\underline{0}}\) that contains \(H\). The saturation of a hereditary set is also hereditary [45, Lemma 5.1].

**Proposition 5.4.15** ([3, 45]).: _Let \((\Lambda,d)\) be a locally convex row-finite \(k\)-graph. Then the operations_ \[H_{1}\lor H_{2}:=\overline{H_{1}\cup H_{2}}^{s}\quad\text{and}\quad H_{1}\wedge H_{2}:=H_{1}\cap H_{2}\] _for hereditary saturated subsets \(H_{1},H_{2}\subseteq\Lambda^{\underline{0}}\) define a lattice structure on the set of hereditary saturated subsets of \(\Lambda^{\underline{0}}\)._

Local convexity implies that \(\mathcal{J}_{F}(X(\Lambda))\) and \(\mathcal{I}_{F}(X(\Lambda))\) coincide for all \(F\subseteq[k]\).

**Proposition 5.4.16**.: _Let \((\Lambda,d)\) be a locally convex row-finite \(k\)-graph. Then \(\mathcal{J}_{F}(X(\Lambda))=\mathcal{I}_{F}(X(\Lambda))\) for all \(F\subseteq[k]\)._

**Proof.** The claim holds trivially when \(F=\emptyset\), so fix \(\emptyset\neq F\subseteq[k]\). It suffices to show that \(\mathcal{J}_{F}(X(\Lambda))\subseteq\mathcal{I}_{F}(X(\Lambda))\). To this end, since \(\Lambda\) is row-finite this amounts to showing that if \(v\in\Lambda^{\underline{0}}\) is not an \(F\)-source, then it is \(F\)-tracing. Recalling the definition of \(F\)-tracing vertices, we proceed by induction on the length of the degree of the paths \(\lambda\in v\Lambda\) with \(d(\lambda)\perp F\). For \(|d(\lambda)|=0\), there is nothing to show (this accounts for \(F=[k]\)). For \(|d(\lambda)|=1\), we have that \(d(\lambda)=\underline{i}\) for some \(i\in F^{c}\). Since \(v\) is not an \(F\)-source, we can find \(\mu\in v\Lambda^{\underline{j}}\) for some \(j\in F\). Since \(i\neq j\), \(\lambda\in v\Lambda^{\underline{i}}\) and \(\mu\in v\Lambda^{\underline{j}}\), local convexity of \((\Lambda,d)\) gives in particular that \(s(\lambda)\Lambda^{\underline{j}}\neq\emptyset\), and thus \(s(\lambda)\) is not an \(F\)-source, as required. Now assume that \(s(\lambda)\) is not an \(F\)-source for all \(\lambda\in v\Lambda\) satisfying \(d(\lambda)\perp F\) and \(|d(\lambda)|=N\) for some \(N\in\mathbb{N}\). Fix \(\lambda\in v\Lambda\) such that \(d(\lambda)\perp F\) and \(|d(\lambda)|=N+1\). Then \(d(\lambda)=\underline{n}+\underline{i}\) for some \(\underline{n}\perp F\) satisfying \(|\underline{n}|=N\) and some \(i\in F^{c}\). The factorisation property gives unique paths \(\mu,\nu\in\Lambda\) such that \(d(\mu)=\underline{n},d(\nu)=\underline{i}\) and \(\lambda=\mu\nu\). Note that \(v=r(\lambda)=r(\mu\nu)=r(\mu)\), so the inductive hypothesis implies that \(s(\mu)\Lambda^{\underline{j}}\neq\emptyset\) for some \(j\in F\). We also have that \(\nu\in s(\mu)\Lambda^{\underline{i}}\) and \(i\neq j\), so local convexity of \((\Lambda,d)\) gives that \(s(\nu)\Lambda^{\underline{j}}\neq\emptyset\). In other words, \(s(\nu)=s(\lambda)\) is not an \(F\)-source, as required. By induction, the proof is complete.
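For illustration, the hypothesis of local convexity cannot be removed from Proposition 5.4.16. Consider, as a toy example, the row-finite \(2\)-graph \(\Lambda\) with \(\Lambda^{\underline{0}}=\{u,v,w\}\), a single edge \(f\in u\Lambda^{\underline{1}}\) with \(s(f)=v\), a single edge \(g\in u\Lambda^{\underline{2}}\) with \(s(g)=w\), and no other paths of non-zero degree (the factorisation property holds vacuously, as \(\Lambda^{(1,1)}=\emptyset\)). Then \(\Lambda\) is not locally convex, since \(s(f)\Lambda^{\underline{2}}=\emptyset\) while \(u\Lambda^{\underline{2}}\neq\emptyset\). Taking \(F=\{2\}\), the vertex \(u\) is not an \(F\)-source, but it fails to be \(F\)-tracing, because \(d(f)\perp F\) and \(s(f)=v\) is an \(F\)-source. One then checks that \(\delta_{u,\Lambda}\in\mathcal{J}_{F}(X(\Lambda))\setminus\mathcal{I}_{F}(X(\Lambda))\), so the conclusion of Proposition 5.4.16 fails for this \(\Lambda\).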
**Proposition 5.4.17**.: _Let \((\Lambda,d)\) be a locally convex row-finite \(k\)-graph and let \(H\subseteq\Lambda^{\underline{0}}\) be hereditary and saturated. Then_ \[\mathcal{I}_{F}([X(\Lambda)]_{I_{H}})=(\mathcal{I}_{F}(X(\Lambda))+I_{H})/I_{H}\text{ for all }F\subseteq[k],\] _and consequently_ \[J_{F}(I_{H},X(\Lambda))=\mathcal{I}_{F}(X(\Lambda))+I_{H}\text{ for all }\emptyset\neq F\subseteq[k].\]

**Proof.** The first claim holds trivially when \(F=\emptyset\), so fix \(\emptyset\neq F\subseteq[k]\). Recall that \(\Gamma:=\Gamma(\Lambda\setminus H)\) is a locally convex row-finite \(k\)-graph, and that \(X(\Gamma)\) and \([X(\Lambda)]_{I_{H}}\) are unitarily equivalent via the family \(\{W_{\underline{n}}\colon X_{\underline{n}}(\Gamma(\Lambda\setminus H))\to[X_{\underline{n}}(\Lambda)]_{I_{H}}\}_{\underline{n}\in\mathbb{Z}_{+}^{k}}\) of Proposition 5.4.6. We have that \[W_{\underline{0}}(\mathcal{I}_{F}(X(\Gamma)))=\mathcal{I}_{F}([X(\Lambda)]_{I_{H}})\] by Proposition 2.5.6. Moreover, since \(\Lambda\) and \(\Gamma\) are locally convex and row-finite, Proposition 5.4.16 gives that \[\mathcal{I}_{F}(X(\Lambda))=\mathcal{J}_{F}(X(\Lambda))\quad\text{and}\quad\mathcal{I}_{F}(X(\Gamma))=\mathcal{J}_{F}(X(\Gamma)).\] Hence it suffices to show that \[(\mathcal{J}_{F}(X(\Lambda))+I_{H})/I_{H}=W_{\underline{0}}(\mathcal{J}_{F}(X(\Gamma))). \tag{5.11}\] Note that \[(\mathcal{J}_{F}(X(\Lambda))+I_{H})/I_{H}=\overline{\operatorname{span}}\{[\delta_{v,\Lambda}]_{I_{H}}\mid v\not\in H,v\text{ is not an }F\text{-source in }\Lambda\}. \tag{5.12}\] Thus, to prove the forward inclusion of (5.11), it suffices to show that \([\delta_{v,\Lambda}]_{I_{H}}\in W_{\underline{0}}(\mathcal{J}_{F}(X(\Gamma)))\) whenever \(v\not\in H\) and \(v\) is not an \(F\)-source in \(\Lambda\). Fix such a \(v\in\Lambda^{\underline{0}}\) and note that \(W_{\underline{0}}^{-1}([\delta_{v,\Lambda}]_{I_{H}})=\delta_{v,\Gamma}\). We claim that \(v\) is not an \(F\)-source in \(\Gamma\). Towards contradiction, suppose that \(v\Gamma^{\underline{i}}=\emptyset\) for all \(i\in F\). Since \(v\) is not an \(F\)-source in \(\Lambda\), there exists \(i\in F\) such that \(v\Lambda^{\underline{i}}\neq\emptyset\). For each \(\lambda\in v\Lambda^{\underline{i}}\), we must have that \(s(\lambda)\in H\), as otherwise we would obtain that \(v\Gamma^{\underline{i}}\neq\emptyset\). Thus \(s(v\Lambda^{\leq\underline{i}})=s(v\Lambda^{\underline{i}})\subseteq H\), since \(v\Lambda^{\underline{i}}\neq\emptyset\). Since \(H\) is saturated, we obtain the contradiction that \(v\in H\), establishing the forward inclusion of (5.11). For the reverse inclusion of (5.11), take \(v\in\Gamma^{\underline{0}}\), i.e., \(v\not\in H\), such that \(v\) is not an \(F\)-source in \(\Gamma\). In particular, \(v\in\Lambda^{\underline{0}}\) is not an \(F\)-source in \(\Lambda\). Hence \[W_{\underline{0}}(\delta_{v,\Gamma})=[\delta_{v,\Lambda}]_{I_{H}}\in(\mathcal{J}_{F}(X(\Lambda))+I_{H})/I_{H}\] by (5.12), giving (5.11). The last claim follows by item (ii) of Proposition 4.1.3 and the fact that \(I_{H}\subseteq J_{F}(I_{H},X(\Lambda))\) for all \(\emptyset\neq F\subseteq[k]\), as \(I_{H}\) is positively invariant.

**Proposition 5.4.18**.: _Let \((\Lambda,d)\) be a locally convex row-finite \(k\)-graph and let \(H\) be a subset of \(\Lambda^{\underline{0}}\). Then \(H\) is hereditary and saturated if and only if \(I_{H}\) is positively and negatively invariant for \(X(\Lambda)\)._

**Proof.** Assume that \(H\) is hereditary and saturated.
Then \(I_{H}\) is positively invariant for \(X(\Lambda)\) by Proposition 5.4.5. Fix \(\emptyset\neq F\subseteq[k]\). By Proposition 5.4.17, we obtain that \[\mathcal{I}_{F}(X(\Lambda))\subseteq\mathcal{I}_{F}(X(\Lambda))+I_{H}=J_{F}(I_{H},X(\Lambda)).\] Hence \(I_{H}\) is negatively invariant for \(X(\Lambda)\) by Proposition 5.1.2, as required. Now assume that \(I_{H}\) is positively and negatively invariant for \(X(\Lambda)\). We have that \(H\) is hereditary by Proposition 5.4.5, so it remains to check that \(H\) is saturated. Accordingly, fix \(v\in\Lambda^{\underline{0}}\) and suppose that \(s(v\Lambda^{\leq\underline{i}})\subseteq H\) for some \(i\in[k]\). We must show that \(v\in H\). This is clear when \(v\Lambda^{\underline{i}}=\emptyset\), as in this case \(v\Lambda^{\leq\underline{i}}=\{v\}\); so assume that \(v\Lambda^{\underline{i}}\neq\emptyset\). In this case we have that \(s(v\Lambda^{\leq\underline{i}})=s(v\Lambda^{\underline{i}})\subseteq H\), and thus \(\delta_{v,\Lambda}\in X(\Lambda)^{-1}_{\{i\}}(I_{H})\) by (5.8). Since \(v\Lambda^{\underline{i}}\neq\emptyset\), we have that \(v\) is not an \(\{i\}\)-source, and thus \(v\) is \(\{i\}\)-tracing by Proposition 5.4.16. Hence \[\delta_{v,\Lambda}\in\mathcal{I}_{\{i\}}(X(\Lambda))\cap X(\Lambda)^{-1}_{\{i\}}(I_{H})\subseteq I_{H},\] using negative invariance of \(I_{H}\) in the final inclusion. Consequently, we obtain that \(v\in H\) and hence \(H\) is saturated, finishing the proof.

**Proposition 5.4.19**.: _Let \((\Lambda,d)\) be a locally convex row-finite \(k\)-graph. Then the association_ \[I\mapsto\mathcal{L}_{I},\text{ where }\mathcal{L}_{I,F}:=\mathcal{I}_{F}(X(\Lambda))+I\text{ for all }F\subseteq[k],\] _defines a bijection between the set of ideals of \(c_{0}(\Lambda^{\underline{0}})\) that are positively and negatively invariant for \(X(\Lambda)\) and the set of NO-\(2^{k}\)-tuples of \(X(\Lambda)\), which in turn induces a bijection with the set of gauge-invariant ideals of \(\mathcal{NO}_{X(\Lambda)}\)._

**Proof.** By Proposition 5.4.17 and Corollary 5.1.6, the map is well-defined and clearly injective. For surjectivity, we must show that if \(\mathcal{L}\) is an NO-\(2^{k}\)-tuple then necessarily \(\mathcal{L}_{F}=\mathcal{I}_{F}(X(\Lambda))+\mathcal{L}_{\emptyset}\) for every \(\emptyset\neq F\subseteq[k]\) (as the equality clearly holds when \(F=\emptyset\)). We have that \(\mathcal{L}_{\emptyset}\) is positively and negatively invariant for \(X(\Lambda)\) by Proposition 5.1.5. In turn, we obtain that \[(\mathcal{I}_{F}(X(\Lambda))+\mathcal{L}_{\emptyset})/\mathcal{L}_{\emptyset}\subseteq\mathcal{L}_{F}/\mathcal{L}_{\emptyset}\subseteq J_{F}(\mathcal{L}_{\emptyset},X(\Lambda))/\mathcal{L}_{\emptyset}=(\mathcal{I}_{F}(X(\Lambda))+\mathcal{L}_{\emptyset})/\mathcal{L}_{\emptyset}\text{ for all }\emptyset\neq F\subseteq[k],\] using Propositions 5.4.17 and 5.4.18 in the final equality. Thus \(\mathcal{L}_{F}=\mathcal{I}_{F}(X(\Lambda))+\mathcal{L}_{\emptyset}\) for all \(\emptyset\neq F\subseteq[k]\), showing that the map of the statement is bijective. The last claim follows from Corollary 4.2.12, completing the proof.

To recover [45, Theorem 5.2], we need the following identification. Let \(H\subseteq\Lambda^{\underline{0}}\) be a hereditary vertex set.
Then \[\Lambda(H):=(H,\{\lambda\in\Lambda\mid r(\lambda)\in H\},r,s)\] is a locally convex row-finite sub-\(k\)-graph of \(\Lambda\), and \(X(\Lambda(H))\) and \(I_{H}X(\Lambda)I_{H}\) are unitarily equivalent via the family of maps \(\{W_{\underline{n}}\}_{\underline{n}\in\mathbb{Z}_{+}^{k}}\) defined by \[W_{\underline{n}}\colon X_{\underline{n}}(\Lambda(H))\to I_{H}X_{\underline{n} }(\Lambda)I_{H};\delta_{\lambda,\Lambda(H)}\mapsto\delta_{\lambda,\Lambda} \text{ for all }\lambda\in\Lambda(H)^{\underline{n}},\underline{n}\in\mathbb{Z}_{+}^{k}.\] Consequently, by Propositions 2.5.12 and 2.6.10 we obtain that \[\operatorname{C}^{*}(\Lambda(H))\cong\mathcal{NO}_{X(\Lambda(H))}\cong \mathcal{NO}_{I_{H}X(\Lambda)I_{H}}\cong I_{H}\operatorname{C}^{*}(\Lambda)I_{H},\] where we identify \(I_{H}\) with its faithful image inside \(\operatorname{C}^{*}(\Lambda)\) in the final \(*\)-isomorphism. **Corollary 5.4.20**.: **[**45**, Theorem 5.2]** _Let \((\Lambda,d)\) be a locally convex row-finite \(k\)-graph. Equip the set of hereditary saturated subsets of \(\Lambda^{\underline{0}}\) with the lattice structure of Proposition 5.4.15, and equip the set of NO-\(2^{k}\)-tuples of \(X(\Lambda)\) with the lattice structure of Definition 4.2.5 (suitably restricted). Let \(\{T_{\lambda}\}_{\lambda\in\Lambda}\) be the universal Cuntz-Krieger \(\Lambda\)-family. Then the following hold:_ * _The set of hereditary saturated subsets of_ \(\Lambda^{\underline{0}}\) _and the set of NO-_\(2^{k}\)_-tuples of_ \(X(\Lambda)\) _are isomorphic as lattices via the map_ (5.13) \[H\mapsto\mathcal{L}_{I_{H}},\text{ where }\mathcal{L}_{I_{H},F}:=\mathcal{I}_{F}(X( \Lambda))+I_{H}\text{ for all }F\subseteq[k].\] _Consequently, the set of hereditary saturated subsets of_ \(\Lambda^{\underline{0}}\) _and the set of gauge-invariant ideals of_ \(\mathrm{C}^{*}(\Lambda)\) _are isomorphic as lattices._ * _Let_ \(H\) _be a hereditary and saturated subset of_ \(\Lambda^{\underline{0}}\) _and let_ \(Q\colon\mathcal{NT}_{X(\Lambda)}\to\mathrm{C}^{*}(\Lambda)\) _be the canonical quotient map. Then the quotient_ \(\mathrm{C}^{*}(\Lambda)/Q(\mathfrak{J}^{\mathcal{L}_{I_{H}}})\) _is canonically_ \(*\)_-isomorphic to the graph_ \(\mathrm{C}\)_*-algebra_ \(\mathrm{C}^{*}(\Gamma(\Lambda\setminus H))\)_._ * _Let_ \(H\) _be a hereditary subset of_ \(\Lambda^{\underline{0}}\)_. Then_ \(\mathrm{C}^{*}(\Lambda(H))\) _is canonically_ \(*\)_-isomorphic to the C*-subalgebra_ \(\mathrm{C}^{*}(T_{\lambda}\mid r(\lambda)\in H)\) _of_ \(\mathrm{C}^{*}(\Lambda)\)_, and this_ \(\mathrm{C}\)_*-subalgebra is a full corner of the ideal generated by_ \(\{T_{v}\mid v\in H\}\) _in_ \(\mathrm{C}^{*}(\Lambda)\)_. If_ \(H\) _is in addition saturated, then the ideal generated by_ \(\{T_{v}\mid v\in H\}\) _in_ \(\mathrm{C}^{*}(\Lambda)\) _coincides with_ \(Q(\mathfrak{J}^{\mathcal{L}_{I_{H}}})\)_, for the canonical quotient map_ \(Q\colon\mathcal{NT}_{X(\Lambda)}\to\mathrm{C}^{*}(\Lambda)\)_._ **Proof**.: (i) Firstly, note that (5.13) is the composition of the duality map \(H\mapsto I_{H}\) and the map \(I\mapsto\mathcal{L}_{I}\) of Proposition 5.4.19, where \(H\subseteq\Lambda^{\underline{0}}\) is hereditary and saturated, and \(I\) is an ideal of \(c_{0}(\Lambda^{\underline{0}})\) that is positively and negatively invariant for \(X(\Lambda)\). These maps are bijections by Propositions 5.4.18 and 5.4.19, respectively, and hence the map (5.13) is also a bijection. Therefore, to prove the first claim, it suffices to show that (5.13) preserves the lattice structure. 
To this end, fix hereditary saturated subsets \(H_{1}\) and \(H_{2}\) of \(\Lambda^{\underline{0}}\). We must show that \[\mathcal{L}_{I_{H_{1}}}\wedge\mathcal{L}_{I_{H_{2}}}=\mathcal{L}_{I_{H_{1} \wedge H_{2}}}\quad\text{and}\quad\mathcal{L}_{I_{H_{1}}}\vee\mathcal{L}_{I_{ H_{2}}}=\mathcal{L}_{I_{H_{1}\lor H_{2}}}.\] For the operation \(\wedge\), recall that \(H_{1}\wedge H_{2}\equiv H_{1}\cap H_{2}\) and that \(I_{H_{1}}I_{H_{2}}=I_{H_{1}\cap H_{2}}\). Hence for each \(F\subseteq[k]\) we obtain that \[(\mathcal{L}_{I_{H_{1}}}\wedge\mathcal{L}_{I_{H_{2}}})_{F} =\mathcal{L}_{I_{H_{1}},F}\mathcal{L}_{I_{H_{2}},F}=(\mathcal{I}_{ F}(X(\Lambda))+I_{H_{1}})(\mathcal{I}_{F}(X(\Lambda))+I_{H_{2}})\] \[=\mathcal{I}_{F}(X(\Lambda))+I_{H_{1}}I_{H_{2}}=\mathcal{I}_{F}( X(\Lambda))+I_{H_{1}\cap H_{2}}=\mathcal{L}_{I_{H_{1}\wedge H_{2}},F},\] by Proposition 4.2.6. For the operation \(\vee\), we must show that \[(\mathcal{L}_{I_{H_{1}}}\vee\mathcal{L}_{I_{H_{2}}})_{F}=\mathcal{L}_{I_{ \overline{H_{1}\cup H_{2}}^{*},F}}\text{ for all }F\subseteq[k].\] For \(F=\emptyset\), we must show that \[(\mathcal{L}_{I_{H_{1}}}\vee\mathcal{L}_{I_{H_{2}}})_{\emptyset}\equiv \overline{\pi}_{X(\Lambda)}^{-1}(\mathfrak{J}^{\mathcal{L}_{I_{H_{1}}}}+ \mathfrak{J}^{\mathcal{L}_{I_{H_{2}}}})=I_{\overline{H_{1}\cup H_{2}}^{*}},\] using Proposition 4.2.7 in the first identity. We have that \(H_{1}\subseteq\overline{H_{1}\cup H_{2}}^{*}\) and hence we obtain that \(I_{H_{1}}\subseteq I_{\overline{H_{1}\cup H_{2}}^{*}}\). Thus \(\mathcal{L}_{I_{H_{1}}}\subseteq\mathcal{L}_{I_{\overline{H_{1}\cup H_{2}}^{*}}}\), and so \(\mathfrak{J}^{\mathcal{L}_{I_{H_{1}}}}\subseteq\mathfrak{J}^{\mathcal{L}_{I_{ \overline{H_{1}\cup H_{2}}^{*}}}}\) using that the parametrisation of Theorem 4.2.3 respects inclusions. Likewise for \(H_{2}\). In turn, we obtain that \[\overline{\pi}_{X(\Lambda)}^{-1}(\mathfrak{J}^{\mathcal{L}_{I_{H_{1}}}}+ \mathfrak{J}^{\mathcal{L}_{I_{H_{2}}}})\subseteq\overline{\pi}_{X(\Lambda)}^{ -1}(\mathfrak{J}^{\mathcal{L}_{I_{\overline{H_{1}\cup H_{2}}^{*}}}})=\mathcal{ L}_{\emptyset}^{\mathcal{L}_{I_{\overline{H_{1}\cup H_{2}}^{*}}}}=I_{ \overline{H_{1}\cup H_{2}}}.\] Conversely, note that \((\mathcal{L}_{I_{H_{1}}}\vee\mathcal{L}_{I_{H_{2}}})_{\emptyset}\) is positively and negatively invariant for \(X(\Lambda)\) by Proposition 5.1.5, since it participates in the NO-\(2^{k}\)-tuple \(\mathcal{L}_{I_{H_{1}}}\vee\mathcal{L}_{I_{H_{2}}}\). An application of Proposition 5.4.18 then gives that \(H_{(\mathcal{L}_{I_{H_{1}}}\vee\mathcal{L}_{I_{H_{2}}})_{\emptyset}}\) is hereditary and saturated. Additionally, we have that \(I_{H_{1}\cup H_{2}}\subseteq(\mathcal{L}_{I_{H_{1}}}\vee\mathcal{L}_{I_{H_{2}}})_{\emptyset}\) by definition. From this we deduce that \(H_{1}\cup H_{2}\subseteq H_{(\mathcal{L}_{I_{H_{1}}}\vee\mathcal{L}_{I_{H_{2}}}) _{\emptyset}}\). Minimality of the saturation then implies that \(\overline{H_{1}\cup H_{2}}^{*}\subseteq H_{(\mathcal{L}_{I_{H_{1}}}\vee \mathcal{L}_{I_{H_{2}}})_{\emptyset}}\). Consequently we have that \(I_{\overline{H_{1}\cup H_{2}}^{*}}\subseteq(\mathcal{L}_{I_{H_{1}}}\vee \mathcal{L}_{I_{H_{2}}})_{\emptyset}\), as required. Now fix \(\emptyset\neq F\subseteq[k]\). 
Recall from the preceding argument that \(\mathfrak{J}^{\mathcal{L}_{IH_{1}}}+\mathfrak{J}^{\mathcal{L}_{IH_{2}}}\subseteq \mathfrak{J}^{\mathcal{L}_{I\overline{H_{1}\cup H_{2}}}}\) and hence \[\mathcal{L}_{I_{H_{1}}}\vee\mathcal{L}_{I_{H_{2}}}\equiv\mathcal{L}^{3}{}^{ \mathcal{L}_{I_{H_{1}}}+\mathfrak{J}^{\mathcal{L}_{I_{H_{2}}}}}\subseteq \mathcal{L}^{3}{}^{\mathcal{L}_{I\overline{H_{1}\cup H_{2}}}}=\mathcal{L}_{I_{ \overline{H_{1}\cup H_{2}}}}\equiv\mathcal{L}_{I_{H_{1}\lor H_{2}}},\] since the parametrisation of Theorem 4.2.3 respects inclusions. In particular, we have that \((\mathcal{L}_{I_{H_{1}}}\vee\mathcal{L}_{I_{H_{2}}})_{F}\subseteq\mathcal{L}_ {I_{H_{1}\lor H_{2}},F}\). For the reverse inclusion, first recall that \[(\mathcal{L}_{I_{H_{1}}}\vee\mathcal{L}_{I_{H_{2}}})_{F}=[\cdot]_{(\mathcal{L} _{I_{H_{1}}}\vee\mathcal{L}_{I_{H_{2}}})_{\emptyset}}^{-1}[((\mathcal{L}_{I_{ H_{1}},F}+\mathcal{L}_{I_{H_{2}},F}+(\mathcal{L}_{I_{H_{1}}}\vee\mathcal{L}_{I_{H_{2}}})_{ \emptyset})/(\mathcal{L}_{I_{H_{1}}}\vee\mathcal{L}_{I_{H_{2}}})_{\emptyset}) ^{(k-1)}],\] by Proposition 4.2.7. Observe that \[\mathcal{I}_{F}(X(\Lambda))+(\mathcal{L}_{I_{H_{1}}}\vee\mathcal{ L}_{I_{H_{2}}})_{\emptyset} \subseteq\mathcal{L}_{I_{H_{1}},F}+(\mathcal{L}_{I_{H_{1}}}\vee \mathcal{L}_{I_{H_{2}}})_{\emptyset}\] \[=\mathcal{I}_{F}(X(\Lambda))+I_{H_{1}}+(\mathcal{L}_{I_{H_{1}}} \vee\mathcal{L}_{I_{H_{2}}})_{\emptyset}\] \[\subseteq\mathcal{I}_{F}(X(\Lambda))+(\mathcal{L}_{I_{H_{1}}} \vee\mathcal{L}_{I_{H_{2}}})_{\emptyset},\] using that \(I_{H_{1}}\subseteq(\mathcal{L}_{I_{H_{1}}}\vee\mathcal{L}_{I_{H_{2}}})_{\emptyset}\) in the final inclusion. Likewise for \(H_{2}\). Therefore, we have that \[(\mathcal{L}_{I_{H_{1}}}\vee\mathcal{L}_{I_{H_{2}}})_{F}=[\cdot]_{(\mathcal{L }_{I_{H_{1}}}\vee\mathcal{L}_{I_{H_{2}}})_{\emptyset}}^{-1}[((\mathcal{I}_{F} (X(\Lambda))+(\mathcal{L}_{I_{H_{1}}}\vee\mathcal{L}_{I_{H_{2}}})_{\emptyset} )/(\mathcal{L}_{I_{H_{1}}}\vee\mathcal{L}_{I_{H_{2}}})_{\emptyset})^{(k-1)}].\] We then obtain that \[[\mathcal{L}_{I_{H_{1}\lor H_{2}},F}]_{(\mathcal{L}_{I_{H_{1}}} \vee\mathcal{L}_{I_{H_{2}}})_{\emptyset}} =[\mathcal{I}_{F}(X(\Lambda))+I_{\overline{H_{1}\cup H_{2}}}]_{ (\mathcal{L}_{I_{H_{1}}}\vee\mathcal{L}_{I_{H_{2}}})_{\emptyset}}\] \[=[\mathcal{I}_{F}(X(\Lambda))+(\mathcal{L}_{I_{H_{1}}}\vee \mathcal{L}_{I_{H_{2}}})_{\emptyset}]_{(\mathcal{L}_{I_{H_{1}}}\vee\mathcal{L }_{I_{H_{2}}})_{\emptyset}}\] \[\subseteq\Big{(}[\mathcal{I}_{F}(X(\Lambda))+(\mathcal{L}_{I_{H_ {1}}}\vee\mathcal{L}_{I_{H_{2}}})_{\emptyset}]_{(\mathcal{L}_{I_{H_{1}}} \vee\mathcal{L}_{I_{H_{2}}})_{\emptyset}}\Big{)}^{(k-1)}\,,\] using that \(I_{\overline{H_{1}\cup H_{2}}}=(\mathcal{L}_{I_{H_{1}}}\vee\mathcal{L}_{I_{H _{2}}})_{\emptyset}\) in the second equality. Thus \(\mathcal{L}_{I_{H_{1}\lor H_{2}},F}\subseteq(\mathcal{L}_{I_{H_{1}}}\vee \mathcal{L}_{I_{H_{2}}})_{F}\), as required. The second claim now follows by an application of Corollary 4.2.12. (ii) Let \(H\subseteq\Lambda^{\underline{0}}\) be hereditary and saturated. Then \(\mathcal{I}_{F}([X(\Lambda)]_{I_{H}})=(\mathcal{I}_{F}(X(\Lambda))+I_{H})/I_{H}\) for all \(F\subseteq[k]\) by Proposition 5.4.17. Hence, applying items (ii) and (iii) of Corollary 5.1.6 and adopting the notation therein, we obtain that \[\mathcal{NO}_{X(\Lambda)}/\left<Q_{\mathcal{I}}(\overline{\pi}_{X(\Lambda)}(I_ {H}))\right>\cong\mathcal{NO}_{[X(\Lambda)]_{I_{H}}}\cong\mathcal{NO}_{X( \Gamma(\Lambda\setminus H))},\] using Propositions 2.5.12 and 5.4.6 in the final \(*\)-isomorphism. 
Note that item (iii) of Corollary 5.1.6 applies due to row-finiteness of \((\Lambda,d)\). Item (ii) of the statement follows. (iii) The first statement follows by the comments preceding the corollary, using Proposition 2.6.10 for fullness. For the corner property, note that the generators of \(c_{0}(\Lambda^{\underline{0}})\) form a countable approximate unit of projections. The second statement is an immediate consequence of item (iii) of Corollary 5.1.6.

### Finite frames

In this subsection we apply the NT-\(2^{d}\)-tuple machinery to the case of a product system \(X\) over \(\mathbb{Z}_{+}^{d}\) in which \(X_{\underline{i}}\) admits a finite frame for all \(i\in[d]\). In [30], the second-named author used the quotients of \(\mathcal{NT}_{X}\) by the ideals \(\left<\overline{\pi}_{X}(A)\overline{q}_{X,\underline{i}}\mid i\in F\right>\) for \(\emptyset\neq F\subseteq[d]\) in order to examine the structure of subinvariant KMS-states. A note was made in [30] that such quotients can be realised as the Cuntz-Nica-Pimsner algebra of an induced product system. We will see here how this is achieved from the parametrisation that we have established.

**Definition 5.5.1**.: Let \(X\) be a right Hilbert module over a C*-algebra \(A\). A finite non-empty subset \(\{\xi^{(j)}\}_{j\in[N]}\) of \(X\) is said to be a _finite frame of \(X\)_ if \(\sum_{j=1}^{N}\Theta_{\xi^{(j)},\xi^{(j)}}=\mathrm{id}_{X}\). When such a subset exists, we say that \(X\) _admits a finite frame_.

Observe that a right Hilbert C*-module \(X\) admits a finite frame if and only if \(\mathrm{id}_{X}\in\mathcal{K}(X)\), which in turn holds if and only if \(\mathcal{K}(X)=\mathcal{L}(X)\). Hence \(X\) is projective and finitely generated.

**Lemma 5.5.2**.: _Let \(X\) and \(Y\) be C*-correspondences over a C*-algebra \(A\). Suppose that \(Y\) admits a finite frame \(\{y^{(j)}\}_{j\in[N]}\). Then_ \[\Theta^{X}_{x_{1},x_{2}}\otimes\mathrm{id}_{Y}=\sum_{j=1}^{N}\Theta^{X\otimes_{A}Y}_{x_{1}\otimes y^{(j)},x_{2}\otimes y^{(j)}},\text{ for all }x_{1},x_{2}\in X.\] _If in addition \(X\) admits a finite frame \(\{x^{(i)}\}_{i\in[M]}\), then \(X\otimes_{A}Y\) admits the finite frame_ \[\{x^{(i)}\otimes y^{(j)}\mid i\in[M],j\in[N]\}.\]

Proof.: This follows by a direct computation on elementary tensors.

Admission of a finite frame is preserved when we pass to quotients, since \([\mathrm{id}_{X}]_{I}=\mathrm{id}_{[X]_{I}}\).

**Proposition 5.5.3**.: _Let \(X\) be a right Hilbert module over a C*-algebra \(A\) and let \(I\subseteq A\) be an ideal. If \(X\) admits a finite frame \(\{\xi^{(j)}\}_{j\in[N]}\), then \([X]_{I}\) admits the finite frame \(\{[\xi^{(j)}]_{I}\}_{j\in[N]}\)._

Passing to product systems, we have the following proposition.

**Proposition 5.5.4**.: _Let \(X\) be a product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\). Then \(X_{\underline{i}}\) admits a finite frame for all \(i\in[d]\) if and only if \(X_{\underline{n}}\) admits a finite frame for all \(\underline{n}\in\mathbb{Z}_{+}^{d}\setminus\{\underline{0}\}\)._

Proof.: The forward implication follows by inducting on \(|\underline{n}|\) and using Lemma 5.5.2. The converse is immediate.

We do not assume that \(X_{\underline{0}}=A\) admits a finite frame, as this would force \(A\) to be unital, and the results of this subsection hold even when \(A\) is non-unital. Notice that \(X\) is automatically strong compactly aligned when every \(X_{\underline{i}}\) admits a finite frame by Corollary 2.5.3.
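As a basic example of these notions, if \(A\) is unital and \(X=A^{N}\) is the standard right Hilbert \(A\)-module with \(\left\langle\xi,\eta\right\rangle=\sum_{j=1}^{N}\xi_{j}^{*}\eta_{j}\), then the elements \(\xi^{(j)}:=(0,\ldots,0,1_{A},0,\ldots,0)\), with \(1_{A}\) in the \(j\)-th entry, form a finite frame, since \(\sum_{j=1}^{N}\Theta_{\xi^{(j)},\xi^{(j)}}(\eta)=\sum_{j=1}^{N}\xi^{(j)}\left\langle\xi^{(j)},\eta\right\rangle=\eta\) for all \(\eta\in A^{N}\). On the other hand, an infinite-dimensional Hilbert space, viewed as a right Hilbert \(\mathbb{C}\)-module, admits no finite frame, since its identity operator is not compact.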
**Corollary 5.5.5**.: _Let \(X\) be a product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\), wherein \(X_{\underline{i}}\) admits a finite frame for all \(i\in[d]\). Fix \(\underline{n},\underline{m}\in\mathbb{Z}_{+}^{d}\setminus\{\underline{0}\}\) and let \(\{\xi_{\underline{m}}^{(j)}\}_{j\in[N_{\underline{m}}]}\) be a finite frame of \(X_{\underline{m}}\). Then we have that_ \[\iota_{\underline{n}}^{\underline{n}+\underline{m}}(\Theta^{X_{\underline{n}}}_{\xi_{\underline{n}},\eta_{\underline{n}}})=\sum_{j=1}^{N_{\underline{m}}}\Theta^{X_{\underline{n}+\underline{m}}}_{\xi_{\underline{n}}\xi_{\underline{m}}^{(j)},\eta_{\underline{n}}\xi_{\underline{m}}^{(j)}},\text{ for all }\xi_{\underline{n}},\eta_{\underline{n}}\in X_{\underline{n}}.\] _In particular, if \((\pi,t)\) is a representation of \(X\) then_ \[\psi_{\underline{n}+\underline{m}}(\iota_{\underline{n}}^{\underline{n}+\underline{m}}(\Theta^{X_{\underline{n}}}_{\xi_{\underline{n}},\eta_{\underline{n}}}))=\sum_{j=1}^{N_{\underline{m}}}t_{\underline{n}}(\xi_{\underline{n}})t_{\underline{m}}(\xi_{\underline{m}}^{(j)})t_{\underline{m}}(\xi_{\underline{m}}^{(j)})^{*}t_{\underline{n}}(\eta_{\underline{n}})^{*},\text{ for all }\xi_{\underline{n}},\eta_{\underline{n}}\in X_{\underline{n}}.\]

Proof.: First note that the finite frame of \(X_{\underline{m}}\) exists by Proposition 5.5.4. The proof now follows by a direct computation using Lemma 5.5.2 and the properties of \((\pi,t)\).

**Remark 5.5.6**.: Let \(X\) be a product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\). Suppose that every \(X_{\underline{i}}\) admits a finite frame \(\{\xi_{\underline{i}}^{(j)}\}_{j\in[N_{\underline{i}}]}\). If \((\pi,t)\) is a Nica-covariant representation of \(X\), then we have that \[p_{\underline{i}}=\psi_{\underline{i}}(\mathrm{id}_{X_{\underline{i}}})=\psi_{\underline{i}}\big(\sum_{j=1}^{N_{\underline{i}}}\Theta_{\xi_{\underline{i}}^{(j)},\xi_{\underline{i}}^{(j)}}\big)=\sum_{j=1}^{N_{\underline{i}}}t_{\underline{i}}(\xi_{\underline{i}}^{(j)})t_{\underline{i}}(\xi_{\underline{i}}^{(j)})^{*}\in\mathrm{C}^{*}(\pi,t),\text{ for all }i\in[d].\] Since the left action of each fibre of \(X\) is by compacts, we have that \(\pi(A)q_{F}\subseteq\mathrm{C}^{*}(\pi,t)\) for all \(F\subseteq[d]\) by Proposition 2.5.10. However, we may still have that \(q_{F}\notin\mathrm{C}^{*}(\pi,t)\) for \(F\subseteq[d]\) (unless \(\mathrm{C}^{*}(\pi,t)\) is unital).

**Proposition 5.5.7**.: _Let \(X\) be a product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\), wherein each left action is by compact operators. Let \((\pi,t)\) be a Nica-covariant representation of \(X\) and fix \(\emptyset\neq F\subseteq[d]\). Then for each \(\emptyset\neq D\subseteq F\), we have that_ \[\pi(A)q_{D}\subseteq\langle\pi(A)q_{\underline{i}}\mid i\in F\rangle\subseteq\mathrm{C}^{*}(\pi,t).\]

**Proof.** Without loss of generality, assume that \(D=[m]\) and \(F=[n]\) for \(m\leq n\). By the Hewitt-Cohen Factorisation Theorem, for \(a\in A\) there exist \(b_{1},\ldots,b_{m}\in A\) such that \(a=b_{1}\cdots b_{m}\). In turn, we obtain that \[\pi(a)q_{D}=(\pi(b_{1})\cdots\pi(b_{m}))(q_{\underline{1}}\cdots q_{\underline{m}})=(\pi(b_{1})q_{\underline{1}})\cdots(\pi(b_{m})q_{\underline{m}})\in\left\langle\pi(A)q_{\underline{i}}\mid i\in F\right\rangle,\] using that \(q_{\underline{i}}\in\pi(A)^{\prime}\) for all \(i\in D\) by Proposition 2.5.9.
This finishes the proof.

We now pass to the study of the quotient \(\mathcal{NT}_{X}/\left\langle\overline{\pi}_{X}(A)\overline{q}_{X,\underline{i}}\mid i\in F\right\rangle\) for \(\emptyset\neq F\subseteq[d]\). We first turn our attention to the case of \(F=[d]\), where the quotient corresponds to the finite part of the Wold decomposition of a KMS-state in the context of [30].

**Proposition 5.5.8**.: _Let \(X\) be a product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\), wherein \(X_{\underline{i}}\) admits a finite frame for all \(i\in[d]\). Consider the gauge-invariant ideal_ \[\mathfrak{J}:=\left\langle\overline{\pi}_{X}(A)\overline{q}_{X,\underline{i}}\mid i\in[d]\right\rangle\subseteq\mathcal{NT}_{X}.\] _Then_ \[\mathcal{L}_{\emptyset}^{\mathfrak{J}}=\{a\in A\mid\lim_{\underline{n}\in\mathbb{Z}_{+}^{d}}\|\phi_{\underline{n}}(a)\|=0\}\quad\text{and}\quad\mathcal{L}_{F}^{\mathfrak{J}}=A\text{ for all }\emptyset\neq F\subseteq[d].\] _Moreover, the product system \([X]_{\mathcal{L}_{\emptyset}^{\mathfrak{J}}}\) is regular, and thus there is a canonical \(*\)-isomorphism_ \[\mathcal{NT}_{X}/\mathfrak{J}\cong\mathcal{NO}_{[X]_{\mathcal{L}_{\emptyset}^{\mathfrak{J}}}}.\] _If \(X\) is in addition injective, then \(\mathcal{L}_{\emptyset}^{\mathfrak{J}}=\{0\}\) and thus \(\mathcal{NT}_{X}/\mathfrak{J}=\mathcal{NO}_{X}\)._

**Proof.** Let \(Q_{\mathfrak{J}}\colon\mathcal{NT}_{X}\to\mathcal{NT}_{X}/\mathfrak{J}\) be the canonical quotient map. The fact that \[\mathcal{L}_{\emptyset}^{\mathfrak{J}}\equiv\ker Q_{\mathfrak{J}}\circ\overline{\pi}_{X}=\{a\in A\mid\lim_{\underline{n}\in\mathbb{Z}_{+}^{d}}\|\phi_{\underline{n}}(a)\|=0\}\] follows from [30, Proposition 4.3]. Next, fix \(\emptyset\neq F\subseteq[d]\) and \(a\in A\). By Proposition 5.5.7, we have that \(\overline{\pi}_{X}(a)\overline{q}_{X,F}\in\mathfrak{J}\). Thus \(A\subseteq\mathcal{L}_{F}^{\mathfrak{J}}\), as required. Next we show that \([X]_{\mathcal{L}_{\emptyset}^{\mathfrak{J}}}\) is regular. By Proposition 2.5.1, and since the left actions of the fibres of \(X\) (and hence of \([X]_{\mathcal{L}_{\emptyset}^{\mathfrak{J}}}\)) are by compacts, it suffices to show that \([\phi_{\underline{i}}]_{\mathcal{L}_{\emptyset}^{\mathfrak{J}}}\) is injective for all \(i\in[d]\). To this end, fix \(i\in[d]\), a finite frame \(\{\xi_{\underline{i}}^{(j)}\}_{j\in[N]}\) of \(X_{\underline{i}}\) and \([a]_{\mathcal{L}_{\emptyset}^{\mathfrak{J}}}\in\ker[\phi_{\underline{i}}]_{\mathcal{L}_{\emptyset}^{\mathfrak{J}}}\). We have that \([\phi_{\underline{i}}(a)\xi_{\underline{i}}^{(j)}]_{\mathcal{L}_{\emptyset}^{\mathfrak{J}}}=0\) for every \(j\in[N]\), and hence \(\phi_{\underline{i}}(a)\xi_{\underline{i}}^{(j)}\in X_{\underline{i}}\mathcal{L}_{\emptyset}^{\mathfrak{J}}\) for all \(j\in[N]\). For notational convenience, we write \[\pi_{\mathfrak{J}}:=Q_{\mathfrak{J}}\circ\overline{\pi}_{X}\quad\text{and}\quad t_{\mathfrak{J},\underline{n}}:=Q_{\mathfrak{J}}\circ\overline{t}_{X,\underline{n}}\text{ for all }\underline{n}\in\mathbb{Z}_{+}^{d}\setminus\{\underline{0}\}.\] Note that \((\pi_{\mathfrak{J}},t_{\mathfrak{J}})\) is a Nica-covariant representation of \(X\).
Thus we have that \[\pi_{\mathfrak{J}}(a)t_{\mathfrak{J},\underline{i}}(\xi_{\underline{i}}^{(j)})=t_{\mathfrak{J},\underline{i}}(\phi_{\underline{i}}(a)\xi_{\underline{i}}^{(j)})\in t_{\mathfrak{J},\underline{i}}(X_{\underline{i}}\mathcal{L}_{\emptyset}^{\mathfrak{J}})=t_{\mathfrak{J},\underline{i}}(X_{\underline{i}})\pi_{\mathfrak{J}}(\mathcal{L}_{\emptyset}^{\mathfrak{J}})=\{0\},\text{ for all }j\in[N].\] In turn, we obtain that \(\pi_{\mathfrak{J}}(a)t_{\mathfrak{J},\underline{i}}(\xi_{\underline{i}}^{(j)})t_{\mathfrak{J},\underline{i}}(\xi_{\underline{i}}^{(j)})^{*}=0\) for all \(j\in[N]\), and therefore \[\pi_{\mathfrak{J}}(a)p_{\mathfrak{J},\underline{i}}=\pi_{\mathfrak{J}}(a)\sum_{j=1}^{N}t_{\mathfrak{J},\underline{i}}(\xi_{\underline{i}}^{(j)})t_{\mathfrak{J},\underline{i}}(\xi_{\underline{i}}^{(j)})^{*}=0.\] We also have that \(\overline{\pi}_{X}(a)\overline{q}_{X,\underline{i}}\in\mathfrak{J}\) by definition, and hence \(\pi_{\mathfrak{J}}(a)q_{\mathfrak{J},\underline{i}}=Q_{\mathfrak{J}}(\overline{\pi}_{X}(a)\overline{q}_{X,\underline{i}})=0\). Consequently, we obtain that \[\pi_{\mathfrak{J}}(a)=\pi_{\mathfrak{J}}(a)q_{\mathfrak{J},\underline{i}}+\pi_{\mathfrak{J}}(a)p_{\mathfrak{J},\underline{i}}=0.\] Hence \(a\in\mathcal{L}_{\emptyset}^{\mathfrak{J}}\) and thus \([a]_{\mathcal{L}_{\emptyset}^{\mathfrak{J}}}=0\), proving that \([\phi_{\underline{i}}]_{\mathcal{L}_{\emptyset}^{\mathfrak{J}}}\) is injective, as required. Regularity of \([X]_{\mathcal{L}_{\emptyset}^{\mathfrak{J}}}\) implies that \(\mathcal{I}([X]_{\mathcal{L}_{\emptyset}^{\mathfrak{J}}})=[\mathcal{L}^{\mathfrak{J}}]_{\mathcal{L}_{\emptyset}^{\mathfrak{J}}}\). Therefore, we conclude that \[\mathcal{NT}_{X}/\mathfrak{J}\cong\mathcal{NO}([\mathcal{L}^{\mathfrak{J}}]_{\mathcal{L}_{\emptyset}^{\mathfrak{J}}},[X]_{\mathcal{L}_{\emptyset}^{\mathfrak{J}}})=\mathcal{NO}_{[X]_{\mathcal{L}_{\emptyset}^{\mathfrak{J}}}},\] using item (ii) of Proposition 4.2.1 to establish the canonical \(*\)-isomorphism. If \(X\) is injective, then every \(\phi_{\underline{n}}\) is isometric and hence \(\mathcal{L}_{\emptyset}^{\mathfrak{J}}=\{0\}\), finishing the proof.

Next we move to the characterisation of the quotient of \(\mathcal{NT}_{X}\) by \(\left\langle\overline{\pi}_{X}(A)\overline{q}_{X,\underline{i}}\mid i\in F\right\rangle\), for some fixed \(\emptyset\neq F\subsetneq[d]\), as the Cuntz-Nica-Pimsner algebra of an appropriate quotient of a new product system \(Y^{F}\) over \(\mathbb{Z}_{+}^{|F|}\). The key is that every non-trivial fibre of \(Y^{F}\) inherits a finite frame, and that \(\mathcal{NT}_{X}\cong\mathcal{NT}_{Y^{F}}\) in a canonical way. For ease of notation, we will assume that \(F=[r]\) for some \(r<d\). This is sufficient, as the ensuing arguments can be adapted to account for general \(\emptyset\neq F\subsetneq[d]\) by relabelling the elements. For each \(\underline{n}=(n_{1},\ldots,n_{d})\in\mathbb{Z}_{+}^{d}\), we define \[\underline{n}_{F}:=(n_{1},\ldots,n_{r})\in\mathbb{Z}_{+}^{r}\quad\text{and}\quad\underline{n}_{F^{\perp}}:=(n_{r+1},\ldots,n_{d})\in\mathbb{Z}_{+}^{d-r}.\] Therefore we can canonically express every \(\underline{n}\in\mathbb{Z}_{+}^{d}\) as \(\underline{n}=(\underline{n}_{F},\underline{n}_{F^{\perp}})\). Conversely, given \(\underline{k}\in\mathbb{Z}_{+}^{r}\) and \(\underline{\ell}\in\mathbb{Z}_{+}^{d-r}\), we can form \(\underline{n}:=(\underline{k},\underline{\ell})\in\mathbb{Z}_{+}^{d}\) so that \(\underline{n}_{F}=\underline{k}\) and \(\underline{n}_{F^{\perp}}=\underline{\ell}\).
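For example, taking \(d=5\) and \(r=3\), the element \(\underline{n}=(2,0,1,4,3)\in\mathbb{Z}_{+}^{5}\) splits as \(\underline{n}_{F}=(2,0,1)\) and \(\underline{n}_{F^{\perp}}=(4,3)\). In general, \(\underline{n}\perp F\) exactly when \(\underline{n}_{F}=\underline{0}\), i.e., when \(\underline{n}=(\underline{0},\underline{n}_{F^{\perp}})\).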
For the remainder of the subsection, we will take \(X\) to be compactly aligned and identify \(\mathcal{NT}_{X}\) with \(\mathrm{C}^{*}(\overline{\pi},\overline{t})\) for the Fock representation \((\overline{\pi},\overline{t})\). We use \(F\) to split the gauge action \(\beta\) of \((\overline{\pi},\overline{t})\) into two parts. More specifically, we define families \(\beta_{F}\) and \(\beta_{F^{\perp}}\) via \[\beta_{F}:=\{\beta_{(\underline{x},\underline{1}_{[d-r]})}\}_{\underline{x}\in\mathbb{T}^{r}}\quad\text{and}\quad\beta_{F^{\perp}}:=\{\beta_{(\underline{1}_{[r]},\underline{y})}\}_{\underline{y}\in\mathbb{T}^{d-r}}.\] Note that \(\beta_{F}\) and \(\beta_{F^{\perp}}\) are point-norm continuous families of \(*\)-automorphisms of \(\mathcal{NT}_{X}\), being restrictions of \(\beta\). For \(\underline{n}\in\mathbb{Z}_{+}^{d}\) and \(\xi_{\underline{n}}\in X_{\underline{n}}\), we have that \[\beta_{F,\underline{x}}(\overline{t}_{\underline{n}}(\xi_{\underline{n}}))=\beta_{(\underline{x},\underline{1}_{[d-r]})}(\overline{t}_{\underline{n}}(\xi_{\underline{n}}))=\underline{x}^{\underline{n}_{F}}\overline{t}_{\underline{n}}(\xi_{\underline{n}})\text{ for all }\underline{x}\in\mathbb{T}^{r},\] and that \[\beta_{F^{\perp},\underline{y}}(\overline{t}_{\underline{n}}(\xi_{\underline{n}}))=\beta_{(\underline{1}_{[r]},\underline{y})}(\overline{t}_{\underline{n}}(\xi_{\underline{n}}))=\underline{y}^{\underline{n}_{F^{\perp}}}\overline{t}_{\underline{n}}(\xi_{\underline{n}})\text{ for all }\underline{y}\in\mathbb{T}^{d-r}.\] In particular, we have that \(\beta_{F,\underline{x}}(\overline{\pi}(a))=\overline{\pi}(a)=\beta_{F^{\perp},\underline{y}}(\overline{\pi}(a))\) for all \(\underline{x}\in\mathbb{T}^{r},\underline{y}\in\mathbb{T}^{d-r}\) and \(a\in A\).

**Definition 5.5.9**.: Let \(X\) be a compactly aligned product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\) and let \(F=[r]\) for some \(r<d\). We define the C*-algebra \(B^{F^{\perp}}\) by \[B^{F^{\perp}}:=\mathrm{C}^{*}(\overline{\pi}(A),\overline{t}_{\underline{i}}(X_{\underline{i}})\mid i\in F^{c})\subseteq\mathcal{NT}_{X},\] and a collection of linear spaces \(Y^{F}:=\{Y^{F}_{\underline{n}}\}_{\underline{n}\in\mathbb{Z}_{+}^{r}}\) by \[Y^{F}_{\underline{0}}:=B^{F^{\perp}}\quad\text{and}\quad Y^{F}_{\underline{n}}:=[\overline{t}_{(\underline{n},\underline{0})}(X_{(\underline{n},\underline{0})})B^{F^{\perp}}]\subseteq\mathcal{NT}_{X}\text{ for all }\underline{n}\in\mathbb{Z}_{+}^{r}\setminus\{\underline{0}\}.\]

Since \(\overline{\pi}(A)\subseteq B^{F^{\perp}}\), by using an approximate unit of \(A\) we obtain that \(\overline{t}_{(\underline{n},\underline{0})}(X_{(\underline{n},\underline{0})})\subseteq Y^{F}_{\underline{n}}\) for all \(\underline{n}\in\mathbb{Z}_{+}^{r}\). If \(\underline{n},\underline{m}\in\mathbb{Z}_{+}^{d}\) have support contained in \(F^{c}\), then the same is true of \(\underline{n}\vee\underline{m}\). Hence Nica covariance yields that \[B^{F^{\perp}}=\mathrm{C}^{*}(\overline{\pi}(A),\overline{t}_{\underline{n}}(X_{\underline{n}})\mid\underline{n}\perp F)=\overline{\mathrm{span}}(\overline{t}_{\underline{n}}(X_{\underline{n}})\overline{t}_{\underline{m}}(X_{\underline{m}})^{*}\mid\underline{n},\underline{m}\perp F). \tag{5.14}\] Note that \(\beta_{F}|_{B^{F^{\perp}}}=\mathrm{id}\).
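To fix ideas, suppose for illustration that \(d=2\) and \(F=[1]\), so that \(r=1\). Then \(B^{F^{\perp}}=\mathrm{C}^{*}(\overline{\pi}(A),\overline{t}_{(0,1)}(X_{(0,1)}))\) is the copy of the Toeplitz-Pimsner algebra of the single C*-correspondence \(X_{(0,1)}\) sitting inside \(\mathcal{NT}_{X}\) (this identification is made precise in Proposition 5.5.12 below), while \(Y^{F}_{n}=[\overline{t}_{(n,0)}(X_{(n,0)})B^{F^{\perp}}]\) for \(n\geq 1\). Informally, the directions perpendicular to \(F\) are absorbed into the new coefficient algebra, while the directions in \(F\) survive as the fibres of \(Y^{F}\).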
We can restrict \(\beta_{F^{\perp}}\) to \(B^{F^{\perp}}\) to obtain a point-norm continuous family \[\beta_{F^{\perp}}|_{B^{F^{\perp}}}:=\{\beta_{F^{\perp},\underline{i}}|_{B^{F^{ \perp}}}\}_{\underline{z}\in\mathbb{T}^{d-r}}\] of \(*\)-automorphisms of \(B^{F^{\perp}}\). We may characterise \(B^{F^{\perp}}\) as the Toeplitz-Nica-Pimsner algebra of a compactly aligned product system over \(\mathbb{Z}_{+}^{d-r}\). **Definition 5.5.10**.: Let \(X\) be a product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\) and let \(F=[r]\) for some \(r<d\). We define the collection of C*-correspondences \(Z^{F^{\perp}}:=\{Z^{F^{\perp}}_{\underline{n}}\}_{\underline{n}\in\mathbb{Z}_{+}^{d -r}}\) over \(A\) by \[Z^{F^{\perp}}_{\underline{0}}:=A\quad\text{and}\quad Z^{F^{\perp}}_{\underline{n} }:=X_{(\underline{0},\underline{n})}\text{ for all }\underline{n}\in\mathbb{Z}_{+}^{d-r} \setminus\{\underline{0}\}.\] We can endow \(Z^{F^{\perp}}\) with a canonical product system structure, inherited from \(X\). **Proposition 5.5.11**.: _Let \(X\) be a product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\) and let \(F=[r]\) for some \(r<d\). Then \(Z^{F^{\perp}}\) inherits a product system structure from \(X\). Moreover, if every non-trivial fibre of \(X\) admits a finite frame (resp. \(X\) is compactly aligned, strong compactly aligned), then every non-trivial fibre of \(Z^{F^{\perp}}\) admits a finite frame (resp. \(Z^{F^{\perp}}\) is compactly aligned, strong compactly aligned)._ **Proof.** This is immediate when seeing \(Z^{F^{\perp}}\) as a subfamily of \(X\). **Proposition 5.5.12**.: _Let \(X\) be a compactly aligned product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\) and let \(F=[r]\) for some \(r<d\). Then the maps_ \[\pi\colon A\to B^{F^{\perp}};\pi(a)=\overline{\pi}(a)\text{ for all }a\in A,\] \[t_{\underline{n}}\colon Z_{\underline{n}}^{F^{\perp}}\to B^{F^{ \perp}};t_{\underline{n}}(z_{\underline{n}})=\overline{t}_{(\underline{0}, \underline{n})}(z_{\underline{n}})\text{ for all }z_{\underline{n}}\in Z_{ \underline{n}}^{F^{\perp}},\underline{n}\in\mathbb{Z}_{+}^{d-r}\setminus\{ \underline{0}\},\] _form an injective Nica-covariant representation \((\pi,t)\) of \(Z^{F^{\perp}}\) which induces a canonical \(*\)-isomorphism \(\mathcal{NT}_{Z^{F^{\perp}}}\cong B^{F^{\perp}}\)._ **Proof.** For notational convenience, we set \(B:=B^{F^{\perp}}\) and \(Z:=Z^{F^{\perp}}\). The maps are well-defined by (5.14). It is routine to check that \((\pi,t)\) constitutes an injective representation of \(Z\) with C\({}^{*}(\pi,t)=B\). Since \(Z\) inherits the product system structure of \(X\) and \((\overline{\pi},\overline{t})\) is Nica-covariant, it follows that \((\pi,t)\) is Nica-covariant. By the universal property of \(\mathcal{NT}_{Z}\), there exists a (unique) canonical \(*\)-epimorphism \(\pi\times t\colon\mathcal{NT}_{Z}\to B\). We will show that \(\pi\times t\) is injective, thereby completing the proof. Without loss of generality, we may take \((\overline{\pi}_{Z},\overline{t}_{Z})\) to be the Fock representation of \(Z\). Note that \(\beta_{F^{\perp}}|_{B}\) defines a gauge action of \((\pi,t)\). Let \(\gamma\) denote the gauge action of \((\overline{\pi}_{Z},\overline{t}_{Z})\). Then \(\pi\times t\) is equivariant, and so it suffices to show that the restriction \[(\pi\times t)|_{\mathcal{NT}_{Z}^{\gamma}}\colon\mathcal{NT}_{Z}^{\gamma}\to B ^{\beta_{F^{\perp}}|_{B}}\] to the fixed point algebras is injective. 
We have that \(\sum_{\underline{n}\perp F}X_{\underline{n}}\) is reducing for \(B^{\beta_{F^{\perp}}|_{B}}\) by construction, and thus the map \[\Psi\colon B^{\beta_{F^{\perp}}|_{B}}\to\mathcal{L}(\sum_{\underline{n}\perp F }X_{\underline{n}});\Psi(b)=b|_{\sum_{\underline{n}\perp F}X_{\underline{n}}} \text{ for all }b\in B^{\beta_{F^{\perp}}|_{B}},\] is a well-defined \(*\)-homomorphism. By identifying \(\mathcal{F}Z\cong\sum_{\underline{n}\perp F}X_{\underline{n}}\), we obtain that \(\Psi\) is a left inverse of \((\pi\times t)|_{\mathcal{NT}_{Z}^{\gamma}}\), as required. We isolate the following corollary from the proof of Proposition 5.5.12. **Corollary 5.5.13**.: _Let \(X\) be a compactly aligned product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\) and let \(F=[r]\) for some \(r<d\). Then the map_ \[\Psi\colon(B^{F^{\perp}})^{\beta_{F^{\perp}}|_{B^{F^{\perp}}}}\to\mathcal{L} (\sum_{\underline{n}\perp F}X_{\underline{n}});\Psi(b)=b|_{\sum_{\underline{n} \perp F}X_{\underline{n}}}\text{ for all }b\in(B^{F^{\perp}})^{\beta_{F^{\perp}}|_{B^{F^{\perp}}}}\] _is an injective \(*\)-homomorphism. In particular, we have that_ \[\|b|_{\sum_{\underline{n}\perp F}X_{\underline{n}}}\|=\|b\|\text{ for all }b\in(B^{F^{\perp}})^{\beta_{F^{\perp}}|_{B^{F^{\perp}}}}.\] Next we focus on \(Y^{F}\) and explore the connection with \(X\). **Lemma 5.5.14**.: _Let \(X\) be a compactly aligned product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\) and let \(F=[r]\) for some \(r<d\). Then \(Y_{\underline{n}}^{F}\) is a sub-C*-correspondence of \(\mathcal{NT}_{X}\) over \(B^{F^{\perp}}\) for all \(\underline{n}\in\mathbb{Z}_{+}^{r}\)._ **Proof.** For notational convenience, we set \(B:=B^{F^{\perp}}\) and \(Y:=Y^{F}\). We view \(Y_{\underline{0}}\) as a C*-correspondence over itself in the usual way. For \(\underline{n}\in\mathbb{Z}_{+}^{r}\setminus\{\underline{0}\}\), the space \(Y_{\underline{n}}\) inherits the usual inner product from \(\mathcal{NT}_{X}\) (now taking values in \(B\)) as well as the usual right multiplication in \(\mathcal{NT}_{X}\) (now by elements of \(B\)). A direct computation gives that the norm induced by the \(B\)-valued inner product coincides with the restriction norm on \(\mathcal{NT}_{X}\), and thus \(Y_{\underline{n}}\) is a right Hilbert \(B\)-module. It remains to equip \(Y_{\underline{n}}\) with a left action \(\phi_{Y_{\underline{n}}}\colon B\to\mathcal{L}(Y_{\underline{n}})\). To this end, fix \(b\in B\) and define \[\phi_{Y_{\underline{n}}}(b)\colon Y_{\underline{n}}\to Y_{\underline{n}};\phi_ {Y_{\underline{n}}}(b)y_{\underline{n}}=by_{\underline{n}}\text{ for all }y_{\underline{n}}\in Y_{\underline{n}},\] where the multiplication is performed in \(\mathcal{NT}_{X}\). To see that this map is well-defined, fix \(\underline{k},\underline{\ell}\in\mathbb{Z}_{+}^{d}\) satisfying \(\underline{k},\underline{\ell}\perp F\). Thus we may write \(\underline{k}=(\underline{0},\underline{k}_{F^{\perp}})\) and \(\underline{\ell}=(\underline{0},\underline{\ell}_{F^{\perp}})\). 
Since \((\underline{0},\underline{\ell}_{F^{\perp}})\perp(\underline{n},\underline{0 })\), Nica-covariance yields that \[\overline{t}_{(\underline{0},\underline{k}_{F^{\perp}})}(X_{( \underline{0},\underline{k}_{F^{\perp}})})\overline{t}_{(\underline{0}, \underline{\ell}_{F^{\perp}})}(X_{(\underline{0},\underline{\ell}_{F^{\perp}} )})^{*}\overline{t}_{(\underline{n},\underline{0})}(X_{(\underline{0}, \underline{0})})B\subseteq\] \[\subseteq[\overline{t}_{(\underline{n},\underline{0})}(X_{( \underline{n},\underline{0})})\overline{t}_{(\underline{0},\underline{k}_{F^{ \perp}})}(X_{(\underline{0},\underline{k}_{F^{\perp}})})\overline{t}_{( \underline{0},\underline{\ell}_{F^{\perp}})}(X_{(\underline{0},\underline{\ell }_{F^{\perp}})})^{*}B]\] \[\subseteq[\overline{t}_{(\underline{n},\underline{0})}(X_{( \underline{n},\underline{0})})BB]\subseteq[\overline{t}_{(\underline{n}, \underline{0})}(X_{(\underline{n},\underline{0})})B]\equiv Y_{\underline{n}},\] using (5.14) to pass to the last line. It then follows by another application of (5.14) that \(BY_{\underline{n}}\subseteq Y_{\underline{n}}\), and hence \(\phi_{Y_{\underline{n}}}(b)\) is well-defined. It is also clear that \(\phi_{Y_{\underline{n}}}(b)\in\mathcal{L}(Y_{\underline{n}})\) with adjoint \(\phi_{Y_{\underline{n}}}(b^{*})\), and so \(\phi_{Y_{\underline{n}}}\) is well-defined. It is routine to check that \(\phi_{Y_{\underline{n}}}\) is a \(*\)-homomorphism, and the proof is complete. **Proposition 5.5.15**.: _Let \(X\) be a compactly aligned product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\) and let \(F=[r]\) for some \(r<d\). Then \(Y^{F}\) carries a canonical structure as a product system over \(\mathbb{Z}_{+}^{r}\) with coefficients in \(B^{F^{\perp}}\), with the multiplication maps given by multiplication in \(\mathcal{NT}_{X}\)._ _Moreover, if \(\{\xi_{(\underline{n},\underline{0})}^{j}\}_{j\in[N]}\) is a finite frame of \(X_{(\underline{n},\underline{0})}\) for \(\underline{n}\in\mathbb{Z}_{+}^{r}\setminus\{\underline{0}\}\), then \(\{\overline{t}_{(\underline{n},\underline{0})}(\xi_{(\underline{n}, \underline{0})}^{(j)})\}_{j\in[N]}\) is a finite frame of \(Y_{\underline{n}}^{F}\)._ **Proof.** For notational convenience, we set \(B:=B^{F^{\perp}}\) and \(Y:=Y^{F}\). Lemma 5.5.14 gives that the fibres of \(Y\) are C*-correspondences over \(B\), so it remains to construct multiplication maps \[v_{\underline{n},\underline{m}}\colon Y_{\underline{n}}\otimes_{B}Y_{\underline {m}}\to Y_{\underline{n}+\underline{m}}\text{ for all }\underline{n},\underline{m}\in\mathbb{Z}_{+}^{r}\] that are compatible with axioms (i)-(v) for product systems. Axiom (i) is satisfied by construction, and axioms (ii) and (iii) determine the maps \(v_{\underline{0},\underline{n}}\) and \(v_{\underline{n},\underline{0}}\), respectively, for all \(\underline{n}\in\mathbb{Z}_{+}^{r}\). For \(\underline{n},\underline{m}\in\mathbb{Z}_{+}^{r}\setminus\{\underline{0}\}\), we start by defining a map \[v_{\underline{n},\underline{m}}\colon Y_{\underline{n}}\times Y_{\underline{m} }\to Y_{\underline{n}+\underline{m}};(y_{\underline{n}},y_{\underline{m}})\mapsto y _{\underline{n}}y_{\underline{m}}\text{ for all }y_{\underline{n}}\in Y_{\underline{n}}\text{ and }y_{\underline{m}}\in Y_{\underline{m}},\] where the multiplication is performed in \(\mathcal{NT}_{X}\). 
To see that \(v_{\underline{n},\underline{m}}\) is well-defined, observe that \[\overline{t}_{(\underline{n},\underline{0})}(X_{(\underline{n},\underline{0})})B\overline{t}_{(\underline{m},\underline{0})}(X_{(\underline{m},\underline{0})})B\subseteq[\overline{t}_{(\underline{n},\underline{0})}(X_{(\underline{n},\underline{0})})\overline{t}_{(\underline{m},\underline{0})}(X_{(\underline{m},\underline{0})})B]\subseteq[\overline{t}_{(\underline{n}+\underline{m},\underline{0})}(X_{(\underline{n}+\underline{m},\underline{0})})B]\equiv Y_{\underline{n}+\underline{m}},\] using that \(BY_{\underline{m}}\subseteq Y_{\underline{m}}\) by Lemma 5.5.14. This shows that \(Y_{\underline{n}}Y_{\underline{m}}\subseteq Y_{\underline{n}+\underline{m}}\) and hence \(v_{\underline{n},\underline{m}}\) is well-defined. It is routine to check that \(v_{\underline{n},\underline{m}}\) is bilinear and \(B\)-balanced, and therefore it linearises to a map on \(Y_{\underline{n}}\odot_{B}Y_{\underline{m}}\), denoted by the same symbol. For all \(y_{\underline{n}},y_{\underline{n}}^{\prime}\in Y_{\underline{n}}\) and \(y_{\underline{m}},y_{\underline{m}}^{\prime}\in Y_{\underline{m}}\), we have that \[\left\langle v_{\underline{n},\underline{m}}(y_{\underline{n}}\otimes y_{\underline{m}}),v_{\underline{n},\underline{m}}(y_{\underline{n}}^{\prime}\otimes y_{\underline{m}}^{\prime})\right\rangle=y_{\underline{m}}^{*}y_{\underline{n}}^{*}y_{\underline{n}}^{\prime}y_{\underline{m}}^{\prime}=\left\langle y_{\underline{n}}\otimes y_{\underline{m}},y_{\underline{n}}^{\prime}\otimes y_{\underline{m}}^{\prime}\right\rangle,\] from which it follows that \(\left\langle v_{\underline{n},\underline{m}}(\zeta),v_{\underline{n},\underline{m}}(\zeta^{\prime})\right\rangle=\left\langle\zeta,\zeta^{\prime}\right\rangle\) for all \(\zeta,\zeta^{\prime}\in Y_{\underline{n}}\odot_{B}Y_{\underline{m}}\). In particular, we have that \(v_{\underline{n},\underline{m}}\) is bounded and hence it extends to \(Y_{\underline{n}}\otimes_{B}Y_{\underline{m}}\). A direct computation gives that \(v_{\underline{n},\underline{m}}\) is a \(B\)-bimodule map, and the preceding calculation shows that it is isometric. To see that \(v_{\underline{n},\underline{m}}\) is surjective, fix \(\xi_{(\underline{n},\underline{0})}\in X_{(\underline{n},\underline{0})},\xi_{(\underline{m},\underline{0})}\in X_{(\underline{m},\underline{0})},b\in B\) and an approximate unit \((u_{\lambda})_{\lambda\in\Lambda}\) of \(A\). We have that \[\overline{t}_{(\underline{n}+\underline{m},\underline{0})}(\xi_{(\underline{n},\underline{0})}\xi_{(\underline{m},\underline{0})})b=\|\cdot\|\text{-}\lim_{\lambda}\overline{t}_{(\underline{n},\underline{0})}(\xi_{(\underline{n},\underline{0})})\overline{\pi}(u_{\lambda})\overline{t}_{(\underline{m},\underline{0})}(\xi_{(\underline{m},\underline{0})})b=\|\cdot\|\text{-}\lim_{\lambda}v_{\underline{n},\underline{m}}\big(\overline{t}_{(\underline{n},\underline{0})}(\xi_{(\underline{n},\underline{0})})\overline{\pi}(u_{\lambda})\otimes\overline{t}_{(\underline{m},\underline{0})}(\xi_{(\underline{m},\underline{0})})b\big).\] Hence \(\overline{t}_{(\underline{n}+\underline{m},\underline{0})}(\xi_{(\underline{n},\underline{0})}\xi_{(\underline{m},\underline{0})})b\) can be realised as the norm-limit of a net that is contained in the closed linear space \(v_{\underline{n},\underline{m}}(Y_{\underline{n}}\otimes_{B}Y_{\underline{m}})\).
It follows that \[\overline{t}_{(\underline{n}+\underline{m},\underline{0})}(X_{(\underline{n}+ \underline{m},\underline{0})})B\subseteq v_{\underline{n},\underline{m}}(Y_{ \underline{n}}\otimes_{B}Y_{\underline{m}}),\] and hence \(v_{\underline{n},\underline{m}}\) is surjective. Associativity of the multiplication maps follows by associativity of multiplication in \(\mathcal{NT}_{X}\). In total, we have that \(Y\) is a product system over \(\mathbb{Z}_{+}^{r}\) with coefficients in \(B\), as required. Finally, let \(\underline{n}\in\mathbb{Z}_{+}^{r}\setminus\{\underline{0}\}\) and let \(\{\xi_{(\underline{n},\underline{0})}^{(j)}\}_{j\in[N]}\) be a finite frame of \(X_{(\underline{n},\underline{0})}\). For \(y_{\underline{n}}=\overline{t}_{(\underline{n},\underline{0})}(\xi_{( \underline{n},\underline{0})})b\), where \(\xi_{(\underline{n},\underline{0})}\in X_{(\underline{n},\underline{0})}\) and \(b\in B\), we have that \[\sum_{j=1}^{N}\Theta^{Y_{\underline{n}}}_{\overline{t}_{(\underline {n},\underline{0})}(\xi_{(\underline{n},\underline{0})}^{(j)}),\overline{t}_ {(\underline{n},\underline{0})}(\xi_{(\underline{n},\underline{0})}^{(j)})}(y _{\underline{n}}) =\sum_{j=1}^{N}\overline{t}_{(\underline{n},\underline{0})}(\xi_ {(\underline{n},\underline{0})}^{(j)})\overline{t}_{(\underline{n}, \underline{0})}(\xi_{(\underline{n},\underline{0})}^{(j)})^{*}\overline{t}_{( \underline{n},\underline{0})}(\xi_{(\underline{n},\underline{0})})b\] \[=\overline{t}_{(\underline{n},\underline{0})}(\sum_{j=1}^{N} \Theta^{X_{(\underline{n},\underline{0})}}_{\xi_{(\underline{n},\underline{0})} ^{(j)}}(\xi_{(\underline{n},\underline{0})}))b=\overline{t}_{(\underline{n}, \underline{0})}(\xi_{(\underline{n},\underline{0})})b=y_{\underline{n}},\] as \(\{\xi_{(\underline{n},\underline{0})}^{(j)}\}_{j\in[N]}\) is a finite frame of \(X_{(\underline{n},\underline{0})}\). Since \(\overline{t}_{(\underline{n},\underline{0})}(X_{(\underline{n},\underline{0}) })B\) densely spans \(Y_{\underline{n}}\), it follows that \[\sum_{j=1}^{N}\Theta^{Y_{\underline{n}}}_{\overline{t}_{(\underline{n}, \underline{0})}(\xi_{(\underline{n},\underline{0})}^{(j)}),\overline{t}_{( \underline{n},\underline{0})}(\xi_{(\underline{n},\underline{0})}^{(j)})}= \operatorname{id}_{Y_{\underline{n}}},\] and the proof is complete. Next we show that \(\mathcal{NT}_{X}\cong\mathcal{NT}_{Y^{F}}\) when each \(X_{\underline{i}}\) admits a finite frame. Note that each \(Y_{\underline{i}}^{F}\) admits a finite frame by Proposition 5.5.15. We start by representing \(Y^{F}\) in \(\mathcal{NT}_{X}\). **Proposition 5.5.16**.: _Let \(X\) be a product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\), wherein \(X_{\underline{i}}\) admits a finite frame for all \(i\in[d]\). Let \(F=[r]\) for some \(r<d\). Define the maps_ \[\pi\colon B^{F^{\perp}}\to\mathcal{NT}_{X};\pi(b)=b\text{ for all }b\in B^{F^{\perp}},\] \[t_{\underline{n}}\colon Y_{\underline{n}}^{F}\to\mathcal{NT}_{X};t_{\underline {n}}(y_{\underline{n}})=y_{\underline{n}}\text{ for all }y_{\underline{n}}\in Y_{\underline{n}}^{F},\underline{n}\in \mathbb{Z}_{+}^{r}\setminus\{\underline{0}\}.\] _Then \((\pi,t)\) is an injective Nica-covariant representation of \(Y^{F}\) satisfying \(\operatorname{C}^{*}(\pi,t)=\mathcal{NT}_{X}\)._ **Proof.** For notational convenience, we set \(B:=B^{F^{\perp}}\) and \(Y:=Y^{F}\). 
For \(\underline{s}\in\mathbb{Z}_{+}^{d}\), let \(P_{\underline{s}}\) denote the projection of \(\mathcal{F}X\) onto \(\sum_{\underline{k}\geq\underline{s}}X_{\underline{k}}\). Then \(P_{\underline{s}}\overline{t}_{\underline{s}}(\zeta_{\underline{s}})=\overline{t}_{\underline{s}}(\zeta_{\underline{s}})\) for all \(\zeta_{\underline{s}}\in X_{\underline{s}}\). It is routine to check that \((\pi,t)\) defines an injective representation of the product system \(Y\). To see that \(\operatorname{C}^{*}(\pi,t)=\mathcal{NT}_{X}\), it suffices to show that \(\operatorname{C}^{*}(\pi,t)\) contains the generators of \(\mathcal{NT}_{X}\). By (5.14) we have that \(\overline{t}_{\underline{n}}(X_{\underline{n}})\subseteq\operatorname{C}^{*}(\pi,t)\) for all \(\underline{n}\perp F\). It remains to see that \[\overline{t}_{(\underline{k},\underline{\ell})}(X_{(\underline{k},\underline{\ell})})\subseteq\operatorname{C}^{*}(\pi,t)\text{ for all }\underline{k}\in\mathbb{Z}_{+}^{r}\setminus\{\underline{0}\},\underline{\ell}\in\mathbb{Z}_{+}^{d-r}.\] Fixing \(\xi_{(\underline{k},\underline{0})}\in X_{(\underline{k},\underline{0})}\) and \(\xi_{(\underline{0},\underline{\ell})}\in X_{(\underline{0},\underline{\ell})}\), we have that \[\overline{t}_{(\underline{k},\underline{\ell})}(\xi_{(\underline{k},\underline{0})}\xi_{(\underline{0},\underline{\ell})})=\overline{t}_{(\underline{k},\underline{0})}(\xi_{(\underline{k},\underline{0})})\overline{t}_{(\underline{0},\underline{\ell})}(\xi_{(\underline{0},\underline{\ell})})=t_{\underline{k}}(\overline{t}_{(\underline{k},\underline{0})}(\xi_{(\underline{k},\underline{0})}))\pi(\overline{t}_{(\underline{0},\underline{\ell})}(\xi_{(\underline{0},\underline{\ell})}))\in\operatorname{C}^{*}(\pi,t).\] Hence we have that \[\overline{t}_{(\underline{k},\underline{\ell})}(X_{(\underline{k},\underline{\ell})})\subseteq[\overline{t}_{(\underline{k},\underline{0})}(X_{(\underline{k},\underline{0})})\overline{t}_{(\underline{0},\underline{\ell})}(X_{(\underline{0},\underline{\ell})})]\subseteq\operatorname{C}^{*}(\pi,t),\] as required. Next we show that \((\pi,t)\) is Nica-covariant. Let \(\{\iota_{\underline{n}}^{\underline{n}+\underline{m}}\}_{\underline{n},\underline{m}\in\mathbb{Z}_{+}^{r}}\) denote the connecting \(*\)-homomorphisms of \(Y\). Fix \(\underline{n},\underline{m}\in\mathbb{Z}_{+}^{r}\setminus\{\underline{0}\}\), \(k_{\underline{n}}\in\mathcal{K}(Y_{\underline{n}})\) and \(k_{\underline{m}}\in\mathcal{K}(Y_{\underline{m}})\). We have that \[\psi_{\underline{n}\vee\underline{m}}(\iota_{\underline{n}}^{\underline{n}\vee\underline{m}}(k_{\underline{n}})\iota_{\underline{m}}^{\underline{n}\vee\underline{m}}(k_{\underline{m}}))=\psi_{\underline{n}\vee\underline{m}}(\iota_{\underline{n}}^{\underline{n}\vee\underline{m}}(k_{\underline{n}}))\psi_{\underline{n}\vee\underline{m}}(\iota_{\underline{m}}^{\underline{n}\vee\underline{m}}(k_{\underline{m}}))\] by Proposition 2.4.1. Therefore, we must show that \[\psi_{\underline{n}\vee\underline{m}}(\iota_{\underline{n}}^{\underline{n}\vee\underline{m}}(k_{\underline{n}}))\psi_{\underline{n}\vee\underline{m}}(\iota_{\underline{m}}^{\underline{n}\vee\underline{m}}(k_{\underline{m}}))=\psi_{\underline{n}}(k_{\underline{n}})\psi_{\underline{m}}(k_{\underline{m}}). \tag{5.15}\] This holds trivially when \(\underline{n}=\underline{n}\vee\underline{m}=\underline{m}\), so we consider the cases where \(\underline{n}\neq\underline{n}\vee\underline{m}\) or \(\underline{m}\neq\underline{n}\vee\underline{m}\). Assume that \(\underline{n}\neq\underline{n}\vee\underline{m}\).
We proceed by showing that (5.15) holds for \(k_{\underline{n}}=\Theta^{Y_{\underline{n}}}_{y_{\underline{n}},y^{\prime}_{\underline{n}}}\) and \(k_{\underline{m}}=\Theta^{Y_{\underline{m}}}_{y_{\underline{m}},y^{\prime}_{\underline{m}}}\), where \[y_{\underline{n}}=\overline{t}_{(\underline{n},\underline{0})}(\xi_{(\underline{n},\underline{0})})b,y^{\prime}_{\underline{n}}=\overline{t}_{(\underline{n},\underline{0})}(\eta_{(\underline{n},\underline{0})}),y_{\underline{m}}=\overline{t}_{(\underline{m},\underline{0})}(\xi_{(\underline{m},\underline{0})})c,y^{\prime}_{\underline{m}}=\overline{t}_{(\underline{m},\underline{0})}(\eta_{(\underline{m},\underline{0})}),\] for some \(\xi_{(\underline{n},\underline{0})},\eta_{(\underline{n},\underline{0})}\in X_{(\underline{n},\underline{0})},\xi_{(\underline{m},\underline{0})},\eta_{(\underline{m},\underline{0})}\in X_{(\underline{m},\underline{0})}\) and \(b,c\in B\). To this end, we compute \(\psi_{\underline{n}\vee\underline{m}}(t_{\underline{n}}^{\underline{n}\vee\underline{m}}(\Theta^{Y_{\underline{n}}}_{y_{\underline{n}},y^{\prime}_{\underline{n}}}))\) and \(\psi_{\underline{n}\vee\underline{m}}(t_{\underline{m}}^{\underline{n}\vee\underline{m}}(\Theta^{Y_{\underline{m}}}_{y_{\underline{m}},y^{\prime}_{\underline{m}}}))\). For \(\psi_{\underline{n}\vee\underline{m}}(t_{\underline{n}}^{\underline{n}\vee\underline{m}}(\Theta^{Y_{\underline{n}}}_{y_{\underline{n}},y^{\prime}_{\underline{n}}}))\), let \(\{\xi_{(\underline{n}\vee\underline{m}-\underline{n},\underline{0})}^{(j)}\}_{j\in[N]}\) be a finite frame of \(X_{(\underline{n}\vee\underline{m}-\underline{n},\underline{0})}\). By Proposition 5.5.15, we have that \(\{\overline{t}_{(\underline{n}\vee\underline{m}-\underline{n},\underline{0})}(\xi_{(\underline{n}\vee\underline{m}-\underline{n},\underline{0})}^{(j)})\}_{j\in[N]}\) is a finite frame of \(Y_{\underline{n}\vee\underline{m}-\underline{n}}\). Note that \[P_{(\underline{n}\vee\underline{m}-\underline{n},\underline{0})}=\sum_{j=1}^{N}\overline{t}_{(\underline{n}\vee\underline{m}-\underline{n},\underline{0})}(\xi_{(\underline{n}\vee\underline{m}-\underline{n},\underline{0})}^{(j)})\overline{t}_{(\underline{n}\vee\underline{m}-\underline{n},\underline{0})}(\xi_{(\underline{n}\vee\underline{m}-\underline{n},\underline{0})}^{(j)})^{*}\in\mathcal{NT}_{X}.\] Observe that \(P_{(\underline{n}\vee\underline{m}-\underline{n},\underline{0})}\) belongs to the commutant of \(\overline{t}_{(\underline{0},\underline{\ell})}(X_{(\underline{0},\underline{\ell})})\) (and of \(\overline{t}_{(\underline{0},\underline{\ell})}(X_{(\underline{0},\underline{\ell})})^{*}\) by taking adjoints) for all \(\underline{\ell}\in\mathbb{Z}_{+}^{d-r}\), since for \(\underline{s}\in\mathbb{Z}_{+}^{d}\) we have that \[(\underline{n}\vee\underline{m}-\underline{n},\underline{0})\leq\underline{s}\iff(\underline{n}\vee\underline{m}-\underline{n},\underline{0})\leq\underline{s}+(\underline{0},\underline{\ell})\text{ for all }\underline{\ell}\in\mathbb{Z}_{+}^{d-r}.\] In particular, we have that \(P_{(\underline{n}\vee\underline{m}-\underline{n},\underline{0})}\) belongs to \(B^{\prime}\). 
Additionally, we have that \[\overline{t}_{(\underline{n},\underline{0})}(\zeta_{(\underline{n},\underline{0})})P_{(\underline{n}\vee\underline{m}-\underline{n},\underline{0})}=P_{(\underline{n}\vee\underline{m},\underline{0})}\overline{t}_{(\underline{n},\underline{0})}(\zeta_{(\underline{n},\underline{0})}),\text{ for all }\zeta_{(\underline{n},\underline{0})}\in X_{(\underline{n},\underline{0})},\] and by taking adjoints we obtain that \[P_{(\underline{n}\vee\underline{m}-\underline{n},\underline{0})}\overline{t}_{(\underline{n},\underline{0})}(\zeta_{(\underline{n},\underline{0})})^{*}=\overline{t}_{(\underline{n},\underline{0})}(\zeta_{(\underline{n},\underline{0})})^{*}P_{(\underline{n}\vee\underline{m},\underline{0})},\text{ for all }\zeta_{(\underline{n},\underline{0})}\in X_{(\underline{n},\underline{0})}.\] This follows from the observation that for \(\underline{s}\in\mathbb{Z}_{+}^{d}\) we have that \[(\underline{n}\vee\underline{m}-\underline{n},\underline{0})\leq\underline{s}\iff(\underline{n}\vee\underline{m},\underline{0})\leq(\underline{n},\underline{0})+\underline{s}.\] An application of Corollary 5.5.5 gives that \[\psi_{\underline{n}\vee\underline{m}}(t_{\underline{n}}^{\underline{n}\vee\underline{m}}(\Theta^{Y_{\underline{n}}}_{y_{\underline{n}},y^{\prime}_{\underline{n}}}))=\sum_{j=1}^{N}y_{\underline{n}}\overline{t}_{(\underline{n}\vee\underline{m}-\underline{n},\underline{0})}(\xi_{(\underline{n}\vee\underline{m}-\underline{n},\underline{0})}^{(j)})\overline{t}_{(\underline{n}\vee\underline{m}-\underline{n},\underline{0})}(\xi_{(\underline{n}\vee\underline{m}-\underline{n},\underline{0})}^{(j)})^{*}(y^{\prime}_{\underline{n}})^{*}\] \[=\overline{t}_{(\underline{n},\underline{0})}(\xi_{(\underline{n},\underline{0})})bP_{(\underline{n}\vee\underline{m}-\underline{n},\underline{0})}\overline{t}_{(\underline{n},\underline{0})}(\eta_{(\underline{n},\underline{0})})^{*}\] \[=\overline{t}_{(\underline{n},\underline{0})}(\xi_{(\underline{n},\underline{0})})b\overline{t}_{(\underline{n},\underline{0})}(\eta_{(\underline{n},\underline{0})})^{*}P_{(\underline{n}\vee\underline{m},\underline{0})}\] \[=P_{(\underline{n}\vee\underline{m},\underline{0})}\overline{t}_{(\underline{n},\underline{0})}(\xi_{(\underline{n},\underline{0})})b\overline{t}_{(\underline{n},\underline{0})}(\eta_{(\underline{n},\underline{0})})^{*}.\] For \(\psi_{\underline{n}\vee\underline{m}}(t_{\underline{m}}^{\underline{n}\vee\underline{m}}(\Theta^{Y_{\underline{m}}}_{y_{\underline{m}},y^{\prime}_{\underline{m}}}))\), we consider two cases. 
If \(\underline{m}=\underline{n}\vee\underline{m}\), then we have that \[\psi_{\underline{n}\vee\underline{m}}(t_{\underline{m}}^{\underline{n}\vee\underline{m}}(\Theta^{Y_{\underline{m}}}_{y_{\underline{m}},y^{\prime}_{\underline{m}}}))=\psi_{\underline{m}}(\Theta^{Y_{\underline{m}}}_{y_{\underline{m}},y^{\prime}_{\underline{m}}})=\overline{t}_{(\underline{m},\underline{0})}(\xi_{(\underline{m},\underline{0})})c\overline{t}_{(\underline{m},\underline{0})}(\eta_{(\underline{m},\underline{0})})^{*},\] and if \(\underline{m}\neq\underline{n}\vee\underline{m}\) then we have that \[\psi_{\underline{n}\vee\underline{m}}(t_{\underline{m}}^{\underline{n}\vee\underline{m}}(\Theta^{Y_{\underline{m}}}_{y_{\underline{m}},y^{\prime}_{\underline{m}}}))=P_{(\underline{n}\vee\underline{m},\underline{0})}\overline{t}_{(\underline{m},\underline{0})}(\xi_{(\underline{m},\underline{0})})c\overline{t}_{(\underline{m},\underline{0})}(\eta_{(\underline{m},\underline{0})})^{*}\] by swapping \(\underline{n}\) and \(\underline{m}\), as well as \(b\) and \(c\), in the preceding arguments. Since \(P_{(\underline{n}\vee\underline{m},\underline{0})}=P_{(\underline{n},\underline{0})}P_{(\underline{m},\underline{0})}\), a direct computation shows that in both cases the product \(\psi_{\underline{n}\vee\underline{m}}(t_{\underline{n}}^{\underline{n}\vee\underline{m}}(\Theta^{Y_{\underline{n}}}_{y_{\underline{n}},y^{\prime}_{\underline{n}}}))\psi_{\underline{n}\vee\underline{m}}(t_{\underline{m}}^{\underline{n}\vee\underline{m}}(\Theta^{Y_{\underline{m}}}_{y_{\underline{m}},y^{\prime}_{\underline{m}}}))\) coincides with \(\psi_{\underline{n}}(\Theta^{Y_{\underline{n}}}_{y_{\underline{n}},y^{\prime}_{\underline{n}}})\psi_{\underline{m}}(\Theta^{Y_{\underline{m}}}_{y_{\underline{m}},y^{\prime}_{\underline{m}}})\), using that \(P^{2}_{(\underline{n}\vee\underline{m},\underline{0})}=P_{(\underline{n}\vee\underline{m},\underline{0})}\) when \(\underline{m}\neq\underline{n}\vee\underline{m}\). By taking finite linear combinations and their norm-limits, we conclude that (5.15) holds when \(\underline{n}\neq\underline{n}\vee\underline{m}\). Since (5.15) is symmetric with respect to \(\underline{n}\) and \(\underline{m}\), taking adjoints (and relabelling) deals with the case where \(\underline{m}\neq\underline{n}\vee\underline{m}\), showing that \((\pi,t)\) is Nica-covariant. This completes the proof. We now arrive at the next main result of this subsection, namely that the decomposition of \(X\) along \(\emptyset\neq F\subsetneq[d]\) induces a similar decomposition of the Toeplitz-Nica-Pimsner algebras. The following is noted in [30, Proof of Theorem 4.4 (i)]. **Theorem 5.5.17**.: _Let \(X\) be a product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\), wherein \(X_{\underline{i}}\) admits a finite frame for all \(i\in[d]\). Let \(F=[r]\) for some \(r<d\). Consider the product system \(Y^{F}\) over \(B^{F^{\perp}}\) related to \(X\) and \(F\), and define the maps_ \[\pi\colon B^{F^{\perp}}\to\mathcal{NT}_{X};\pi(b)=b\text{ for all }b\in B^{F^{\perp}},\] \[t_{\underline{n}}\colon Y^{F}_{\underline{n}}\to\mathcal{NT}_{X};t_{\underline{n}}(y_{\underline{n}})=y_{\underline{n}}\text{ for all }y_{\underline{n}}\in Y^{F}_{\underline{n}},\underline{n}\in\mathbb{Z}_{+}^{r}\setminus\{\underline{0}\}.\] _Then the induced map \(\pi\times t\colon\mathcal{NT}_{Y^{F}}\to\mathcal{NT}_{X}\) is a \(*\)-isomorphism._ Proof.: For notational convenience, we set \(B:=B^{F^{\perp}}\) and \(Y:=Y^{F}\). By Proposition 5.5.16 we have that \((\pi,t)\) is Nica-covariant and thus \(\pi\times t\) is well-defined. Let \(\gamma\) be the gauge action of \((\overline{\pi}_{Y},\overline{t}_{Y})\). It is routine to check that \(\beta_{F}\) defines a gauge action of \((\pi,t)\). Then \(\pi\times t\) is an equivariant \(*\)-epimorphism, and it suffices to show that \(\pi\times t\) is injective on \(\mathcal{NT}_{Y}^{\gamma}\). By Proposition 2.5.13, this amounts to showing that \(\pi\times t\) is injective on the \([\underline{0},\underline{1}_{[r]}]\)-core. 
Towards contradiction, suppose that \(\ker\pi\times t\cap B^{(\overline{\pi}_{Y},\overline{t}_{Y})}_{[\underline{0},\underline{1}_{[r]}]}\neq\{0\}\). We claim that we can find \(0\neq f\in\ker\pi\times t\cap B^{(\overline{\pi}_{Y},\overline{t}_{Y})}_{[ \underline{0},\underline{1}_{[r]}]}\) of the form \[f=\overline{\pi}_{Y}(b)+\sum\{\overline{\psi}_{Y,\underline{n}}(k_{\underline{ n}})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{[r]}\},\text{ where }0\neq b\in B_{+}\text{ and each }k_{\underline{n}}\in\mathcal{K}(Y_{\underline{n}}). \tag{5.16}\] Indeed, start by taking \(0\neq g\in\ker\pi\times t\cap B^{(\overline{\pi}_{Y},\overline{t}_{Y})}_{[ \underline{0},\underline{1}_{[r]}]}\), so that \[g=\overline{\pi}_{Y}(b^{\prime})+\sum\{\overline{\psi}_{Y,\underline{n}}(k^{ \prime}_{\underline{n}})\mid\underline{0}\neq\underline{n}\leq\underline{1}_{ [r]}\},\text{ where }b^{\prime}\in B\text{ and each }k^{\prime}_{\underline{n}}\in \mathcal{K}(Y_{\underline{n}}).\] If \(b^{\prime}\neq 0\), then choose \(f=g^{*}g\). If \(b^{\prime}=0\), then choose \(\underline{0}\neq\underline{m}\leq\underline{1}_{[r]}\) minimal such that \(k^{\prime}_{\underline{m}}\neq 0\). We may assume that \(\underline{m}\neq\underline{1}_{[r]}\), as otherwise we would have that \(g=\overline{\psi}_{Y,\underline{1}_{[r]}}(k^{\prime}_{\underline{1}_{[r]}})\) and injectivity of \(\psi_{\underline{1}_{[r]}}\) would give that \(g=0\), a contradiction. Since \(k^{\prime}_{\underline{m}}\neq 0\), we may find \(0\neq y_{\underline{m}}\in Y_{\underline{m}}\) such that \(k^{\prime}_{\underline{m}}y_{\underline{m}}\neq 0\). We set \[f:=\overline{t}_{Y,\underline{m}}(k^{\prime}_{\underline{m}}y_{\underline{m}})^ {*}g\overline{t}_{Y,\underline{m}}(y_{\underline{m}})\in\ker\pi\times t\cap B^ {(\overline{\pi}_{Y},\overline{t}_{Y})}_{[\underline{0},\underline{1}_{[r]} ]},\] and we note that \[f=\overline{\pi}_{Y}(\langle k^{\prime}_{\underline{m}}y_{\underline{m}},k^{ \prime}_{\underline{m}}y_{\underline{m}}\rangle)+\sum\{\overline{\psi}_{Y, \underline{n}}(k^{\prime\prime}_{\underline{n}})\mid\underline{0}\neq\underline {n}\leq\underline{1}_{[r]}-\underline{m}\}\] for suitably defined \(k^{\prime\prime}_{\underline{n}}\in\mathcal{K}(Y_{\underline{n}})\), for all \(\underline{0}\neq\underline{n}\leq\underline{1}_{[r]}-\underline{m}\). Notice that \(0\neq\left\langle k^{\prime}_{\underline{m}}y_{\underline{m}},k^{\prime}_{ \underline{m}}y_{\underline{m}}\right\rangle\in B_{+}\) by construction, and so by padding with zeroes we deduce that \(f\) has the required form. Hence, without loss of generality, we may assume that \(f\) is of the form (5.16). We have that \[\pi(b)+\sum\{\psi_{\underline{n}}(k_{\underline{n}})\mid\underline{0}\neq \underline{n}\leq\underline{1}_{[r]}\}=(\pi\times t)(f)=0,\] and hence \(\pi(b)q_{F}=0\) by (2.12). Fixing \(i\in F\), let \(\{\xi^{(j)}_{(\underline{i},\underline{0})}\}_{j\in[N]}\) be a finite frame of \(X_{(\underline{i},\underline{0})}\). Then \(\{\overline{t}_{(\underline{i},\underline{0})}(\xi^{(j)}_{(\underline{i}, \underline{0})})\}_{j\in[N]}\) is a finite frame of \(Y_{\underline{i}}\) by Proposition 5.5.15. 
Hence we have that \[p_{\underline{i}}=\sum_{j=1}^{N}t_{\underline{i}}(\overline{t}_{(\underline{i}, \underline{0})}(\xi^{(j)}_{(\underline{i},\underline{0})}))t_{\underline{i}}( \overline{t}_{(\underline{i},\underline{0})}(\xi^{(j)}_{(\underline{i}, \underline{0})}))^{*}=\sum_{j=1}^{N}\overline{t}_{(\underline{i},\underline{0})}( \xi^{(j)}_{(\underline{i},\underline{0})})\overline{t}_{(\underline{i}, \underline{0})}(\xi^{(j)}_{(\underline{i},\underline{0})})^{*}=\overline{p}_{( \underline{i},\underline{0})}.\] In turn, we obtain that \[b\overline{q}_{F}=b\prod_{i\in F}(I-\overline{p}_{(\underline{i},\underline{0})})= b\prod_{i\in F}(I-p_{\underline{i}})=bq_{F}=\pi(b)q_{F}=0.\] Since \(\overline{p}_{(\underline{i},\underline{0})}\in\mathcal{N}\mathcal{T}_{X}^{ \beta_{F^{\perp}}}\) for all \(i\in F\), we have that \[E_{\beta_{F^{\perp}}}(b)\overline{q}_{F} =E_{\beta_{F^{\perp}}}(b)+\sum\{(-1)^{|D|}E_{\beta_{F^{\perp}}}(b )\prod_{i\in D}\overline{p}_{(\underline{i},\underline{0})}\mid\emptyset\neq D \subseteq F\}\] \[=E_{\beta_{F^{\perp}}}\bigg{(}b+\sum\{(-1)^{|D|}b\prod_{i\in D} \overline{p}_{(\underline{i},\underline{0})}\mid\emptyset\neq D\subseteq F\} \bigg{)}=E_{\beta_{F^{\perp}}}(b\overline{q}_{F})=0,\] where in the second equality we use that \(E_{\beta_{F^{\perp}}}\) is an \(\mathcal{N}\mathcal{T}_{X}^{\beta_{F^{\perp}}}\)-bimodule map. In particular, for every \(\underline{n}\in\mathbb{Z}_{+}^{d}\) satisfying \(\underline{n}\perp F\), we have that \[E_{\beta_{F^{\perp}}}(b)\xi_{\underline{n}}=E_{\beta_{F^{\perp}}}(b)\overline {q}_{F}\xi_{\underline{n}}=0,\text{ for all }\xi_{\underline{n}}\in X_{ \underline{n}},\] and thus \(E_{\beta_{F^{\perp}}}(b)=0\) by Corollary 5.5.13. Since \(b\in B_{+}\), faithfulness of \(E_{\beta_{F^{\perp}}}\) gives the contradiction that \(b=0\), and the proof is complete. We are now ready to capture the quotient of \(\mathcal{NT}_{X}\) by the ideal \(\left\langle\overline{\pi}(A)\overline{q}_{\underline{i}}\mid i\in F\right\rangle\) induced by \(F\) as a Cuntz-Nica-Pimsner algebra. **Theorem 5.5.18**.: _Let \(X\) be a product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\), wherein \(X_{\underline{i}}\) admits a finite frame for all \(i\in[d]\). Let \(F=[r]\) for some \(r<d\). Consider the product system \(Y^{F}\) over \(B^{F^{\perp}}\) related to \(X\) and \(F\), and define the ideal_ \[I_{Y^{F}}:=\ker\{B^{F^{\perp}}\to\mathcal{NT}_{Y^{F}}/\left\langle\overline{ \pi}_{Y^{F}}(B^{F^{\perp}})\overline{q}_{Y^{F},\underline{i}}\mid i\in F \right\rangle\}.\] _Then the canonical \(*\)-isomorphism \(\mathcal{NT}_{Y^{F}}\cong\mathcal{NT}_{X}\) descends to a \(*\)-isomorphism_ \[\mathcal{NO}_{[Y^{F}]_{I_{Y^{F}}}}\cong\mathcal{NT}_{X}/\left\langle\overline{ \pi}(A)\overline{q}_{\underline{i}}\mid i\in F\right\rangle.\] **Proof.** For notational convenience, we set \(B:=B^{F^{\perp}}\), \(Y:=Y^{F}\) and \(I:=I_{Y^{F}}\). We define \[\mathfrak{J}_{Y}:=\left\langle\overline{\pi}_{Y}(B)\overline{q}_{Y_{ \underline{i}}}\mid i\in F\right\rangle\subseteq\mathcal{NT}_{Y}\quad\text{ and}\quad\mathfrak{J}_{X}:=\left\langle\overline{\pi}(A)\overline{q}_{\underline{i}} \mid i\in F\right\rangle\subseteq\mathcal{NT}_{X}.\] By applying Proposition 5.5.8 to \(Y\), we have that \[\mathcal{NO}_{[Y]_{I}}\cong\mathcal{NT}_{Y}/\mathfrak{J}_{Y}.\] Let \(\pi\times t\colon\mathcal{NT}_{Y}\to\mathcal{NT}_{X}\) be the canonical \(*\)-isomorphism of Theorem 5.5.17. We will show that \((\pi\times t)(\mathfrak{J}_{Y})=\mathfrak{J}_{X}\). 
This ensures that \(\pi\times t\) descends to a \(*\)-isomorphism on the quotients, and thus \[\mathcal{NO}_{[Y]_{I}}\cong\mathcal{NT}_{Y}/\mathfrak{J}_{Y}\cong\mathcal{NT} _{X}/\mathfrak{J}_{X},\] as required. To this end, fix \(i\in F\). We have that \((\pi\times t)(\overline{p}_{Y\underline{i}})=\overline{p}_{\underline{i}}\) by use of a finite frame expansion, and thus \[(\pi\times t)(\overline{\pi}_{Y}(b)\overline{q}_{Y\underline{i}})=b\overline{ q}_{\underline{i}},\text{ for all }b\in B.\] Next fix \(\underline{n}\in\mathbb{Z}_{+}^{d}\) such that \(\underline{n}\perp F\), and observe that \[\overline{q}_{\underline{i}}\overline{t}_{\underline{n}}(X_{\underline{n}})= \overline{t}_{\underline{n}}(X_{\underline{n}})\overline{q}_{\underline{i}} \subseteq[\overline{t}_{\underline{n}}(X_{\underline{n}})\overline{\pi}(A) \overline{q}_{\underline{i}}]\subseteq\mathfrak{J}_{X},\] using Proposition 2.5.9 in the first equality. By taking adjoints, we also deduce that \[\overline{t}_{\underline{n}}(X_{\underline{n}})^{*}\overline{q}_{\underline{i }}=\overline{q}_{\underline{i}}\overline{t}_{\underline{n}}(X_{\underline{n}})^ {*}\subseteq\mathfrak{J}_{X}.\] Hence we have that \[\overline{t}_{\underline{n}}(X_{\underline{n}})\overline{t}_{\underline{m}}(X_{ \underline{m}})^{*}\overline{q}_{\underline{i}}\subseteq\mathfrak{J}_{X},\text{ for all }\underline{n},\underline{m}\in\mathbb{Z}_{+}^{d}\text{ satisfying }\underline{n},\underline{m}\perp F.\] Thus, by taking finite linear combinations and their norm-limits and using (5.14), we derive that \(B\overline{q}_{\underline{i}}\subseteq\mathfrak{J}_{X}\) and thus in particular \[(\pi\times t)(\overline{\pi}_{Y}(b)\overline{q}_{Y\underline{i}})=b\overline{ q}_{\underline{i}}\in\mathfrak{J}_{X},\text{ for all }b\in B.\] Therefore \(\pi\times t\) maps the generators of \(\mathfrak{J}_{Y}\) into \(\mathfrak{J}_{X}\), and it follows that \((\pi\times t)(\mathfrak{J}_{Y})\subseteq\mathfrak{J}_{X}\). For the reverse inclusion, fix \(i\in F\). Then we have that \[\overline{\pi}(a)\overline{q}_{\underline{i}}=(\pi\times t)(\overline{\pi}_{Y} (\overline{\pi}(a))\overline{q}_{Y,\underline{i}}),\text{ for all }a\in A.\] Note that \(\overline{\pi}_{Y}(\overline{\pi}(A))\overline{q}_{Y,\underline{i}}\subseteq \mathfrak{J}_{Y}\), and so the generators of \(\mathfrak{J}_{X}\) are contained in \((\pi\times t)(\mathfrak{J}_{Y})\). Thus \(\mathfrak{J}_{X}\subseteq(\pi\times t)(\mathfrak{J}_{Y})\), completing the proof. Having generalised the first part of Proposition 5.5.8, next we account for the injectivity clause. We will need the following proposition. Recall that the projections \(\overline{p}_{\underline{i}}\) can be defined for a (just) compactly aligned product system by Remark 2.5.8. **Proposition 5.5.19**.: _Let \(X\) be a compactly aligned product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\). Let \(F=[r]\) for some \(r<d\) and fix \(i\in F\). If \(X_{\underline{i}}\) is injective, then the map_ \[\Phi\colon(B^{F^{\perp}})^{\beta_{F^{\perp}}|_{B^{F^{\perp}}}}\to\mathcal{L}( \mathcal{F}X);b\mapsto b\overline{p}_{\underline{i}}\text{ for all }b\in(B^{F^{\perp}})^{\beta_{F^{\perp}}|_{B^{F^{\perp}}}}\] _is an injective \(*\)-homomorphism._ **Proof.** For notational convenience, we set \(B:=B^{F^{\perp}}\). 
Note that \(\overline{p}_{\underline{i}}\in\overline{t}_{\underline{n}}(X_{\underline{n}})^{\prime}\) for all \(\underline{n}\perp F\), and therefore \(\Phi\) is a well-defined \(*\)-homomorphism since \[B^{\beta_{F^{\perp}}|_{B}}=\overline{\operatorname{span}}\{\overline{t}_{\underline{n}}(X_{\underline{n}})\overline{t}_{\underline{n}}(X_{\underline{n}})^{*}\mid\underline{n}\perp F\}.\] For each \(m\in\mathbb{N}\), define the finite \(\vee\)-closed set \(\mathcal{S}_{m}:=\{\underline{n}\in\mathbb{Z}_{+}^{d}\mid\underline{n}\leq m\cdot\underline{1}_{F^{c}}\}\), and notice that \(B^{\beta_{F^{\perp}}|_{B}}\) is the direct limit of the C*-subalgebras \[B^{\beta_{F^{\perp}}|_{B}}_{m}:=\operatorname{span}\{\overline{\psi}_{\underline{n}}(\mathcal{K}(X_{\underline{n}}))\mid\underline{n}\in\mathcal{S}_{m}\}\text{ for all }m\in\mathbb{N}.\] Hence, to show that \(\Phi\) is injective it suffices to show that \(\Phi\) is injective on every \(B^{\beta_{F^{\perp}}|_{B}}_{m}\). To this end, fix \(m\in\mathbb{N}\) and let \[b:=\sum_{\underline{n}\in\mathcal{S}_{m}}\overline{\psi}_{\underline{n}}(k_{\underline{n}})\in\ker\Phi.\] Then, in particular, we have that \[b|_{\sum_{\underline{l}\geq\underline{i}}X_{\underline{l}}}=\Phi(b)|_{\sum_{\underline{l}\geq\underline{i}}X_{\underline{l}}}=0.\] Notice that \(\underline{m}+\underline{i}\geq\underline{i}\) whenever \(\underline{m}\perp F\), and therefore \[\sup\{\|b|_{X_{\underline{m}+\underline{i}}}\|\mid\underline{m}\perp F\}\leq\|b|_{\sum_{\underline{l}\geq\underline{i}}X_{\underline{l}}}\|=0.\] For each \(\underline{n}\in\mathcal{S}_{m}\), we have that \(\overline{\psi}_{\underline{n}}(k_{\underline{n}})=\sum_{\underline{l}\geq\underline{n}}\iota_{\underline{n}}^{\underline{l}}(k_{\underline{n}})\in\mathcal{L}(\mathcal{F}X)\). 
Hence for each \(\underline{m}\perp F\) we have that \[\|b|_{X_{\underline{m}+\underline{i}}}\|=\|\sum_{\underline{n}\in\mathcal{S}_{m}}\sum_{\underline{l}\geq\underline{n}}\iota_{\underline{n}}^{\underline{l}}(k_{\underline{n}})|_{X_{\underline{m}+\underline{i}}}\|=\|\sum_{\begin{subarray}{c}\underline{n}\in\mathcal{S}_{m}\\ \underline{n}\leq\underline{m}+\underline{i}\end{subarray}}\iota_{\underline{n}}^{\underline{m}+\underline{i}}(k_{\underline{n}})\|.\] We then compute \[0=\sup\{\|b|_{X_{\underline{m}+\underline{i}}}\|\mid\underline{m}\perp F\}=\sup\{\|\sum_{\begin{subarray}{c}\underline{n}\in\mathcal{S}_{m}\\ \underline{n}\leq\underline{m}+\underline{i}\end{subarray}}\iota_{\underline{n}}^{\underline{m}+\underline{i}}(k_{\underline{n}})\|\mid\underline{m}\perp F\}\] \[=\sup\{\|\sum_{\begin{subarray}{c}\underline{n}\in\mathcal{S}_{m}\\ \underline{n}\leq\underline{m}\end{subarray}}\iota_{\underline{n}}^{\underline{m}+\underline{i}}(k_{\underline{n}})\|\mid\underline{m}\perp F\}=\sup\{\|\iota_{\underline{m}}^{\underline{m}+\underline{i}}(\sum_{\begin{subarray}{c}\underline{n}\in\mathcal{S}_{m}\\ \underline{n}\leq\underline{m}\end{subarray}}\iota_{\underline{n}}^{\underline{m}}(k_{\underline{n}}))\|\mid\underline{m}\perp F\}\] \[=\sup\{\|\sum_{\begin{subarray}{c}\underline{n}\in\mathcal{S}_{m}\\ \underline{n}\leq\underline{m}\end{subarray}}\iota_{\underline{n}}^{\underline{m}}(k_{\underline{n}})\|\mid\underline{m}\perp F\}=\|\sum_{\begin{subarray}{c}\underline{m}\perp F\end{subarray}}\sum_{\begin{subarray}{c}\underline{n}\in\mathcal{S}_{m}\\ \underline{n}\leq\underline{m}\end{subarray}}\iota_{\underline{n}}^{\underline{m}}(k_{\underline{n}})\|\] \[=\|\sum_{\underline{n}\in\mathcal{S}_{m}}\sum_{\begin{subarray}{c}\underline{m}\perp F\\ \underline{m}\geq\underline{n}\end{subarray}}\iota_{\underline{n}}^{\underline{m}}(k_{\underline{n}})\|=\|b|_{\sum_{\underline{m}\perp F}X_{\underline{m}}}\|=\|b\|.\] In the second line we use that \(\underline{n}\leq\underline{m}+\underline{i}\) if and only if \(\underline{n}\leq\underline{m}\), whenever \(\underline{n}\in\mathcal{S}_{m}\) and \(\underline{m}\perp F\). In the third line we use injectivity of \(X_{\underline{i}}\) to deduce that \(\iota_{\underline{m}}^{\underline{m}+\underline{i}}\) is isometric for all \(\underline{m}\perp F\). In the final line we apply Corollary 5.5.13. In total, we have that \(b=0\) and the proof is complete. **Proposition 5.5.20**.: _Let \(X\) be a compactly aligned product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\). Let \(F=[r]\) for some \(r<d\) and let \(i\in F\). If \(X_{\underline{i}}\) admits a finite frame and is injective, then \(Y_{\underline{i}}^{F}\) admits a finite frame and is injective._ **Proof.** For notational convenience, we set \(B:=B^{F^{\perp}}\) and \(Y:=Y^{F}\). Let \(\{\xi_{\underline{i}}^{(j)}\}_{j\in[N]}\) be a finite frame of \(X_{\underline{i}}\). Then \(\{\overline{t}_{\underline{i}}(\xi_{\underline{i}}^{(j)})\}_{j\in[N]}\) defines a finite frame of \(Y_{\underline{i}}\) by Proposition 5.5.15. 
Next let \(b\in\ker\phi_{Y_{\underline{i}}}\), so that \[b^{*}b\overline{p}_{\underline{i}}=\sum_{j=1}^{N}b^{*}b\overline{t}_{ \underline{i}}(\xi_{\underline{i}}^{(j)})\overline{t}_{\underline{i}}(\xi_{ \underline{i}}^{(j)})^{*}=\sum_{j=1}^{N}b^{*}\left(\phi_{Y_{\underline{i}}}(b )\overline{t}_{\underline{i}}(\xi_{\underline{i}}^{(j)})\right)\overline{t}_{ \underline{i}}(\xi_{\underline{i}}^{(j)})^{*}=0.\] Noting that \(\overline{p}_{\underline{i}}\in\mathcal{N}\mathcal{T}_{X}^{\beta_{F^{\perp}}}\), we obtain that \(E_{\beta_{F^{\perp}}}(b^{*}b)\overline{p}_{\underline{i}}=0\) by using that \(E_{\beta_{F^{\perp}}}\) is a bimodule map over \(\mathcal{N}\mathcal{T}_{X}^{\beta_{F^{\perp}}}\). An application of Proposition 5.5.19 gives that \(E_{\beta_{F^{\perp}}}(b^{*}b)=0\), and since \(E_{\beta_{F^{\perp}}}\) is faithful we obtain that \(b^{*}b=0\). We conclude that \(b=0\), and the proof is complete. **Corollary 5.5.21**.: _Let \(X\) be a product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\), wherein \(X_{\underline{i}}\) admits a finite frame for all \(i\in[d]\). Let \(F=[r]\) for some \(r<d\). If \(X_{\underline{i}}\) is injective for all \(i\in F\), then \(Y^{F}\) is regular, and the canonical \(*\)-isomorphism \(\mathcal{NT}_{Y^{F}}\cong\mathcal{NT}_{X}\) descends to a \(*\)-isomorphism_ \[\mathcal{NO}_{Y^{F}}\cong\mathcal{NT}_{X}/\left<\overline{\pi}(A)\overline{q}_{ \underline{i}}\mid i\in F\right>.\] **Proof.** By Propositions 2.5.1 and 5.5.20, we have that \(Y^{F}\) is regular. Next, recall the definition of the ideal \(I_{Y^{F}}\) of Theorem 5.5.18. An application of the last part of Proposition 5.5.8 to \(Y^{F}\) gives that \(I_{Y^{F}}=\{0\}\). Thus the final claim follows by a direct application of Theorem 5.5.18. The description in Theorem 5.5.18 first applies the \(Y^{F}\) construction to \(X\) and then passes to a quotient. We can have an alternative route by first considering a quotient of \(X\) and then applying the \(Y^{F}\) construction. To avoid confusion, we will denote by \(Y^{F}_{X}\) the product system induced by \(F=[r]\) and \(X\), as in Definition 5.5.9. **Theorem 5.5.22**.: _Let \(X\) be a product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\), wherein \(X_{\underline{i}}\) admits a finite frame for all \(i\in[d]\). Let \(F=[r]\) for some \(r<d\), and let_ \[\mathfrak{J}:=\left<\overline{\pi}_{X}(A)\overline{q}_{X_{\underline{i}}}\mid i \in F\right>\subseteq\mathcal{NT}_{X}\quad\text{and}\quad I^{F}_{X}:=\mathcal{ L}_{\emptyset}^{\mathfrak{J}}\subseteq A.\] _Then the following hold:_ * \(I^{F}_{X}=\{a\in A\mid\lim_{\underline{n}\in\mathbb{Z}_{+}^{r}}\|\phi_{( \underline{n},\underline{0})}(a)\|=0\}\)_._ * \([X_{\underline{i}}]_{I^{F}_{X}}\) _is injective for all_ \(i\in F\)_._ * _The product system_ \(Y_{[X]_{I^{F}_{X}}}^{F}\) _is regular._ * _The association_ \(X\to[X]_{I^{F}_{X}}\to Y_{[X]_{I^{F}_{X}}}^{F}\) _induces a_ \(*\)_-isomorphism_ \(\mathcal{NT}_{X}/\mathfrak{J}\cong\mathcal{NO}_{Y_{[X]_{I^{F}_{X}}}^{F}}\)_._ **Proof.** To ease notation, we write \(I:=I^{F}_{X}\). Item (i) follows by [30, Proposition 4.3]. By following the same arguments as in the proof of Proposition 5.5.8, but for \(i\in F\) instead of \(i\in[d]\), we obtain that \([X_{\underline{i}}]_{I}\) is injective for all \(i\in F\). 
An application of Corollary 5.5.21 yields that \(Y_{[X]_{I}}^{F}\) is regular, and that there is a canonical \(*\)-isomorphism \[\mathcal{NT}_{[X]_{I}}/\left<\overline{\pi}_{[X]_{I}}([A]_{I})\overline{q}_{[X]_ {I^{\underline{i}}}}\mid i\in F\right>\cong\mathcal{NO}_{Y_{[X]_{I}}^{F}}.\] It suffices to show that the canonical map \([\,\cdot\,]_{I}\colon\mathcal{NT}_{X}\to\mathcal{NT}_{[X]_{I}}\) descends to a \(*\)-isomorphism \[\Phi\colon\mathcal{NT}_{X}/\mathfrak{J}\to\mathcal{NT}_{[X]_{I}}/\left<\overline{ \pi}_{[X]_{I}}([A]_{I})\overline{q}_{[X]_{I},\underline{i}}\mid i\in F \right>.\] To this end, we have that \([\overline{\pi}_{X}(a)\overline{q}_{X,\underline{i}}]_{I}=\overline{\pi}_{[X]_{I}} ([a]_{I})\overline{q}_{[X]_{I,\underline{i}}}\) for all \(a\in A\) and \(i\in F\). Therefore \[[\mathfrak{J}]_{I}=\left\langle\overline{\pi}_{[X]_{I}}([A]_{I})\overline{q}_{ [X]_{I,\underline{i}}}\mid i\in F\right\rangle,\] and thus \(\Phi\) is a well-defined \(*\)-epimorphism. On the other hand, by applying item (ii) of Proposition 4.2.1 to \(\mathfrak{J}\) and noting that \(I\equiv\mathcal{L}_{\emptyset}^{\mathfrak{J}}\), we obtain a canonical \(*\)-epimorphism \[\Psi\colon\mathcal{NT}_{[X]_{I}}\to\mathcal{NO}([\mathcal{L}^{\mathfrak{J}}] _{I},[X]_{I})\cong\mathcal{NT}_{X}/\mathfrak{J},\] such that \[\Psi(\overline{\pi}_{[X]_{I}}([a]_{I}))=\overline{\pi}_{X}(a)+\mathfrak{J} \quad\text{and}\quad\Psi(\overline{t}_{[X]_{I},\underline{n}}([\xi_{\underline {n}}]_{I})=\overline{t}_{X,\underline{n}}(\xi_{\underline{n}})+\mathfrak{J}\] for all \(a\in A,\xi_{\underline{n}}\in X_{\underline{n}}\) and \(\underline{n}\in\mathbb{Z}_{+}^{d}\). In particular, we have that \(\Psi(\overline{p}_{[X]_{I},\underline{i}})=\overline{p}_{X,\underline{i}}+ \mathfrak{J}\) for all \(i\in F\) using a finite frame expansion, and thus \[\Psi(\overline{\pi}_{[X]_{I}}([a]_{I})\overline{q}_{[X]_{I},\underline{i}})= \overline{\pi}_{X}(a)\overline{q}_{X,\underline{i}}+\mathfrak{J}=0,\text{ for all }a\in A,i\in F,\] by definition of \(\mathfrak{J}\). Hence \(\Psi\) descends to a canonical \(*\)-epimorphism \[\tilde{\Psi}\colon\mathcal{NT}_{[X]_{I}}/\left\langle\overline{\pi}_{[X]_{I}} ([A]_{I})\overline{q}_{[X]_{I},\underline{i}}\mid i\in F\right\rangle\to \mathcal{NT}_{X}/\mathfrak{J}.\] By definition \(\tilde{\Psi}\) is a left inverse of \(\Phi\) and thus \(\Phi\) is a \(*\)-isomorphism, as required. Theorems 5.5.18 and 5.5.22 show that there is no difference as when to consider the quotient product system. **Corollary 5.5.23**.: _Let \(X\) be a product system over \(\mathbb{Z}_{+}^{d}\) with coefficients in a C*-algebra \(A\), wherein \(X_{\underline{i}}\) admits a finite frame for all \(i\in[d]\). Let \(F=[r]\) for some \(r<d\). On the one hand, define the positively invariant ideal_ \[I_{Y_{X}^{F}}:=\ker\{Y_{X,\underline{0}}^{F}\to\mathcal{NT}_{Y_{X}^{F}}/\left \langle\overline{\pi}_{Y_{X}^{F}}(Y_{X,\underline{0}}^{F})\overline{q}_{Y_{X}^ {F},\underline{i}}\mid i\in F\right\rangle\}\] _for the product system \(Y_{X}^{F}\) related to \(X\) and \(F\). On the other hand, define the positively invariant ideal_ \[I_{X}^{F}:=\ker\{A\to\mathcal{NT}_{X}/\left\langle\overline{\pi}_{X}(A) \overline{q}_{X,\underline{i}}\mid i\in F\right\rangle\}\] _for \(X\), and consider the product system \(Y_{[X]_{I_{X}^{F}}}^{F}\) related to \([X]_{I_{X}^{F}}\) and \(F\). 
Then there are canonical \(*\)-isomorphisms_ \[\mathcal{NC}_{[Y_{X}^{F}]_{I_{Y_{X}^{F}}}}\cong\mathcal{NT}_{X}/\left\langle \overline{\pi}_{X}(A)\overline{q}_{X,\underline{i}}\mid i\in F\right\rangle \cong\mathcal{NC}_{Y_{[X]_{I_{X}^{F}}}^{F}}.\] _If, in addition, \(X_{\underline{i}}\) is injective for all \(i\in F\), then \(Y_{X}^{F}\) is regular, \(I_{Y_{X}^{F}}=\{0\}\) and \(I_{X}^{F}=\{0\}\)._ **Proof.** It suffices to comment on the last part. If \(X_{\underline{i}}\) is injective for all \(i\in F\), then \(Y_{X}^{F}\) is regular by Corollary 5.5.21. Thus it follows that \(I_{Y_{X}^{F}}=\{0\}\) since the coefficient algebra embeds in the Cuntz-Nica-Pimsner algebra. Finally, every \(\phi_{(\underline{n},\underline{0})}\) with \(\underline{n}\in\mathbb{Z}_{+}^{r}\) is isometric, and thus \(I_{X}^{F}=\{0\}\) by item (i) of Theorem 5.5.22.
2308.06528
Learning Abstract Visual Reasoning via Task Decomposition: A Case Study in Raven Progressive Matrices
Learning to perform abstract reasoning often requires decomposing the task in question into intermediate subgoals that are not specified upfront, but need to be autonomously devised by the learner. In Raven Progressive Matrices (RPM), the task is to choose one of the available answers given a context, where both the context and answers are composite images featuring multiple objects in various spatial arrangements. As this high-level goal is the only guidance available, learning to solve RPMs is challenging. In this study, we propose a deep learning architecture based on the transformer blueprint which, rather than directly making the above choice, addresses the subgoal of predicting the visual properties of individual objects and their arrangements. The multidimensional predictions obtained in this way are then directly juxtaposed to choose the answer. We consider a few ways in which the model parses the visual input into tokens and several regimes of masking parts of the input in self-supervised training. In experimental assessment, the models not only outperform state-of-the-art methods but also provide interesting insights and partial explanations about the inference. The design of the method also makes it immune to biases that are known to be present in some RPM benchmarks.
Jakub Kwiatkowski, Krzysztof Krawiec
2023-08-12T11:02:21Z
http://arxiv.org/abs/2308.06528v2
Learning Abstract Visual Reasoning via Task Decomposition: A Case Study in Raven Progressive Matrices ###### Abstract One of the challenges in learning to perform abstract reasoning is that problems are often posed as monolithic tasks, with no intermediate subgoals. In Raven Progressive Matrices (RPM), the task is to choose one of the available answers given a context, where both contexts and answers are composite images featuring multiple objects in various spatial arrangements. As this high-level goal is the only guidance available, learning is challenging and most contemporary solvers tend to be opaque. In this study, we propose a deep learning architecture based on the transformer blueprint which, rather than directly making the above choice, predicts the visual properties of individual objects and their arrangements. The multidimensional predictions obtained in this way are then directly juxtaposed to choose the answer. We consider a few ways in which the model parses the visual input into tokens and several regimes of masking parts of the input in self-supervised training. In experimental assessment, the models not only outperform state-of-the-art methods but also provide interesting insights and partial explanations about the inference. The design of the method also makes it immune to biases that are known to exist in some RPM benchmarks. ## 1 Introduction One of the key attributes of general intelligence is abstract reasoning, which, among others, subsumes the capacity to model and complete sequential patterns. To quantify such capabilities in humans and diagnose related deficiencies, John C. Raven devised in the 1930s a visual test contemporarily known as Raven Progressive Matrices (RPM) [1]. An RPM problem (Fig. 1) is a 3x3 grid of _context panels_ containing arrangements of 2D geometric objects. The objects adhere to rules that govern the relationships between the panels in rows and columns, e.g. progression of the number of objects, a logical operation concerning their presence, etc. The lower-right _query panel_ is masked out, and the task is to complete it by choosing the correct image from the list of eight provided _answer panels_. Solving the task requires 'disentangling' the rules corresponding to rows and columns and capturing the analogies between the observed patterns. RPMs have been more recently adopted in the AI community for assessing similar capabilities in artificial intelligent systems, along with other benchmarks like Bongard problems [2] and Hofstadter's analogies [3]. Recent advances in machine learning accelerated this trend, with deep learning becoming the primary paradigm of choice for designing such systems [4]. The original RPM collection, Standard Progressive Matrices [1], comprised just 60 tasks, which is not enough to effectively train data-hungry ML models. To address this challenge, several larger datasets and task generators have been devised, among them RAVEN [5] and I-RAVEN [6] being most popular nowadays (see [4] for review). In this process, it became evident that designing a representative, diverse, and varying in difficulty collection of RPM tasks is nontrivial and prone to several types of biases. The primary cause is the limited number of answer panels (typically 8) that need to be filled with the correct answer and 7 incorrect ones. However, the latter should not be trivial to distinguish from the correct answer (e.g. 
contain shapes that do not occur in the context panels) nor feature alternative correct answers to the task, to avoid ambiguity. While avoiding 'obviously wrong' answers in task generation can be easily achieved, it is hard to avoid more subtle 'leaks'. This was epitomized with so-called _context-blind_ methods [7] that achieve almost perfect scores on RAVEN by disregarding the context entirely and making decisions based on answer panels only. In this study, we circumvent this challenge by decomposing the task into two stages: 1) prediction of properties of the query panel, and 2) identifying the answer panel with properties that match those predicted ones the most. To this aim, we use the abstract properties available in RAVEN benchmarks and design a bespoke deep learning architecture based on the transformer blueprint [8]. The resulting approach, which we dub Abstract Compositional Transformer (ACT), is not only more transparent than end-to-end neural architectures, but also less prone to biases, and consequently capable of exceeding the state-of-the-art performance. More specifically, the property prediction stage (1st stage) is immune to biases because it does not involve the potentially biased answer panels, while the 2nd stage it does not involve learning and thus by definition cannot be biased. Also, to the best knowledge of the authors, this is the first attempt of predicting symbolic descriptors of RPM puzzles and the first study on self-supervised learning for RPM. The two-stage approach, the model architecture, and an extensive empirical analysis form the main contributions of this study. ## 2 The proposed approach Rather than training models that choose answer panels in RPM, we propose to rely on _property prediction_, in which models generate an abstract, structured representation of the missing panel (Fig. 2). To this aim, we rely on the RAVEN dataset [5], in which tasks have been generated from symbolic specifications expressed in an image description grammar that captures visual concepts such as position, type of shape, color, number of objects, inside-outside, etc. The objective of the model is to predict these semantically grounded properties for the query panel and for answer panels. A trained property prediction model is then subsequently used to solve the RPM choice tasks. Our model comprises three modules: image tokenizer, transformer, and property predictor; we describe them in subsections that follow (see Sec. SM1 in the Supplementary Material for technical details). Even though RPM problems have an inherent 2D structure, we rely on _sequence-to-sequence_ transformers [8] and demonstrate that effective RPM solvers can be obtained without explicitly informing the model about the spatial structure of the problem. Figure 1: An exemplary RPM task. Figure 2: The architecture of the model (yellow boxes) and its training process, guided by the loss function that compares the predicted and actual properties of RPM panels. The model learns from completed RPM tasks, with one of the panels (context or query panel) masked out, and predicts the properties of all panels. ### Image tokenizer The tokenizer maps the 2D raster representation of an RPM problem to a sequence of abstract tokens using a convolutional neural network (CNN) that gradually contracts the input image to lower spatial resolutions in consecutive layers, while increasing the number of channels. The multi-channel superpixels produced by the last layer form the tokens, i.e. 
a token is the vector of channels produced by the CNN at a given location. In experiments, our CNN is the EfficientNetV2B0 [9] pretrained on the ImageNet database [10]. We consider the three following types of tokenizers that vary in how they perceive the panel rasters (when necessary, the single-channel monochrome RPM image is broadcasted to RGB input channels of the CNN). **Panel tokenizer.** In this variant, the raster image representing each panel is tokenized independently. The CNN is applied to each raster (84x84 pixel in RAVEN) and produces a 3x3 array of 128-channel superpixels, which is then flattened row-wise into a sequence of 9 128-dimensional tokens. This is repeated for all 9 panels, and the resulting sequences are concatenated, producing 81 tokens in total. **Task tokenizer.** In this variant, the raster image of the entire RPM task is tokenized with a single invocation of the CNN. For RAVEN, this means applying the CNN to a 252x252 pixel image, which results in a 8x8 array of 128-channel superpixels, then flattened row-wise, leading to a sequence of 64 128-dimensional tokens. **Row tokenizer.** In this variant, the rasters representing individual panels in each row are first stacked channel-wise, resulting in three 3-channel 84x84 images corresponding to the top, middle and bottom row of the puzzle. The CNN is queried on each of those images independently and produces a 3x3 array of superpixels in each query, which are then flattened row-wise, resulting in 9 128-dimensional tokens. Finally, the subsequences for the top, middle, and bottom RPM rows are concatenated, resulting in 27 tokens. By stacking the panel images, we directly juxtapose them in input channels, allowing so the CNN to form low-level features that capture the differences between the individual images. The RPM images from the left, central and right columns end up being interpreted by the CNN as, respectively, the red, green, and blue channels. While this may seem semantically disputable, this practice has been successfully applied in other works; also, the pretrained CNN is trained alongside the entire model, and can thus adapt to the characteristics of RPM. The relatively small sizes of rasters, combined with 18 convolutional layers of the CNN (cf. Table 1 in [11]), cause the receptive fields of units in the last layer to span the entire input image. Therefore, for all tokenizers, each token may in principle depend on the entire input raster (panel raster for Panel and Row tokenizers, and task raster for the Task tokenizer). Also, only the Panel tokenizer is guaranteed to ensure some degree of selective correspondence between RPM panels and tokens. In the Task tokenizer, the representation is more entangled, as the receptive fields of the CNN are allowed to span _multiple_ neighboring panels. In the Row tokenizer, the consecutive groups of 9 tokens form an entangled representation of the top, middle and bottom row of panels. However, the degree of entanglement depends on the characteristics of features acquired in training, and the actual _effective receptive fields_ can be smaller (see, e.g., [12]). ### Sequence-to-sequence transformer The transformer processes the one-dimensional sequences of tokens \(X\) produced by an image tokenizer by first encoding each token independently using the encoder, \[Z=map(Encoder_{\theta_{E}},X), \tag{1}\] which is implemented as a dense linear layer, primarily meant to match the dimensionality of the tokens to the input dimensionality of the transformer. 
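As a concrete illustration of the three tokenizers, and before moving on to the transformer itself, the following sketch shows how the rasters could be routed through a backbone CNN and flattened into token sequences. It is a minimal schematic rather than the implementation used in the experiments: `cnn` is a stand-in for the pretrained EfficientNetV2B0 backbone, and the shapes (84x84 panels, 3x3 or 8x8 superpixel grids, 128 channels) follow the figures quoted above.

```python
import numpy as np

C = 128  # channels per superpixel/token, as described in the text

def cnn(image, out_hw):
    """Stand-in for the pretrained backbone: maps an (H, W, 3) raster to an
    (out_hw, out_hw, C) grid of superpixels. A real model would be used here."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((out_hw, out_hw, C))

def to_rgb(panel):
    """Broadcast a single-channel 84x84 panel to 3 input channels."""
    return np.repeat(panel[..., None], 3, axis=-1)

def panel_tokenizer(panels):          # panels: (9, 84, 84)
    # Each panel is tokenized independently: 9 panels x (3x3 superpixels) = 81 tokens.
    tokens = [cnn(to_rgb(p), 3).reshape(9, C) for p in panels]
    return np.concatenate(tokens, axis=0)                 # (81, C)

def task_tokenizer(task_image):       # task_image: (252, 252)
    # The whole task raster is tokenized at once: 8x8 superpixels = 64 tokens.
    return cnn(to_rgb(task_image), 8).reshape(64, C)      # (64, C)

def row_tokenizer(panels):            # panels: (9, 84, 84), row-major panel order
    # Panels of each row are stacked channel-wise (left/centre/right -> R/G/B),
    # giving three (84, 84, 3) images and 3 x 9 = 27 tokens in total.
    rows = panels.reshape(3, 3, 84, 84)
    tokens = [cnn(np.moveaxis(row, 0, -1), 3).reshape(9, C) for row in rows]
    return np.concatenate(tokens, axis=0)                 # (27, C)

panels = np.zeros((9, 84, 84))
print(panel_tokenizer(panels).shape, row_tokenizer(panels).shape)   # (81, 128) (27, 128)
```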
Then, the transformer maps the sequence of encoded tokens \(Z\) to a sequence of output tokens \(O\) of the same length: \[O=Transformer(Z;\theta_{T}), \tag{2}\] The model is equipped with a _learnable positional encoding_, applied to the input tokens in \(Z\). As the number of tokens is constant, we encode the _absolute_ positions of tokens in \(Z\), which can be achieved with a fixed-size learnable embedding. There is a single entry in the embedding for each position in the input sequence, and thus 81, 64, and 27 entries for respectively the Panel, Task, and Row tokenizer. The embedding vectors are added to respective tokens in \(Z\) before passing them to the transformer. Internally, the transformer is a stacked sequence of transformer blocks, each of them consisting of multi-head attention mechanism \(Attn(\theta_{A})\), layer normalization layers \(LayerNorm\), a feed-forward network \(f(\theta_{f})\) and skip connections. The processing for the \(i\)th token can be summarized as: \[\begin{split} a_{i}&=Attn(LayerNorm(z_{i};\theta_{N}); \theta_{A})\\ m_{i}&=a_{i}+z_{i}\\ o_{i}&=f(LayerNorm(m_{i};\theta_{N^{\prime}}); \theta_{f})+m_{i}\end{split} \tag{3}\] The resulting tokens \(o_{i}\) form the output sequence \(O\). For a detailed description of transformer and multi-head attention, see [8]. ### Property predictor The property predictor maps the sequence of tokens \(O\) produced by the transformer to the properties of individual panels, as follows: 1. The sequence is sliced into 9 _chunks_ of equal length, 2. The tokens in each chunk are concatenated to form a single vector, 3. Each vector is independently mapped to a _property vector_ (Sec. 2.4) using a dense subnetwork, in what is technically a multi-label classification. For the Panel tokenizer, which produces 81 tokens, the chunk length is 9; for the Row tokenizer, which produces 27 tokens, the chunk length is 3; for the Task tokenizer, producing 64 tokens, each chunk comprises 7 tokens, and the last token is discarded. The consecutive 9 prediction vectors obtained in this way are assumed to correspond to RPM panels, traversed row-wise and left-to-right in each row (8 context panels and the query panel). Associating the chunks with panels forces the transformer to both _combine and disentangle_ the information carried by the input tokens. The combining is necessitated by the task, which requires detecting the patterns adhered to by the context panels. The disentanglement, on the other hand, is necessary for the Task and Row tokenizers, which do not derive tokens from individual context panels _independently_, but aggregate information from multiple panels. ### Property vectors Following the definition of the RAVEN family of benchmarks [5, 6, 13], we assume the panels to be composed of _objects_ and distinguish 7 _spatial arrangements_, each containing at least one object, with the maximum number of objects as follows: _center-single_ (1), _distribute-four_ (4), _distribute-nine_ (9), _in-center-single-out-center-single_ (2), _in-distribute-four-out-center-single_ (5), _left-center-single-right-center-single_ (2), _up-center-single-down-center-single_ (2). An RPM panel is represented as a _property vector_ of fixed dimensionality that contains the information about all objects in every possible arrangement; there are thus in total 25 object'slots' in the vector. 
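Returning briefly to the property predictor, its chunk-slicing step can be summarized by the sketch below. This is an illustrative re-implementation under the token counts quoted above, with hypothetical names (`property_predictor`, `head_weights`); the 557-dimensional output it refers to is the one-hot encoding detailed at the end of this subsection.

```python
import numpy as np

def property_predictor(output_tokens, chunk_len, head_weights, head_bias):
    """Minimal sketch of the property predictor.

    output_tokens : (num_tokens, C) sequence produced by the transformer
    chunk_len     : 9 for the Panel tokenizer, 3 for Row, 7 for Task
                    (for Task, the last of the 64 tokens is discarded: 9 * 7 = 63)
    Returns one raw property-logit vector per panel (9 panels, row-major order).
    """
    num_tokens, C = output_tokens.shape
    usable = 9 * chunk_len                                       # drop any trailing token
    chunks = output_tokens[:usable].reshape(9, chunk_len * C)    # 1) slice, 2) concatenate
    return chunks @ head_weights + head_bias                     # 3) dense map per panel

# Toy usage with the Row tokenizer layout: 27 tokens of 128 channels, 557 output dims.
rng = np.random.default_rng(0)
tokens = rng.standard_normal((27, 128))
W = rng.standard_normal((3 * 128, 557)) * 0.01
b = np.zeros(557)
print(property_predictor(tokens, chunk_len=3, head_weights=W, head_bias=b).shape)  # (9, 557)
```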
Each object is characterized by three _appearance properties_ with the following admissible values: * color: 255, 224, 196, 168, 140, 112, 84, 56, 28, 0 (10 values), * size: 0.4, 0.5, 0.6, 0.7, 0.8, 0.9 (6 values), * type: triangle, square, pentagon, hexagon, circle (5 values). Another appearance property used in [5] is the rotation angle of individual objects. However, the angle is drawn at random when generating an image, i.e. it does not influence the reasoning rules. A successful solver should disregard this characteristic, hence we do not include it in our representation. In total, a property vector comprises thus 101 variables: * _arrangement_: the identifier of the arrangement (1 variable), * _present\({}_{i}\)_: a group of 25 binary variables, each indicating the presence/absence in the \(i\)th object slot, * 75 appearance properties describing the object in each slot. **Relevance of properties**. As it follows from the above description, property vectors are contextual, i.e. a value of an element of a property vector may deem some other elements _relevant_ or _irrelevant_. For instance, if _arrangement = distribute-four_, only _present\({}_{i}\)_ for four indices \(i\) matter; if then only two of them are set to 1, only the corresponding \(2\times 3=6\) appearance properties describing the two indicated objects are relevant. The number of relevant properties varies from 5 for _center-single_ (1 _arrangement + present\({}_{i}\)_ for a single object + 3 appearance properties) to 37 for _distribute-nine_ (1 _arrangement + 9 present\({}_{i}\)_ properties + \(9\times 3\) appearance properties). The distinction between relevant and irrelevant panel properties is essential; in particular, the loss function used to train models and the metrics used to assess their quality are calculated from relevant properties only. When confronting two property vectors, always one of them serves as the _source of relevance_; this depends on the use case (more details in Sec. SM2 and in the experimental part). **Encoding of property vectors**. To make properties amenable to differentiable loss functions, we represent all variables with one-hot-encoding, which results in the low-level representation of a panel being a binary vector of \(7+25+25\times(10+6+5)=557\) dimensions. Respectively, models produce for each panel a \(557\)-dimensional vector with values in \([0,1]\), which is assured by forcing the dimensions representing a given categorical variable through the softmax activation function; for instance, the first 7 elements of the output vector represent the probability distribution over possible \(7\) arrangements. To calculate the loss, the distributions predicted by the model are confronted with the corresponding one-hot target distributions using categorical entropy. The resulting entropies are multiplied by weights tuned and fixed in the early stages of this project (see Sec. SM3 of the Supplementary Material). The loss is calculated only for the distributions of relevant properties, with the target property vector acting as the source of relevance. The overall loss on a given RPM task is a weighted sum of the losses for individual panels, where weights depend on the type of masking applied in training (Sec. 3). ## 3 Model training Similarly to natural language models, our models are trained via self-supervision, i.e. they are tasked to predict a masked-out element of the input sequence, given the context of the visible elements. 
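Before describing training, the sketch below illustrates how the relevance-masked loss of the previous subsection could be computed for a single panel. The layout of the 557-dimensional vector (7 arrangement dimensions, 25 presence bits, then 21 appearance dimensions per slot) is an assumption made for illustration, as is the use of binary cross-entropy for the single-dimension presence bits; the actual implementation may order and weight the groups differently.

```python
import numpy as np

# Hypothetical layout of the 557-dimensional panel vector: 7 arrangement dims,
# 25 presence bits, then 21 appearance dims (colour 10 + size 6 + type 5) per slot,
# so that 7 + 25 + 25 * 21 = 557.
APPEARANCE = [(32 + 21 * s, 32 + 21 * s + 21) for s in range(25)]

def xent(p, t):
    """Categorical cross-entropy between a predicted distribution and a one-hot target."""
    return -float(np.sum(t * np.log(np.clip(p, 1e-9, 1.0))))

def panel_loss(pred, target, slots_in_arrangement):
    """Loss over relevant properties only; the target vector is the source of relevance.
    slots_in_arrangement: indices of the object slots allowed by the target arrangement."""
    loss = xent(pred[:7], target[:7])                 # arrangement is always relevant
    for s in slots_in_arrangement:
        p = float(np.clip(pred[7 + s], 1e-9, 1 - 1e-9))
        t = float(target[7 + s])
        loss += -(t * np.log(p) + (1 - t) * np.log(1 - p))   # presence bit (assumed BCE)
        if t == 1:                                    # appearance matters only for present objects
            lo, hi = APPEARANCE[s]
            # colour / size / type are three separate softmax groups of widths 10, 6, 5
            for a, b in ((lo, lo + 10), (lo + 10, lo + 16), (lo + 16, hi)):
                loss += xent(pred[a:b], target[a:b])
    return loss
```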
While in typical use cases the elements in question are tokens, solving an RPM task requires making decisions about context panels, each of which is represented here by a subsequence (_chunk_) of tokens (Sec. 2.3). For this reason, we apply masking _to panels_ and train models to produce correct complete sequences of output tokens. In each training step, the tokenizer is applied to the task with a single context panel masked out and produces a (complete) sequence of input tokens. Then the transformer is applied to that sequence, producing a sequence of output tokens of the same length, which are subsequently sliced into chunks and mapped to property vectors. The discrepancy between the predicted and true properties, as captured by the loss function, produces the gradients that update the model. In RPM, one can make predictions in both directions of the sequence of context panels, due to the nature of the underlying patterns (e.g. the progression of the number of objects). It seems thus desirable to mask randomly chosen panels (rather than just the last panel), to facilitate learning of patterns _across the entire RPM puzzle_, and prospectively use that knowledge for making decisions about the query panel. However, the query panel is by definition empty, so it seems natural to treat it as a masked-out one too. Given that, masking out an _additional_ context panel could be detrimental for training, for at least two reasons: (i) it would be inconsistent with test-time querying (problem solving), as then only the query panel is invisible, (ii) absence of two panels could lead to ambiguity, i.e. multiple correct answers to the puzzle. Therefore, we split training into two phases: 1. **Random masking phase**. Each task is completed with the correct answer from the answer panels, and a randomly chosen panel (one of 9) is masked out. Random masking is realized 'on the fly', so the same task has different panels masked out in different epochs. 2. **Query masking phase**. The RPM tasks are presented as-is, with the query panel masked out, and no additional masking is applied. The weight of the loss related to the query panel is multiplied by 0.01, in order for this phase to act as fine-tuning after Phase 1. The loss function is not applied to the non-masked panels, i.e. the model is not penalized at all for making predictions there. By dividing training into these phases, we allow the model to first learn the patterns across the entire RPM board, and only then require it to focus on the query panel. Masking requires replacing a panel with some 'neutral' image. We analyzed several types of masks, among others empty (zeroed) panels and images filled with random noise. Ultimately, the _trainable masks_ performed best: the masking image is initialized with random noise and considered a parameter of the model, i.e. it is continuously updated in training, ultimately expressing the cumulative input that the model 'expects' at masked panels. ## 4 Related work RPM has been long considered an interesting benchmark for abstract reasoning systems, along with Bongard problems [2], Hofstadter's analogies [3], Numbo [14], or Sudoku, to name a few. The advent of deep learning only intensified this interest, with an output of studies proposing various architectures and learning approaches. A recent survey [4] cites at least 34 papers, most of them published within the last five years; Tables 8 and 9 cite those of them that achieved best performances on RAVEN [5] and I-RAVEN [6] benchmarks. 
Of those, the model that bears most architectural similarity to the approach proposed in this paper is the Attention Relation Network (ARNe) [15], which engages a transformer to facilitate spatial abstract reasoning. However, like almost all other studies that the authors of this study are aware of, ARNe is directly trained to choose answer panels, rather than predicting panel properties (and, consequently, makes use of the transformer blueprint in a fundamentally different way from this study). In terms of the taxonomy proposed in [4], our approach could be classified as a relational reasoning network, as part of the model is delegated to learn relations between context panels. A notable representative of this class is the Wild Relation Network (WRNe) [16], in which a relation network is used to score the answer panels. In contrast, our approach does not model the relations between panels explicitly, but delegates relational learning to the transformer, while encoding the spatial characteristics of the task as a sequence of tokens. Our approach bears resemblance to some past works in the hierarchical networks category delineated in [4]. More specifically, the way in which our Row tokenizer stacks panels channel-wise is analogous to the design of 'perceptual backbones' in, e.g., the Stratified Rule-Aware Network (SRAN) approach ([6]; see also Fig. 8 in [4]). Notice, however, that the panel stacking used in the Row tokenizer is the only way in which we explicitly reveal the relationships between panels to the model. All remaining logic about the correspondence, succession, progression, etc. of patterns in the panels needs to be autonomously learned by juxtaposing the input tokens. This mitigates manual modeling of relationships and, implicitly, human biases. Last but not least, there were several works in which the models, apart from choosing the right answer panel, were required to make predictions about the _rules_ that govern the generation of RPM tasks. This has been attempted via auxiliary terms in the loss function in [5], but the models were not benefitting from this extension, or even underperforming when compared to the reference architecture (Sec. 6.4 therein). Similar negative results have been reported in, among others, [6], [17] and several follow-up studies (see Sec. 3.2 in [4]). Some preliminary encouraging results in addressing these challenges have been achieved in [18]. ## 5 Results: property prediction In this section, we cover the experimental results for the property prediction tasks; in Sec. 6, we use the trained models to solve the choice tasks. Implementation details are available in the Supplementary Material. Following [5], we use the original division of the \(70{,}000\) tasks from the RAVEN database1 (7 spatial arrangements \(\times 10{,}000\) tasks) into training, validation and test sets of, respectively, \(42{,}000\), \(14{,}000\), and \(14{,}000\) tasks. We train nine models in total, using the three types of image tokenizers and three masking regimes in training (Sec. 3): **Combined** comprising 200 epochs of random masking and 30 epochs of query masking, and two ablative regimes: **Query**-only (query masking for 200 epochs) and **Random**-only (random masking for 200 epochs). In the first phase of masking (whether followed by a second phase or not), the weight of the loss for the masked panel is multiplied by 2, to emphasize its importance (this multiplier was found beneficial in preliminary experiments). 
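For concreteness, the Combined training regime described above (200 epochs of random masking followed by 30 epochs of query masking, with the loss-weight multipliers quoted in the text) can be sketched as follows; the function names and the generator structure are illustrative rather than taken from any released code.

```python
import numpy as np

def panel_loss_weights(mask_idx, phase, num_panels=9):
    """Per-panel loss weights for one training step: the masked panel is weighted 2.0
    in the random-masking phase and 0.01 in the query fine-tuning phase; non-masked
    panels contribute no loss."""
    w = np.zeros(num_panels)
    w[mask_idx] = 2.0 if phase == "random" else 0.01
    return w

def training_schedule(num_tasks, epochs_random=200, epochs_query=30, seed=0):
    """Yields (epoch, task_id, masked_panel, weights) for the Combined regime.
    In the random phase the masked panel is re-drawn on the fly for every task."""
    rng = np.random.default_rng(seed)
    QUERY = 8                       # the query panel is the last one in row-major order
    for epoch in range(epochs_random + epochs_query):
        phase = "random" if epoch < epochs_random else "query"
        for task in range(num_tasks):
            mask_idx = int(rng.integers(0, 9)) if phase == "random" else QUERY
            yield epoch, task, mask_idx, panel_loss_weights(mask_idx, phase)

# Example: inspect the first few steps of a tiny schedule.
for step in list(training_schedule(num_tasks=2, epochs_random=1, epochs_query=1))[:4]:
    print(step)
```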
Validation takes place after each epoch, and the model with the lowest validation error is selected. Footnote 1: [https://github.com/WellyZhang/RAVEN](https://github.com/WellyZhang/RAVEN) We define the following metrics to assess the models' capacity of predicting the properties of the query panel. The metrics are calculated on the test set and on the relevant properties only, with the target property vector acting as the source of relevance, i.e. properties that are deemed relevant for a given task are determined based on _the target property vector_ (Secs. 2.4 and SM2). * **Correct**: The primary metric in further considerations. Amounts to 1 for a given RPM task if _all_ relevant properties of the query panel have been correctly predicted by the model; otherwise 0. Averaged across all tasks in the testing set. * **PropRate**: The fraction of correctly predicted relevant properties, across all test tasks. * **AvgProp**: The fraction of relevant properties that have been correctly predicted in a given task, averaged over tasks. For a given task, it amounts to 1 if all relevant properties have been predicted correctly, and 0 if none. Because the number of relevant properties varies by task and panel, AvgProp is not equivalent to PropRate. * **AvgH**: The Hamming distance between predictions and the target, as measured on the relevant properties, averaged over tasks. For a given task, the best attainable value of this metric is obviously 0, while the worst one corresponds to the scenario with 9 objects (_distribute-nine_ arrangement) and amounts to 37 (1 for the incorrect identifier of the arrangement, plus 9 for the incorrect setting of the 9 corresponding presence/absence variables, plus \(9\times 3=27\) incorrect values of the color, size and type of an object). **Results**. In the _Property prediction_ part of Table 1, we report the performance of particular models. Both the type of tokenizer and the masking scheme are strong determinants of model capabilities. The models that tokenize each panel separately (Panel) fare the worst on all metrics. Tokenizing the entire task at once (Task) leads to much better predictive accuracy. Nevertheless, the model that involves the Row tokenizer fares best, and does so quite systematically when juxtaposed with the others, which suggests that superimposing panels as separate channels before tokenization facilitates inferring relevant patterns. In principle, these differences might be in part due to the number of tokens used in particular architectures (81, 64, and 27 for Panel, Task and Row). However, the best-faring Row tokenizer uses the _fewest_ tokens, so it is in principle most likely to suffer from 'information bottleneck'; in spite of that, it outperforms the other two types of models. This suggests that the way in which the sequence of input tokens 'folds' the task image is more important than its length. Concerning the masking schemes, masking only the query panel (Query) throughout the entire training process turns out to be very ineffective. In contrast, Random masking performs much better. This may seem paradoxical, as making predictions about the query panel is less demanding for the learner: as it is located at the end of the third row and the third column of the RPM grid, predicting its properties requires _extrapolation_ of the properties observed in the other rows and columns. 
In contrast, masking random panels forces the model to make predictions about second panels in rows and columns (requiring _interpolation_ on top of the above extrapolation) and about first panels (requiring extrapolation in the opposite direction). A model trained with Random mode has to master all these skills, yet it proves better when evaluated only at the query panel on the test set. This shows that forcing the transformer to detect and reason about patterns observed across the entire puzzle makes it generalize better. While training in the Random mode outperforms the query masking mode by a large margin, Table 1 suggests also that even better predictive accuracy can be attained when the former is followed by the latter in training (Combined mode). Focusing on the query panel in the later stages of training is thus beneficial. To corroborate these observations, in the _Classification_ part of Table 1 we report the values of selected metrics _calculated for the context panels_. As these panels are not masked out, the model can directly observe them, and it is much easier to predict their properties, as they do not need to be inferred from the logical rules that govern the puzzle (hence labeling this mode as _classification_ of properties). The metrics are thus much better, with some models attaining almost perfect values. The type of tokenizer has a largely opposite impact than for property prediction: Panel models perform the best, presumably because separate tokenization reduces the 'cross-talk' between panels. The complete set of metrics for classification are given in Table SM2. In Table 2, we characterize the sizes and computational requirements of particular models. The Row model that excels at predictive accuracy is also the smallest and cheapest at querying. The slight differences in the number of parameters of transformers result from the number of tokens, which determines the number of entries in the learnable embedding used for positional encoding. Relative differences in the total number of parameters are somewhat larger, and stem from combined dimensionalities of the chunks of output tokens; a chunk is mapped to a property vector using a dense layer and thus its size determines the number of parameters in that layer. Despite these differences in the number of parameters being moderate, the computational cost of querying the Row model is several times lower than for Panel and Task tokenizers. This is due to the larger number of tokens processed by transformers in those models, which leads to quadratically more query-key interactions. 
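The quadratic effect mentioned above is easy to quantify. The sketch below counts only the attention-score (query-key) interactions; the embedding width and number of layers are placeholder values rather than the paper's settings, and the totals in Table 2 additionally include per-token projections, which scale only linearly with the number of tokens.

```python
def attention_score_cost(num_tokens: int, dim: int = 256, layers: int = 6) -> int:
    """Multiply-adds spent on query-key dot products per forward pass:
    each of the `layers` blocks forms num_tokens**2 token pairs,
    and each dot product costs `dim` multiply-adds."""
    return layers * num_tokens ** 2 * dim

for name, n in [("Panel", 81), ("Task", 64), ("Row", 27)]:
    print(f"{name:5s} ({n:2d} tokens): {attention_score_cost(n):,} multiply-adds")
# The 3x difference in sequence length between Panel and Row
# becomes a 9x difference in this term.
```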
\begin{table} \begin{tabular}{l l c c c c c c} \hline \hline \multicolumn{1}{c}{\multirow{2}{*}{\begin{tabular}{c} Toke- \\ -nizer \\ \end{tabular} }} & \multicolumn{3}{c}{Property prediction} & \multicolumn{3}{c}{Classification} \\ \cline{3-8} & & Correct & PropRate & AvgProp & AvgH & Correct & AvgProp \\ \hline \multirow{2}{*}{\begin{tabular}{c} Panel \\ \end{tabular} } & Query & 1.53 & 62.23 & 58.08 & 4.55 & 97.54 & 99.50 \\ & Random & 20.82 & 82.52 & 80.34 & 2.23 & 98.38 & 99.44 \\ & Combined & 22.17 & 83.33 & 81.29 & 2.14 & 98.28 & 99.49 \\ \hline \multirow{2}{*}{\begin{tabular}{c} Task \\ \end{tabular} } & Query & 18.91 & 81.23 & 79.25 & 2.41 & 89.69 & 98.35 \\ & Random & 72.63 & 95.80 & 95.25 & 0.65 & 88.44 & 98.00 \\ & Combined & 75.63 & 96.15 & 95.64 & 0.61 & 88.33 & 97.98 \\ \hline \multirow{2}{*}{ \begin{tabular}{c} Row \\ \end{tabular} } & Query & 20.56 & 79.96 & 78.15 & 2.66 & 84.03 & 97.68 \\ & Random & 75.44 & 96.17 & 95.69 & 0.60 & 87.64 & 98.22 \\ \cline{1-1} & Combined & 77.58 & 96.47 & 95.99 & 0.56 & 87.85 & 98.25 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of configurations on the property prediction task. **Does transformer matter?** One of the research questions of this study concerns the importance of the transformer blueprint, i.e. whether learning to model direct interactions between tokens representing parts of the input brings any advantage in comparison to more straightforward approaches. To verify this hypothesis, we consider baseline architectures in which the transformer is replaced with a dense subnetwork, which we dub _densecformers_: the tokens produced by the tokenizer are concatenated into a vector and fed into a subnetwork comprising 5 dense layers of the same size. The output of the last dense layer is then passed to property predictor and undergoes further processing, as in our model, i.e. it is sliced into 9 chunks used to predict the properties of individual panels (Section 2.3). Apart from the replacement of the transformer subnetwork with the denesformer, there are no other differences between this baseline architecture and our model. Given that the row-wise tokenizer proved most capable (Table 1), we design the dense architectures so that they are comparable to it. We considered two variants, with the size of dense layers 336 and 512, so that the total number of parameters and computational cost of querying, summarized in Table 3, are close to those of the row-wise transformer model (Table 2). We consider two variants, with and without regularization. The regularization consists of layer normalization [19] and dropout applied between consecutive dense layers. Both denesformer models are trained in the Random masking mode. Table 4 summarizes the performance of densecformers in terms of metrics from Table 1. The densformers are clearly inferior to the transformers: except for the AvgH metric, none of them attains even the worst value of the corresponding metric for the transformers. While layer normalization [19] has a positive impact on predictive accuracy, increasing the layer size from 336 to 512 improves the accuracy only slightly, which suggests that boosting it further, beyond 512 units, is unlikely to lead to significant improvements. We thus conclude that the 'cross-talk' between tokens representing parts of the RPM task, facilitated by the transformer architecture, brings significant added value, and perhaps is even essential for this kind of tasks. **The structure of errors**. 
We encode the appearance properties as unordered categorical variables, but in fact they are ordinal. In an additional analysis detailed in Section SM7, we determined that models are much more likely to commit small errors on these properties than large ones, which implies that they correctly discovered and exploited this piece of domain knowledge, even though it was not engraved in their architectures nor conveyed to them explicitly in training. Notice that this kind of insight is not available in RPM approaches that train end-to-end models that directly choose answer panels. \begin{table} \begin{tabular}{l l c c c c} \hline \hline Dense size & Layer normal. & CorrRate & PropRate & AvgProp & AvgH \\ \hline 336 & No & 0.24 & 48.41 & 43.08 & 6.03 \\ & Yes & 0.45 & 55.19 & 51.24 & 5.43 \\ \hline 512 & No & 0.28 & 49.76 & 44.76 & 5.88 \\ & Yes & 0.88 & 55.92 & 52.03 & 5.36 \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of densformer models. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Tokenizer} & \multicolumn{2}{c}{\#parameters [M]} & \multicolumn{2}{c}{MFLOPS} \\ \cline{2-5} & Transformer & Total & Transformer & Total \\ \hline Panel & 2.65 & 11.45 & 87.77 & 2326.16 \\ Task & 2.65 & 11.19 & 52.84 & 2002.38 \\ Row & 2.64 & 10.68 & 29.02 & 793.97 \\ \hline \hline \end{tabular} \end{table} Table 2: The sizes and querying costs for particular models. \begin{table} \begin{tabular}{l c c c c} \hline \hline Dense size & \#parameters [M] & \multicolumn{2}{c}{MFLOPS} \\ \cline{2-5} & Transformer & Total & Transformer & Total \\ \hline 336 & 2.67 & 10.70 & 5.34 & 770.23 \\ 512 & 4.33 & 12.37 & 8.67 & 773.57 \\ \hline \hline \end{tabular} \end{table} Table 3: The sizes and querying costs for dense models. ## 6 Results: choice tasks In this section, we use the models trained for property prediction in Sec. 5 for solving RPM tasks. To this aim, we devise the Direct Choice Maker algorithm (DCM) that makes decisions by comparing the responses given by a model to the task (context panels) and to answer panels. Given a trained model \(P\) and a task \(T\), DCM proceeds as follows: 1. \(P\) is queried on \(T\) as in property prediction, i.e., on the context panels of \(T\), with the query panel masked out. The 9th property vector \(p\) produced in response, corresponding to the query panel, is the model's **prediction** of the answer to the task. 2. For each of the 8 answer panels, \(i=1,\ldots,8\), \(P\) is queried on \(T\) with the query panel replaced with the \(i\)th answer panel. In each of those queries, the 9th property vector is stored as \(p_{i}\). This will be referred to as **classification** of a panel (in terms of its properties). 3. A distance function \(d\) is applied to the pairs \((p,p_{i})\), and the answer panel with the minimal \(d(p,p_{i})\) is returned as the solution to \(T\). The distance functions (explained in the following) take into account only the relevant properties, where the relevance is determined by \(p_{i}\) (see Sec. SM2). We use \(p_{i}\) as the source of relevance when calculating \(d(p,p_{i})\), because classification is in general easier than prediction (cf. Table 1), so \(p_{i}\)s are less likely to make mistakes in determining the relevance of properties. We devise three performance metrics based on the outcome of DCM, each calculating the percentage of tasks for which DCM selects the correct answer panel. The metrics vary in the type of property vectors (categorical or encoded) and \(d\). 
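The skeleton of DCM, which all three metrics below share, can be sketched as follows. The `model` and `distance` interfaces are hypothetical stand-ins for the trained property predictor and for the Hamming or cross-entropy distance restricted to relevant properties.

```python
def direct_choice_maker(model, task, answer_panels, distance):
    """Pick the answer panel whose classified properties best match the prediction."""
    # Step 1: prediction -- query the model with the query panel masked out;
    # keep the 9th (query-position) property vector.
    p = model(task, candidate=None)
    # Step 2: classification -- re-query with the query position filled
    # by each answer panel in turn.
    p_i = [model(task, candidate=panel) for panel in answer_panels]
    # Step 3: return the index of the closest match; relevance of properties
    # is taken from the classified vectors, as described in the text.
    distances = [distance(p, pi) for pi in p_i]
    return min(range(len(answer_panels)), key=distances.__getitem__)
```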
**AccUnique** uses DCM with categorical property vectors and the Hamming distance as \(d\). A tie (\(d(p,p_{i})\) being minimized by two or more answer panels) counts as a failure. **AccTop** operates like AccUnique, except that a tie on the closest matches counts as a success. This metric is similar to the top-\(k\)-style metrics, i.e. a success is declared if the correct answer is among the closest matches. Finally, **AccProb** applies DCM to encoded property vectors, i.e. probability distributions produced by the model, and uses cross-entropy as \(d\). Binary cross-entropy is used for vector elements corresponding to binary properties (e.g., object presence), categorical cross-entropy is used for multi-valued properties (e.g. object size), and AccProb sums these quantities over all relevant properties. Similarly to loss functions and metrics used in Sec. 5, all metrics are calculated on the test set. **Optimistic bounds**. We first estimate the informal optimistic performance bounds, i.e. the test-set metrics that DCM would attain _if the true property vectors were known for answer panels_. These vectors are provided in RAVEN database, so we use them as \(p_{i}\)s in step 2 of DCM (and let them determine the relevance of properties), rather than querying the model. For AccProb, this implies comparing the continuously-valued probabilities produced by the model with one-hot vectors representing the categorical values of true properties. Table 5 presents the resulting estimates. As expected, AccUnique is the most demanding metric, as it requires the predicted property vector to be strictly closest to the property vector for exactly one of the answer panels. In contrast, AccTop treats ties as successes, reporting thus significantly better scores. However, this metric does not reflect model's capability of pointing to a unique solution among the answer panels. In contrast, AccProb is the most pragmatic metric, due to low likelihood of ties between answer panels and sensitivity to nuanced, continously-valued responses of the model, so we focus on this metric in the following. The relations between the models in Table 5 correlate with the quality of property prediction (Table 1), with the Row tokenizer being on average better than Task and Panel, the Combined masking mode slightly outperforming the Random mode, and the latter one in turn being much better than the Query-only mode. Expectedly, high accuracy of property prediction translates into better performance at choice tasks. \begin{table} \begin{tabular}{l l r r r} \hline \hline Tokenizer & Masking & AccProb & AccTop & AccUnique \\ \hline Panel & Query & 19.00 & 39.48 & 6.16 \\ & Random & 55.66 & 70.87 & 37.47 \\ & Combined & 57.57 & 72.34 & 39.27 \\ \hline Task & Query & 65.09 & 74.18 & 38.18 \\ & Random & 94.84 & 96.89 & 90.70 \\ & Combined & 95.39 & 97.23 & 92.69 \\ \hline Row & Query & 66.45 & 72.04 & 36.10 \\ & Random & 95.49 & 96.99 & 92.62 \\ & Combined & 95.90 & 97.48 & 94.04 \\ \hline \hline \end{tabular} \end{table} Table 5: Optimistic bounds, with DCM relying on _target_ property vectors. **Accuracy**. Table 6 presents the actual metrics summarizing models' capability of solving RPM choice tasks, i.e. with \(p_{i}\)s resulting from model's classification of answer panels. Bar two models, AccProb is noticeably worse than in Table 5, which was expected, because the classifications \(p_{i}\) of properties of answer panels can now diverge from the true vectors. Indeed, we calculated also the Correct metric (used in Sec. 
5 for assessing the accuracy of property prediction) on classification alone in this setting, and it amounted to 75.95% and 58.77% for respectively Task and Row. This difference is likely the main factor that makes the former model fare much better in Table 6. We hypothesize that the root cause for this difference is that querying the Row models in classification means replacing an entire input channel of the input image (corresponding to the query panel) with an answer panel, while in training that panel was continuously providing the 'neutral' values from a learnable mask. For the Task tokenizer, this affects only \(\nicefrac{{1}}{{9}}\) of the input raster of the entire task (recall that tokens' receptive fields capture the entire input image in all tokenizers). The models trained in the Query masking mode fare the worst again; clearly, low capacity of predicting properties (Table 1) prevents them from choosing the right panels with DCM. For the Panel and Task tokenizer, the pattern is consistent with previous tables: the Combined mode performs better than Random. Due to the high computational cost of training, we report the metrics for a single model per configuration. To establish statistical significance, we conducted additional runs for the best performing configurations and report the resulting averages and.95-confidence intervals for sample size 3 obtained in this way in the AccProb\({}_{(n=3)}\) column of Table 6. The figures largely confirm our earlier observations. The fact that in testing, models are queried on 'completed' tasks, with all nine panels present and no panel masked out, may have significant impact on their overall performance. In training, the models perform classification for the eight unmasked panels and prediction for the single masked panel, but they are never asked to perform classification for all panels. This may be particularly relevant for the Task tokenizer, which always observes a single panel being masked in training (in contrast to Panel and Row); in particular, in the Query-only mode, it is always the lower-right panel. This may explain the particularly bad performance of that model, with the AccProb below the 12.5% achievable with choosing answer panels at random2. \begin{table} \begin{tabular}{l l r r r r} \hline Tokenizer & Masking & AccProb & AccProb\({}_{(n=3)}\) & AccTop & AccUnique \\ \hline Panel & Query & 17.79 & — & 39.13 & 6.33 \\ & Random & 41.39 & — & 59.75 & 25.65 \\ & Combined & 46.85 & — & 63.82 & 30.74 \\ \hline Task & Query & 5.44 & — & 42.72 & 0.96 \\ & Random & 96.33 & 95.81\(\pm\)1.47 & 96.22 & 83.00 \\ & Combined & 96.97 & 96.45\(\pm\)1.13 & 96.57 & 86.30 \\ \hline Row & Query & 30.97 & — & 63.27 & 5.32 \\ & Random & 79.23 & 80.58\(\pm\)5.67 & 94.68 & 25.47 \\ & Combined & 82.84 & 84.66\(\pm\)6.28 & 95.43 & 33.53 \\ \hline \end{tabular} \end{table} Table 6: Accuracy on choice tasks, based on _predicted_ property vectors. \begin{table} \begin{tabular}{l l r r r} \hline Tokenizer & Masking & AccProb & AccTop & AccUnique \\ \hline Panel & Query & 45.44 & 64.89 & 27.81 \\ & Random & 53.59 & 70.22 & 34.61 \\ & Combined & 60.16 & 74.08 & 41.24 \\ \hline Task & Query & 12.34 & 51.68 & 2.50 \\ & Random & 94.90 & 95.42 & 86.43 \\ & Combined & 95.39 & 95.61 & 88.95 \\ \hline Row & Query & 51.35 & 76.74 & 14.30 \\ & Random & 86.35 & 95.10 & 42.41 \\ & Combined & 88.52 & 95.55 & 52.09 \\ \hline \end{tabular} \end{table} Table 7: Accuracy on the test set of I-RAVEN benchmark. 
Table 7 summarizes the performance of the same models on the testing part of the I-RAVEN benchmark [6]3, which features the same tasks as RAVEN, however with answer panels generated in an unbiased way. Apart from Task+Query and Task+Combined models that fared best on RAVEN and observe slight deterioration, all remaining models perform better on I-RAVEN. Because the context panels and the correct answer panels are identical in both benchmarks, so must be the predictions of properties made by the models for them. Therefore, the differences between Tables 6 and 7 can be only due to the predictions made for the incorrect answer panels. Apparently, the unbiased answer panels from I-RAVEN are less likely to result in property vectors that distort the assessment of relative similarities of the answer panels to the predicted answer. Footnote 3: Earlier published under the name Balanced-RAVEN [20]. **Comparison with state-of-the-art**. Following a recent survey [4], in Table 8 we reproduce the test-set accuracy of five RPM solvers reported in past literature on the topic, which attain the best performance on the test part of the RAVEN collection. Table 9 presents analogous top results for the I-RAVEN benchmark (see [4] for the performance of other, less capable methods). The figures reported in these tables should be juxtaposed with the AccProb metric from previous tables. For reference, we quote also the estimated accuracy of the human performance. Compared to these approaches, the performance of several variants of our ACT models is very good, with two of them equipped with the Task tokenizer outperforming not only the reported human accuracy on RAVEN [5], but also _all previously reported methods on this benchmark_, the best of which attained 94.1% (column AccProb in Table 6 vs. Table 8). For I-RAVEN, ACT beats all-but-one of the SotA methods (Table 7 vs. Table 9). **Examples**. Figure 3 compares visually the behavior of Task and Row model for two tasks from the test set (rotation angle is fixed when rendering models' predictions and classifications). In the first example (Fig. 3a), both models produce perfect predictions and similar, though imperfect, classifications of answer panels. However, the Row model fails to choose the correct answer, as its classification \(p_{8}\) for the last answer panel is incorrect: a square in the answer panel is classified as a triangle. As a consequence, \(p_{8}\) is more similar to prediction \(p\) than \(p_{7}\) (even though both \(p_{7}\) and \(p_{8}\) look identical when rendered as images, the raw outputs of models apparently varied when assessed with cross-entropy, which the distance function \(d\) in DCM is based on). In contrast, the Task model produces a more faithful classification \(p_{8}\) of the last answer panel (square rather than triangle), which allows it to correctly point to \(p_{7}\). The second task (Fig. 3b) is harder by involving more objects and more complex rules. As a result, not only the classifications, but also the predictions are far from perfect, with imprecise predictions for sizes, colors, and occasionally even shapes of objects (object presence is always correctly predicted and reproduced). Nevertheless, the Task model manages to be more consistent when predicting and classifying, which allows it to point to the correct answer. More examples are provided in Sec. SM8 and Figs. SM2-SM4 in the Supplementary Material. 
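The cross-entropy comparison mentioned in this example (the distance \(d\) behind AccProb) could be computed along the following lines. The dictionary-of-distributions layout of a property vector is a simplification introduced here, and the classified vector is treated as the target distribution.

```python
import math

def cross_entropy_distance(pred, cls, relevant):
    """Sum of cross-entropy terms over the relevant properties.

    pred, cls: dicts mapping property names to probability lists; a length-1
               list encodes a binary property (e.g. object presence), longer
               lists encode multi-valued properties (e.g. size or type).
    relevant:  names of properties deemed relevant (here taken from `cls`)."""
    eps, total = 1e-9, 0.0
    for name in relevant:
        p, q = pred[name], cls[name]
        if len(p) == 1:  # binary cross-entropy
            total += -(q[0] * math.log(p[0] + eps)
                       + (1 - q[0]) * math.log(1 - p[0] + eps))
        else:            # categorical cross-entropy
            total += -sum(qi * math.log(pi + eps) for pi, qi in zip(p, q))
    return total
```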
\begin{table} \begin{tabular}{l c} \hline \hline Method & Accuracy \\ \hline Rel-Base [21] & 91.7 \\ CoPINet + AL [22] & 93.5 \\ DCNet [23] & 93.6 \\ CoPINet + ACL [22] & 93.7 \\ Rel-AIR [21] & 94.1 \\ \hline Human [5] & 84.4 \\ \hline \hline \end{tabular} \end{table} Table 8: State-of-the-art results on the RAVEN test set (source: [4]). \begin{table} \begin{tabular}{l c} \hline \hline Method & Accuracy \\ \hline SRAN [6] & 60.8 \\ SRAN MLCL+DA & \\ & [18] & 73.3 \\ MRNet [13] & 86.8 \\ SCL [7] & 95.0 \\ SCL MLCL+DA & \\ & [18] & 96.8 \\ \hline \hline \end{tabular} \end{table} Table 9: State-of-the-art results on the I-RAVEN test set (source: [4]). **Discussion**. The ultimate superiority of the Task models (Tables 6 and 7) suggests that it is desirable to tokenize tasks as they appear, so that the transformer can detect and learn the RPM patterns on its own. Manual engineering of representation, which we attempted here with the Row tokenizer that directly juxtaposes panels as image channels, does not necessarily help - even though those models ranked top when predicting properties of masked panels, they underperformed at classifying answer panels shown to them. This study demonstrated that rephasing a learning task in a multi-dimensional, multi-label fashion can be beneficial for generalization. The conventional RPM tasks are in a sense 'closed', as the space of responses expected from the model is narrowed down to 8 given answer panels. Compared to that, learning to classify the properties of visible panels and to predict the properties of masked-out panels is more open-ended, and in a sense generative. It forces the model to derive more detailed patterns from the data and, consequently, leads to better generalization. Moreover, by predicting properties, one may mitigate the biases inadvertently introduced in benchmarks (Table 7). Last but not least, tracing the behavior of the model when classifying and predicting panel properties provides interesting insights (Fig. 3). In mapping an image to a sequence of tokens, the models considered here form an interesting middle ground between purely symbolic approaches and conventional deep learning, in a spirit similar to the Vision Transformer architecture proposed in [24] and neuro-symbolic systems. It is interesting to see that the transformer blueprint is helpful also when approaching a problem that is more abstract than conventional image classification. As evidenced in the presented results (in particular by the failure of the densitormer models; Sec. 5), explicit 'perceptual chunking' of representation provided by tokenization and the subsequent contextual reasoning realized with query-key interactions in the transformer allow learning the abstract patterns necessary to predict the missing panel and determine the right answer panel. ## 7 Conclusion and future work We have shown that the proposed approach of solving RPM tasks by learning to predict the properties of panels outperforms state-of-the-art models trained to choose answer panels and avoids the pitfall of biases present in training data. The models fare well despite flattening the 2D structure of the puzzle and can be inspected to a greater extent than end-to-end neural models. In future research, we will consider making the choice makers trainable alongside with the model, to allow them to adapt to the deficiencies of classification (identified in Sec. 6 and exemplified in Fig. 3) and so enable further improvements. 
Figure 3: Solving two RPM tasks (a and b) with Task and Row models (both trained in Combined masking mode). **Left:** the task. **Middle:** the correct answer and rendering of models’ predictions \(p\) (Step 1 of DCM) for the Row and Task model. **Right:** answer panels and the renderings of classifications \(p_{i}\) generated by the Row and Task model (Step 2 of DCM). The panels corresponding to the most similar property vectors are marked with thicker borders. Predictions and classifications are rendered from property vectors produced by the model while fixing rotation angles, as the angle was irrelevant in these tasks. To facilitate analysis, we render the _color_ property using pseudocoloring (it is conventionally rendered in grayscale). See Figs. SM2–SM4 for more examples. The explicit partitioning of the inference process into property prediction and choice of an answer panel with DCM can be seen as a special case of _task decomposition_, with the properties predicted and classified in the first stage acting as subgoals. In this study, we exploited the subgoals available in the RAVEN benchmark. Prospectively, it would be interesting to engage other sources of information, or even attempt to synthesize subgoals automatically.
2303.13531
Discovering Hierarchical Process Models: an Approach Based on Events Clustering
Process mining is a field of computer science that deals with the discovery and analysis of process models based on automatically generated event logs. Currently, many companies use this technology for optimizing and improving their processes. However, a discovered process model may be too detailed, sophisticated and difficult for experts to understand. In this paper, we consider the problem of discovering a hierarchical business process model from a low-level event log, i.e., the problem of automatic synthesis of more readable and understandable process models based on information stored in event logs of information systems. Discovery of better structured and more readable process models is intensively studied within process mining research from different perspectives. In this paper, we present an algorithm for discovering hierarchical process models represented as two-level workflow nets. The algorithm is based on predefined event clustering, so that a cluster defines a sub-process corresponding to a high-level transition at the top level of the net. Unlike existing solutions, our algorithm does not impose restrictions on the process control flow and allows for concurrency and iteration.
Antonina K. Begicheva, Irina A. Lomazova, Roman A. Nesterov
2023-03-12T11:05:40Z
http://arxiv.org/abs/2303.13531v1
# Discovering Hierarchical Process Models: an Approach Based on Events Clustering + ###### Abstract Process mining is a field of computer science that deals with discovery and analysis of process models based on automatically generated event logs. Currently, many companies use this technology for optimization and improving their processes. However, a discovered process model may be too detailed, sophisticated and difficult for experts to understand. In this paper, we consider the problem of discovering a hierarchical business process model from a low-level event log, i.e., the problem of automatic synthesis of more readable and understandable process models based on information stored in event logs of information systems. Discovery of better structured and more readable process models is intensively studied in the frame of process mining research from different perspectives. In this paper, we present an algorithm for discovering hierarchical process models represented as two-level workflow nets. The algorithm is based on predefined event clustering so that the cluster defines a sub-process corresponding to a high-level transition at the top level of the net. Unlike existing solutions, our algorithm does not impose restrictions on the process control flow and allows for concurrency and iteration. _Keywords:_ process mining; Petri nets; workflow nets; process discovery; hierarchical process model; event log ## 1 Introduction Over the past decade, companies whose processes are supported by various information systems have become convinced of the need to store as much potentially useful information about the execution of processes within the system as possible. This was facilitated by the qualitative development of areas related to the extraction of valuable information from recorded data, which helps to correct the work of organizations in time and, thus, save and increase their resources. Process mining is a field of science that provides a palette of tools for extracting the logic of system behavior, as well as modeling and optimizing the processes occurring in it. In particular, process mining methods allow you to find inconsistencies between the planned and actual behavior of the system, and also timely track the appearance of inefficient or incorrect behavior. Despite the fact that more and more attention is being paid to preserving the optimal amount of necessary information about the execution of processes, process execution data is not always available in a convenient format and with the necessary degree of detail, since system logs are generated automatically. Process discovery is aimed at extracting processes from event logs and their representation in the form of a model. Most of the available process discovery methods provide a model with the same level of detail as a given event log [1]. Because of this, a promising area for research is the task of discovering a more readable process model from a detailed event log, while preserving important information about the process execution for experts. Readability can be provided in various ways. The most commonly used methods are filtering rare behavior from the original event log, skipping "minor" events (the significance of an event is assessed according to the chosen methodology), and abstraction, where some events are considered indistinguishable from each other. 
In our study, we consider the latter approach, when more readable models are the result of model abstraction -- they are more compact and have an optimal level of detail for the work of experts than what could be obtained by direct discovery methods. In order not to lose important information, we are dealing not only with abstract (high-level) models, but also with hierarchical models that store low-level information in the form of sub-processes. Thus, in this paper, we propose an algorithm for discovering hierarchical process models from event logs. Processes are modeled with workflow nets [2], a special subclass of Petri nets for modeling a control flow of business processes. This study continues our previous study [3], in which we have proposed an approach to discover abstract models for processes without cycles. Here we provide a more general solution by overcoming the prohibition of cyclic behavior. Hierarchical models allows us to have a high-level view of the model by "folding" the behavior of an individual sub-process into a high-level transition with the ability to unfold it back. So, at the top level there is a high-level model, in which each individual transition corresponds to a sub-process built from low-level events. The history of the detailed behavior of the process is recorded in a low-level log. Regarding the number of levels in the hierarchy, we will only use two levels -- high and low, but the algorithm can naturally be extended to any number of levels. The paper is structured as follows. Section 2 presents the review of related research. Section 3 gives theoretical preliminaries and definitions used in the text. In Section 4, we discuss the basics of the hierarchical process discovery algorithm. Section 5 presents the main discovery algorithm and the proof of its correctness. Section 6 reports the outcomes from the experimental evaluation. In Section 7, we conclude the paper and discuss the possible future work directions. Related Work Research connected with our paper can be classified into approaches to abstracting event logs and process models and approaches to discovering hierarchical process models from event logs. The recent work [4] gives a comprehensive review of approaches and methods that can be applied for low-level events abstraction. The authors divide the methods according to: the learning strategy (supervised or unsupervised), the structure of the process models (strictly sequential or with interleaving), the low-level events grouping approach (deterministic or probabilistic), the nature of the processed data (discrete or continuous data). For instance, the method presented in [5] is a supervised method that converts a low-level event log into its high-level representation using detected behavioral patterns, such as sequence, selection, parallelism, interleaving and repetition. A supervised approach to event abstraction was also presented in [6]. This method takes a low-level event log and transforms it to an event log at the desired level of abstraction, using behavioral patterns: sequence, choice, parallel, interleaving and repetition of events. This technique allows us to obtain a reliable mapping from low-level events to activity patterns automatically and construct a high-level event log using these patterns. Another supervised event abstraction method was discussed in [7]. The nature of this method is as follows. 
We annotate a low-level event with the correct high-level event using the domain knowledge from the actual process model by the special attribute in an event log. In addition, this paper assumes that multiple high-level events are executed in parallel. This allows us to interpret a sequence of identical values as the single instance of a high-level event. A general approach to the representation of multi-level event logs and corresponding multi-level hierarchical models was studied in [8]. The authors especially note that this approach can combine multiple formalisms for representing different levels in a multi-level process models. There are many ways of abstracting process models by reducing their size in order to make them more convenient to work with. Each method may be useful depending on a group of interrelated factors: the abstraction purposes, the presence of certain patterns and constructs, and the specifics of modeling notation. Reducing the size of the model by abstraction can be done as the "convolution" of groups of elements, or implemented by throwing some parts away (insignificant in a particular case). The importance of the low-level event log abstraction is emphasized, among the others, in [9]. The researchers determine, which level of abstraction is appropriate for a particular case in different ways, but the main criterion is that the model should be readable and understandable. In [10], the abstraction of a process model occurs through "simplification" automatically: a user determines only the desired degree of detail, but not the actual correctness of identifying high-level events. Conversely, the paper [5] stressed the importance of the abstraction level dependence on the domain expert knowledge. Petri nets [11] can also be extended by adding the hierarchy as, e.g., in Colored Petri nets (CPN) [12]. Hierarchy also allows one to construct more compact, readable and understandable process models. Hierarchy of CPN models can be used as an abstraction, in the case of two levels: a high-level _abstract_ model and a low-level _refined_ model. In our paper the high-level model is a model with abstract transitions. An abstract transition refers to a Petri net subprocess, which refines the activity represented by this high-level transition. The complete low-level ("flat") process model can be obtained from a high-level model by substituting subprocesses for high-level transitions. "Flat" synthesis (when process model and log are at the same-level) is a standard process discovery problem, which has been extensively studied in the literature. A wide range of process discovery algorithms supports the automated flat process model synthesis [1]. _Inductive miner_[13] is one of the most widely used process discovery algorithm that produces well-structured process models recursively built from building blocks for standard behavioral patterns. They can be potentially used for constructing high-level process models. However, this technique does not take the actual correspondence between low-level events and subprocesses. In [14], the authors also used the recognition of behavioral patterns in a process by a structural clustering algorithm and then define a specific workflow schema for each pattern. In [15], a two-phase approach to mining hierarchical process models was presented. Process models here were considered as interactive and context-dependent maps based on common execution patterns. 
On the first phase, an event log is abstracted to the desired level by detecting relevant execution patterns. An example of such pattern is the maximal repeat that captures typical sequences of activities the log. Every pattern is then estimated by its frequency, significance, or some other metric needed for accurate abstraction. On the second phase, Fuzzy miner discovery algorithm [10] adapted to process maps discovery is applied to the transformed log. _FlexHMiner_[16] is a general algorithm based on process trees implemented in ProM software. The authors stresses the flexibility of this approach: to identify the hierarchy of events, the method supports both supervised methods and methods using the general knowledge on a process. The limitations of this method include the fact that each of the sub-processes can be executed only once, which means that the method is not suitable for processes with cycles. Detecting high-level events based on the patterns of behavior in an event log does not make it possible to refine the accuracy of abstraction based on the general knowledge of the system or provide it only partially. Patterns provide only the ability to change the scale, but not to participate in the selection of correct high-level events. This could be useful only for a superficial analysis. However, there is a real risk of combining unrelated low-level events into a single high-level event only because they are executed sequentially, but not because they belong to the same logical component of a system. A large amount of literature is devoted to the problem of discovering structured models from event logs. Researchers offer different techniques to improve the structure of discovered models, e.g., in [17], and to produce already well-structured process models [18, 19]. Different ways of detecting subprocesses in event logs using low-level transition systems were discussed in [20, 21, 22]. These papers do not consider mining hierarchical process models from event logs. In our previous paper [3], we presented an algorithm for the discovery of a high-level process model from the event log for acyclic processes. The method takes the initial data about abstraction in the form of a set of detailed events grouped into high-level ones, which means that any method of identifying abstract events can potentially be used, including those based on expert knowledge. Moreover, the algorithm is based on event logs pre-processing. This includes converting an initial event log into a high-level form and improving a target high-level process model After pre-processing, we can use any of the existing process discovery algorithms suitable for the same-level (flat) process discovery. Both possibilities of using existing approaches as components make the earlier proposed algorithm flexible. This paper expands the conditions of the applicability of the algorithm from [3] since it only works for acyclic models. For the algorithm to find and process potential cycles in the event log, we will reuse the method for detecting the repetitive behavior in a event log proposed in [23], which partially covers the general solution of the cycle detection problem. ## 3 Preliminaries By \(\mathbb{N}\) we denote the set of non-negative integers. Let \(X\) be a set. A _multiset_\(m\) over a set \(X\) is a mapping: \(m:X\rightarrow\mathbb{N}\), where \(\mathbb{N}\) - is the set of natural numbers (including zero), i.e., a multiset may contain several copies of the same element. 
For an element \(x\in X\), we write \(x\in m\), if \(m(x)>0\). For two multisets \(m,m^{\prime}\) over \(X\) we write \(m\subseteq m^{\prime}\) iff \(\forall x\in X:m(x)\leq m^{\prime}(x)\) (the inclusion relation). The sum, the union and the subtraction of two multisets \(m\) and \(m^{\prime}\) are defined as usual: \(\forall x\in X:(m+m^{\prime})(x)=m(x)+m^{\prime}(x),(m\cup m^{\prime})(x)=max(m (x),m^{\prime}(x)),(m-m^{\prime})(x)=m(x)-m^{\prime}(x)\), if \(m(x)-m^{\prime}(x)\geq 0\), otherwise \((m-m^{\prime})(x)=0\). By \(\mathcal{M}(X)\) we denote the set of all multisets over \(X\). For a set \(X\), by \(X^{*}\) with elements of the form \(\langle x_{1},\ldots,x_{k}\rangle\) we denote the set of all finite sequences (words) over \(X\), \(\langle\rangle\) denotes the empty word, i. e., the word of zero length. Concatenation of two words \(w_{1}\) and \(w_{2}\) is denoted by \(w_{1}\cdot w_{2}\). Let \(Q\subseteq X\) be a subset of \(X\). The projection \(\mathord{\upharpoonright}_{Q}\colon X^{*}\to Q^{*}\) is defined recursively as follows: \(\langle\rangle\mathord{\upharpoonright}_{Q}=\langle\rangle\), and for \(\sigma\in X^{*}\) and \(x\in X\): \[(\sigma\cdot\langle x\rangle)\mathord{\upharpoonright}_{Q}=\begin{cases} \sigma\mathord{\upharpoonright}_{Q}\text{ if }x\notin Q\\ \sigma\mathord{\upharpoonright}_{Q}\cdot\langle x\rangle\text{ if }x\in Q\end{cases}\] We say that \(X=X_{1}\cup X_{2}\cup\cdots\cup X_{n}\) is a partition of the set \(X\) if for all \(1{\leq}i,j{\leq}n\) such that \(i\neq j\) we have \(X_{i}\cap X_{j}=\emptyset\). ### Petri Nets Let \(P\) and \(T\) be two disjoint finite sets of places and transitions respectively, and \(F:(P\times T)\cup(T\times P)\to\mathbb{N}\) be an arc-weight function. Let also \(A\) be a finite set of _event names_ (or _activities_) representing observable actions or events, \(\tau\) -- a special label for _silent_ or invisible action, \(\lambda:T\to A\cup\{\tau\}\) is a transition labeling function. Then \(N=(P,T,F,\lambda)\) is a _labeled Petri net_. Graphically, a Petri net is designated as a bipartite graph, where places are represented by circles, transitions by boxes, and the flow relation \(F\) by directed arcs. A _marking_ in a Petri net \(N=(P,T,F,\lambda)\) is a function \(m:P\to\mathbb{N}\) mapping each place to some number of tokens (possibly zero). Hence, a marking in a Petri net may be considered as a multiset over its set of places. Tokens are graphically designated by filled circles, and then a current marking \(m\) is represented by putting \(m(p)\) tokens into each place \(p\in P\). A _marked Petri net_\((N,m_{0})\) is a Petri net \(N\) together with its initial marking \(m_{0}\). For transition \(t\in T\), its _preset_ (denoted \({}^{\bullet}t\)) and its _postset_ (denoted \(t^{\bullet}\)) are defined as sets of its input and output places respectively, i. e., \({}^{\bullet}t=\{p\mid F(p,t)\neq 0\}\) and \(t^{\bullet}=\{p\mid F(t,p)\neq 0\}\). A transition \(t\in T\) is _enabled_ in a marking \(m\), if for all \(p\in{}^{\bullet}t\), \(m(p)\geq F(p,t)\). An enabled transition \(t\) may _fire_ yielding a new marking \(m^{\prime}\), such that \(m^{\prime}(p)=m(p)-F(p,t)+F(t,p)\) for each \(p\in P\) (denoted \(m\stackrel{{\lambda(t)}}{{\to}}m^{\prime}\), or just \(m\to m^{\prime}\)). A marking \(m^{\prime}\) is reachable from a marking \(m\), if there exists a sequence of firings \(m=m_{0}\to m_{1}\to\ldots m_{k}=m^{\prime}\). 
By \(\mathcal{R}(N,m)\) we denote the set of all markings reachable from marking \(m\) in a net \(N\). Let \((N,m_{0})\) be a marked Petri net with transitions labeled by activities from \(A\cup\{\tau\}\), and let \(m_{0}\stackrel{{ a_{1}}}{{\to}}m_{1}\stackrel{{ a_{2}}}{{\to}}\ldots\) be a finite or infinite sequence of firings in \(N\), which starts from the initial marking \(m_{0}\) and cannot be extended. Then a sequence of observable activities \(\rho\), such that \(\rho=\langle a_{1},a_{2},\ldots\rangle\!\upharpoonright_{A}\), is called a _run_, i.e., a run is a sequence of observable activities representing a variant of Petri net behavior. For a finite run \(\rho\), which corresponds to a sequence of firings \(m_{0}\stackrel{{ a_{1}}}{{\to}}\ldots\stackrel{{ a_{t}}}{{\to}}m_{k}\), we call \(m_{0}\) and \(m_{k}\) its initial and final markings respectively. A transition \(t\in T\) is called _dead_ for a marked net \((N,m_{0})\), if for each reachable marking \(m\in\mathcal{R}(N,m_{0})\), \(t\) is not enabled in \(m\). In our study we consider _workflow nets_ -- a special subclass of Petri nets [24] for workflow modeling. A _workflow net_ is a (labeled) Petri net with two special places: \(i\) and \(f\). These places mark the beginning and the ending of a workflow process. A (labeled) marked Petri net \(N=(P,T,F,\lambda,m_{0})\) is called a workflow net (WF-net) if the following conditions hold: 1. There is one source place \(i\in P\) and one sink place \(f\in P\), such that \({}^{\bullet}i=f^{\bullet}=\emptyset\). 2. Every node from \(P\cup T\) is on a path from \(i\) to \(f\). 3. The initial marking \(m_{0}\) in \(N\) contains the only token in its source place. By \([i]\) we denote the _initial_ marking of a WF-net with the only token in place \(i\), and by \([f]\) -- its _final_ marking with the only token in place \(f\). An example of a workflow net that simulates a simple process of handling ticket refund requests is shown in Fig. 1 [25]. Here \(p_{0}\) is the source place, and \(p_{6}\) -- the sink place. Soundness [24] is the main correctness property for workflow nets. A WF-net \(N=(P,T,F,\lambda,[i])\) is called _sound_, if 1. For any marking \(m\in\mathcal{R}(N,[i])\), \([f]\in\mathcal{R}(N,m)\); 2. If for some \(m\in\mathcal{R}(N,[i])\), \([f]\subseteq m\), then \(m=[f]\); 3. There are no dead transitions in \(N\). ### Event Logs Most information systems record the history of their process executions in event logs. An _event record_ usually contains a case ID, an activity name, a timestamp, and some information about resources, data, etc. For our study, we use case IDs for splitting an event log into traces, timestamps for ordering events within each trace, and abstract from all event information other than event names (activities). Let \(A\) be a finite set of activities. A _trace_ \(\sigma\) is a finite sequence of activities from \(A\), i.e., \(\sigma\in A^{*}\). By \(\#a(\sigma)\) we denote the number of occurrences of activity \(a\) in trace \(\sigma\). An _event log_ \(L\) is a finite multi-set of traces, i.e., \(L\in\mathcal{M}(A^{*})\). A log \(L^{\prime}\) is called a _sub-log_ of \(L\), if \(L^{\prime}\subseteq L\). Let \(X\subseteq A\). We extend the projection \(\mathord{\upharpoonright}_{X}\) to event logs, i.e., for an event log \(L\in\mathcal{M}(A^{*})\), its projection is a sub-log \(L\mathord{\upharpoonright}_{X}\), defined as the multiset of projections of all traces from \(L\), i.e., \(L\mathord{\upharpoonright}_{X}=[\sigma\mathord{\upharpoonright}_{X}\mid\sigma\in L]\).
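For illustration, the projection of traces and logs just defined (and used later to obtain the sub-logs \(L\mathord{\upharpoonright}_{A_{i}}\) for sub-process discovery) can be sketched in a few lines of Python, with an event log represented as a multiset (a `Counter`) of traces; this representation is an assumption made for the example.

```python
from collections import Counter

def project_trace(trace, activities):
    """Keep only the events whose activity belongs to `activities`."""
    return tuple(a for a in trace if a in activities)

def project_log(log, activities):
    """Project every trace of the log; the result is again a multiset of traces."""
    projected = Counter()
    for trace, count in log.items():
        projected[project_trace(trace, activities)] += count
    return projected

# toy usage
L = Counter({("a", "x", "b", "y"): 2, ("a", "b"): 1})
print(project_log(L, {"a", "b"}))   # Counter({('a', 'b'): 3})
```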
Let \(N\) be a WF-net with transition labels from \(A\), an initial marking \([i]\), and a final marking \([f]\). Let \(\sigma\) be a trace over \(A\). We say that a trace \(\sigma=\langle a_{1},\ldots,a_{k}\rangle\) _perfectly fits_ \(N\), if \(\sigma\) is a run in \(N\) with initial marking \([i]\) and final marking \([f]\). A log \(L\) perfectly fits \(N\), if every trace from \(L\) perfectly fits \(N\). ## 4 Discovering Hierarchical WF-Nets ### Hierarchical WF-Nets Let \(A\) denote the set of low-level process activity names. Let \(\tilde{A}=\{\alpha_{1},\alpha_{2},\ldots,\alpha_{k}\}\) denote the set of sub-process names, which represent high-level activity names. Here we define _hierarchical workflow_ (HWF) nets with two levels of representing the process behavior. The _high level_ is a WF-net whose transitions are labeled by the names of sub-processes from \(\tilde{A}\). The _low level_ is a set of WF-nets corresponding to the _refinement_ of transitions in the high-level WF-net. Transitions in a low-level WF-net are labeled by the names of activities from \(A\). Below we provide the necessary definitions and study the semantics of HWF-nets. An HWF-net is a tuple \(\mathcal{N}=(\tilde{N},N_{1},N_{2},\ldots,N_{k},\ell)\), where: 1. \(\tilde{N}=(\tilde{P},\tilde{T},\tilde{F},\tilde{\lambda},[\tilde{i}])\) is a high-level WF-net, where \(\tilde{\lambda}\colon\tilde{T}\to\tilde{A}\) is a bijective transition labeling function, which assigns sub-process names to transitions in \(\tilde{N}\); 2. \(N_{i}=(P_{i},T_{i},F_{i},\lambda_{i},[i]_{i})\) is a low-level WF-net for every \(i\in[1,k]\) with a transition labeling function \(\lambda_{i}\colon T_{i}\to A_{i}\), where \(A_{i}\subseteq A\) is the subset of low-level activities for \(N_{i}\); 3. \(\ell\colon\tilde{A}\to\{N_{1},N_{2},\ldots,N_{k}\}\) is a bijection which establishes the correspondence between a sub-process name (a transition in the high-level WF-net) and a low-level WF-net. Accordingly, every transition in the high-level WF-net of \(\mathcal{N}\) has a corresponding low-level WF-net modeling the behavior of a sub-process. An example of an HWF-net is shown in Figure 2. We only show the refinement of two transitions \(t_{1}\) and \(t_{2}\) in the high-level WF-net \(\tilde{N}\) with two low-level WF-nets \(N_{1}\) and \(N_{2}\). They represent the detailed behavior of the two sub-processes \(\alpha_{1}\) and \(\alpha_{2}\), respectively. We next consider the operational semantics of HWF-nets by defining their runs. For what follows, let \(\mathcal{N}=(\tilde{N},N_{1},N_{2},\ldots,N_{k},\ell)\) be an HWF-net.
Let \(\tilde{m}\) be a reachable marking in the high-level WF-net of an HWF-net \(\mathcal{N}\) and \(T_{\tilde{m}}\) be the set of transitions enabled at \(\tilde{m}\). Intuitively, a set of transitions enabled in the high-level WF-net at \(\tilde{m}\) corresponds to a set of sub-processes for which we can start to fire their low-level transitions. If some transitions in the high-level WF-net enabled at \(\tilde{m}\) share common places, then there is a _conflict_, and we can choose which sub-process to start, while the other sub-processes corresponding to the conflicting high-level transitions will not be able to run. If some transitions in the high-level WF-net enabled at \(\tilde{m}\) do not share common places, then they are enabled _concurrently_, and we can start all sub-processes corresponding to these concurrently enabled transitions using the ordinary interleaving semantics. The firing of a transition in the high-level WF-net will be complete when the corresponding sub-process reaches its final marking. For instance, let us consider the HWF-net shown in Figure 2. After firing high-level transition \(t_{3}\) and executing the corresponding sub-process \(\alpha_{3}\) (not provided in Figure 2), two high-level transitions \(t_{1}\) and \(t_{2}\) become enabled. They do not share common places, i.e., high-level transitions \(t_{1}\) and \(t_{2}\) are enabled concurrently. Thus, the corresponding sub-processes \(\alpha_{1}\) (low-level WF-net \(N_{1}\)) and \(\alpha_{2}\) (low-level WF-net \(N_{2}\)) can also be executed concurrently. We can obtain a sequence \(\rho=\langle\alpha_{3},e_{1},e_{5},e_{2},e_{6},\alpha_{4}\rangle\), which will represent a possible run of the HWF-net from Figure 2. High-level activities \(\alpha_{3}\) and \(\alpha_{4}\) should also be replaced with the corresponding sub-process runs. Lastly, we give a straightforward approach to transforming an HWF-net \(\mathcal{N}=(\tilde{N},N_{1},N_{2},\dots,N_{k},\ell)\) into the corresponding _flat_ WF-net denoted by \(\textbf{fl}(\mathcal{N})=(P,T,F,\lambda,[i])\). We need to replace transitions in the high-level WF-net with their sub-process implementations given by the low-level WF-nets assigned by \(\ell\). When a transition \(t\) in the high-level WF-net \(\tilde{N}\) is replaced by a low-level WF-net \(N_{i}\), we need to fuse the source place of \(N_{i}\) with all input places of \(t\) and to fuse the sink place of \(N_{i}\) with all output places of \(t\). For instance, the flat WF-net \(\textbf{fl}(\mathcal{N})\) constructed for the HWF-net shown in Figure 2 is provided in Figure 3. We replaced transition \(t_{1}\) with \(N_{1}\) and transition \(t_{2}\) with \(N_{2}\), as determined by the labels of the low-level WF-nets. This figure also shows the double-line contours of the corresponding high-level transitions. Figure 2: An HWF-net with two refined transitions. Proposition 1 gives the main connection between an HWF-net and its flat representation. **Proposition 1**.: _Let \(\mathcal{N}=(\tilde{N},N_{1},N_{2},\ldots,N_{k},\ell)\) be an HWF-net, and \(\mathbf{fl}(\mathcal{N})\) be the corresponding flat WF-net. A sequence \(\rho\) is a run in \(\mathcal{N}\) if and only if \(\rho\) is a run in \(\mathbf{fl}(\mathcal{N})\)._ In other words, the set of all possible runs of an HWF-net is exactly the same as the set of all possible runs of the corresponding flat WF-net. The proof of this proposition directly follows from the construction of the flat WF-net and from the way we define the sequential semantics of a hierarchical WF-net.
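A minimal sketch of the flattening construction \(\mathbf{fl}(\mathcal{N})\) described above is given below. The encoding of a net as a mapping from transitions to their input and output place sets is a simplification (arc weights and labels are ignored), and place names of different sub-nets are assumed to be disjoint.

```python
def flatten(high_net, refine):
    """Substitute each high-level transition by its sub-process WF-net.

    high_net: {transition: (input_places, output_places)} of the high-level net.
    refine:   {transition: (sub_net, source_place, sink_place)}, where sub_net
              uses the same {transition: (inputs, outputs)} encoding.
    """
    flat = {}
    for t, (t_in, t_out) in high_net.items():
        sub_net, source, sink = refine[t]
        for u, (u_in, u_out) in sub_net.items():
            # Fusing the sub-net's source place with the input places of t means
            # that transitions consuming from the source now consume from those
            # input places; symmetrically for the sink and the output places of t.
            new_in = (u_in - {source}) | (t_in if source in u_in else set())
            new_out = (u_out - {sink}) | (t_out if sink in u_out else set())
            flat[(t, u)] = (new_in, new_out)
    return flat

# toy usage: transition t1 between p0 and p1, refined by a two-step sub-process
high = {"t1": ({"p0"}, {"p1"})}
sub = {"e1": ({"i1"}, {"q"}), "e2": ({"q"}, {"f1"})}
print(flatten(high, {"t1": (sub, "i1", "f1")}))
# {('t1', 'e1'): ({'p0'}, {'q'}), ('t1', 'e2'): ({'q'}, {'p1'})}
```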
To sum up, the constructive representation of the HWF-net sequential semantics fully agrees with the ordinary Petri net firing rule and the definition of a run discussed in Section 3. ### Problem Statement Let \(L\) be a log over a set \(A\) of activities, and let \(A=A_{1}\cup A_{2}\cup\cdots\cup A_{k}\) be a partition of \(A\). Let also \(\tilde{A}=\{\alpha_{1},\alpha_{2},\ldots,\alpha_{k}\}\) be a set of high-level activities (sub-process names). The problem is to construct an HWF-net \(\mathcal{N}=(\tilde{N},N_{1},N_{2},\ldots,N_{k},\ell)\), where for each \(i\in[1,k]\), \(N_{i}\) is a sub-process (WF-net) labeled by \(\alpha_{i}\) over the set of activities \(A_{i}\). Runs of \(\mathcal{N}\) should conform to traces from \(L\). We suppose that the partitioning of \(A\) into subsets \(A_{1},\ldots,A_{k}\) is made either by an expert, or automatically based on some information contained in extended action records, such as resources or data. Figure 3: The flat WF-net for the HWF-net in Fig. 2. In Section 6 we give two examples of partitioning activities for a real log. We then consider that a sub-process is defined by its set of activities, and we suppose that the sets of activities of two sub-processes do not intersect. If that is not the case and two sub-processes include some common activities like 'close the file', one can easily distinguish them by including the resource or file name in the activity identifier. One more important comment concerning the partitioning of activities: we suppose that it does not violate the log control flow. Specifically, if there are iterations in the process, then for a set of iterated activities \(B\) and for each sub-process activity set \(A_{i}\), we suppose that either \(B\cap A_{i}=\emptyset\), or \(B\subseteq A_{i}\), or \(A_{i}\subseteq B\). Note that this is a reasonable requirement, taking into account the concept of a sub-process. If this still does not hold, i. e., only a part of the \(A_{i}\) activities are iterated, then the partition can be refined: namely, \(A_{i}\) can be split into two subsets, within and outside the iteration. ### Basics of the Proposed Solution Now we describe the main ideas and the structure of the algorithm for discovering a hierarchical WF-net from an event log. Let once more \(L\) be a log with activities from \(A\), and let \(A=A_{1}\cup A_{2}\cup\dots\cup A_{k}\) be a partition of \(A\). Let also \(\tilde{A}=\{\alpha_{1},\alpha_{2},\dots,\alpha_{k}\}\) be a set of high-level activities (sub-process names). A hierarchical WF-net \(\mathcal{N}\) consists of a high-level WF-net \(\tilde{N}\) with activities from \(\tilde{A}=\{\alpha_{1},\dots,\alpha_{k}\}\), and \(k\) sub-process WF-nets \(N_{1},N_{2},\dots,N_{k}\), where for each \(N_{i}\), all its activities belong to \(A_{i}\). Sub-process WF-nets \(N_{1},N_{2},\dots,N_{k}\) can be discovered directly. To discover \(N_{i}\) we filter the log \(L\) to \(L_{i}=L\mathord{\restriction}_{A_{i}}\). Then we apply one of the popular algorithms (e. g., the Inductive Miner) to discover a WF-model from the log \(L_{i}\). Fitness and precision of the obtained model depend on the choice of the discovery algorithm. Discovering the high-level WF-model is not so easy and is quite a challenge. The problems are caused by possible interleaving of concurrent sub-processes and by iteration. A naive solution could be as follows: in the log \(L\) replace each activity \(a\in A_{i}\) by \(\alpha_{i}\) -- the name of the corresponding sub-process. Remove 'stuttering', i. e., replace wherever possible several sequential occurrences of the same high-level activity by one such activity.
Then apply one of the popular discovery algorithms to the new log over the set \(\tilde{A}\) of activities. However, this does not work. Consider the examples in Fig. 4. Fragment (a) in Fig. 4 shows two concurrent sub-processes \(\beta\) and \(\gamma\), following sub-process \(\alpha\), which consists of just one transition. After replacing low-level activities with the corresponding sub-process names and removing stuttering, for fragment (a) we get runs \(\langle\alpha,\beta,\gamma,\dots\rangle\), \(\langle\alpha,\gamma,\beta,\dots\rangle\), \(\langle\alpha,\beta,\gamma,\beta,\gamma,\dots\rangle\), \(\langle\alpha,\gamma,\beta,\gamma,\beta,\dots\rangle\), etc. Fragment (b) in Fig. 4 shows a cycle. The body of this cycle is the sequence of two sub-processes \(\beta\) and \(\gamma\). Among the runs for fragment (b) we also have \(\langle\alpha,\beta,\gamma,\dots\rangle\), \(\langle\alpha,\beta,\gamma,\beta,\gamma,\dots\rangle\). So, iterations should be considered separately. Discovering high-level WF-nets for acyclic models (i. e., logs without iteration) was studied earlier in [3], where all details can be found. Here we refer to this algorithm as Algorithm \(\mathfrak{A}_{0}\) and illustrate it with the example in Fig. 4(a). The algorithm \(\mathfrak{A}_{0}\), which discovers a high-level WF-model from a log \(L\) without iterations, reduces this problem to the classical discovery problem, which can be solved by many popular algorithms, such as the Inductive Miner. Therefore, we parameterize Algorithm \(\mathfrak{A}_{0}\) with an Algorithm \(\mathfrak{D}\) for solving the classical discovery problem. _Algorithm_ \(\mathfrak{A}_{0}(\mathfrak{D})\): 1. For all traces in \(L\), replace low-level activities with the corresponding sub-process names and remove stuttering. 2. For each trace \(\sigma\) with more than one occurrence of the same activity, replace \(\sigma\) with the set of all possible clones of \(\sigma\) obtained by removing, for each activity \(\alpha\), all its multiple occurrences except one, and by removing (newly formed) stuttering. For example, the trace \(\langle\alpha,\beta,\gamma,\beta,\gamma,\dots\rangle\) will be replaced by two traces \(\langle\alpha,\beta,\gamma,\dots\rangle\) and \(\langle\alpha,\gamma,\beta,\dots\rangle\), obtained by keeping the first occurrences of \(\beta\) and \(\gamma\), and, respectively, by keeping the first occurrence of \(\gamma\) and the second occurrence of \(\beta\). In this example, constructing clones by keeping other occurrences of \(\gamma\) does not generate new traces. 3. Let \(\tilde{L}\) be the log resulting from executing the two previous steps. To obtain a high-level WF-net \(\tilde{N}\), apply a popular algorithm \(\mathfrak{D}\) to discover a WF-net from the event log \(\tilde{L}\). (An illustrative code sketch of Steps 1 and 2 is given below.) It was proved in [3] that if the algorithm used in Step 3 of Algorithm \(\mathfrak{A}_{0}\) discovers, for each input log \(L\), a WF-net perfectly fitting \(L\), then Algorithm \(\mathfrak{A}_{0}\), given a log \(L\) without repetitive behavior, produces an HWF-net \(\mathcal{N}\) such that \(\mbox{\rm\bf fl}(\mathcal{N})\) perfectly fits \(L\). Figure 4: Interleaving and iteration of sub-processes. Now we come to logs with repetitive behavior. The main idea here is to represent a loop body as a subset of its activities. Then the body of a loop can be considered as a sub-process with a new loop sub-process name.
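Before turning to repetitive behavior, the abstraction steps of Algorithm \(\mathfrak{A}_{0}\) above can be illustrated with a short sketch. Traces are represented as plain Python lists and the activity-to-sub-process mapping is passed in explicitly; these representation choices are assumptions made for illustration only, and Step 3 (running the discovery algorithm \(\mathfrak{D}\)) is omitted.

```python
from itertools import product

def abstract(trace, subprocess_of):
    """Step 1: replace low-level activities by sub-process names and remove stuttering."""
    out = []
    for activity in trace:
        name = subprocess_of[activity]
        if not out or out[-1] != name:
            out.append(name)
    return out

def clones(trace):
    """Step 2: keep exactly one occurrence of every repeated high-level activity,
    enumerating all such choices and removing newly formed stuttering."""
    positions = {}
    for i, name in enumerate(trace):
        positions.setdefault(name, []).append(i)
    results = set()
    for choice in product(*positions.values()):
        kept = set(choice)
        clone = [name for i, name in enumerate(trace) if i in kept]
        deduped = [name for i, name in enumerate(clone) if i == 0 or clone[i - 1] != name]
        results.add(tuple(deduped))
    return sorted(results)

# the trace <alpha, beta, gamma, beta, gamma> from the example above,
# using hypothetical low-level activity names a1, b1, c1, b2, c2
high_level = abstract(
    ["a1", "b1", "c1", "b2", "c2"],
    {"a1": "alpha", "b1": "beta", "b2": "beta", "c1": "gamma", "c2": "gamma"},
)
print(clones(high_level))   # [('alpha', 'beta', 'gamma'), ('alpha', 'gamma', 'beta')]
```

As in the worked example, only two distinct clones survive after de-duplication, which is exactly the set of traces handed to the discovery algorithm \(\mathfrak{D}\) in Step 3.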
To discover repetitive behavior we use methods from [23, 26], which allow us to determine causal, concurrency, and repetitive relations between events in an event log. Actually, for our purpose we need only the repetitive relations and, based on them, loop discovery. The algorithm in [26] (we refer to it here as Algorithm \(\mathfrak{B}\)) allows us to discover elementary loop bodies as sets of their activities and process them recursively, starting with inner elementary loops. Thus, at every iteration we deal with a loop body without inner loops. To obtain a sub-trace corresponding to a loop body with a set of activities \(B\) from a log trace \(\sigma\), we construct the projection \(\sigma\!\upharpoonright_{B}\). After filtering all current traces this way, we get an event log for discovering a WF-net modeling the loop body behavior by applying Algorithm \(\mathfrak{A}_{0}\). Then the resulting high-level WF-net is built recursively by substituting discovered loop bodies for loop sub-process names, starting from the inner loops. ## 5 Algorithm for Discovering HWF-Nets from Low Level Event Logs Here we describe our discovery algorithm in more detail. Let \(A\) be a set of activities and \(L\) -- a log over \(A\). Let then \(A=A_{1}\cup\cdots\cup A_{k}\) be a partition of \(A\), where for \(i\in[1,k]\), \(A_{i}\) is the set of activities of a sub-process \(\alpha_{i}\), and \(\tilde{A}=\{\alpha_{1},\ldots,\alpha_{k}\}\) -- a set of sub-process names. Then Algorithm \(\mathfrak{A}(\mathfrak{D})\) constructs an HWF-net \(\mathcal{N}=(\tilde{N},N_{1},N_{2},\ldots,N_{k},\ell)\) with a high-level WF-net \(\tilde{N}=(\tilde{P},\tilde{T},\tilde{F},\tilde{\lambda},[\tilde{i}])\), where \(\tilde{\lambda}\colon\tilde{T}\to\tilde{A}\) and for each \(\alpha_{i}\in\tilde{A}\), \(\ell(\alpha_{i})=N_{i}\), i. e., sub-process name \(\alpha_{i}\) corresponds to the low-level WF-net \(N_{i}\) in \(\mathcal{N}\). _Algorithm_ \(\mathfrak{A}(\mathfrak{D})\): By \(\tilde{B}\) we denote a set of loop body names with elements \(\{\beta,\beta^{\prime},\ldots\}\), and by \(\ell_{B}\) -- a function which maps a name from \(\tilde{B}\) to a WF-net that implements the loop body. For a WF-net \(N\), denote by \(\mathit{Loop}(N)\) a WF-net that is a loop with body \(N\). 0. Set \(\tilde{B}=\emptyset\) and \(\ell,\ell_{B}\) to be two functions with empty domains. 1. Apply Algorithm \(\mathfrak{B}\) to \(L\) to find a set \(B\) of activities of some inner elementary loop body. If there is no repetitive behavior in \(L\), _go to Step 2_; otherwise do the following. Construct the projection \(L\!\upharpoonright_{B}\) and apply Algorithm \(\mathfrak{A}_{0}(\mathfrak{D})\) to it (with respect to the partition \(A=A_{1}\cup\cdots\cup A_{k}\)). Let \(\tilde{N}\) be the resulting high-level WF-net over the set \(\tilde{A}\) of sub-process names, \(N_{i_{1}},\ldots,N_{i_{j}}\) -- the resulting WF-nets for the sub-processes with names \(\alpha_{i_{1}},\ldots,\alpha_{i_{j}}\), respectively (for each \(\alpha_{i}\) such that \(A_{i}\subseteq B\), i. e., for the sub-processes within the loop body), and let \(\beta\) be a new name. Then * For each \(\beta^{\prime}\in\tilde{B}\), if there is a transition labeled by \(\beta^{\prime}\) in some of \(N_{i_{1}},\ldots,N_{i_{j}}\) or \(\tilde{N}\), replace this transition with the sub-process \(Loop(\ell_{B}(\beta^{\prime}))\), as was done when constructing a flat WF-net in Subsection 4.1, i. e., substitute the inner loops in the loop body. Remove \(\beta^{\prime}\) from \(\tilde{A}\), \(\tilde{B}\) and from the domains of \(\ell\) and \(\ell_{B}\).
* Add \(\beta\) to \(\tilde{B}\) and extend \(\ell_{B}\) by defining \(\ell_{B}(\beta)=\tilde{N}\). Extend also \(\ell\) by defining \(\ell(A_{i_{1}})=N_{i_{1}},\ldots,\ell(A_{i_{j}})=N_{i_{j}}\). * Replace by \(\beta\) all occurrences of activities from \(B\) in \(L\) and remove stuttering. * If for some \(i\in[1\ldots k]\), \(B\subseteq A_{i}\), then add \(\beta\) to \(A_{i}\) (and respectively to \(A\)) as one more activity. Otherwise, add \(\beta\) to \(A\), as well as add \(\{\beta\}\) to the partition of \(A\). (Thus, \(\beta\) is both an activity and a process name, which should not be confusing.) _Repeat Step 1_. 2. Apply Algorithm \(\mathfrak{A}_{0}(\mathfrak{D})\) to the current log \(L\) with respect to the current partition of activities. Let \(\tilde{N}\) be the resulting high-level WF-net. 3. While \(\tilde{B}\) is not empty, for each \(\beta\in\tilde{B}\), replace the transition labeled by \(\beta\) in \(\tilde{N}\) with the sub-process \(Loop(\ell_{B}(\beta))\), as it is done in Step 1. The resulting net \(\tilde{N}\) is the high-level WF-net of the HWF-net constructed by the Algorithm. Its low-level WF-nets are defined by the function \(\ell\), also built during the operation of the Algorithm. _Correctness of the Algorithm \(\mathfrak{A}(\mathfrak{D})\)_ is justified by the following statement. **Theorem 1**.: _Let \(A\) be a set of activities and \(L\) -- a log over \(A\). Let also \(A=A_{1}\cup\cdots\cup A_{k}\) be a partition of \(A\), and \(\tilde{A}=\{\alpha_{1},\ldots,\alpha_{k}\}\) -- a set of sub-process names._ _If Algorithm \(\mathfrak{D}\) for any log \(L^{\prime}\) discovers a WF-net \(N^{\prime}\), such that \(N^{\prime}\) perfectly fits \(L^{\prime}\), then Algorithm \(\mathfrak{A}(\mathfrak{D})\) constructs an HWF-net \(\mathcal{N}=(\tilde{N},N_{1},N_{2},\ldots,N_{k},\ell)\), such that \(\mathsf{fl}(\mathcal{N})\) perfectly fits \(L\)._ Proof.: To prove that the HWF-net built using Algorithm \(\mathfrak{A}(\mathfrak{D})\) perfectly fits the input log, provided that Algorithm \(\mathfrak{D}\) discovers models with perfect fitness, we use three previously proven assertions. Namely, * The theorem in [3] states that when \(\mathfrak{D}\) is a discovery algorithm with perfect fitness, Algorithm \(\mathfrak{A}_{0}(\mathfrak{D})\) discovers a high-level WF-net whose refinement perfectly fits the input log without repetitions (i. e., the log of an acyclic process). * In [26] it is proved that, given a log \(L\), Algorithm \(\mathfrak{B}\) correctly finds in \(L\) all repetitive components that correspond to supports of T-invariants in the Petri net model for \(L\). * Proposition 1 in Subsection 4.1 justifies the correctness of refining a high-level WF-net by substituting sub-process modules for high-level transitions. With all this, proving the Theorem is straightforward, though quite technical. So, we informally describe the logic of the proof here. Let Algorithm \(\mathfrak{D}\) be a discovery algorithm which discovers a perfectly fitting WF-net for a given event log. At each iteration of Step 1, an inner elementary repetitive component in the log is discovered using Algorithm \(\mathfrak{B}\). The activities of this component are the activities of an inner loop body, which itself does not have repetitions. Then a WF-net \(N\) for this loop body is correctly discovered using Algorithm \(\mathfrak{A}_{0}(\mathfrak{D})\), the loop itself is folded into one high-level activity \(\beta\), and \(N\) is kept as the value \(\ell_{B}(\beta)\).
WF-nets for the sub-processes within the body of this loop are also discovered by Algorithm \(\mathfrak{A}_{0}(\mathfrak{D})\) and accumulated in \(\ell\). If the loop activity \(\beta\) is itself within an upper loop body, then with one more iteration of Step 1, the upper loop \(N^{\prime}\) is discovered, the transition labeled with \(\beta\) in it is replaced with \(N\), and \(N^{\prime}\) is itself folded into a new activity. After processing all loops, the Algorithm proceeds to Step 2, where, after reducing all loops to high-level activities, Algorithm \(\mathfrak{A}_{0}(\mathfrak{D})\) is applied to a log without repetitions. In Step 3 all transitions labeled with loop activities in the high-level and low-level WF-nets are replaced by the WF-nets for these loops, kept in \(\ell_{B}\). So, we can see that, while Algorithm \(\mathfrak{A}_{0}(\mathfrak{D})\) ensures perfect fitness for the acyclic fragments of the model (when loops are folded to transitions), Algorithm \(\mathfrak{B}\) ensures correct processing of cyclic behavior, and Proposition 1 guarantees that replacing loop activities by the corresponding loop WF-nets does not violate fitness, the main algorithm provides systematic log processing and model construction. ## 6 Experimental Evaluation In this section, we report the main outcomes from a series of experiments conducted to evaluate the algorithm for discovering two-level hierarchical process models from event logs. To support the automated experimental evaluation, we implemented the hierarchical process discovery algorithm described in the previous section using the open-source library PM4Py [27]. The source files of our implementation are published in the open GitHub repository [28]. We conducted experiments using two kinds of event logs: 1. _Artificial_ event logs generated by manually prepared process models; 2. _Real-life_ event logs provided by various information systems. Event logs are encoded in a standard way as XML-based XES files. _Conformance checking_ is an important part of process mining along with process discovery [29]. The main aim of conformance checking is to evaluate the quality of a process discovery algorithm by estimating the corresponding quality of the discovered process models. Conformance checking provides four main quality dimensions. _Fitness_ estimates the extent to which a discovered process model can execute the traces in an event log. A model perfectly fits an event log if it can execute all traces in the event log. According to Theorem 1, the hierarchical process discovery algorithm yields perfectly fitting process models. _Precision_ evaluates the ratio of behavior allowed by a process model but not recorded in an event log. A model with perfect precision can only execute the traces in the event log. Perfect precision limits the use of a process model, since an event log represents only a finite "snapshot" of all possible process executions. Generalization and precision are two dual metrics. The fourth metric, _simplicity_, captures the structural complexity of a discovered model. We improve simplicity by the two-level structure of the discovered process models. Within the experimental evaluation, we estimated the fitness and precision of process models discovered from artificially generated and real-life event logs. Fitness was estimated using alignments between a process model and an event log, as defined in [30]. Precision was estimated using the complex ETC-align measures proposed in [31]. Both measures are values in the interval \([0,1]\).
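For orientation, the basic conformance-checking setup of this kind can be reproduced with PM4Py's simplified top-level API. The sketch below is an illustrative example assuming a recent PM4Py release and a hypothetical XES file name; the function names come from PM4Py's public interface and are not necessarily the ones used in the authors' repository [28].

```python
import pm4py

# read an event log from a hypothetical XES file
log = pm4py.read_xes("artificial_low_level_log.xes")

# discover a flat WF-net with the Inductive Miner
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)

# alignment-based fitness [30] and alignment/ETC-based precision [31]
print(pm4py.fitness_alignments(log, net, initial_marking, final_marking))    # dict of fitness statistics
print(pm4py.precision_alignments(log, net, initial_marking, final_marking))  # a value in [0, 1]

# a sub-process model N_i can be discovered from the projected log L|A_i,
# here with a hypothetical activity subset A_i
subprocess_activities = ["register request", "check ticket"]
sub_log = pm4py.filter_event_attribute_values(
    log, "concept:name", subprocess_activities, level="event", retain=True)
sub_net, sub_im, sub_fm = pm4py.discover_petri_net_inductive(sub_log)
```

Projecting the log per activity subset and discovering each sub-process separately mirrors the construction of the low-level WF-nets \(N_{1},\ldots,N_{k}\) described in Section 4.3.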
### Discovering HWF-Nets from Artificial Event Logs The high-level source for generating artificial low-level event logs was the Petri net shown earlier in Figure 1. In this model, we refined its transitions with different sub-processes containing sequential, parallel and cyclic executions of low-level events. The example of refining the Petri net from Figure 1 is shown in Figure 5, where we show the corresponding flat representation of an HWF-net. Figure 5: A flat WF-net generated by refining the WF-net in Fig. 1. Generation of low-level event logs from the prepared model was implemented using the algorithm presented in [32]. Afterwards, we transform a low-level event log into a high-level event log by grouping low-level events into a single high-level event and by extracting the information about cyclic behavior. The corresponding high-level WF-net discovered from the artificial low-level event log generated from the WF-net shown in Figure 5 is provided in Figure 6. Figure 6: A high-level WF-net discovered from an event log generated by refining the WF-net in Fig. 5. Intuitively, one can see that this high-level WF-net is rather similar to the original model in Figure 1. As for the quality evaluation of the above presented high-level model, we have the following: 1. The discovered high-level WF-net perfectly fits the high-level event log obtained from the low-level log, where we identified cycles and grouped activities correspondingly. 2. The flat WF-net obtained by refining transitions in the discovered high-level WF-net almost perfectly fits (0.9534) the low-level log. The main reason for this is the combination of the lack of coverability and the straightforward accuracy indicators of the algorithm. 3. The precision of the flat WF-net is 0.3729, which is the result of identifying independent sub-processes generalizing the behavior recorded in the initial low-level event log. Other examples of process models that were used for the artificial event log generation are also provided in the main repository [28]. ### Discovering HWF-Nets from Real-Life Event Logs We used two real-life event logs provided by the _BPI_ (Business Process Intelligence) _Challenge_ 2015 and 2017 [33]. These event logs were also enriched with additional statistical information about the flat process models. The _BPI Challenge 2015_ event log was provided by five Dutch municipalities. The cases in this event log contain information on the main application and objection procedures in various stages. A flat low-level WF-net for case \(f1\) discovered using the Inductive Miner is shown in Figure 7. It is easy to see that this model is entirely unsuitable for visual analysis. The code of each event in the _BPI Challenge 2015_ event log consists of three parts: two digits, a variable number of characters, and three more digits. From the event log description we know that the first two digits and the characters indicate the sub-process the event belongs to, which gives us a natural option for identifying the sub-processes. We used the first two parts of the event name to create the mapping between low-level events and sub-process names (an illustrative sketch of this mapping is given below). After applying our hierarchical process discovery algorithm in combination with the Inductive Miner, we obtained the high-level model presented in Figure 8, which is far more comprehensible than the flat model, mainly because of its size.
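For illustration, the event-name-based mapping for the BPI Challenge 2015 log could be implemented along the following lines. The regular expression and the sample event code are assumptions about the three-part naming scheme described above, not part of the published implementation.

```python
import re

# two digits, a variable character block, and three more digits (assumed format)
EVENT_CODE = re.compile(r"^(\d{2})(.+?)(\d{3})$")

def subprocess_name(activity: str) -> str:
    """Map a low-level event name to its sub-process name (the first two parts of the code)."""
    match = EVENT_CODE.match(activity)
    if match is None:
        return activity                      # fall back to the raw activity name
    return (match.group(1) + match.group(2)).rstrip("_")

print(subprocess_name("01_HOOFD_010"))       # -> "01_HOOFD" (hypothetical event code)
```

Such a mapping induces the partition \(A=A_{1}\cup\cdots\cup A_{k}\) required by the discovery algorithm: every distinct sub-process name defines one activity subset \(A_{i}\).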
The _BPI Challenge 2017_ event log pertains to a loan application process of a Dutch financial institute. The data contains all applications filed through an online system from 2016 until February 2017. Here, as a basis for mapping low-level events to sub-process names, we used the mark of the event type in its name -- application, offer or workflow. Thus, a mapping can be based on various features of the event data, depending on the expert's needs. The flat model for this data is presented in Figure 9, which is also difficult to read because of its purely sequential representation. Figure 7: A flat WF-net discovered from the ’BPI Challenge 2015’ event log. Figure 8: A high-level WF-net discovered from the ’BPI Challenge 2015’ event log. Figure 9: A flat WF-net discovered from the ’BPI Challenge 2017’ event log. Applying the principle of mapping low-level events in the _BPI Challenge 2017_ event log described above, we obtained the high-level WF-net shown in Fig. 10, which clearly demonstrates the sub-processes (if necessary, they can be expanded) and their order. Figure 10: A high-level WF-net discovered from the ’BPI Challenge 2017’ event log. Table 1 shows the fitness and precision evaluation for the flat and high-level WF-nets discovered from the real-life _BPI Challenge 2015_ and _2017_ event logs. _Fitness 1_ shows the fitness evaluation between the flat WF-net, constructed from the high-level WF-net by refining transitions with low-level sub-processes, and the initial low-level event log. _Fitness 2_ shows the fitness evaluation between the high-level WF-net and an event log with low-level events grouped into sub-processes; it equals 1 for both logs, which confirms the formal correctness results of the hierarchical process discovery algorithm. Similar to the experimental results for artificial event logs, here we also observe a decrease in precision caused by the identification of independent sub-processes, which generalizes the traces in the initial low-level event log. \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline \multirow{2}{*}{Event log} & \multicolumn{3}{c|}{High-level WF-net} & \multicolumn{2}{c|}{Flat WF-net} \\ \cline{2-6} & Fitness 1 & Fitness 2 & Precision & Fitness & Precision \\ \hline BPI Challenge 2015 & 0.9122 & 1 & 0.5835 & 0.9700 & 0.5700 \\ \hline BPI Challenge 2017 & 0.9393 & 1 & 0.3898 & 0.9800 & 0.7000 \\ \hline \end{tabular} \end{table} Table 1: Comparing metrics for flat and high-level WF-nets discovered from ’BPI Challenge’ event logs ## 7 Conclusion and Future Work In this research, we provide a new process discovery technique for solving the problem of discovering a hierarchical WF-net model from a low-level event log, based on "folding" sub-processes into high-level transitions according to event clustering. Unlike the previous solutions, we allow cycles and concurrency in process behavior. We prove that the proposed technique makes it possible to obtain hierarchical models which fit event logs perfectly. The technique was also evaluated on artificial and real event logs. Experiments show that the fitness and precision of the obtained hierarchical models are almost the same as in the standard "flat" case, while the hierarchical models are much more compact, more readable and easier to inspect visually. To implement our algorithm and check it on real data, we used _Python_ and one of the most convenient instruments for process mining at the moment -- _PM4Py_ [27]. The implementation is provided in the public _GitHub_ repository [28]. In further research, we plan to develop and evaluate various event clustering methods for automatic discovery of hierarchical models.
2310.13409
Explicit Alignment and Many-to-many Entailment Based Reasoning for Conversational Machine Reading
Conversational Machine Reading (CMR) requires answering a user's initial question through multi-turn dialogue interactions based on a given document. Although there exist many effective methods, they have largely neglected the alignment between the document and the user-provided information, which significantly affects the intermediate decision-making and subsequent follow-up question generation. To address this issue, we propose a pipeline framework that (1) aligns the aforementioned two sides in an explicit way, (2) makes decisions using a lightweight many-to-many entailment reasoning module, and (3) directly generates follow-up questions based on the document and previously asked questions. Our proposed method achieves state-of-the-art results in micro-accuracy and ranks first on the public leaderboard of the CMR benchmark dataset ShARC.
Yangyang Luo, Shiyu Tian, Caixia Yuan, Xiaojie Wang
2023-10-20T10:27:24Z
http://arxiv.org/abs/2310.13409v1
# Explicit Alignment and Many-to-many Entailment Based Reasoning for Conversational Machine Reading ###### Abstract Conversational Machine Reading (CMR) requires answering a user's initial question through multi-turn dialogue interactions based on a given document. Although there exist many effective methods, they have largely neglected the alignment between the _document_ and the _user-provided information_, which significantly affects the intermediate decision-making and subsequent follow-up question generation. To address this issue, we propose a pipeline framework that (1) aligns the aforementioned two sides in an explicit way, (2) makes decisions using a lightweight many-to-many entailment reasoning module, and (3) directly generates follow-up questions based on the document and previously asked questions. Our proposed method achieves state-of-the-art results in micro-accuracy and ranks first on the public leaderboard1 of the CMR benchmark dataset ShARC. Footnote 1: [https://sharc-data.github.io/leaderboard.html](https://sharc-data.github.io/leaderboard.html) ## 1 Introduction The Conversational Machine Reading (CMR) task Saeidi et al. (2018) requires an agent to answer an initial question from users through multi-turn dialogue interactions based on a given document. As shown in Figure 1, a typical process involves two steps: (1) the agent first makes a decision classification among IRRELEVANT, YES, NO and MORE; (2) if the decision is MORE, the agent generates a question to clarify an unmentioned condition in the given document, otherwise it responds directly. Recent research Verma et al. (2020); Lawrence et al. (2019); Zhong and Zettlemoyer (2019); Gao et al. (2020); Guo et al. (2020); Ouyang et al. (2021); Zhang et al. (2022) has explored how to improve the abilities of _decision-making_ and _question generation_. For _decision-making_, one common approach first segments the document into many text spans at different granularity levels (e.g., sentences or Elementary Discourse Units (EDUs)). Then complex modules are adopted to predict the entailment state for each document span based on the user scenario and previous dialogue history (both are user-provided information). Finally, decisions are made based on the entailment states of all document spans. One effective module for predicting entailment states is the transformer block Vaswani et al. (2017), which is widely adopted Gao et al. (2020); Ouyang et al. (2021); Zhang et al. (2022). However, the aforementioned approach has overlooked the explicit alignment between the document and the user-provided information, such as the text spans marked with the same color in Figure 1. Since not all user-provided information is relevant to a particular document span, the lack of explicit alignment leads to sparse attention and introduces noise that affects the prediction of the entailment state. Figure 1: An example of Conversational Machine Reading from the ShARC dataset Saeidi et al. (2018).
The extract-then-rewrite method relies heavily on extracted spans, the failure to properly extract an unmentioned span results in generating a redundant or even irrelevant follow-up question. To address these issues, we propose a pipeline approach consisting of a reasoning model based on **B**ipartite **A**lignment and many-to-many **E**ntailment (BiAE) for _decision-making_, and a directly fine-tuned model for _question generation_2. Our approach (1) explicitly aligns the document and the user-provided information by introducing supervision from an external model, (2) uses a lightweight core decision module with only linear layers, which predicts many-to-many entailment states using aligned information and feature vectors, (3) directly uses the whole document and previously asked questions to generate follow-up questions without extracting underspecified document spans. Through extensive experiments on the CMR benchmark dataset ShARC (Saeidi et al., 2018), we demonstrate that BiAE significantly outperforms baselines with lightweight decision modules by at least 12.7% in micro accuracy and the finetuned model outperforms all baselines using extract-then-rewrite generation method. Our contributions can be summarized as follows: Footnote 2: [https://github.com/AidenYo/BiAE](https://github.com/AidenYo/BiAE) * We propose a method for constructing bipartite connection, which provides explicit alignment for document and user provided information. * We propose the BiAE model which utilizes a lightweight module to make decisions by introducing explicit alignment, and a direct method which generates questions using the document and previously asked questions. * Our approach ranks the first place on the leaderboard of ShARC and outperforms the previous SOTA method on key metrics, with a significant reduction in decision module parameters. ## 2 Related Work Machine Reading Comprehension (MRC) is a classical and fruitful research field with various tasks and focuses, such as extractive tasks (Rajpurkar et al., 2016; Trischler et al., 2017; Joshi et al., 2017; Saha et al., 2018), cloze-style and multiple choice tasks (Xie et al., 2018; Hermann et al., 2015; Richardson et al., 2013; Lai et al., 2017; Onishi et al., 2016), multi-document tasks (Feng et al., 2021; Nguyen et al., 2016; Qiu et al., 2022; Dhingra et al., 2017). Among them, we focus on Conversational Machine Reading (CMR) (Saeidi et al., 2018), which is a critical but more challenging task: (1) it requires determining complex intermediate states, such as whether the document is relevant to user's query, or whether it is necessary to make clarification before answering; (2) it requires multiple interactions with the user through dialogue in order to output final answers; and (3) the document that the agent has to consult about usually has complicated discourse structures describing multiple rules and constraints. Due to the characteristic of determining the state before responding, the pipeline method consisting of _decision-making_ and _question generation_ is more suitable for this task, which is adopted by most existing methods and achieves great success (Zhong and Zettlemoyer, 2019; Gao et al., 2020; Gao et al., 2020; Ouyang et al., 2021). In order to improve the ability of _decision-making_, Ouyang et al. (2021) focus on improving the representation of the document and use relational GCNs (Schlichtkrull et al., 2018) to construct the discourse relations of the document. 
Other works focus on reasoning about the entailment state of document rules, which is highly relevant to Recognizing Textual Entailment (RTE) (Bowman et al., 2015; Mou et al., 2016; Zhang et al., 2020; Wang et al., 2021). To do this, Gao et al. (2020) modify a Recurrent Entity Network (Henaff et al., 2017), Gao et al. (2020) use a Transformer encoder, and Zhang et al. (2022) use a T5 decoder. To improve the ability of _question generation_, existing works (Zhong and Zettlemoyer, 2019; Gao et al., 2020; Ouyang et al., 2021) extract a span and then rewrite it into a follow-up question, which relies heavily on the quality of the extraction. In comparison with these works, our work focuses on the explicit alignment of information from both the document and the user, and employs a simpler entailment reasoning structure. We then adopt a new approach to directly generate follow-up questions based on the document and the questions already asked. ## 3 Methodology The CMR task can be formulated as follows: given the input \(X=(D,Q,S,H)\), where \(D\) is the document, \(Q\) is the user's initial question, \(S\) is the user's scenario, and \(H=(f_{1},a_{1}),\cdots,(f_{n_{H}},a_{n_{H}})\) is the dialogue history, in which \(f_{i}\) is a follow-up question that was already asked and \(a_{i}\in\{\text{YES},\text{NO}\}\) is the corresponding answer, a CMR system \(G\) makes a response \(Y=G(X)\). We propose a classification model based on Bipartite Alignment and Entailment (BiAE) for intermediate _decision-making_. If the decision is IRRELEVANT, YES or NO, the system provides a direct response. If the decision is MORE, a fine-tuned model is used for _question generation_. The overall architecture of classification and generation is displayed in Figure 2. ### Segmentation and Encoding Assuming that the document is a set of hypotheses and the user-provided information is a set of premises, a segmentation step is taken first to construct the hypothesis and premise sets before encoding. We not only segment documents Gao et al. (2020), but also explicitly segment the user-provided information. Figure 3 shows an example of how both parts in Figure 1 are segmented. **Segmentation.** Following Gao et al. (2020), we use SegBot Li et al. (2018) to divide the document into several Elementary Discourse Units (EDUs), with each EDU containing exactly one condition. Suppose a document can be divided into \(m\) EDUs; these EDUs constitute a hypothesis set \(\mathbf{D}\). \(D\rightarrow\mathbf{D}:\mathbf{D_{1}},\cdots,\mathbf{D_{m}}\). We divide the scenario into individual sentences using NLTK3. \(S\rightarrow\mathbf{S}:\mathbf{S_{1}},\mathbf{S_{2}},\cdots\). We concatenate the follow-up question and answer in a dialogue turn and add the roles in the conversation to form a premise provided by the user. \(H\rightarrow\mathbf{T}:\mathbf{T_{1}},\mathbf{T_{2}},\cdots\), where \(\mathbf{T_{i}}=\textit{``System:''}\ f_{i}\ \textit{``Client:''}\ a_{i}\). The two parts combined form the premise set \(\mathbf{U}=\mathbf{S};\mathbf{T}\) with a total number of \(n\) premises. Footnote 3: [https://www.nltk.org](https://www.nltk.org) **Encoding.** As shown in Figure 2.1(a), we use a pre-trained language model (PLM) to encode the hypothesis set \(\mathbf{D}\), the initial question \(U_{q}\), and the premise set \(\mathbf{U}\). We insert a special token \([H]\) before each hypothesis \(\mathbf{D_{i}}\) and \([CLS]\) before both the initial question and each premise \(\mathbf{U_{i}}\), and separate the two parts by \([SEP]\), resulting in the PLM input \(X\) of length \(L\).
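Concretely, this input construction might be assembled as follows. The sketch is a schematic reconstruction assuming a Hugging Face tokenizer for DeBERTaV3; the literal marker handling, the helper function and the toy example are illustrative assumptions, not the authors' released code.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
# register [H] as an extra special token; the encoder's embedding matrix would need
# to be resized accordingly, e.g. model.resize_token_embeddings(len(tokenizer))
tokenizer.add_special_tokens({"additional_special_tokens": ["[H]"]})

def build_input(initial_question, premises, hypotheses):
    # user side: [CLS] before the initial question and before every premise U_i
    user_part = " ".join(["[CLS] " + initial_question] + ["[CLS] " + p for p in premises])
    # document side: [H] before every hypothesis (EDU) D_i
    doc_part = " ".join("[H] " + h for h in hypotheses)
    text = user_part + " [SEP] " + doc_part
    enc = tokenizer(text, add_special_tokens=False, return_tensors="pt")

    ids = enc["input_ids"][0].tolist()
    h_id = tokenizer.convert_tokens_to_ids("[H]")
    cls_id = tokenizer.cls_token_id
    premise_positions = [i for i, tok in enumerate(ids) if tok == cls_id]    # u_q, u_1, ...
    hypothesis_positions = [i for i, tok in enumerate(ids) if tok == h_id]   # d_1, ..., d_m
    return enc, premise_positions, hypothesis_positions

enc, prem_pos, hyp_pos = build_input(
    "Can I take Statutory Maternity Leave?",
    ["System: Do you give your employer the correct notice? Client: Yes"],
    ["You qualify for Statutory Maternity Leave if", "you give your employer the correct notice"],
)
```

The recorded positions are exactly the indices whose hidden states are selected as the hypothesis, question and premise representations in the encoding step that follows.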
The encoding of the input sequence is \[encoding=PLM(X)\in\mathbb{R}^{L\times d} \tag{1}\] where \(d\) is the dimension of the PLM hidden states. The representation of each hypothesis \(\mathbf{d_{i}}\), the initial question \(\mathbf{u_{q}}\) and each premise \(\mathbf{u_{i}}\) is determined by selecting the vectors of the special tokens \([H]\) and \([CLS]\) in \(encoding\). More specifically, \[\mathbf{d_{i}}=Select(encoding,Index(\mathbf{D_{i}}))\in\mathbb{R}^{d}, \tag{2}\] \[\mathbf{u_{i}}=Select(encoding,Index(\mathbf{U_{i}}))\in\mathbb{R}^{d}, \tag{3}\] where \(Select(encoding,i)\) denotes selecting the hidden state at index \(i\) from \(encoding\), and \(Index(\cdot)\) denotes the index of \(\cdot\) in the input sequence \(X\). We use DeBERTaV3 He et al. (2021) as the PLM. Figure 2: The overall architecture of our proposed method. Our system consists of: (1) a decision classification model based on Bipartite Alignment and Entailment (BiAE) and (2) a follow-up question generation model. It is noteworthy that only when the classification result of BiAE is MORE will the question generation model be activated. _cat\(\&att\)_ in 1(d) is the operation of concatenation and attention. ### Explicit Alignment The objective of explicit alignment is to align a document hypothesis \(\mathbf{d_{i}}\) that describes a certain condition to a premise \(\mathbf{u_{j}}\) provided by the user. We calculate the unnormalized alignment matrix \(\hat{A}\) for each hypothesis-premise pair \((\mathbf{d_{i}},\mathbf{u_{j}})\) by the following formula: \[\hat{A}_{ij}=\mathbf{w_{A}}[\mathbf{d_{i}};\mathbf{u_{j}}]^{T}+\mathbf{b_{A}}\in\mathbb{R}, \tag{4}\] where \(\mathbf{w_{A}}\) and \(\mathbf{b_{A}}\) are the parameters of a linear layer, and \(\hat{A}_{ij}\) is the hidden value of each element in the alignment matrix. Then we use the softmax function for each row to get the final alignment score matrix \(A\), as shown in Figure 2.1(b), \[A_{i}=softmax(\hat{A}_{i})\in\mathbb{R}^{n}, \tag{5}\] where each element \(A_{ij}\in[0,1]\). We use contrastive learning to train the bipartite alignment, and the loss can be formulated as \[\mathcal{L}_{align}=\sum_{i=1}^{m}H(A_{i},l_{i}^{align}), \tag{6}\] where \(H(p,q)\) represents the cross-entropy function and \(l_{i}^{align}\) is the weakly supervised alignment label. In order to construct \(l_{i}^{align}\), we use Sentence-BERT Reimers and Gurevych (2019) to compute the semantic similarity between the user-provided premise set and the document hypothesis set offline. Specifically, we calculate the cosine similarity between sentence vector pairs and select the hypothesis with the maximal cosine similarity as the alignment label for each user premise.4 Footnote 4: This method shows 92% consistency with manual selection on a subset of 100 randomly selected samples. ### Many-to-many Entailment The textual entailment task involves inferring the relationship of a hypothesis-premise pair, which is generally classified into three categories: entailment, contradiction, and neutral MacCartney and Manning (2008). Entailment refers to the case where the hypothesis can be inferred from the premise; taking Figure 1 as an example, the user's premise _"I'm still working right now and I just turned in the notice."_ entails the document hypothesis _"(You qualify for Statutory Maternity Leave if) you give your employer the correct notice"_. Contradiction represents the case where the hypothesis contradicts the premise, while neutral indicates that the relationship of the hypothesis-premise pair is unknown or irrelevant.
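As a concrete illustration, the alignment scoring of Eqs. (4)-(6) above amounts to a few PyTorch operations. The sketch below is an illustrative reconstruction (tensor shapes, variable names and the toy inputs are assumptions), not the authors' released implementation; the weak labels \(l_{i}^{align}\) are assumed to be precomputed offline as described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BipartiteAlignment(nn.Module):
    """Scores every hypothesis-premise pair with a single linear layer (Eq. 4)."""
    def __init__(self, d):
        super().__init__()
        self.scorer = nn.Linear(2 * d, 1)           # parameters w_A, b_A

    def forward(self, hyp, prem):
        # hyp: (m, d) hypothesis vectors d_i; prem: (n, d) premise vectors u_j
        m, n = hyp.size(0), prem.size(0)
        pairs = torch.cat(
            [hyp.unsqueeze(1).expand(m, n, -1), prem.unsqueeze(0).expand(m, n, -1)],
            dim=-1)                                  # (m, n, 2d) concatenations [d_i; u_j]
        return self.scorer(pairs).squeeze(-1)        # (m, n) unnormalized scores

d, m, n = 768, 4, 3
hyp, prem = torch.randn(m, d), torch.randn(n, d)     # stand-ins for encoder outputs
weak_labels = torch.tensor([0, 2, 1, 0])             # premise index aligned to each hypothesis

align = BipartiteAlignment(d)
a_hat = align(hyp, prem)
A = F.softmax(a_hat, dim=-1)                                          # row-wise softmax, Eq. (5)
loss_align = F.cross_entropy(a_hat, weak_labels, reduction="sum")     # Eq. (6)
```

Because the scorer is a single linear layer over concatenated vectors, the alignment component adds only a handful of parameters on top of the encoder, which is consistent with the lightweight design of the decision module.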
Inspired by Mou et al. (2016), we adopt four simple yet effective features to predict the entailment states as shown in Figure 2.1(c). Specifically, we initialize three learnable vectors \(\mathbf{e}_{E},\mathbf{e}_{C},\mathbf{e}_{N}\in\mathbb{R}^{d}\) to represent the three entailment states, use four well-designed features to predict the probabilities of the three states, and represent the entailment state of a hypothesis-premise pair as a probabilistic weighted sum of the three vectors. This process can be expressed as \[\hat{E_{ij}} =\mathbf{W_{E}}[\mathbf{d_{i}};\mathbf{u_{j}};\mathbf{d_{i}}- \mathbf{u_{j}};\mathbf{d_{i}}\circ\mathbf{u_{j}}]^{T}+\mathbf{b_{E}}, \tag{7}\] \[E_{ij} =softmax(\hat{E_{ij}})\in\mathbb{R}^{3}, \tag{8}\] where \(\circ\) denotes element-wise product, \(\hat{E_{ij}}\in\mathbb{R}^{3}\) is the logits of three states, and \(E_{ij}=[E_{ij}^{(E)},E_{ij}^{(C)},E_{ij}^{(N)}]\) is their probabilities after softmax. The final state vector for a single hypothesis across all premises weighted by alignment scores is represented as \[\mathbf{e_{i}}=\sum_{j=1}^{n}A_{ij}\sum_{K\in\{E,C,N\}}E_{ij}^{(K)}\mathbf{e}_ {K}\in\mathbb{R}^{d}. \tag{9}\] The expression for the entailment loss is \[\mathcal{L}_{entail}=\sum_{(i,j)\in\mathcal{P}}H(E_{ij},l_{ij}^{entail}), \tag{10}\] where \(\mathcal{P}\) is the set of premise-hypothesis pairs, \(l_{ij}^{entail}\) denotes the weakly supervised entailment label. We adopt the three-state label proposed by Gao et al. (2020) to make such supervision. Figure 3: An example of document, scenario and conversation history segmentation. ### Decision Classification The decision unit in Figure 2.1(d) integrates all semantic vectors and all entailment states of the hypothesis set to obtain a holistic representation \(\mathbf{s}\) of the entire document, using the attention mechanism, \[\hat{a_{i}} =\mathbf{w_{a}}[\mathbf{d_{i}};\mathbf{e_{i}}]^{T}+\mathbf{b_{a}} \in\mathbb{R}, \tag{11}\] \[a =softmax(\hat{a})\in\mathbb{R}^{m},a_{i}\in[0,1],\] (12) \[\mathbf{s} =\sum_{i=1}^{m}a_{i}[\mathbf{d_{i}};\mathbf{e_{i}}]\in\mathbb{R} ^{2d}. \tag{13}\] Subsequently, the representation \(\mathbf{s}\) is employed to generate the probabilities \(p\) of four aforementioned decision categories together with the semantic representation of initial question \(\mathbf{u_{q}}\). And the corresponding decision loss is \[p =\mathbf{W_{D}}[\mathbf{u_{q}};\mathbf{s}]^{T}+\mathbf{b_{D}}\in \mathbb{R}^{4}, \tag{14}\] \[L_{dec} =H(softmax(p),l^{d}), \tag{15}\] where \(l^{d}\) is the true decision label. Furthermore, bipartite alignment and many-to-many entailment are employed to augment the _decision-making_ process, and a joint loss function is introduced incorporated with a weight parameter \(\lambda\), \[\mathcal{L}=\lambda\mathcal{L}_{dec}+\mathcal{L}_{align}+\mathcal{L}_{ entail}. \tag{16}\] ### Question Generation If the predicted decision is MORE, the system is required to propose a follow-up question to obtain new premises for clarification and continuing the reasoning process. Although it is intuitive to extract a hypothesis with a neutral entailment state and then rewrite it into a clarification question, the _question generation_ process heavily depends on the extracted hypothesis. Current language models, such as T5 (Raffel et al., 2020) and BART (Lewis et al., 2020), have strong generative capabilities. 
Hence, we directly fine-tune T5 with the entire document \(D\) and the sequence of previously asked questions \(F=f_{1},f_{2},\cdots\), while treating the ground-truth follow-up question as the generation target. We use the generation loss implemented in Raffel et al. (2020) for training. We also perform data augmentation to alleviate data sparsity. Specifically, we reduce the dialogue history by one turn to construct \(F\) for the data with decision labels other than MORE, and use the question in the last turn as the target question to be generated. ## 4 Experiments ### Dataset and Metrics **Dataset.** Our experiments are carried out on the CMR benchmark dataset ShARC (Saeidi et al., 2018), which was crawled from government legal documents across 10 unique domains. This dataset comprises 35% bullet point documents (e.g. the example shown in Figure 1), while the rest are regular documents. The dialogues are constructed based on an annotation protocol (Saeidi et al., 2018) in the form of question-answer pairs with (or not) an extra scenario. The sizes of the train, development, and test sets are 21,890, 2,270, and 8,276, respectively. The test set is withheld and not publicly available. **Metrics.** For _decision-making_, Micro and Macro Accuracy are used for evaluation, whereas BLEU (Papineni et al., 2002) is used for evaluating _question generation_. ### Baselines (1) **Baseline-NMT**(Saeidi et al., 2018) is an end-to-end NMT-copy model based on LSTM and GRU. (2) **Baseline-CM**(Saeidi et al., 2018) is a pipeline combined model using Random Forest, Surface Logistic Regression and rule-based generation. (3) **BERTQA**(Zhong and Zettlemoyer, 2019) is an extractive QA model. (4) **UracNet**(Verma et al., 2020) uses artificially designed heuristic-based patterns. (5) **BiSon**(Lawrence et al., 2019) utilizes placeholders for bidirectional generation rather than autoregressive unidirectional generation. (6) \(\mathbf{E}^{3}\)(Zhong and Zettlemoyer, 2019) performs rule extraction from documents, rule entailment from user information, and rule editing into follow-up questions jointly. (7) **EMT**(Gao et al., 2020) uses a gated recurrent network with augmented memory that updates rule entailment state for _decision-making_ by sequentially reading user information. (8) **DISCERN**(Gao et al., 2020) subdivides a document into fine-grained EDUs and employs an inter-sentence transformer encoder for entailment prediction. (9) **DGM**(Ouyang et al., 2021) primarily employs relational GCNs (Schlichtkrull et al., 2018) to model the rhetorical structure of documents for _decision-making_. (10) **ET5**(Zhang et al., 2022) proposes an end-to-end generation approach with duplex decoders and a shared encoder based on T5. Baselines (8), (9) and (10) use heavyweight modules for core _decision-making_. Please refer to Ap pendix A for implementation details. ## 5 Results and Analysis ### Main Results We report the results of BiAE and baselines on the blind held-out test set of the ShARC dataset in Table 1. BiAE significantly outperforms the baselines with lightweight decision modules, with at least a \(12.7\%\) improvement in Micro Accuracy and an \(8.7\%\) improvement in Macro Accuracy. Compared to baselines with heavyweight decision modules, BiAE achieves comparable results while greatly reducing the parameters from \(27M\) (decision module of DGM) to only \(31.7K\). Moreover, BiAE achieves state-of-the-art performance in terms of Micro Accuracy. 
For generation, T5BiaE outperforms all the methods using span extraction. Note that the generation metrics are only calculated when both the classification decision and the true label is MORE. Therefore, since the test sets differ among the methods used to calculate generation scores, the results are not strictly comparable. To compare the _decision-making_ ability of different methods using base and large pre-trained language models fairly, we also report the results on the development set in Table 1. Regardless of whether based on a base or large pre-trained language model, BiAE significantly outperforms the baselines with lightweight decision modules, meanwhile, it achieves higher Micro Accuracy of \(1.0\) (base) and \(1.9\) (large) and Macro Accuracy of \(0.7\) (base) and \(1.0\) (large) than the strong baseline DGM. We also report class-wise Micro Accuracy of BiAE and several baseline models on four different categories in Appendix B. BiAE greatly improves the abilities of deterministic _decision-making_ (_YES_ and _NO_). C for the prompt template we used and some output cases. ### Ablation Study **Effect of Alignment and Entailment.** To explore the effects of bipartite alignment and many-to-many entailment on _decision-making_, we conduct ablation experiments based on DeBERTaV3-base on the development set, as shown in Table 3. The results indicate that both alignment and entailment have an impact on _decision-making_ and the impact is greater when both are considered together. Simultaneously removing both alignment and entailment losses leads to a decrease of \(3.6\) points in Micro Accuracy and \(3.4\) points in Macro Accuracy. Furthermore, we also conduct ablation experiments on the encoders, the results based on ELECTRA-base exhibits a slight decrement when compared to those based on DeBERTaV3-base. This demonstrates that the improvement in reasoning performance is minimally influenced by encoders and mainly comes from our incorporation of explicit alignment and the modeling of many-to-many entailment in the core decision module. **Effect of Data Augmentation for Generation.** Only the data with the decision label MORE requires generating follow-up questions, accounting for \(31.08\%\) of the training set. We perform data augmentation to address this data sparsity issue and organize ablation experiments, results are shown in Table 4. Data augmentation improves all generation metrics, with increases of 1.4, 1.7, 1.9 and 2.0 for BLEU1-4, respectively. Appendix D is a simple case study of generation. ### Interpretation of Document and User Information To investigate the comprehension ability of BiAE for document and user information, we divide the development set into six subsets based on whether the documents contain bullet points and whether the user information includes scenario or conversation history. The sizes of each subset are shown in Table 5. We calculate the Micro and Macro Accuracy of the strong baseline DGM and BiAE on different subsets, as shown in Figure 4(a). The results show that BiAE performs better on all subsets. Understanding bullet point documents is more challenging than regular documents, but BiAE reduces the gap by improving the Micro Accuracy of bullet point documents by \(3.3\) and regular documents by \(2.2\). Understanding scenarios is still a significant challenge (Saeidi et al., 2018; Gao et al., 2020; Ouyang et al., 2021) because subsets without scenarios have significantly higher accuracy than those with scenarios. 
BiAE achieves the most significant improvement (\(+3.9\)) on subsets containing history. These performance improvements are likely due to our splitting of user information and the explicit alignment between documents and user information. ### Many-to-many Entailment In BiAE, the final decision is based on: the encoding of user initial question, the encoding and the final entailment state of each hypothesis in a document. To investigate the holistic textual entailment of the document hypothesis set on the final \begin{table} \begin{tabular}{l c c c c c} \hline \hline **State** & \(\beta\) & \(\bar{\alpha}\) & \(\sigma_{\alpha}^{2}\) & \(Q_{1}^{\alpha}\) & \(Q_{2}^{\alpha}\) & \(Q_{3}^{\alpha}\) \\ \hline **Success** & 0.45 & 0.75 & 0.29 & 0.50 & 0.80 & 1.00 \\ **Fail** & 0.24 & 0.72 & 0.24 & 0.60 & 0.75 & 0.89 \\ \hline \hline \end{tabular} \end{table} Table 6: \(\beta\) and some statistics of \(\alpha\). \(\sigma^{2}\) denotes the variance and \(Q_{n}\) denotes the \(n\)-th quartile. \begin{table} \begin{tabular}{l c c} \hline \hline **Model** & **micro-acc** & **macro-acc** \\ \hline BiAE(ELECTRA) & 76.2 & 80.3 \\ BiAE(DeBERTaV3) & 76.2 & 80.5 \\ w/o Align & 74.3 & 78.9 \\ w/o Entail & 74.1 & 78.3 \\ w/o Align \& Entail & 72.6 & 76.9 \\ \hline \hline \end{tabular} \end{table} Table 3: Results of ablation experiments for decision reasoning based on the ELECTRA-base and DeBERTaV3-base models on the ShARC development set. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Model** & **BLEU1** & **BLEU2** & **BLEU3** & **BLEU4** \\ \hline T5\({}_{\text{BiAE}}\) & 62.8 & 55.8 & 51.7 & 48.6 \\ w/o aug. & 61.4 & 54.1 & 49.8 & 46.6 \\ \hline \hline \end{tabular} \end{table} Table 4: Results of ablation experiments for follow-up generation on the ShARC development set. \begin{table} \begin{tabular}{l c c c c} \hline \hline **Subset** & **\#Count** & **Subset** & **\#Count** \\ \hline Bullet Point & 999 & Regular & 1271 \\ Scenario & 1839 & NoScenario & 431 \\ History & 1509 & NoHistory & 761 \\ All & 2270 & & \\ \hline \hline \end{tabular} \end{table} Table 5: Subset Size of ShARC Development Set. decision, we define \(\alpha\) and \(\beta\) as follows: \[\alpha=\frac{\sum_{i=1}^{m}\mathbb{I}(s_{i}^{p}=s_{i}^{q})}{m}, \tag{17}\] \[\beta^{\mathcal{P}}=\frac{\sum_{\alpha\in\mathcal{P}}\mathbb{I}( \alpha=1.0)}{|\mathcal{P}|}, \tag{18}\] where \(m\) is the number of hypotheses in a document, \(\mathcal{P}\) is a subset, \(s_{i}^{p}\) and \(s_{i}^{q}\) is the predicted and constructed label for the \(i\)-th hypothesis, \(\mathbb{I}(\cdot)\) denotes the indicator function. \(\alpha\) measures the degree of correctness in entailment reasoning for an individual document and \(\beta\) represents the proportion of documents with perfect entailment reasoning in a subset. Figure 4 illustrates the distribution and density estimation curve of \(\alpha\) under successful and failed prediction states. The statistics in Table 6 and Figure 4 show that, compared with failed predictions, the values of \(\alpha\) are more concentrated around \(1.0\) in successful predictions, indicating deeper understanding of the entailment of all hypotheses in the document (corresponding to larger \(\alpha\) values), and hence leading to higher prediction accuracy. 
In addition, \(\beta^{Success}\) is much larger than \(\beta^{Fail}\), while the difference between \(\bar{\alpha}^{Success}\) and \(\bar{\alpha}^{Fail}\) is small, indicating that the final decision not only requires one-to-one entailment but also relies on accurate many-to-many entailment of all hypotheses in the document. ## 6 Conclusion We propose a new framework for Conversational Machine Reading in this paper. Our classification model, BiAE, leverages many-to-many entailment reasoning, enhanced by explicit alignment, for _decision-making_. And our \(\text{TS}_{\text{BiAE}}\) is directly fine-tuned for generating follow-up questions to clarify underspecified document spans. BiAE significantly reduces the parameters in the core _decision-making_ module and achieves results comparable to strong baselines. Extensive experiments demonstrate the effectiveness of our framework. Through analysis, we believe that improving the ability of entailment reasoning among the overall hypotheses of a document is crucial for enhancing _decision-making_ ability. ## Limitations Although our approach exceeds the previous state-of-the-art model on the main metrics, our work still has two limitations. (1) We conduct experiments on the ShARC dataset, which is the benchmark for the CMR task but consists of relatively short documents. Due to limited computational resources, it is challenging for us to organize experiments with longer documents or larger datasets. However, we believe that BiAE has the potential to perform well on longer documents. We will strive to scale up our computational resources and validate our approach on such datasets. (2) Our proposed method for explicit alignment of documents and dialogues is based on semantic similarity. Although it demonstrates effectiveness in our experiments, we acknowledge that various knowledge bases, such as knowledge graphs, can provide alignment information beyond the semantic level. In the future, we will explore better alignment methods by leveraging diverse knowledge bases. Figure 4: (a)Accuracy of BiAE and DGM(Strong Baseline) on Different Subsets. (b)Histogram and Density Estimation of the \(\alpha\) Distribution under Successful and Failed Prediction States. ### Ethics Statement Our research focuses on conversational machine reading. The dataset we used is publicly available and consists of documents from government websites and dialogues from crowdsourcing workers who have received fair and legal remuneration. We utilize open-source pre-trained models, including large language models. GPT-3.5 is only used for a simple evaluation. We believe that our work does not involve personal privacy or societal biases. ## Acknowledgements The work is partially supported by State Grid Corporation of China's Science and Technology Project "Construction of Electric Power Cognitive Large Model and key Techniques of Its Applications on Operation, Maintenance and Detection" (Project No: 5700-202313288A-1-1-ZN). We thank the anonymous reviewers for their insightful comments and Max Bartolo for executing the evaluation on the reserved test set.
2306.16297
A Meta-Learning Method for Estimation of Causal Excursion Effects to Assess Time-Varying Moderation
Twin revolutions in wearable technologies and health interventions delivered by smartphones have greatly increased the accessibility of mobile health (mHealth) interventions. Micro-randomized trials (MRTs) are designed to assess the effectiveness of the mHealth intervention and introduce a novel class of causal estimands called "causal excursion effects." These estimands enable the evaluation of how intervention effects change over time and are influenced by individual characteristics or context. However, existing analysis methods for causal excursion effects require prespecified features of the observed high-dimensional history to build a working model for a critical nuisance parameter. Machine learning appears ideal for automatic feature construction, but their naive application can lead to bias under model misspecification. To address this issue, this paper revisits the estimation of causal excursion effects from a meta-learner perspective, where the analyst remains agnostic to the supervised learning algorithms used to estimate nuisance parameters. We present the bidirectional asymptotic properties of the proposed estimators and compare them both theoretically and through extensive simulations. The results show relative efficiency gains and support the suggestion of a doubly robust alternative to existing methods. Finally, the proposed methods' practical utilities are demonstrated by analyzing data from a multi-institution cohort of first-year medical residents in the United States (NeCamp et al., 2020).
Jieru Shi, Walter Dempsey
2023-06-28T15:19:33Z
http://arxiv.org/abs/2306.16297v2
# A Meta-Learning Method for Estimation of Causal Excursion Effects to Assess Time-Varying Moderation ###### Abstract Twin revolutions in wearable technologies and smartphone-delivered digital health interventions have significantly expanded the accessibility and uptake of mobile health (mHealth) interventions across various health science domains. Sequentially randomized experiments called micro-randomized trials (MRTs) have grown in popularity to empirically evaluate the effectiveness of these mHealth intervention components. MRTs have given rise to a new class of causal estimands known as "causal excursion effects", which enable health scientists to assess how intervention effectiveness changes over time or is moderated by individual characteristics, context, or responses in the past. However, current data analysis methods for estimating causal excursion effects require pre-specified features of the observed high-dimensional history to construct a working model of an important nuisance parameter. While machine learning algorithms are ideal for automatic feature construction, their naive application to causal excursion estimation can lead to bias under model misspecification, potentially yielding incorrect conclusions about intervention effectiveness. To address this issue, this paper revisits the estimation of causal excursion effects from a meta-learner perspective, where the analyst remains agnostic to the choices of supervised learning algorithms used to estimate nuisance parameters. The paper presents asymptotic properties of the novel estimators and compares them theoretically and through extensive simulation experiments, demonstrating relative efficiency gains and supporting the recommendation for a doubly robust alternative to existing methods. Finally, the practical utility of the proposed methods is demonstrated by analyzing data from a multi-institution cohort of first-year medical residents in the United States (NeCamp et al., 2020). _Keywords:_ Debiased/Orthogonal Estimation, Machine Learning, Double Robustness, Causal Excursion Effect, Mobile Health, Time-Varying Treatment. Introduction The use of smart devices (e.g., smartphones, smartwatches) and other wearables to deliver digital interventions to improve health outcomes has grown significantly in the past few years. Low-cost, accessible digital interventions can be delivered everywhere, anytime, and in any amount, even to reticent or hard-to-reach populations. Interventions of this type are hypothesized to result in meaningful short- and long-term behavior changes. The assessment of such time-varying effects prompted the development of micro-randomized trials (MRTs), in which individuals are randomized to receive notifications at hundreds or thousands of decision points. The MRT enables estimation of proximal or lagged effects of push notifications on pre-specified outcomes of interest, referred to as "causal excursion effects" (Boruvka et al., 2018; Qian et al., 2020; Dempsey et al., 2020; Shi et al., 2022). Semiparametric inference of the causal excursion effects can be conducted via a weighted, centered least squares (WCLS) criterion (Boruvka et al., 2018). A key feature of implementing the WCLS criterion is that health scientists must pre-specify features from the high-dimensional observed history to formulate a linear working model for a critical nuisance parameter, which is a challenging, non-trivial task. 
Machine learning (ML) algorithms offer powerful tools to automatically construct features for the nuisance components, but their naive application to semiparametric inference can lead to bias in estimation. Chernozhukov et al. (2018) shows that Neyman-orthogonal moments and cross-fitting can remove the impact of regularization bias and overfitting caused by naive application of the ML methods. Later, several meta-learner algorithms emerged that can take advantage of supervised learning or regression methods in ML and statistics --such as random forests, Bayesian Additive Regression Trees (BART) or neural networks-- to estimate the nuisance components (Hill, 2011; Semenova and Chernozhukov, 2021; Kunzel et al., 2019; Nie and Wager, 2021; Kennedy, 2020). These papers provide flexible, well-performing methods for estimating conditional average treatment effects (CATE) in randomized controlled trials and observational studies, and illustrate how ML methods can be applied for semiparametric inference. While meta-learning approaches have been developed extensively for CATE estimation, their application to longitudinal studies has been relatively limited. Viviano and Bradic (2021) proposed a dynamic covariate balancing method when high-dimensional covariates are present. Bodory et al. (2022) used this approach to examine effects under dynamic treatment regimes using Double Machine Learning (DML) (Chernozhukov et al., 2018) and semiparametrically efficient estimation. In this setting, DML is used to control for observed and time-varying covariates in a data-driven way, across treatment sequences in different time periods. Lewis and Syrgkanis (2020) proposed a new DML approach for estimating causal effect under dynamic treatment regimes by using g-estimation - a sequential residualization approach that uses supervised learning of debiased outcomes on debiased treatments over a specific time period based on linear parameterization of blip functions. While prior studies on DML methods for longitudinal analysis have mainly focused on estimating the average treatment effect (ATE) on a distal outcome, often under pre-specified or dynamic non-random treatment sequences, our current work takes a different perspective. Specifically, we propose a meta-learning framework that assesses time-varying causal effect moderation in MRTs, providing a novel solution for estimating moderated treatment effects on proximal outcomes using DML methods. The proposed method can help health scientists improve their ability to answer critical scientific questions regarding time-varying effect moderation, and find out when, in what context, and what intervention content to deliver to each person to make the intervention most effective (Qian et al., 2022). ### Outline The rest of the paper proceeds as follows. Section 2 reviews existing analytic techniques for MRTs and ML methods in causal inference. We then summarize our main contributions in Section 3 and explain why allowing for the use of ML approaches in WCLS is challenging. We propose two new inferential methods in Section 4, which leverage supervised learning methods to improve the efficiency of time-varying treatment effect estimation. In Section 5.1, we outline the inferential algorithms. Section 5.2 and 5.3 present the asymptotic theory as well as discuss the relative efficiency gain of the proposed methods. Section 6 discusses the extension of the proposed methods to settings such as missing data, lagged effects, and binary outcomes. 
Section 7 uses simulation studies to compare various estimators and standard errors. Section 8 illustrates the efficiency improvement using our proposed methods with a recent MRT: the Intern Health Study (NeCamp et al., 2020). The paper concludes with a brief discussion in Section 9. All technical proofs are collected in the Supplementary Materials. ## 2 Preliminaries ### Micro-Randomized Trials (MRT) An MRT consists of a sequence of within-subject decision times \(t=1,\ldots,T\) at which treatment options are randomly assigned (Liao et al., 2016). Individual-level data can be summarized as \(\{O_{0},O_{1},A_{1},O_{2},A_{2},\ldots,O_{T},A_{T},O_{T+1}\}\) where \(t\) indexes a sequence of decision points, \(O_{0}\) is the baseline information, \(O_{t}\) is the information collected between time \(t-1\) and \(t\), and \(A_{t}\) is the treatment option provided at time \(t\); here we consider binary treatment options, i.e., \(A_{t}\in\{0,1\}\). In an MRT, \(A_{t}\) is randomized with randomization probabilities that may depend on the complete observed history \(H_{t}:=\{O_{0},O_{1},A_{1},\ldots,A_{t-1},O_{t}\}\), denoted \(\mathbf{p}=\{p_{t}(A_{t}\,|\,H_{t})\}_{t=1}^{T}\). Treatment options are designed to impact a proximal response, denoted by \(Y_{t+1}\), which is a function of the observed history and the latest treatment, i.e., \(Y_{t+1}=y(H_{t},A_{t})\)(Dempsey et al., 2020). ### Estimands and Inferential Methods: A Review The class of estimands, referred to as "causal excursion effects", was developed to assess whether mobile health interventions influence the proximal health outcomes they were designed to impact (Heron and Smyth, 2010). These time-varying effects are a function of the decision point \(t\) and a set of moderators \(S_{t}\) and marginalize over all other observed and unobserved variables (Dempsey et al., 2020; Qian et al., 2020). We provide formal definitions using potential outcomes (Rubin, 1978; Robins, 1986). Let \(Y_{t+1}(\bar{a}_{t-1})\) denote the potential outcome for the proximal response under treatment sequence \(\bar{a}_{t-1}\). Let \(O_{t}(\bar{a}_{t-1})\) denote the potential information collected between time \(t-1\) and \(t\). Let \(S_{t}(\bar{a}_{t-1})\) denote the potential outcome for a time-varying effect moderator which is a deterministic function of the potential history up to time \(t\), \(H_{t}(\bar{a}_{t-1})\). We consider the setting in which the potential outcomes are i.i.d over users according to a distribution \(\mathcal{P}\) i.e., \(\left\{O_{t}(\bar{a}_{t-1}),Y_{t+1}(\bar{a}_{t-1})\right\}_{t=1}^{T}\stackrel{{ \mathrm{i.i.d}}}{{\sim}}\mathcal{P}\). The causal excursion effect estimand is defined as: \[\beta_{\mathbf{p}}(t;s)=\mathbb{E}_{\mathbf{p}}\left[Y_{t+1}\left(\bar{A}_{t-1},A_{t}=1\right)-Y_{t+1}\left(\bar{A}_{t-1},A_{t}=0\right)\,|\,S_{t}(\bar{A}_{ t-1})=s\right]. \tag{1}\] Equation (1) is defined with respect to a reference distribution \(\mathbf{p}\), i.e., the joint distribution of treatments \(\bar{A}_{t-1}:=\left\{A_{1},A_{2},\ldots,A_{t-1}\right\}\). We follow common practice in observational mobile health studies where analyses such as GEEs (Liang and Zeger, 1986) are conducted marginally over \(\mathbf{p}\). 
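To make this data structure concrete, the following minimal sketch (not taken from the paper; the state, randomization, and outcome models are illustrative assumptions) simulates one individual's trajectory with a history-dependent randomization probability and a proximal outcome that depends on the history and the latest treatment.

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_trajectory(T=50, seed=0):
    """Simulate one individual's MRT data {O_0, O_1, A_1, Y_2, ..., O_T, A_T, Y_{T+1}}.

    Illustrative models only: the state is Gaussian, the randomization probability
    depends on the previous treatment and current state, and the proximal outcome
    depends on the current state and the latest treatment. Python indices are 0-based.
    """
    rng = np.random.default_rng(seed)
    O = rng.normal(size=T + 1)          # O[0] plays the role of baseline information O_0
    A = np.zeros(T, dtype=int)          # binary treatments A_1, ..., A_T
    Y = np.zeros(T)                     # proximal outcomes Y_2, ..., Y_{T+1}
    p = np.zeros(T)                     # randomization probabilities p_t(1 | H_t)
    a_prev = 0
    for t in range(T):
        # H_t collects (O_0, ..., O_t, A_1, ..., A_{t-1}); here p_t uses only part of it.
        p[t] = expit(-0.8 * a_prev + 0.8 * O[t])
        A[t] = rng.binomial(1, p[t])
        Y[t] = 0.5 * O[t] + A[t] * (-0.2 + 0.2 * O[t]) + rng.normal()
        a_prev = A[t]
    return O, A, Y, p

O, A, Y, p = simulate_trajectory()
print(O.shape, A.shape, Y.shape)        # (51,) (50,) (50,)
```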
To express the proximal response in terms of the observed data, we assume positivity, consistency, and sequential ignorability (Robins, 1994, 1997): **Assumption 2.1**.: We assume consistency, positivity, and sequential ignorability: * Consistency: For each \(t\leq T\), \(\left\{Y_{t+1}(\bar{A}_{t}),O_{t}(\bar{A}_{t-1}),A_{t}(\bar{A}_{t-1})\right\} =\left\{Y_{t+1},O_{t},A_{t}\right\}\), i.e., observed values equal the corresponding potential outcomes; * Positivity: if the joint density \(\left\{A_{t}=a_{t},H_{t}=h_{t}\right\}\) is greater than zero, then \(P(A_{t}=a_{t}\,|\,H_{t}=h_{t})>0\); * Sequential ignorability: For each \(t\leq T\), the potential outcomes \(\left\{Y_{t+1}(\bar{a}_{t}),O_{t+1}(\bar{a}_{t}),A_{t+1}(\bar{a}_{t}),\ldots,Y_ {T+1}(\bar{a}_{T})\right\}\) are independent of \(A_{t}\) conditional on the observed history \(H_{t}\). Under Assumption 2.1, (1) can be re-expressed in terms of observable data: \[\beta_{\mathbf{p}}(t;s)=\mathbb{E}\left[\mathbb{E}_{\mathbf{p}}\left[Y_{t+1}\ |\ A_{t}=1,H_{t}\right]-\mathbb{E}_{\mathbf{p}}\left[Y_{t+1}\ |\ A_{t}=0,H_{t}\right]\,|\ S_{t}=s\right]. \tag{2}\] The causal excursion effect is typically assumed to take a known linear form, i.e., \(\beta_{\mathbf{p}}(t;s)=f_{t}(s)^{\top}\beta^{\star}\), where \(f_{t}(s)\in\mathbb{R}^{q}\) is a feature vector comprised of a \(q\)-dimensional summary of observed information depending only on state \(s\) and decision point \(t\). MRTs are experimental studies with pre-specified randomization probabilities. It is therefore common to impose the following condition: **Assumption 2.2**.: _The randomization probability \(p_{t}(A_{t}|H_{t})\) is known or correctly specified via a parametric model \(p_{t}(A_{t}|H_{t};\theta)\) for \(\theta\in\mathbb{R}^{d}\)._ A consistent estimator \(\hat{\beta}_{n}\) can be obtained by minimizing a weighted and centered least squares (WCLS) criterion (Boruvka et al., 2018): \[\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}(Y_{t+1}-g_{t}(H_{t})^{\top}\alpha-(A_{t }-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\beta)^{2}\right], \tag{3}\] where \(\mathbb{P}_{n}\) is an operator denoting the sample average, \(W_{t}=\tilde{p}_{t}(A_{t}|S_{t})/p_{t}(A_{t}|H_{t})\) is a weight where the numerator is an arbitrary function with range \((0,1)\) that only depends on \(S_{t}\), and \(g_{t}(H_{t})\in\mathbb{R}^{p}\) are \(p\) control variables. Important to this paper, the linear term \(g_{t}(H_{t})^{\top}\alpha\) is a working model for \(\mathbb{E}[W_{t}Y_{t+1}|H_{t}]\), which can be viewed as a nuisance function. A high-quality model of the nuisance function can help reduce variance and construct more powerful test statistics. See Boruvka et al. (2018) for more details on the estimand formulation and consistency, asymptotic normality, and robustness properties of this method. **Remark 2.3**.: _Correct causal effect specification, i.e., \(\beta_{\mathbf{p}}(t;s)=f_{t}(s)^{\top}\beta^{\star}\) is not required. Instead, we can follow prior literature (Dempsey et al., 2020; Shi et al., 2022) and interpret the proposed linear form as a working model. 
Specifically, \(\hat{\beta}\) is a consistent and asymptotically normal estimator for_ \[\beta^{\star}=\arg\min_{\beta}\mathbb{E}\left[\sum_{t=1}^{T}\tilde{p}_{t}(1|S_ {t})(1-\tilde{p}_{t}(1|S_{t}))(\beta(t;S_{t})-f_{t}(S_{t})^{\top}\beta)^{2} \right].\] _Therefore, the working model can be interpreted as an \(L_{2}\) projection of the true causal excursion effect onto the space spanned by a \(q\)-dimensional feature vector that only includes \(t\) and \(s\), denoted by \(f_{t}(s)^{\top}\beta^{\star}\)(Dempsey et al., 2020). Interpretation as a projection or as a correctly specified causal effect can be viewed as a bias-variance trade-off. The projection interpretation guarantees well-defined parameter interpretation in practice._ ### Machine Learning and Causal Effect Estimation Chernozhukov et al. (2018) provided a generic DML approach for obtaining valid inferential statements about focal parameters, using Neyman-orthogonal scores and cross-fitting, in settings where nuisance parameters are estimated using ML methods. As a motivating example, here we consider the following partially linear regression (PLR) model as in Robinson (1988): \[Y =A\beta_{0}+m_{0}(X)+U, \mathbb{E}[U|X,A]=0,\] \[A =p_{0}(X)+V, \mathbb{E}[V|X]=0.\] Here, \(Y\) is the outcome variable, \(A\) represents the treatment indicator, \(X=(X_{1},...,X_{p})\) consists of baseline covariates, and \(U\) and \(V\) are noise variables. The parameter of interest is \(\beta_{0}\), i.e., the treatment effect. In many applications, the dimension of baseline covariates, \(dim(X)=p\), is large relative to the sample size \(N\). To capture this, modern theoretical analyses consider \(p\) increasing with sample size. Traditional analyses that limit the complexity of the nuisance functions \(g_{0}=(m_{0},p_{0})\) will fail in these settings. Chernozhukov et al. (2018) apply Neyman orthogonality and sample splitting to overcome the failures of traditional methods. Suppose, for the sake of clarity, that we randomly split the sample into two parts: a main part of size n, with observation numbers indexed by \(i\in\mathcal{I}\), and an auxiliary part of size \(N-n\), with observations indexed by \(i\in\mathcal{I}^{\complement}\). Let \(\hat{g}_{0}\) be the estimator of the nuisance parameter \(g_{0}\), which is obtained using the auxiliary sample, and the estimator of \(\beta_{0}\) is obtained using the main sample and satisfying: \[\frac{1}{n}\sum_{i\in\mathcal{I}}\psi(W;\hat{\beta}_{0},\hat{g}_{0})=0,\] where \(\psi\) is an orthogonalized or debiased score function, i.e., it satisfies the property that the Gateaux derivative operator with respect to \(g\) vanishes when evaluated at the true parameter values: \[\partial_{g}\mathbb{E}\left[\psi(W;\beta_{0},g_{0})\right](g-g_{0})=0. \tag{4}\] Using moment conditions that satisfy Equation (4) to construct estimators and inference procedures that are robust to mistakes in nuisance parameters has a long history in statistics. We refer to property (4) as _Neyman orthogonality_ and to \(\psi\) as the _Neyman orthogonal score function_ due to the fundamental contributions in Neyman (1979), where this notion was introduced. The score functions \(\psi\) are not sensitive to biased estimation of \(g_{0}\) in the sense that (4) holds. Neyman orthogonality and sample splitting are the main tools for establishing good behavior of an estimator for \(\beta_{0}\). Contributions The objective of this paper is to advance the understanding of optimal methods for estimating causal excursion effects. 
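As a concrete reference for the review above, the following minimal sketch (an illustration, not code from the paper) carries out the cross-fitted, Neyman-orthogonal "partialling-out" estimation of \(\beta_{0}\) in the partially linear model: the conditional means \(\mathbb{E}[Y|X]\) and \(\mathbb{E}[A|X]\) are fit by random forests on held-out folds, and \(\beta_{0}\) is recovered by regressing outcome residuals on treatment residuals.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import KFold

def dml_plr(X, A, Y, n_splits=5, seed=0):
    """Cross-fitted estimate of beta_0 in Y = A*beta_0 + m_0(X) + U, A = p_0(X) + V.

    Nuisance conditional means E[Y|X] and E[A|X] are estimated on the auxiliary folds
    and evaluated on the held-out fold (cross-fitting); beta_0 is then the coefficient
    from regressing outcome residuals on treatment residuals.
    """
    Y_res = np.zeros(len(Y))
    A_res = np.zeros(len(Y))
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in kf.split(X):
        ell_hat = RandomForestRegressor(n_estimators=200, random_state=seed)
        ell_hat.fit(X[train_idx], Y[train_idx])            # estimates E[Y | X]
        p_hat = RandomForestClassifier(n_estimators=200, random_state=seed)
        p_hat.fit(X[train_idx], A[train_idx])              # estimates E[A | X]
        Y_res[test_idx] = Y[test_idx] - ell_hat.predict(X[test_idx])
        A_res[test_idx] = A[test_idx] - p_hat.predict_proba(X[test_idx])[:, 1]
    return np.sum(A_res * Y_res) / np.sum(A_res ** 2)      # orthogonal score solution

# Toy check with beta_0 = 0.5, a binary treatment, and nonlinear confounding.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
A = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))
Y = 0.5 * A + np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(size=500)
print(dml_plr(X, A, Y))
```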
The main challenges in achieving this are two-fold: First, in MRTs where treatments, responses, and moderators all vary with time, there is a need to effectively incorporate supervised learning methods and sample splitting into the causal excursion effect estimation without introducing bias. Second, it is increasingly common for Assumption 2.2 to be violated due to unknown randomization probabilities or implemented probabilities not matching the recorded probabilities. In these settings, the current WCLS approach can only provide consistent estimates if the working _linear_ outcome regression model for \(\mathbb{E}[Y_{t+1}|H_{t},A_{t}]\) is correctly specified. It is therefore important to develop an inferential procedure that is robust to model misspecification. This paper makes three original contributions: First, we introduce two inferential procedures for estimating causal excursion effects that incorporate DML techniques, called "R-WCLS" and "DR-WCLS". To mitigate overfitting bias, cross-fitting is employed to estimate the first stage of data-driven plug-ins. Second, we demonstrate the \(\sqrt{n}\)-consistency and asymptotic normality of the proposed estimators under regularity conditions, while remaining agnostic to the specific supervised learning algorithm used to learn the plug-in models. Third, we provide theoretical guarantees of double-robustness and gains in estimation efficiency relative to the WCLS approach. ## 4 A Supervised Learning Algorithm Agnostic Approach to Moderation Analysis We begin with a moment-based estimand, as defined in Equation (2), and assume a parametric model for the causal excursion effect, denoted as \(\beta(t;s)=f_{t}(S_{t})^{\top}\beta^{\star}\), where \(\beta^{\star}\in\mathbb{R}^{q}\). The WCLS criterion provides a set of estimating equations used to perform inference about the causal parameter \(\beta^{\star}\). This approach suggests that the nuisance parameter can be expressed as a sequence of expectations \(\mathbf{g}=\{g_{t}(H_{t})=\mathbb{E}[W_{t}Y_{t+1}|H_{t}]\}_{t=1}^{T}\), with a population value of \(\mathbf{g}^{\star}\). To estimate these quantities, the WCLS criterion only considers linear working models \(\{g_{t}(H_{t})^{\top}\alpha\}_{t=1}^{T}\). Constructing linear working models, however, can pose a significant challenge as researchers must pre-specify features from the high-dimensional observed history \(H_{t}\), which is a non-trivial task. To address this challenge and increase the modeling flexibility of nuisance functions, we reframe the estimating equation (3) in a more general form that puts no modeling assumptions in \(g_{t}(H_{t})\) and allows its dimensions to grow with sample size. Here, we assume that \(\beta^{\star}\) satisfies the moment conditions: \(\mathbb{E}[\psi(\beta^{\star};\mathbf{g}^{\star})]=0\), where \(\psi(\beta;\mathbf{g})\) is the estimating equation for \(\beta\): \[\psi(\beta;\mathbf{g})=\sum_{t=1}^{T}W_{t}\Big{[}Y_{t+1}-g_{t}(H_{t})-(A_{t}- \tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\beta\Big{]}(A_{t}-\tilde{p}_{t}(1|S_ {t}))f_{t}(S_{t}). \tag{5}\] We can recover WCLS by replacing \(g_{t}(H_{t})\) with a linear working model with fixed dimension, i.e., \(g(H_{t})^{\top}\alpha\) for \(\alpha\in\mathbb{R}^{p}\). To ensure robustness and valid inference for \(\beta\), we require Neyman orthogonality for the estimating equation \(\psi(\beta;\mathbf{g})\)(Chernozhukov et al., 2015). 
The Gateaux derivative operator with respect to \(\mathbf{g}\) is: \[G(\mathbf{g})=\mathbb{E}\left[\sum_{t=1}^{T}W_{t}g_{t}(H_{t})(A_{t}-\tilde{p} _{t}(1|S_{t}))f_{t}(S_{t})\right](\mathbf{g}-\mathbf{g}^{\star})=0, \tag{6}\] thus (5) satisfies Neyman orthogonality. Intuitively, Neyman orthogonality implies that the moment conditions used to identify \(\beta^{\star}\) are sufficiently insensitive to the nuisance parameter estimates, allowing us to directly plug in estimates of \(\mathbf{g}^{\star}\) while still obtaining high-quality inference for \(\beta\). We now consider estimating \(g_{t}(H_{t})\). By definition, \(g_{t}(H_{t})=\mathbb{E}[W_{t}Y_{t+1}|H_{t}]\) is the conditional expectation of the weighted proximal outcome. Let \(g_{t}(H_{t},A_{t})\) denote a working model for \(\mathbb{E}[Y_{t+1}|H_{t},A_{t}]\), then we have the decomposition: \(g_{t}(H_{t})=\tilde{p}_{t}(1|S_{t})g_{t}(H_{t},1)+(1-\tilde{p}_{t}(1|S_{t}))g_{ t}(H_{t},0)\). Based on this, we propose the following _R-WCLS criterion_, which minimizes: \[\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-\Big{(}\tilde{p}_{t}(1|S _{t})g_{t}(H_{t},1)+(1-\tilde{p}_{t}(1|S_{t}))g_{t}(H_{t},0)\Big{)}-(A_{t}- \tilde{p}_{t}(1|S_{t})))f_{t}(S_{t})^{\top}\beta\right)^{2}\right]. \tag{7}\] The proposed approach (7) allows us to learn these conditional expectations without pre-specifying features to construct a parametric working model. This reflects the fact that in practice, only a small subset of the parameters are of key scientific interest, and an analyst may prefer to be agnostic about the nuisance parameters. As an example, we can use supervised learning methods to train the working model \(g_{t}(H_{t},A_{t})\), along with cross-fitting to obtain informative error analysis, avoid overfitting, and optimize out-of-sample prediction accuracy, which is particularly useful when the dimension of the complete observed history is high. Theorem 5.2 in Section 5.2 shows that the estimator \(\hat{\beta}_{n}^{(R)}\) obtained by minimizing Equation (7) is a consistent estimator of the parameter of interest \(\beta^{\star}\) under the assumption that the randomization probability \(p_{t}(A_{t}|H_{t})\) is either known or correctly specified. **Remark 4.1** (Connection to the WCLS criterion).: _The R-WCLS criterion replaces \(g_{t}(H_{t})^{\top}\alpha\) in the WCLS criterion, which was a linear working model for \(\mathbb{E}[W_{t}Y_{t+1}|H_{t}]\), with a general choice of working models. Setting \(g_{t}(H_{t},A_{t})\) to be the linear working model \(g_{t}(H_{t})^{\top}\alpha+(A_{t}-\tilde{p}_{t}(1|S_{t}))f_{t}(S_{t})^{\top}\beta\), the R-WCLS criterion recovers the original WCLS criterion. Thus, (7) is a strict generalization of (3)._ **Remark 4.2** (Connection to the R-learner).: _In traditional causal inference with a single treatment \(A\), fully-observed set of confounders \(X\), and outcome \(Y\), a two-stage estimator, referred to as the R-Learner, was previously proposed by Nie and Wager (2021). Beyond our extension to the time-varying setting, there are two key distinguishing features of R-WCLS in (7) when compared with the R-Learner. First, we focus on estimating a low-dimensional target parameter, whereas R-learner seeks to estimate the conditional average treatment effect and allows it to be a complex function of baseline covariates. 
Second, the weight \(W_{t}\) in R-WCLS criterion implicitly depends on the propensity \(\tilde{p}_{t}(1|S_{t})\), we thereby replace the R-learner data-adaptive model for \(\mathbb{E}[W_{t}Y_{t+1}|H_{t}]\) with one for each \(\mathbb{E}[Y_{t+1}|H_{t},a]\), \(a\in\{0,1\}\), which is invariant to different choices of moderators \(S_{t}\)._ ### A Doubly-Robust Alternative The above discussion relies on Assumption 2.2 being held. In many MRTs, one may fail to correctly implement or collect the desired randomization probabilities, leading to unknown randomization probabilities or uncertainty in their recorded values. In such cases, the R-WCLS criterion in (7) can only provide consistent estimates of \(\beta^{\star}\) if the fully conditional model for \(\mathbb{E}[Y_{t+1}|H_{t},A_{t}]\) has been correctly specified: \[\mathbb{E}[Y_{t+1}|H_{t},A_{t}]=(\tilde{p}_{t}(1|S_{t})g_{t}^{\star}(H_{t},1)+(1 -\tilde{p}_{t}(1|S_{t}))g_{t}^{\star}(H_{t},0))+(A_{t}-\tilde{p}_{t}(1|S_{t})))f _{t}(S_{t})^{\top}\beta^{\star}.\] This implies that \(g_{t}^{\star}(H_{t},1)-g_{t}^{\star}(H_{t},0)=f_{t}(S_{t})^{\top}\beta^{\star}\), where the fully conditional treatment effect depends only on the specified moderators \(S_{t}\) and the linear model is correctly specified. However, in practice, \(S_{t}\) is often a subset of the potential moderators, so this assumption is not expected to hold. Therefore, an estimating procedure that does not rely on a correct model specification will be preferred. In this section, we present a novel derivation of an alternative, doubly robust estimator called the _DR-WCLS criterion_, which is given by: \[\mathbb{P}_{n}\left[\sum_{t=1}^{T}\tilde{\sigma}_{t}^{2}(S_{t})\left(\frac{W_ {t}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g_{t}(H_{t},A_{t}))}{\tilde{\sigma}_ {t}^{2}(S_{t})}+\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta\right)f_{t}(S_{t}) \right]=0, \tag{8}\] where \(\beta(t;H_{t})\coloneqq g_{t}(H_{t},1)-g_{t}(H_{t},0)\) is the causal excursion effect under the fully observed history \(H_{t}\), and \(\tilde{\sigma}_{t}^{2}(S_{t})\coloneqq\tilde{p}_{t}(1|S_{t})(1-\tilde{p}_{t}( 1|S_{t}))\). Theorem 5.3 below shows that the estimator \(\hat{\beta}_{n}^{(DR)}\) obtained from solving (8) is doubly-robust, i.e., (8) will yield a consistent estimator of \(\beta^{\star}\) if either the randomization probability \(p_{t}(A_{t}|H_{t})\)_or_ the conditional expectation \(g_{t}(H_{t},A_{t})\) is correctly specified. **Remark 4.3** (Connection to the DR-learner).: _The general DR-learner approach was first proposed by Van Der Laan and Rubin (2006). In later research, the DR-learner, a two-stage doubly robust estimator with a single treatment \(A\), a fully observed set of confounders \(X\), and an outcome \(Y\), was proposed (Kennedy, 2020). Beyond our extension to the time-varying setting, there are two key distinguishing features of (8) when compared with existing variants of the DR-learner. First, the causal excursion effect is a marginal effect, so the weights \(W_{t}\) and the treatment-centering probability are dependent on the moderators, whereas the DR-learner estimates a fully conditional causal effect. Second, time-varying treatments and the projection interpretation (see Dempsey et al. (2020) for a detailed discussion) in the feature space \(f_{t}(S_{t})\) require the additional weights \(\tilde{\sigma}_{t}^{2}(S_{t})\) in the DR-WCLS estimating equations._ ### Connection Between R-WCLS and DR-WCLS In recent work from Morzywolek et al. 
(2023), a unified framework was presented for estimating heterogeneous treatment effects, resulting in a class of weighted loss functions with nuisance parameters. They showed that the R-Learner (Nie and Wager, 2021) and the DR-Learner (Kennedy, 2020) can be seen as special cases resulting from particular weighting choices. Here, we present a complementary viewpoint by showing a simple relationship between the two proposed methods. We begin by adding and subtracting \(g_{t}(H_{t},A_{t})=A_{t}g_{t}(H_{t},1)+(1-A_{t})g_{t}(H_{t},0)\) from Equation (7): \[\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-g_{t}(H_{t},A_{t})+(A_{t}-\tilde{p}_{t}(1|S_{t}))\left(\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta\right)\right)^{2}\right].\] One can then obtain an estimate of \(\beta^{\star}\) by solving the following estimating equation: \[\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-g_{t}(H_{t},A_{t})\right)\left(A_{t}-\tilde{p}_{t}(1|S_{t})\right)f_{t}(S_{t})\right]+ \tag{9}\] \[\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}\left(A_{t}-\tilde{p}_{t}(1|S_{t})\right)^{2}\left(\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta\right)f_{t}(S_{t})\right]. \tag{10}\] Under the correct specification of the randomization probabilities, the Gateaux derivative with respect to \(\mathbf{g}\) of both terms (9) and (10) will be 0. However, if the randomization probabilities are not specified correctly, term (10) may not have a Gateaux derivative of 0. To address this, we replace the stochastic term \(W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))^{2}\) in (10) with its expectation under the correct randomization probability: \[\mathbb{E}\left[\sum_{t=1}^{T}\tilde{\sigma}_{t}^{2}(S_{t})(\beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta^{\star})f_{t}(S_{t})\right].\] After this substitution, we recover (8), and the Gateaux derivative with respect to \(\mathbf{g}\) of both terms is no longer affected by the specification of the randomization probabilities. The above derivation links R-WCLS and DR-WCLS, showing that doubly-robust estimators can be constructed from R-learner methods. Finally, (7) and (8) yield the estimation procedures presented in Section 5.1. ## 5 Algorithm and Asymptotic Theory This section presents the inferential algorithms for R-WCLS and DR-WCLS. In addition, we provide the corresponding asymptotic theory that can be used for hypothesis testing and constructing confidence intervals. ### Algorithm Both the R-WCLS and DR-WCLS algorithms exploit the structure of (7) and (8) to characterize the problem as a two-stage weighted regression that regresses estimated _pseudo-outcomes_ on a feature vector. Cross-fitting is employed to obtain asymptotic theory and convergence results that are agnostic to the supervised learning algorithm used for the estimation of the nuisance parameters, avoiding the Donsker conditions that were prevalent in the classic semi-parametric inference literature. Step I: Let \(K\) be a fixed integer. Form a \(K\)-fold random partition of \(\{1,2,\ldots,n\}\) by dividing it into equal parts, each of size \(n/K\), assuming \(n\) is a multiple of \(K\). For each set \(I_{k}\), let \(I_{k}^{\complement}\) denote the observation indices that are not in \(I_{k}\). Step II: Learn the appropriate working models for each fold \(I_{k}\) using the individuals in \(I_{k}^{\complement}\). 
Let \(\hat{g}_{t}^{(k)}(H_{t},A_{t})\), \(\hat{p}_{t}^{(k)}(1|H_{t})\), and \(\hat{\tilde{p}}_{t}^{(k)}(1|S_{t})\) denote the estimates for \(\mathbb{E}[Y_{t+1}|H_{t},A_{t}]\), \(\mathbb{E}[A_{t}|H_{t}]\), and \(\mathbb{E}[A_{t}|S_{t}]\) respectively, i.e., estimates of the nuisance parameters in the \(k\)th fold. Note that when randomization probabilities are known, \(\hat{p}_{t}(A_{t}|H_{t})\) is set equal to \(p_{t}(A_{t}|H_{t})\). Step III: Construct the pseudo-outcomes and perform weighted regression estimation: * **R-WCLS**: For individual \(j\) at time \(t\), define the pseudo-outcome: \[\tilde{Y}_{t+1,j}^{(R)}:=Y_{t+1,j}-\hat{g}_{t}^{(k)}(H_{t,j},A_{t,j})+\left(A_{t,j}-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t,j})\right)\left(\hat{g}_{t}^{(k)}(H_{t,j},1)-\hat{g}_{t}^{(k)}(H_{t,j},0)\right),\] where \(j\in I_{k}\). Then regress \(\tilde{Y}_{t+1}^{(R)}\) on \((A_{t}-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t}))f_{t}(S_{t})^{\top}\beta\) with weights \(\hat{W}_{t}^{(k)}=\hat{\tilde{p}}_{t}^{(k)}(A_{t}|S_{t})/\hat{p}_{t}^{(k)}(A_{t}|H_{t})\) to obtain the estimate \(\hat{\beta}_{n}^{(R)}\). * **DR-WCLS**: For individual \(j\) at time \(t\), define the pseudo-outcome: \[\tilde{Y}_{t+1,j}^{(DR)}:=\frac{\hat{W}_{t,j}^{(k)}(A_{t,j}-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t,j}))(Y_{t+1,j}-\hat{g}_{t}^{(k)}(H_{t,j},A_{t,j}))}{\hat{\tilde{p}}_{t}^{(k)}(1|S_{t,j})(1-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t,j}))}+\left(\hat{g}_{t}^{(k)}(H_{t,j},1)-\hat{g}_{t}^{(k)}(H_{t,j},0)\right),\] where \(j\in I_{k}\). Then regress \(\tilde{Y}_{t+1}^{(DR)}\) on \(f_{t}(S_{t})^{\top}\beta\) with weights \(\hat{\tilde{p}}_{t}^{(k)}(1|S_{t})(1-\hat{\tilde{p}}_{t}^{(k)}(1|S_{t}))\) to obtain \(\hat{\beta}_{n}^{(DR)}\). **Remark 5.1**.: _Without sample splitting, the estimated nuisance functions are correlated with the observations used in Step III, which introduces spurious correlation. Typical approaches would constrain the function \(g(\cdot)\) in Step II to belong to a function class with relatively simple statistical complexity, typically referred to as a Donsker function class. Then the Step III estimate is \(\sqrt{n}\)-consistent and asymptotically normal. Chen et al. (2022) show that neither sample splitting nor the Donsker property is required if the estimate \(\hat{g}(\cdot)\) satisfies leave-one-out stability properties and the moment function satisfies the weak mean-squared-continuity property of Chernozhukov et al. (2021). This allows for sample reuse, which can benefit moderately sized sample regimes. Here we aim to stay agnostic about the choice of \(g(\cdot)\), but we consider extensions that do not require sample splitting as important future work._ ### Asymptotic Properties Here, we present the asymptotic theory for both the R-WCLS and DR-WCLS estimators obtained using the algorithm described above. All asymptotic statements assume \(\hat{p}_{t}(A_{t}|H_{t})\) is bounded away from \(0\) and \(1\), \(T\) and \(K\) both finite and fixed, and \(n\) increasing to infinity. **Theorem 5.2** (Asymptotic property of R-WCLS estimator).: _Under Assumption 2.2, and given invertibility and moment conditions, the estimator \(\hat{\beta}_{n}^{(R)}\) that minimizes (7) is consistent and asymptotically normal such that \(\sqrt{n}(\hat{\beta}_{n}^{(R)}-\beta^{\star})\rightarrow\mathcal{N}(0,\Sigma_{R})\), where \(\Sigma_{R}\) is defined in Appendix A. 
In particular, with the algorithm outlined in Section 5.1, \(\Sigma_{R}\) can be consistently estimated by:_ \[\left[\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k}\left\{\dot{m}(\hat{\beta}, \hat{\eta}_{k})\right\}\right]^{-1}\times\left[\frac{1}{K}\sum_{k=1}^{K} \mathbb{P}_{n,k}\left\{m(\hat{\beta},\hat{\eta}_{k})m(\hat{\beta},\hat{\eta}_ {k})^{\top}\right\}\right]\times\left[\frac{1}{K}\sum_{k=1}^{K}\mathbb{P}_{n,k }\left\{\dot{m}(\hat{\beta},\hat{\eta}_{k})\right\}\right]^{-1}, \tag{11}\] _where \(m(\hat{\beta},\hat{\eta}_{k})=\sum_{t=1}^{T}\hat{W}_{t}^{(k)}\left(\tilde{Y}_{t+1}^ {(R)}-(A_{t}-\hat{\hat{p}}_{t}^{(k)}(1|S_{t}))f_{t}(S_{t})^{\top}\hat{\beta}_{n} ^{(R)}\right)(A_{t}-\hat{\hat{p}}_{t}^{(k)}(1|S_{t}))f_{t}(S_{t})\), \(\dot{m}(\hat{\beta},\hat{\eta}_{k})=\sum_{t=1}^{T}\hat{\hat{p}}_{t}^{(k)}(1|S_{ t})(1-\hat{\hat{p}}_{t}^{(k)}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\), and \(\mathbb{P}_{n,k}\{\bullet\}\) refers to the empirical average within fold \(k\)._ If Assumption 2.2 is violated, using the R-WCLS criterion in equation (7) will only produce consistent estimates if the fully conditional model for \(\mathbb{E}[Y_{t+1}|H_{t},A_{t}]\) is correctly specified, which is difficult to achieve in practice due to the complexity of the true data-generating process. Therefore, the DR-WCLS estimator is especially valuable for safeguarding against model misspecification. The asymptotic properties of the DR-WCLS estimator are as follows: **Theorem 5.3** (Asymptotic property of DR-WCLS estimator).: _Given invertibility and moment conditions, the estimator \(\hat{\beta}_{n}^{(DR)}\) that solves (8) is subject to an error term, which (up to a multiplicative constant) is bounded above by:_ \[\mathbf{\hat{B}}=\mathbb{E}\left[\sum_{t=1}^{T}\sum_{a\in\{0,1\}}\|p_{t}(A_{t }=1|H_{t})-\hat{p}_{t}(A_{t}=1|H_{t})\|\,\|g_{t}(H_{t},a)-\hat{g}_{t}(H_{t},a) \|\right], \tag{12}\] _where \(\|X\|:=(\mathbb{E}[X^{2}])^{1/2}\). If \(\mathbf{\hat{B}}=o_{p}(n^{-1/2})\), then \(\hat{\beta}_{n}^{(DR)}\) is consistent and asymptotically normal such that \(\sqrt{n}(\hat{\beta}_{n}^{(DR)}-\beta^{\star})\rightarrow\mathcal{N}(0,\Sigma _{DR})\), where \(\Sigma_{DR}\) is defined in Appendix B. In particular, with the algorithm outlined in Section 5.1, \(\Sigma_{DR}\) can be consistently estimated by formula (11) with \(m(\hat{\beta},\hat{\eta}_{k})=\sum_{t=1}^{T}\hat{\hat{p}}_{t}^{(k)}(1|S_{t})(1 -\hat{\hat{p}}_{t}^{(k)}(1|S_{t}))(\tilde{Y}_{t+1}^{(DR)}-f_{t}(S_{t})\hat{ \beta}_{n}^{(DR)})f_{t}(S_{t})\), and \(\dot{m}(\hat{\beta},\hat{\eta}_{k})=\sum_{t=1}^{T}\hat{\hat{p}}_{t}^{(k)}(1|S_ {t})(1-\hat{\hat{p}}_{t}^{(k)}(1|S_{t}))f_{t}(S_{t})f_{t}(S_{t})^{\top}\)._ It follows that \(\hat{\beta}_{n}^{(DR)}\) is doubly robust since it is consistent when either (1) the treatment model is correctly specified or (2) the conditional model is correctly specified. Importantly, the model-agnostic error bound applies to arbitrary first-stage estimators. The bound \(\mathbf{\hat{B}}\) on the DR-WCLS estimator error shows that it can only deviate from \(\beta^{\star}\) by at most a (smoothed) product of errors in the estimation of treatment propensities and conditional expectation of outcomes, thus allowing faster rates for estimating the causal effect even when the nuisance estimates converge at slower rates. For detailed proofs of Theorems 5.2 and 5.3, please refer to Appendices A and B respectively. 
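To make Steps I through III concrete, here is a condensed sketch (illustrative, with assumed function and argument names) of the cross-fitted DR-WCLS estimator for the fully marginal effect model \(f_{t}(S_{t})=1\), assuming known randomization probabilities and long-format data with one row per individual and decision time; random forests stand in for the supervised learner, and the R-WCLS variant differs only in its pseudo-outcome and regression design.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold

def dr_wcls_marginal(H, A, Y, p_known, groups, p_tilde=None, n_splits=5):
    """Cross-fitted DR-WCLS estimate of a fully marginal excursion effect (f_t(S_t) = 1).

    H       : (n, d) history features used by the ML working model for E[Y_{t+1} | H_t, A_t]
    A, Y    : (n,) binary treatments and proximal outcomes, in long format
    p_known : (n,) known randomization probabilities p_t(1 | H_t)
    groups  : (n,) individual ids, so folds are formed at the individual level (Step I)
    p_tilde : numerator probabilities p~_t(1 | S_t); with S_t empty, a constant suffices
    """
    n = len(Y)
    if p_tilde is None:
        p_tilde = np.full(n, A.mean())                 # simple numerator model for S_t = {}
    g1, g0 = np.zeros(n), np.zeros(n)                  # out-of-fold estimates of g(H_t, 1), g(H_t, 0)
    gkf = GroupKFold(n_splits=n_splits)
    for train, test in gkf.split(H, Y, groups):        # Step II: fit nuisances per fold
        for a, g_hat in ((1, g1), (0, g0)):
            rows = train[A[train] == a]                # training rows with treatment a
            model = RandomForestRegressor(n_estimators=300, random_state=0)
            model.fit(H[rows], Y[rows])
            g_hat[test] = model.predict(H[test])
    # Step III: DR-WCLS pseudo-outcomes, then weighted least squares on f_t(S_t) = 1.
    W = np.where(A == 1, p_tilde, 1 - p_tilde) / np.where(A == 1, p_known, 1 - p_known)
    sigma2 = p_tilde * (1 - p_tilde)
    g_obs = np.where(A == 1, g1, g0)
    pseudo = W * (A - p_tilde) * (Y - g_obs) / sigma2 + (g1 - g0)
    return np.sum(sigma2 * pseudo) / np.sum(sigma2)    # weighted mean, since f_t(S_t) = 1
```

Standard errors would then come from the fold-wise sandwich formula in (11), and when the randomization probabilities are unknown, \(\hat{p}_{t}^{(k)}(A_{t}|H_{t})\) can be fit within the same folds as an additional nuisance model.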
**Remark 5.4**.: _A variety of flexible options for nuisance estimates are available to attain the convergence rate of \(o_{p}(n^{-1/2})\). For example, if \(\|p_{t}(1|H_{t})-\hat{p}_{t}(1|H_{t})\|=o_{p}(n^{-1/4})\) and \(\|g_{t}(H_{t},a)-\hat{g}_{t}(H_{t},a)\|=o_{p}(n^{-1/4})\), then the product term is \(o_{p}(n^{-1/2})\) and is thus asymptotically negligible. This occurs when both \(\hat{g}(H_{t},a)\) and \(\hat{p}_{t}(a|H_{t})\) are based on correctly specified parametric models, but also achievable for many ML methods under structured assumptions on the nuisance parameters, for example, regularized estimators such as the Lasso, and random forest (Chernozhukov et al., 2018; Athey et al., 2018). Worth noticing, in this setting, completely nonparametric estimators are usually not an option as they tend to converge at rates slower than \(n^{-1/4}\) unless there are strong smoothness or sparsity assumptions in place (Kennedy, 2016)._ ### Estimation Efficiency Comparison Previous research (Qian et al., 2020; Shi et al., 2022) has demonstrated that a locally efficient, semiparametric estimator for the fully conditional causal effect (i.e., \(S_{t}=H_{t}\)) can be derived based on semiparametric efficiency theory (Robins, 1994; Newey, 1990; Tsiatis, 2007). These findings provide the motivation for the development of the proposed methods described above. In this section, we investigate the relative efficiency between the proposed estimators and WCLS, with a focus on the situation where \(S_{t}\subset H_{t}\). The term "more efficient" here means one method achieves lower asymptotic variance than another method for any linear combination of the causal parameter estimates, i.e., the asymptotic variance of \(c^{\top}\hat{\beta}\) is smaller for any \(c\in\mathbb{R}^{q}\). This is equivalent to the difference between the asymptotic variance matrices being negative semidefinite. #### 5.3.1 An Augmented R-WCLS Estimator To make full use of the estimates \(g_{t}(H_{t},A_{t})\), we propose an augmented version of the R-WCLS criterion. According to Equation (7), \(g_{t}(H_{t},A_{t})\) is only included to construct the plug-in estimator \(g_{t}(H_{t})\), but the difference between \(g_{t}(H_{t},1)\) and \(g_{t}(H_{t},0)\), i.e., the causal excursion effect under fully observed history, is not incorporated into the estimating equation. Therefore, an augmented R-WCLS criterion can efficiently use this information in the following manner: \[0=\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}\left(Y_{t+1}-g_{t}(H_{t})-(A_{t}-\tilde {p}_{t}(1|S_{t}))\left(f_{t}(S_{t})^{\top}\beta+\Delta_{t}^{\perp}\right)\right) \left(A_{t}-\tilde{p}_{t}(1|S_{t})\right)f_{t}(S_{t})\right], \tag{13}\] where \(\Delta_{t}^{\perp}\) denotes the projection of the fully conditional causal effect \(\beta(t;H_{t})\) onto the orthogonal complement of \(f_{t}(S_{t})\). The definition of the orthogonal complement can be found in (32), while the details on constructing \(\Delta_{t}^{\perp}\) are provided in Appendix D.1 for further reference. The estimator obtained from solving Equation (13) has similar asymptotic properties as R-WCLS and it leads to the following lemma. Proof can be found in Appendix D.2. **Lemma 5.5**.: _Let \(\hat{\beta}_{n}^{(AR)}\) denote the augmented R-WCLS estimator obtained from solving Equation (13). 
Under Assumption 2.2, given invertibility and moment conditions, \(\hat{\beta}_{n}^{(AR)}\) is consistent and asymptotically normal such that \(\sqrt{n}(\hat{\beta}_{n}^{(AR)}-\beta^{\star})\rightarrow\mathcal{N}(0,\Sigma _{AR})\), where \(\Sigma_{AR}\) is defined in Appendix D.2._ #### 5.3.2 Efficiency Improvement We compare the estimation efficiency of the meta-learning estimators proposed with that of the existing WCLS methods. Before presenting the theorem, we first list the additional required assumptions : **Assumption 5.6**.: _In addition to Assumption 2.2, we make the following assumptions:_ 1. _The residual_ \(e_{t}\coloneqq Y_{t+1}-g(H_{t},A_{t})\) _is uncorrelated with future states given history_ \(H_{t}\) _and treatment_ \(A_{t}\)_, i.e.,_ \(\mathbb{E}[e_{t}f_{t^{\prime}}(S_{t^{\prime}})\Delta_{t^{\prime}}^{\perp}|H_{ t},A_{t}]=0,\ t<t^{\prime}\)_;_ 2. _The estimator_ \(\hat{g}(H_{t},A_{t})\) _is consistent for the true conditional expectation_ \(g(H_{t},A_{t})\)_._ Assumption 5.6 (a) implies that the residuals do not convey any additional information beyond that which is already contained in the observed history. Such an assumption would hold when considering Markov Decision Processes (MDPs), where the current state and treatment determine future states. Assumption 5.6(b) is derived from the discussion in Theorem 5.3, where we require the error term to converge at a rate of \(o_{p}(n^{-1/2})\). This assumption, in conjunction with Assumption 2.2, ensures the consistency of the DR-WCLS estimator. **Theorem 5.7** (Efficiency gain over WCLS estimator).: _Under Assumption 2.2 and given invertibility and moment conditions:_ 1. _if Assumption_ 5.6_(a) also holds, the augmented R-WCLS estimator is guaranteed to be at least as efficient as WCLS;_ 2. _if Assumption_ 5.6_(b) holds, the DR-WCLS estimator is guaranteed to be at least as efficient as the R-WCLS estimator._ Theorem 5.7(a) requires Assumption 5.6(a) as a sufficient condition when considering a smoothed model for the causal excursion effect \(f_{t}(S_{t})^{\top}\beta^{\star}\) over time. However, Assumption 5.6(a) is not necessary, since efficiency gains can always be guaranteed if the causal excursion effect is modeled nonparametrically over time, i.e., \(f_{t}(S_{t})^{\top}\beta^{\star}_{t}\). Theorem 5.7(b) indicates further asymptotic efficiency gains by employing our proposed doubly-robust alternative. Bang and Robins (2005) do comment that doubly-robust estimators (i.e., DR-WCLS) may be less efficient in finite samples than the inverse probability of treatment weighting (IPTW) estimators (i.e., R-WCLS) under extremely strong model misspecification. In the current context, if \(\hat{p}(1|H_{t})\) is based on a correctly specified parametric model so that \(\left\|p_{t}(1|H_{t})-\hat{p}_{t}(1|H_{t})\right\|=O_{p}(n^{-1/2})\), then \(\hat{g}_{t}(H_{t},a_{t})\) need only to be consistent, i.e, \(\left\|g_{t}(H_{t},a_{t})-\hat{g}_{t}(H_{t},a_{t})\right\|=o_{p}(1)\) for the DR-WCLS estimator to be least asymptotically as efficient as the R-WCLS estimator. Our model-agnostic approach reduces the risk of severe model misspecification. The detailed proof is provided in Appendix E.1 and E.2. ## 6 Extensions ### Missing Data In mHealth studies, it is common for both the proximal outcome \(Y_{t+1}\) and elements of the history \(H_{t}\) to be missing. In the case study of a 6-month MRT on medical interns presented in Section 8, for example, the proximal outcomes are self-reported mood score and step count. 
Self-reports are often missing due to non-response, while step count can be missing due to individuals not wearing the wrist sensors. Current methods are lacking for addressing missing data in the context of MRT (Boruvka et al., 2018; Dempsey et al., 2020; Qian et al., 2020). Here we extend the DR-WCLS criterion to be robust to missing data. Specifically, we consider two types of missingness: (1) in the outcome \(Y_{t+1}\) and (2) in the observed history \(H_{t}\) being used in the supervised learning algorithm. We do _not_ consider missingness in the moderator set \(S_{t}\) which we assume is completely observed, but consider this important future work. Let \(R_{t}\) be the binary indicator of whether the proximal outcome \(Y_{t+1}\) is observed (\(R_{t}=1\)) or not (\(R_{t}=0\)) at decision time \(t\), and \(R_{t}(\bar{a}_{t})\) denotes the potential observation status. Clearly, missingness is a post-treatment variable and therefore we require additional assumptions: **Assumption 6.1**.: We assume consistency, missing at random, and positivity: * Consistency: For each \(t\leq T\), \(R_{t}(\bar{A}_{t})=R_{t}\), i.e., the observed missing data indicator is equal to the corresponding potential outcome observation status; * Missing at random: For each \(t\leq T\), \(R_{t}(\bar{a}_{t})\) is independent of \(A_{t}\) conditional on the observed history \(H_{t}\); * Positivity: if the joint density {\(R_{t}=r\),\(H_{t}=h_{t},A_{t}=a_{t}\)} is greater than zero, then \(p(R_{t}=1|H_{t},A_{t})=p(R_{t}|H_{t})>0\). Under Assumption 6.1, we can derive a doubly robust extension for missing data by augmenting the DR-WCLS criterion: \[\mathbb{P}_{n}\bigg{[}\sum_{t=1}^{T}\tilde{\sigma}_{t}^{2}(S_{t} )\bigg{(}\frac{\mathbf{1}(R_{t}=1)}{p(R_{t}|H_{t})}\frac{W_{t}(A_{t}-\tilde{p }_{t}(1|S_{t}))(Y_{t+1}-g_{t}(H_{t},A_{t}))}{\tilde{\sigma}_{t}^{2}(S_{t})}+ \beta(t;H_{t})-f_{t}(S_{t})^{\top}\beta\bigg{)}f_{t}\bigg{]}=0. \tag{14}\] Equation (14) is equal to (8) except that we multiply the first term by the inverse probability of missing data. As the data-missing mechanism is a complex nuisance function, it can also be considered part of the meta-learning algorithm. Theorem 5.3 can be extended to the current setting, leading to Corollary 6.1.1. See Appendix F for the proofs. **Corollary 6.1.1**.: _(Asymptotic property for DR-WCLS estimator with missing data) Under Assumption 6.1, given invertibility and moment conditions, the estimator \(\hat{\beta}_{n}\) that solves (14) _is subject to an error term, which (up to a multiplicative constant) is bounded above by:_ \[\mathbf{\hat{B}}^{R}=\mathbb{E}\left[\sum_{t=1}^{T}\sum_{a\in\{0,1\}}\|p(R_{t}=1| H_{t})p_{t}(a|H_{t})-\hat{p}(R_{t}=1|H_{t})\hat{p}_{t}(a|H_{t})\|\,\|g_{t}(H_{t},a)- \hat{g}_{t}(H_{t},a)\|\right]. \tag{15}\] _If we further assume \(\mathbf{\hat{B}}^{R}=o_{p}(n^{-1/2})\), \(\hat{\beta}_{n}\) is consistent and asymptotically normal such that \(\sqrt{n}(\hat{\beta}_{n}-\beta^{\star})\rightarrow\mathcal{N}(0,\Sigma_{DR}^ {R})\), where \(\Sigma_{DR}^{R}\) is defined in Appendix F._ ### Lagged Effects Beyond the interest on proximal outcomes, additional attention has been paid to lagged outcomes defined over future decision points with a fixed window length \(\Delta>1\), denoted as \(Y_{t,\Delta}\), which is a known function of the observed history and latest treatment: \(Y_{t,\Delta}=y(H_{t+\Delta-1},A_{t+\Delta-1})\). In practice, \(\Delta\) is explicitly chosen to avoid the curse of the horizon problem (Dempsey et al., 2020). 
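Relative to the DR-WCLS sketch given earlier, the correction in Equation (14) amounts to zeroing out the residual term whenever \(Y_{t+1}\) is unobserved and reweighting it by an estimate of \(p(R_{t}=1\,|\,H_{t})\) otherwise, with the missingness model fit per fold like any other nuisance; a minimal sketch with illustrative argument names:

```python
import numpy as np

def dr_wcls_pseudo_missing(Y, A, R, g1, g0, W, p_tilde, rho_hat):
    """DR-WCLS pseudo-outcome under missing proximal outcomes, following Equation (14).

    R is the indicator that Y_{t+1} was observed and rho_hat estimates p(R_t = 1 | H_t);
    Y may be NaN where R = 0, since the residual term is only evaluated on observed rows.
    All arguments are aligned 1-d arrays over (individual, decision time) rows.
    """
    sigma2 = p_tilde * (1 - p_tilde)
    g_obs = np.where(A == 1, g1, g0)                   # g_hat(H_t, A_t)
    residual = np.zeros(len(Y))
    obs = R == 1
    residual[obs] = (W[obs] * (A[obs] - p_tilde[obs]) * (Y[obs] - g_obs[obs])
                     / (sigma2[obs] * rho_hat[obs]))
    # The effect term g_hat(H_t, 1) - g_hat(H_t, 0) is unchanged by the missing-data weighting.
    return residual + (g1 - g0)
```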
While this has been true to date, we acknowledge that larger \(\Delta\) will be more common as MRT data sets grow in size and these longer-term outcomes become of primary interest. Under Assumption 2.1, the causal estimand for lagged effect can be expressed in terms of observable data (Shi et al., 2022): \[\beta_{\mathbf{p},\pi}(t+\Delta;s)=\mathbb{E}\left[\mathbb{E}_{\mathbf{p}} \left[W_{t,\Delta-1}Y_{t,\Delta}\ |\ A_{t}=1,H_{t}\right]-\mathbb{E}_{\mathbf{p}}\left[W_{t,\Delta-1}Y_{t,\Delta} \ |\ A_{t}=0,H_{t}\right]\ |\ S_{t}=s\right], \tag{16}\] where \(W_{t,u}=\prod_{s=1}^{u}\pi_{t}(A_{t+s}|H_{t+s})/p_{t}(A_{t+s}|H_{t+s})\), with \(W_{t,0}=1\). Here, we assume the reference distribution for treatment assignments from \(t+1\) to \(t+\Delta-1\) (\(\Delta>1\)) is given by a randomization probability generically represented by \(\{\pi_{u}(a_{u}|H_{u})\}_{u=t+1}^{t+\Delta-1}\). This generalization contains previous definitions such as lagged effects (Boruvka et al., 2018) where \(\pi_{u}=p_{u}\) and deterministic choices such as \(a_{t+1:(t+\Delta-1)}=\mathbf{0}\)(Dempsey et al., 2020; Qian et al., 2020), where \(\pi_{u}=\mathbf{1}\{a_{u}=0\}\) and \(\mathbf{1}\{\cdot\}\) is the indicator function. A brief discussion in Shi et al. (2022) presented an approach to improve the estimation efficiency of the lagged effect and alleviate the curse of the horizon (Liu et al., 2018). Specifically, it was shown that an optimal estimating function will be orthogonal to the score functions for the treatment selection probabilities (Bickel et al., 1993). This implies the estimator may be improved by replacing the estimating equation by itself minus its projection on the score functions for the treatment selection probabilities (Murphy et al., 2001). This can be done in the case of the DR-WCLS estimating equation as follows: \[\begin{split}\mathbb{P}_{n}\Bigg{[}\sum_{t=1}^{T-\Delta+1}\Bigg{[}& W_{t}(A_{t}-\tilde{p}_{t}(1|S_{t}))\Bigg{(}W_{t,\Delta-1}\left(Y_{t,\Delta}-g_{t+ \Delta-1}(H_{t+\Delta-1},A_{t+\Delta-1})\right)\\ &-\sum_{u=0}^{\Delta-2}W_{t,u}\left(g_{t+u}(H_{t+u},A_{t+u})-\sum _{a_{t+u+1}}\pi(a_{t+u+1}|H_{t+u+1})g_{t+u+1}(H_{t+u+1},a_{t+u+1})\right)\Bigg{)} \\ &+\tilde{\sigma}_{t}^{2}(S_{t})\left(\beta(t+\Delta,H_{t})-f_{t}( S_{t})^{\top}\beta\right)\Bigg{]}f_{t}(S_{t})^{\top}\Bigg{]}=0,\end{split} \tag{17}\] where \(g_{t+u}(H_{t+u},A_{t+u})\) is a working model for \(\mathbb{E}[W_{t+u+1:t+\Delta-1}Y_{t,\Delta}|H_{t+u},A_{t+u}]\). Specifically, \(g_{t+\Delta-1}(H_{t+\Delta-1},A_{t+\Delta-1})=\mathbb{E}[Y_{t,\Delta}|H_{t+ \Delta-1},A_{t+\Delta-1}]\), and \(\mathbb{E}[g_{t+u-1}(H_{t+u-1},A_{t+u-1})]=\mathbb{E}\left[\sum_{a_{t+u}}\pi_ {t+u}(a_{t+u}|H_{t+u})g_{t+u}(H_{t+u},a_{t+u})\right]\). The parameterized linear working model of the conditional expectation \(g_{t+u}(H_{t+u},A_{t+u})\) in Murphy et al. (2001) can be improved by leveraging supervised learning algorithms to construct data-adaptive estimates. 
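For reference, the weights \(W_{t,u}\) in (16) and (17) are simply cumulative products of probability ratios along the window, which is also why the variance can grow quickly with \(\Delta\); a small sketch with illustrative argument names:

```python
import numpy as np

def lagged_weights(pi_ref, p_obs):
    """Cumulative-product weights W_{t,0}, ..., W_{t,Delta-1} from Equation (16).

    pi_ref and p_obs are length-(Delta - 1) arrays holding, for the treatments actually
    received at times t+1, ..., t+Delta-1, the reference probabilities pi_u(A_u | H_u)
    and the randomization probabilities p_u(A_u | H_u).
    """
    ratios = np.asarray(pi_ref, dtype=float) / np.asarray(p_obs, dtype=float)
    return np.concatenate([[1.0], np.cumprod(ratios)])   # W_{t,0} = 1 by definition
```

With a deterministic reference such as \(\pi_{u}=\mathbf{1}\{a_{u}=0\}\), the numerator entries reduce to indicators that no treatment was actually delivered at those decision points.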
Based on the estimating Equation (17), the estimator \(\hat{\beta}_{n}^{\Delta}\) has the following property: **Corollary 6.1.2** (Asymptotic property for DR-WCLS estimator for lagged outcomes).: _Given invertibility and moment conditions, the estimation \(\hat{\beta}_{n}^{\Delta}\) obtained by solving Equation (17) is subject to an error term, which is (up to a multiplicative constant) bounded above by \(\sum_{u=0}^{\Delta-1}\mathbf{\hat{B}}_{u}\), where_ \[\mathbf{\hat{B}}_{u}=\mathbb{E}\Bigg{[}\sum_{t=1}^{T-u}\sum_{a_{t+u}}\left\| \hat{p}_{t+u}(a_{t+u}|H_{t+u})-p_{t+u}(a_{t+u}|H_{t+u})\right\|\left\|\hat{g}_ {t+u}(H_{t+u},a_{t+u})-g_{t+u}(H_{t+u},a_{t+u})\right\|\Bigg{]}. \tag{18}\] _If we assume that \(\mathbf{\hat{B}}_{u}=o_{p}(n^{-1/2})\), the estimator \(\hat{\beta}_{n}^{\Delta}\) is consistent and asymptotically normal such that \(\sqrt{n}(\hat{\beta}_{n}^{\Delta}-\beta^{\star})\rightarrow\mathcal{N}(0,\Sigma _{DR}^{\Delta})\), where \(\Sigma_{DR}^{\Delta}\) is defined in Appendix G._ When \(\Delta\) is large, correctly specifying the conditional expectation model \(g_{t}(H_{t},A_{t})\) is particularly useful to avoid the variance estimation growing exponentially due to the weight \(W_{t,\Delta}\), thus offering a remedy for the curse of the horizon. ### Binary Outcomes Qian et al. (2020) proposed an estimator of the marginal excursion effect (EMEE) by adopting a log relative risk model to examine whether a particular time-varying intervention has an effect on a longitudinal binary outcome. The causal excursion effect is defined by: \[\beta_{\mathbf{p}}(t;s) =\log\frac{\mathbb{E}\left[Y_{t+1}(\bar{A}_{t-1},1)\left|\,S_{t}( \bar{A}_{t-1})=s\right]}{\mathbb{E}\left[Y_{t+1}(\bar{A}_{t-1},0)\left|\,S_{t} (\bar{A}_{t-1})=s\right]}\right. \tag{19}\] \[=\log\frac{\mathbb{E}\left[\mathbb{E}\left[Y_{t+1}\left|\,A_{t}=1,H_{t}\right]\,\left|\,S_{t}=s\right]}{\mathbb{E}\left[\mathbb{E}\left[Y_{t+1} \left|\,A_{t}=0,H_{t}\right]\,\left|\,S_{t}=s\right]\right.}. \tag{20}\] Assuming \(\beta_{\mathbf{p}}(t;s)=f_{t}(s)^{\top}\beta^{\star}\), where \(f_{t}(s)\in\mathbb{R}^{q}\) is a feature vector of a \(q\)-dimension and only depends on state \(s\) and decision point \(t\), a consistent estimator for \(\beta^{\star}\) can be obtained by solving a set of weighted estimating equations: \[\mathbb{P}_{n}\left[\sum_{t=1}^{T}W_{t}e^{-A_{t}f_{t}(S_{t})^{\top}\beta} \left(Y_{t+1}-e^{g_{t}(H_{t})^{\top}\alpha+A_{t}f_{t}(S_{t})^{\top}\beta} \right)\left(\begin{array}{c}g_{t}(H_{t})\\ (A_{t}-\tilde{p}_{t}(1\mid S_{t}))f_{t}(S_{t})\end{array}\right)\right]=0. \tag{21}\] See Qian et al. (2020) for more details on the estimand formulation and consistency, asymptotic normality, and robustness properties of the EMEE estimation method. Based on Equation (21), we propose a doubly-robust alternative of EMEE, termed "DR-EMEE". 
A doubly robust estimator for the log-relative risk is constructed by solving the following set of estimating equations: \[\mathbb{P}_{n}\bigg{[}\sum_{t=1}^{T}\tilde{\sigma}_{t}^{2}(S_{t})\bigg{(}\frac{W_{t}e^{-A_{t}f_{t}(S_{t})^{\top}\beta}(A_{t}-\tilde{p}_{t}(1|S_{t}))(Y_{t+1}-g_{t}(H_{t},A_{t}))}{\tilde{\sigma}_{t}^{2}(S_{t})}+e^{-f_{t}(S_{t})^{\top}\beta}g_{t}(H_{t},1)-g_{t}(H_{t},0)\bigg{)}f_{t}(S_{t})\bigg{]}=0. \tag{22}\] **Corollary 6.1.3** (Asymptotic property for DR-EMEE estimator).: _Upon correctly specifying either the conditional expectation model \(g_{t}(H_{t},A_{t})\) or the treatment randomization probability \(p_{t}(A_{t}|H_{t})\), given invertibility and moment conditions, the estimator \(\hat{\beta}_{n}\) obtained from solving Equation (22) is consistent and asymptotically normal such that \(\sqrt{n}(\hat{\beta}_{n}-\beta^{\star})\rightarrow\mathcal{N}(0,\Sigma_{DR}^{b})\), where \(\Sigma_{DR}^{b}\) is defined in Appendix H._ ## 7 Simulation ### Simulation Setup To evaluate the proposed estimators, we extend the simulation setup in Boruvka et al. (2018). We first present a base data generation model. Consider an MRT with a known randomization probability, where \(g(H_{t})\) in the generative model is a complex function of the high-dimensional history \(H_{t}\). Let \(S_{t}\in\{-1,1\}\) denote a single state variable, which is an effect moderator, and \(S_{t}\subset H_{t}\). We have the generative model as: \[Y_{t,j}=g_{t}(H_{t})+\big{(}A_{t,j}-p_{t}(1|H_{t})\big{)}(\beta_{10}+\beta_{11}S_{t,j})+e_{t,j}. \tag{23}\] The randomization probability is \(p_{t}(1|H_{t})=\text{expit}(\eta_{1}A_{t-1,j}+\eta_{2}S_{t,j})\) where \(\text{expit}(x)=(1+\text{exp}(-x))^{-1}\); the state dynamics are given by \(\mathbb{P}(S_{t,j}=1|A_{t-1},H_{t-1})=1/2\) with \(A_{0}=0\), and the error term satisfies \(e_{t,j}\sim\mathcal{N}(0,1)\) with \(\text{Corr}(e_{u,j},e_{t,j^{\prime}})=\mathbf{1}[j=j^{\prime}]\,0.5^{|u-t|/2}\), so errors are independent across individuals. As in Boruvka et al. (2018), we set \(\eta_{1}=-0.8,\eta_{2}=0.8,\beta_{10}=-0.2\), and \(\beta_{11}=0.2\). The marginal proximal effect is equal to \(\beta_{10}+\beta_{11}\mathbb{E}\left[S_{t,j}\right]=\beta_{10}=-0.2\). The marginal treatment effect is thus constant in time and is given by \(\beta_{0}^{\star}=\beta_{10}=-0.2\). In the following, we set the complex function \(g(H_{t})\) to be a decision tree, and the flow chart in Figure 2 in Appendix I visualizes the decision-making process as well as the outcomes. We consider estimation of the fully marginal proximal treatment effect, thus \(f_{t}(S_{t})=1\) in the analysis model (i.e., \(S_{t}=\emptyset\)). The results below report the average point estimate (Est), standard error (SE), and 95% confidence interval coverage probabilities (CP) across 1000 replicates. Here, we report results with \(N=100\) showing the relative advantage of R-WCLS and DR-WCLS over WCLS. ### Estimation Methods WCLS: The estimation model in Boruvka et al. (2018) assumes a linear function for the control variables, i.e., \(g(H_{t};\alpha)=g(H_{t})^{\top}\alpha\). This method is guaranteed to produce an unbiased estimate with a valid confidence interval. Thus, it will be used as a reference for comparison with the estimation results from the following methods. R-WCLS: This estimation model incorporates modern supervised learning techniques to construct \(\hat{g}(H_{t})\) for the control variables. The plug-in estimators \(\hat{g}(H_{t},1)\) and \(\hat{g}(H_{t},0)\) are learned separately with the corresponding treatment assignments in the training data. 
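Referring back to the Section 7.1 setup, the following is a compact sketch of the base generative model in Equation (23) under the stated parameter values; the tree-based \(g_{t}(H_{t})\) of Appendix I is replaced by a labelled placeholder, and the within-person error correlation \(0.5^{|u-t|/2}\) is produced by a stationary AR(1) process with coefficient \(\sqrt{0.5}\).

```python
import numpy as np

def simulate_mrt(n=100, T=30, beta10=-0.2, beta11=0.2, eta1=-0.8, eta2=0.8, seed=0):
    """Sketch of the Section 7.1 generative model (Equation (23)) in long format.

    g_t(H_t) is a simple placeholder here; the paper uses a decision tree over richer
    history information (Figure 2, Appendix I).
    """
    rng = np.random.default_rng(seed)
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))
    phi, innov_sd = np.sqrt(0.5), np.sqrt(0.5)      # stationary AR(1): Corr(e_u, e_t) = 0.5^{|u-t|/2}
    rows = []
    for j in range(n):
        a_prev, e_prev = 0, rng.normal()            # A_0 = 0; stationary start for the errors
        for t in range(1, T + 1):
            S = rng.choice([-1, 1])                 # P(S_t = 1 | past) = 1/2
            p = expit(eta1 * a_prev + eta2 * S)     # randomization probability p_t(1 | H_t)
            A = rng.binomial(1, p)
            e = phi * e_prev + innov_sd * rng.normal()
            g = 0.5 * S - 0.3 * a_prev              # placeholder for the tree-based g_t(H_t)
            Y = g + (A - p) * (beta10 + beta11 * S) + e
            rows.append((j, t, S, A, p, Y))
            a_prev, e_prev = A, e
    return np.array(rows)                           # columns: id, t, S_t, A_t, p_t, Y_{t+1}

data = simulate_mrt()
print(data.shape)                                   # (100 * 30, 6)
```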
As the number of control variables is relatively small in this simulation study, we use random forests. The plug-in estimator is then \(\hat{g}(H_{t})=\tilde{p}_{t}(1|S_{t})\hat{g}(H_{t},1)+(1-\tilde{p}_{t}(1|S_{t}))\hat{g}(H_{t},0)\). **DR-WCLS.** The plug-in estimators \(\hat{g}(H_{t},1)\) and \(\hat{g}(H_{t},0)\) can be obtained the same way as above, and using these two estimators, we can get \(\hat{\beta}(t;H_{t})=\hat{g}(H_{t},1)-\hat{g}(H_{t},0)\). ### Simulation Results Table 1 reports the simulation results. "%RE gain" indicates the percentage of times we achieve an efficiency gain out of 1000 Monte Carlo replicates. "mRE" stands for the average relative efficiency, and "RSD" represents the relative standard deviation between the two estimates. The proposed R-WCLS and DR-WCLS methods significantly improve the efficiency of the WCLS when estimating the fully marginal causal effect. In addition, we find that mRE varies with \(\beta_{11}\). R-WCLS has higher mRE than DR-WCLS when \(\beta_{11}\) is small, and this reverses when \(\beta_{11}\) increases. In our simulation, \(\beta_{11}\) being large indicates that an important moderator \(S_{t,j}\) was not included in the causal effect model (i.e., \(f_{t}(S_{t})^{\top}\beta=\beta_{0}\)). Therefore, when model misspecification occurs, DR-WCLS shows better performance than R-WCLS. \begin{table} \begin{tabular}{c c c c c c c c} Method & \(\beta_{11}\) & Est & SE & CP & \%RE gain & mRE & RSD \\ \hline \multirow{3}{*}{WCLS} & 0.2 & -0.198 & 0.049 & 0.946 & - & - & - \\ & 0.5 & -0.195 & 0.050 & 0.945 & - & - & - \\ & 0.8 & -0.193 & 0.053 & 0.951 & - & - & - \\ \hline \multirow{3}{*}{R-WCLS} & 0.2 & -0.200 & 0.044 & 0.950 & 100\% & 1.231 & 1.260 \\ & 0.5 & -0.199 & 0.045 & 0.944 & 100\% & 1.218 & 1.255 \\ & 0.8 & -0.200 & 0.048 & 0.956 & 99.9\% & 1.203 & 1.236 \\ \hline \multirow{3}{*}{DR-WCLS} & 0.2 & -0.200 & 0.045 & 0.954 & 99.7\% & 1.216 & 1.249 \\ & 0.5 & -0.199 & 0.045 & 0.947 & 99.9\% & 1.228 & 1.261 \\ \cline{1-1} & 0.8 & -0.200 & 0.047 & 0.954 & 99.7\% & 1.254 & 1.282 \\ \hline \end{tabular} \end{table} Table 1: Fully marginal causal effect estimation efficiency comparison. ## 8 Intern Health Study: A Worked Example The Intern Health Study (IHS) is a 6-month micro-randomized trial on medical interns (NeCamp et al., 2020), which aimed to investigate when to provide mHealth interventions to individuals in stressful work environments to improve their behavior and mental health. In this section, we evaluate the effectiveness of targeted notifications in improving individuals' moods and step counts. The exploratory and MRT analyses conducted in this paper focus on weekly randomization; thus, an individual was randomized to receive mood, activity, sleep, or no notifications with equal probability (1/4 each) every week. We choose the outcome \(Y_{t+1,j}\) as the self-reported mood score (a Likert scale taking values from 1 to 10) and step count (cubic root) for individual \(j\) in study week \(t\). Missing data occurred throughout the trial when interns did not complete the self-reported mood survey or were not wearing their assigned Fitbit wrist-worn device; thus, multiple imputation was originally used to impute missing daily data. See NeCamp et al. (2020) for further details. The following analysis is based on one of the imputed data sets. The data set used in the analyses contains 1562 participants.
The average weekly mood score when a notification is delivered is 7.14, and 7.16 when there is no notification; the average weekly step count (cubic root) is 19.1 both when a notification is delivered and when there is no notification. In Sections 8.1 and 8.2, we evaluate the targeted notification treatment effect for medical interns using our proposed methods and WCLS. ### Comparison of the Marginal Effect Estimation First, we are interested in assessing the fully marginal excursion effect (i.e., \(\beta(t)=\beta_{0}^{\star}\)). For an individual \(j\), the study week is indexed by the subscript \(t\). \(Y_{t+1,j}\) is the self-reported mood score or step count (cubic root) for individual \(j\) in study week \(t+1\). \(A_{t}\) indicates whether the specific type of notification that targets the outcome was sent. For example, if the outcome is the self-reported mood score, sending mood notifications is the action; thus, \(\mathbb{P}(A_{t}=1)=0.25\). We analyze the marginal causal effect \(\beta_{0}\) of the targeted notifications on self-reported mood score and step count using the following model for WCLS: \[Y_{t+1,j}\sim g_{t}(H_{t,j})^{\top}\alpha+(A_{t,j}-\tilde{p}_{t})\beta_{0}.\] The term \(g_{t}(H_{t})^{\top}\alpha\) represents a linear working model of prognostic control variables including two baseline characteristics, study week \(t\), and the prior week's outcome \(Y_{t,j}\). For the R-WCLS and DR-WCLS methods, we include a total of 12 control variables and use random forests to construct the plug-in estimators \(\hat{g}(H_{t},A_{t})\) as described in Section 7.2. For a detailed description of the control variables, see Appendix J. We report various estimators in Table 2 and present more details in Appendix J. In comparison with WCLS, the R-WCLS and DR-WCLS estimates show a tangible improvement in the standard errors. We conclude that sending activity notifications can increase (the cubic root of) step counts by 0.07, and that sending mood notifications lowers users' mood scores by 0.017, both statistically significant at the 95% level. By comparison, R-WCLS and DR-WCLS have enough power to detect a causal relationship between sending mobile prompts and lower mood scores, while WCLS does not. ### Time-varying Treatment Effect Estimation For further analysis, we include study week in the moderated treatment effect model, \(\beta(t)=\beta_{0}^{\star}+\beta_{1}^{\star}t\), and examine how the treatment effect varies over time. Estimated time-varying treatment moderation effects and their relative efficiency are shown in Figure 1. The shaded area in Figure 1 represents the 95% confidence band of the moderation effects as a function of study week. Narrower confidence bands were observed for estimators constructed using both the R-WCLS and DR-WCLS methods. Relative efficiencies between 1.2 and 1.3 were observed across study weeks.
\begin{table} \begin{tabular}{c c c c c c} Outcome & Method & Estimate & Std.err & P-value & RE \\ \hline \multirow{3}{*}{Mood} & WCLS & -0.016 & \(9.03\times 10^{-3}\) & 0.078 & - \\ & R-WCLS & -0.017 & \(8.14\times 10^{-3}\) & **0.038** & 1.23 \\ & DR-WCLS & -0.017 & \(8.18\times 10^{-3}\) & **0.042** & 1.22 \\ \hline \multirow{3}{*}{Steps} & WCLS & 0.070 & \(2.41\times 10^{-2}\) & **0.004** & - \\ & R-WCLS & 0.065 & \(2.34\times 10^{-2}\) & **0.005** & 1.06 \\ \cline{1-1} & DR-WCLS & 0.070 & \(2.37\times 10^{-2}\) & **0.003** & 1.03 \\ \hline \end{tabular} \end{table} Table 2: IHS Study: Fully marginal treatment effect estimation. Based on the results above, we can conclude that sending notifications does not have a significant impact on mood scores for the first 12 weeks. Nevertheless, sending notifications later in the study is less likely to improve the mood of participants. In light of this, it might not be ideal to overburden participants for an extended period if the notifications do not serve any therapeutic purpose. Additionally, we assessed the time-varying treatment effect on step count, which is detailed in Appendix J. Figure 1: Causal effect estimates with confidence intervals of R-WCLS (**left**) and DR-WCLS (**middle**), and their relative efficiency in comparison with WCLS (**right**). ### Treatment Effect Estimation with Missing Data Next, we apply our proposed methods to evaluate the treatment effect based on the raw observed data rather than the imputed dataset. To maintain consistency with previous analyses, we still use the weekly average mood score and step count (cubic root) as outcomes. Self-reported mood scores and step counts are collected every day, so if no records were observed for the entire week, we mark the weekly outcome as missing. Otherwise, the average mood score and step count (cubic root) are calculated as outcomes. For the mood outcome, 31.3% of person-weeks are missing in total, and for the step count outcome, 48.1% of person-weeks are missing. We carried out the same analysis as above for marginal treatment effects. Inverse probability weighting is used when implementing estimation using the WCLS and R-WCLS criteria. Estimated treatment effects and their relative efficiency are shown in Table 3. \begin{table} \begin{tabular}{c c c c c} Outcome & Method & Estimate & Std.err & P-value \\ \hline \multirow{3}{*}{Mood} & WCLS & \(7.71\times 10^{-3}\) & \(1.73\times 10^{-2}\) & 0.655 \\ & R-WCLS & \(1.81\times 10^{-3}\) & \(1.62\times 10^{-2}\) & 0.911 \\ & DR-WCLS & \(3.00\times 10^{-3}\) & \(1.68\times 10^{-2}\) & 0.858 \\ \hline \multirow{3}{*}{Steps} & WCLS & \(6.71\times 10^{-2}\) & \(3.94\times 10^{-2}\) & 0.088 \\ & R-WCLS & \(7.43\times 10^{-2}\) & \(4.05\times 10^{-2}\) & 0.067 \\ \cline{1-1} & DR-WCLS & 0.104 & \(4.09\times 10^{-2}\) & **0.011** \\ \hline \end{tabular} \end{table} Table 3: IHS Study: Fully marginal treatment effect estimation with missing outcomes. It is no longer evident that mood notifications have a significant overall impact on participants' moods, but the step count analysis still indicates a positive effect of sending activity notifications on participants' physical activity levels. ## 9 Discussion Scientists wish to leverage the large volume of data generated by mobile health systems to better answer scientific questions regarding time-varying intervention effects. The application of ML algorithms can make good use of high-dimensional mobile health data to provide high-quality predictions of proximal outcomes. In this paper, we proposed two methods, termed R-WCLS and DR-WCLS respectively, to estimate time-varying treatment effects and illustrated both the theoretical and practical benefits of incorporating machine learning algorithms into the inference procedure. In particular, both the R-WCLS and DR-WCLS criteria provide sufficient flexibility in their nuisance model specifications and improve estimation efficiency over existing approaches. A crucial feature of the DR-WCLS criterion that is not shared by the WCLS and R-WCLS criteria is that it is doubly robust. This feature is critical even in MRTs, where the treatment model may be misspecified and missing data are common.
The DR-WCLS criterion is especially powerful when both the treatment randomization probability and the conditional expectation model are accurately specified, which results in the highest relative asymptotic efficiency. Although this work represents a major step forward in analyzing MRT data, there are still some interesting questions to explore in the future. For example, all the discussion above relies on sample splitting having little to no impact on the estimation asymptotically, but the effect of the particular random split on the estimate can be important in finite samples. Chernozhukov et al. (2018) introduced a finite-sample adjustment to incorporate the uncertainty induced by sample splitting, which can be a straightforward and useful extension to our proposed methods. In addition, the accuracy of different supervised learning algorithms in estimating nuisance components might differ, and it is worth researching whether there are methods that work best in practice. Relatedly, the nuisance components can be learned using non-parametric models, such as kernel regression. These methods enable us to obtain an explicit error bound for the DR-WCLS estimator, which yields sufficient conditions on the nuisance model's smoothness and dimension for the estimator to be \(\sqrt{n}\)-consistent (Kennedy, 2020). Last but not least, Han and Wang (2013) proposed a "multiply robust" estimator that is more robust than doubly robust estimators, allowing multiple models for both the propensity score and the outcome regression. This estimator is consistent if any of the multiple models is correctly specified. It is worth considering extending our proposed method to a multiply robust version.
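As a concrete illustration of the nuisance estimation and sample-splitting ideas discussed above, the following is a minimal Python sketch of a cross-fitted, doubly robust estimate of a fully marginal effect (\(f_{t}(S_{t})=1\)). The simulated toy data, the two-fold split, the random-forest learners, and the simplified pseudo-outcome with a known, constant randomization probability are assumptions made for the example; this is not the authors' implementation of DR-WCLS, and the reported standard error ignores nuisance-estimation uncertainty.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Toy MRT-style data in long format (illustrative only, not the IHS data):
# H holds a few history covariates, A is the randomized treatment with a
# known probability p, and Y is the proximal outcome.
n, T, d = 200, 10, 3
H = rng.normal(size=(n * T, d))
p = np.full(n * T, 0.5)                       # known randomization probability
A = rng.binomial(1, p)
beta_true = -0.2
Y = H[:, 0] + 0.5 * H[:, 1] ** 2 + (A - p) * beta_true + rng.normal(size=n * T)

# Cross-fitted plug-in estimators g_hat(H, a): nuisance models are fit on one
# fold and evaluated on the held-out fold, as in sample splitting.
g1_hat, g0_hat = np.zeros(n * T), np.zeros(n * T)
for train, test in KFold(n_splits=2, shuffle=True, random_state=0).split(H):
    for a, out in ((1, g1_hat), (0, g0_hat)):
        idx = train[A[train] == a]
        rf = RandomForestRegressor(n_estimators=200, random_state=0)
        out[test] = rf.fit(H[idx], Y[idx]).predict(H[test])

# DR-WCLS-style pseudo-outcome for a fully marginal effect (f_t = 1):
# an inverse-probability-weighted residual plus the plug-in contrast.
w = A / p - (1 - A) / (1 - p)
g_a = np.where(A == 1, g1_hat, g0_hat)
pseudo = w * (Y - g_a) + (g1_hat - g0_hat)

beta_hat = pseudo.mean()
se_naive = pseudo.std(ddof=1) / np.sqrt(len(pseudo))   # ignores nuisance noise
print(f"beta_hat = {beta_hat:.3f} (naive SE {se_naive:.3f})")
```

Averaging the pseudo-outcome recovers the marginal effect as long as either the outcome model or the (here known) randomization probability is correct, which is the double-robustness property emphasized above.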
2302.04364
Exotic Baryons in Hot Neutron Stars
We study the nuclear isentropic equation of state for a stellar matter composed of nucleons, hyperons, and $\Delta$-resonances. We investigate different snapshots of the evolution of a neutron star, from its birth as a lepton-rich protoneutron star in the aftermath of a supernova explosion to a lepton-poor regime when the star starts cooling to a catalyzed configuration. We use a relativistic model within the mean-field approximation to describe the hot stellar matter and adopt density-dependent couplings adjusted by the DDME2 parameterization. We use baryon-meson couplings for the spin-$1/2$ baryonic octet and spin-$3/2$ decuplet determined in a unified manner relying on $\text{SU}(6)$ and $\text{SU}(3)$ symmetry arguments. We observe that $\Lambda$ is the dominant exotic particle in the star at different entropies for both neutrino-free and neutrino-trapped stellar matter. For a fixed entropy, the inclusion of new particles (hyperons and/or delta resonances) in the stellar matter decreases the temperature. Also, an increase in entropy per baryon ($1\;\text{to}\; 2$) with decreasing lepton number density ($0.4\;\text{to}\; 0.2$) leads to an increase in stellar radii and a decrease in its mass due to neutrino diffusion. In the neutrino transparent matter, the radii decrease from entropy per baryon $2$ to $T\,=\,0$ without a significant change in stellar mass.
Adamu Issifu, Kauan D. Marquez, Mateus R. Pelicer, Débora P. Menezes
2023-02-08T23:05:40Z
http://arxiv.org/abs/2302.04364v3
# Exotic Baryons in Hot Neutron Stars ###### Abstract We study the nuclear isentropic equation of state for a stellar matter composed of nucleons, hyperons, and \(\Delta\)-resonances. We investigate different snapshots of the evolution of a neutron star, from its birth as a lepton-rich protoneutron star in the aftermath of a supernova explosion to a lepton-poor regime when the star starts cooling to a catalyzed configuration. We use a relativistic model within the mean-field approximation to describe the hot stellar matter and adopt density-dependent couplings adjusted by the DDME2 parameterization. We use baryon-meson couplings for the spin-1/2 baryonic octet and spin-3/2 decuplet determined in a unified manner relying on SU(6) and SU(3) symmetry arguments. We observe that \(\Lambda\) is the dominant exotic particle in the star at different entropies for both neutrino-free and neutrino-trapped stellar matter. For a fixed entropy, the inclusion of new particles (hyperons and/or delta resonances) in the stellar matter decreases the temperature. Also, an increase in entropy per baryon (1 to 2) with decreasing lepton number density (0.4 to 0.2) leads to an increase in stellar radii and a decrease in its mass due to neutrino diffusion. In the neutrino transparent matter, the radii decrease from entropy per baryon 2 to \(T\,=\,0\) without a significant change in stellar mass. keywords: stars: neutron, protostars, supernova remnants ## 1 Introduction The equation of state (EoS) is an essential tool for studying strongly interacting matter and performing astrophysical simulations of compact objects and has already been exploited in several forms (Typel et al., 2022; Dutra et al., 2014). However, the microscopic composition of compact objects is still an open problem, and its resolution requires an enhanced understanding of the dense region of the EoS, both to understand current data and also to accommodate new observational advancements. Notable among emerging events that require the application of the EoS are multimessenger observations of binary neutron star mergers, isolated X-ray pulsars, and radio pulsars. The major constraints imposed on the EoS to study these objects include \(\beta\)-equilibrium, charge neutrality, and lepton number conservation - see Baym et al. (2018); Menezes (2021) for recent reviews and references therein. A hot and dense proto-neutron star (PNS) is a neutrino-rich object formed during a core-collapse supernova explosion or in a binary neutron star merger. The PNS evolves through several processes, including heat transfer, neutrino diffusion, deleptonization, and entropy gradients. When the star emits enough radiation, its mass decreases and its temperature drops to a point where matter becomes neutrino transparent and continues cooling till it catalyzes into a cold neutron star (Glendenning, 2012). The neutrino signature at the later stages of the evolution is determined by microscopic properties such as the EoS and its composition, neutrino opacity, and other microphysical properties that impact neutrino diffusion and finite entropy systems (Sedrakian & Harutyunyan, 2022; Roberts et al., 2012; Prakash et al., 1997; Janka et al., 2007). The study of gravitational collapse and supernova explosions are essential astrophysical events due to their rich physics and diversity. For instance, the process involves all four known fundamental forces of nature, making it an ideal laboratory for physics on different lengths and time scales and a testbed for new phenomena. 
The process starts in a strong gravitational field. Neutrino emission and deleptonization are weak-interaction processes, the thermodynamic properties are governed by electrodynamics and strong interactions, while the change in the composition of the stellar gas is governed by nuclear and weak interactions (Camelio et al., 2017; Fischer et al., 2010; Pons et al., 1999; Camelio et al., 2016). In this study, we analyze the temperature profile and mass-radius diagram of the isotropic, static, spherically symmetric hot star containing the spin-1/2 baryon octet and the non-strange \(J^{P}=3/2^{+}\) decuplet. We investigate the behavior of the EoS and the particle abundances in the evolution of a newly born PNS until it catalyzes. Several studies of cold neutron stars have been carried out within the framework of relativistic models in a mean-field approximation, taking into account all of the spin-1/2 octet and/or \(\Delta\)-resonances and using various meson-baryon coupling formalisms at zero temperature (Marquez et al., 2022; Schürhoff et al., 2010; Drago et al., 2014a; Li et al., 2018; Raduta, 2021; Li & Sedrakian, 2019; Ribes et al., 2019; Zhu et al., 2016). Studies on PNSs at finite temperature and fixed entropy considering heavy baryons have also been done in Sedrakian & Harutyunyan (2022); Malfatti et al. (2019). At the same time, hadron-quark phase PNSs are also studied in Shao (2011) under fixed entropy conditions. In this work, we aim to give an overview of a neutron star's evolution, from its birth as a lepton-rich protoneutron star in the aftermath of a supernova explosion to its final stages, when the star cools to a catalyzed configuration. Different non-nucleonic degrees of freedom are considered to be present in neutron star matter, depending on the model adopted. In most of the contemporary literature, the nucleons and hyperons (the entire spin-1/2 baryon octet) are taken as the standard constituents of such objects, with the baryons of the spin-3/2 decuplet (especially \(\Delta\)-resonances) also proving to be relevant in recent years. The presence of hyperons and \(\Delta\)-resonances in the neutron star matter composition generally softens the EoS, lowering the maximum mass of the star below the expected threshold of \(\sim 2\,M_{\odot}\) (Antoniadis et al., 2013). The recent measurement of the massive pulsar PSR J0740+6620 by NICER (Cromartie et al., 2019; Fonseca et al., 2021) of \(M=2.072^{+0.067}_{-0.066}\) M\({}_{\odot}\) and \(R=12.39^{+1.30}_{-0.98}\) km, at a 68% confidence level (Riley et al., 2021), gives a well-defined mass-radius window that must be reached by the NS description. As the RMF model parameters are fitted to reproduce nuclear matter observables, these astrophysical observations can be addressed mainly by adjusting the baryon-meson couplings of the non-nucleonic constituents of the stellar matter (Weissenborn et al., 2012; Miyatsu et al., 2013; Lopes & Menezes, 2014; Lopes et al., 2023). In this study, we use baryon-meson couplings recently determined using group theory in Lopes et al. (2022) to study the evolution of a PNS from its birth with trapped neutrinos at \(S/n_{B}=1\), through the neutrino diffusion stage at \(S/n_{B}=2\) a few seconds after its birth, to the neutrino-transparent stage at \(S/n_{B}=2\), and finally to the formation of a cold neutron star at \(T=0\). The work is organized as follows: In Sec.
2 we present the details of the relativistic model in the mean-field approximation and the conditions necessary for the thermodynamic applications. The section is divided into two subsections; in Sec. 2.1 we present the details of the equations of state and in Sec. 2.2 we present the necessary equilibrium conditions for supernova physics. The results and analyses are contained in Sec. 3, where we discuss the particle abundances, the EoS, the temperature profiles, and the mass-radius diagrams. The final findings are in Sec. 4, where we summarize all the stages of the star's evolution. ## 2 Neutron star matter at finite entropy ### Equation of State The Lagrangian of the relativistic model in the mean field approximation used to describe the hadronic interactions is given by \[\mathcal{L}_{\rm RMF}=\mathcal{L}_{H}+\mathcal{L}_{\Delta}+\mathcal{L}_{\rm mesons}+\mathcal{L}_{\rm leptons}, \tag{1}\] where the Dirac-type Lagrangian for the \(J^{P}=1/2^{+}\) baryon octet is given by \[\mathcal{L}_{H} = \sum_{b\in H}\bar{\psi}_{b}\Big{[}i\gamma^{\mu}\partial_{\mu}-\gamma^{0}\big{(}g_{\omega b}\omega_{0}+g_{\phi b}\phi_{0}+g_{\rho b}I_{3b}\rho_{03}\big{)} \tag{2}\] \[-\Big{(}m_{b}-g_{\sigma b}\sigma_{0}\Big{)}\Big{]}\psi_{b},\] and the Rarita-Schwinger-type Lagrangian for the \(J^{P}=3/2^{+}\) particles of the baryon decuplet is given by \[\mathcal{L}_{\Delta} = \sum_{d\in\Delta}\bar{\psi}_{d\mu}\Big{[}i\gamma^{\mu}\partial_{\mu}-\gamma^{0}\left(g_{\omega d}\omega_{0}+g_{\rho d}I_{3d}\rho_{03}\right) \tag{3}\] \[-\left(m_{d}-g_{\sigma d}\sigma_{0}\right)\Big{]}\psi_{d\nu}.\] We stress that spin-3/2 baryons are described by the Rarita-Schwinger Lagrangian density and that their vector-valued spinor has additional components when compared to the four components in the spin-1/2 Dirac spinors. However, as shown in de Paoli et al. (2013), spin-3/2 equations of motion can be written compactly as the spin-1/2 ones in the RMF regime. The mesonic part of the Lagrangian is given by \[\mathcal{L}_{\rm mesons}=-\frac{1}{2}m_{\sigma}^{2}\sigma_{0}^{2}+\frac{1}{2}m_{\omega}^{2}\omega_{0}^{2}+\frac{1}{2}m_{\phi}^{2}\phi_{0}^{2}+\frac{1}{2}m_{\rho}^{2}\rho_{03}^{2}, \tag{4}\] where the interaction mediators are the scalar meson \(\sigma\), the vector mesons \(\omega\) and \(\phi\) (which carries hidden strangeness), both isoscalars, and the isovector-vector meson \(\vec{\rho}\). The subscript '0' here indicates that the field equations are calculated in the mean field approximation. Finally, the free leptons are described by the Dirac Lagrangian \[\mathcal{L}_{\rm leptons}=\sum_{L}\bar{\psi}_{L}\left(i\gamma^{\mu}\partial_{\mu}-m_{L}\right)\psi_{L} \tag{5}\] where the summation runs over all leptons considered in each stage of the star evolution. For cold stellar matter, the index \(L\) runs over electrons and muons, \(L\in(e,\,\mu)\), and their corresponding antiparticles, with a degeneracy factor of \(\gamma_{L}=2J_{L}+1=2\), with \(J_{L}\) the total angular momentum of the leptons. For a finite temperature and in the case of fixed entropy and lepton number density, we consider only the electron and its neutrino, since muons only become relevant after the star becomes neutrino-free (Malfatti et al., 2019). In this case, we consider the left-handed electron neutrino in the Standard Model with a degeneracy of \(\gamma_{L}=1\) for a complete study.
We use the density-dependent parametrization known as DDME2 (Lalazissis et al., 2005), where the meson couplings are adjusted by the expression \[g_{ib}(n_{B})=g_{ib}(n_{0})a_{i}\frac{1+b_{i}(\eta+d_{i})^{2}}{1+c_{i}(\eta+d_{i})^{2}}, \tag{6}\] for \(i=\sigma,\omega,\phi\) and \[g_{\rho b}(n_{B})=g_{\rho b}(n_{0})\exp\left[-a_{\rho}\big{(}\eta-1\big{)}\right], \tag{7}\] for \(i=\rho\), with \(\eta=n_{B}/n_{0}\). The model parameters are fitted from experimental constraints of nuclear matter at or around the saturation density, namely the binding energy, compressibility modulus, symmetry energy, and its slope, and are shown in Table 1. The associated bulk properties of nuclear matter at the saturation density \(n_{0}=0.152\) fm\({}^{-3}\) are \(B/A=-16.4\) MeV, \(K_{0}=251.9\) MeV, \(J=32.3\) MeV, and \(L=51.3\) MeV, which are in good agreement with current constraints (Dutra et al., 2014; Lalazissis et al., 2005; Reed et al., 2021; Lattimer, 2023). \begin{table} \begin{tabular}{c c c c c c c} \hline \hline meson(\(i\)) & \(m_{i}\)(MeV) & \(a_{i}\) & \(b_{i}\) & \(c_{i}\) & \(d_{i}\) & \(g_{iN}(n_{0})\) \\ \hline \(\sigma\) & 550.1238 & 1.3881 & 1.0943 & 1.7057 & 0.4421 & 10.5396 \\ \(\omega\) & 783 & 1.3892 & 0.9240 & 1.4620 & 0.4775 & 13.0189 \\ \(\rho\) & 763 & 0.5647 & — & — & — & 7.3672 \\ \hline \hline \end{tabular} \end{table} Table 1: DDME2 parameters. The fitting of the model's free parameters is done considering pure nucleonic matter, and to determine the meson couplings to hyperons and deltas we define the ratio of the baryon coupling to the nucleon one as \(\chi_{ib}=g_{ib}/g_{iN}\). One way to extend the model parameterization to other baryonic degrees of freedom is to use flavor SU(3) symmetry arguments to fix the values of the couplings, a procedure well adopted in the literature as it gets rid of the huge arbitrariness of the previously used recipes (cf. Stancu, 1997). On the other hand, Lopes et al. (2022) calculated the baryon-meson vector coupling constants of the spin-1/2 baryonic octet, and for the first time calculated those of the spin-3/2 decuplet, in a model-independent way, using the potentials \(U_{\Lambda}=-28\) MeV, \(U_{\Sigma}=30\) MeV, \(U_{\Xi}=-4\) MeV, and \(U_{\Delta}\approx-98\) MeV to fix the scalar couplings. The values of \(\chi_{ib}\) are shown in Tab. 2 and correspond to the choice of \(\alpha_{V}=0.5\) for the free parameter of the baryon-meson coupling scheme. Please note that some of the \(\chi_{ib}\) parameters are different from the ones reported in Lopes et al. (2022) because, unlike the model under consideration here, the model presented there does not involve the isospin projections in the Lagrangian terms. From the Lagrangian, thermodynamic quantities can be calculated. The number density of a baryon \(b\) is given by \[n_{b}=\gamma_{b}\int\frac{d^{3}k}{(2\pi)^{3}}\left[f_{b+}-f_{b-}\right] \tag{8}\] where \(\gamma_{b}=2J_{b}+1=2\) is the spin degeneracy factor for the baryon octet, with \(J\) the total angular momentum. Moreover, \(f_{b\pm}(k)\) is the Fermi-Dirac distribution function \[f_{b\pm}(k)=\frac{1}{1+\exp[(E_{b}\mp\mu_{b}^{*})/T]}\] with energy \(E_{b}=\sqrt{k^{2}+m_{b}^{*2}}\). Interchanging \(b\leftrightarrow d\), the degeneracy factor of the \(\Delta\)-resonances becomes \(\gamma_{d}=2J_{d}+1=4\), and \(E_{d}=\sqrt{k^{2}+m_{d}^{*2}}\).
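For reference, the density dependence of the couplings in Eqs. (6) and (7) with the DDME2 parameters of Table 1 can be evaluated directly; the sketch below is illustrative only (nucleon couplings; hyperon and \(\Delta\) couplings follow by scaling with the \(\chi_{ib}\) ratios of Table 2), and the function and variable names are assumptions for the example.

```python
import numpy as np

N0 = 0.152  # saturation density in fm^-3

# DDME2 nucleon couplings at saturation and shape parameters from Table 1.
ISOSCALAR = {
    "sigma": dict(g0=10.5396, a=1.3881, b=1.0943, c=1.7057, d=0.4421),
    "omega": dict(g0=13.0189, a=1.3892, b=0.9240, c=1.4620, d=0.4775),
}
G0_RHO, A_RHO = 7.3672, 0.5647

def g_isoscalar(meson, n_b):
    """Density-dependent sigma or omega nucleon coupling, Eq. (6)."""
    p = ISOSCALAR[meson]
    eta = n_b / N0
    return p["g0"] * p["a"] * (1 + p["b"] * (eta + p["d"]) ** 2) / (1 + p["c"] * (eta + p["d"]) ** 2)

def g_rho(n_b):
    """Density-dependent rho nucleon coupling, Eq. (7)."""
    return G0_RHO * np.exp(-A_RHO * (n_b / N0 - 1.0))

n_b = np.array([0.5, 1.0, 2.0, 4.0]) * N0      # a few densities around saturation
print(g_isoscalar("sigma", n_b))
print(g_isoscalar("omega", n_b))
print(g_rho(n_b))
```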
The effective chemical potentials read \[\mu_{b}^{*} =\mu_{b}-g_{\omega b}\omega_{0}-g_{\rho b}I_{3b}\rho_{03}-g_{\phi b}\phi_{0}-\Sigma^{r}, \tag{9}\] \[\mu_{d}^{*} =\mu_{d}-g_{\omega d}\omega_{0}-g_{\rho d}I_{3d}\rho_{03}-\Sigma^{r}, \tag{10}\] where \(\Sigma^{r}\) is the rearrangement term due to the density-dependent couplings \[\Sigma^{r} = \sum_{b}\bigg{[}\frac{\partial g_{\omega b}}{\partial n_{b}}\omega_{0}n_{b}+\frac{\partial g_{\rho b}}{\partial n_{b}}\rho_{03}I_{3b}n_{b}+\frac{\partial g_{\phi b}}{\partial n_{b}}\phi_{0}n_{b} \tag{11}\] \[-\frac{\partial g_{\sigma b}}{\partial n_{b}}\sigma_{0}n_{b}^{s}+b\leftrightarrow d\bigg{]}.\] The effective masses are \[m_{b}^{*}=m_{b}-g_{\sigma b}\sigma_{0},\qquad m_{d}^{*}=m_{d}-g_{\sigma d}\sigma_{0}, \tag{12}\] and the scalar density \[n_{b}^{s}=\gamma_{b}\int\frac{d^{3}k}{(2\pi)^{3}}\frac{m_{b}^{*}}{E_{b}}\left[f_{b+}+f_{b-}\right]. \tag{13}\] We obtain equivalent expressions above for the \(\Delta\)-resonances by replacing \(b\) with \(d\). The mesonic mean-field equations yield \[m_{\sigma}^{2}\sigma_{0} =\sum_{b}g_{\sigma b}n_{b}^{s}+\sum_{d}g_{\sigma d}n_{d}^{s}, \tag{14}\] \[m_{\omega}^{2}\omega_{0} =\sum_{b}g_{\omega b}n_{b}+\sum_{d}g_{\omega d}n_{d},\] (15) \[m_{\phi}^{2}\phi_{0} =\sum_{b}g_{\phi b}n_{b},\] (16) \[m_{\rho}^{2}\rho_{03} =\sum_{b}g_{\rho b}n_{b}I_{3b}+\sum_{d}g_{\rho d}n_{d}I_{3d}. \tag{17}\] The baryon energy and pressure are given by \[\varepsilon_{B} =\varepsilon_{b}+\varepsilon_{m}+\varepsilon_{d}+\varepsilon_{L} \tag{18}\] \[P_{B} =P_{b}+P_{m}+P_{d}+P_{L}+P_{r} \tag{19}\] with the baryonic contributions \[\varepsilon_{b} =\gamma_{b}\int\frac{d^{3}k}{(2\pi)^{3}}E_{b}\left[f_{b+}+f_{b-}\right], \tag{20}\] \[P_{b} =\frac{\gamma_{b}}{3}\int\frac{d^{3}k}{(2\pi)^{3}}\frac{k^{2}}{E_{b}}\left[f_{b+}+f_{b-}\right], \tag{21}\] and the meson contributions \[\varepsilon_{m} =\frac{m_{\sigma}^{2}}{2}\sigma_{0}^{2}+\frac{m_{\omega}^{2}}{2}\omega_{0}^{2}+\frac{m_{\phi}^{2}}{2}\phi_{0}^{2}+\frac{m_{\rho}^{2}}{2}\rho_{03}^{2}, \tag{22}\] \[P_{m} =-\frac{m_{\sigma}^{2}}{2}\sigma_{0}^{2}+\frac{m_{\omega}^{2}}{2}\omega_{0}^{2}+\frac{m_{\phi}^{2}}{2}\phi_{0}^{2}+\frac{m_{\rho}^{2}}{2}\rho_{03}^{2}. \tag{23}\] The expressions for \(\varepsilon_{d}\) and \(P_{d}\) are similar to (20) and (21) with the replacement of \(b\) with \(d\). The pressure further receives a correction from the rearrangement term to guarantee thermodynamic consistency and energy-momentum conservation (Typel and Wolter, 1999; Fuchs et al., 1995) \[P_{r}=n_{B}\Sigma^{r}. \tag{24}\] The free Fermi gas contribution of the leptons is accounted for in \(\varepsilon_{L}\) and \(P_{L}\). From these quantities, we can finally calculate the baryonic free energy density \(\mathcal{F}_{B}=\varepsilon_{B}-Ts_{B}\), and the entropy density \[s_{B}=\frac{\varepsilon_{B}+P_{B}-\sum_{b}\mu_{b}n_{b}-\sum_{d}\mu_{d}n_{d}}{T}. \tag{25}\] \begin{table} \begin{tabular}{c c c c c} \hline b & \(\chi_{\omega b}\) & \(\chi_{\sigma b}\) & \(\chi_{\rho b}\) & \(\chi_{\phi b}\) \\ \hline \(\Lambda\) & 0.714 & 0.650 & 0 & -0.808 \\ \(\Sigma^{0}\) & 1 & 0.735 & 0 & -0.404 \\ \(\Sigma^{-}\), \(\Sigma^{+}\) & 1 & 0.735 & 0.5 & -0.404 \\ \(\Xi^{-}\), \(\Xi^{0}\) & 0.571 & 0.476 & 0 & -0.606 \\ \(\Delta^{-}\), \(\Delta^{0}\), \(\Delta^{+}\), \(\Delta^{++}\) & 1.285 & 1.283 & 1 & 0 \\ \hline \end{tabular} \end{table} Table 2: The ratio of the baryon coupling to the corresponding nucleon coupling for hyperons and \(\Delta\)s.
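For a single baryon species with given effective mass \(m^{*}\), effective chemical potential \(\mu^{*}\), and temperature \(T\), the momentum integrals entering Eqs. (8), (13), (20), and (21) can be evaluated by straightforward quadrature. The following sketch is illustrative only (MeV-based units converted to fm\({}^{-3}\) with \(\hbar c=197.327\) MeV fm; the momentum cutoff and the example values are arbitrary) and is not the numerical code used for the results in this paper.

```python
import numpy as np
from scipy.integrate import quad

HBARC = 197.327  # MeV fm

def fermi(E, mu, T):
    """Fermi-Dirac occupation 1 / (1 + exp((E - mu)/T))."""
    return 1.0 / (1.0 + np.exp((E - mu) / T))

def species_thermo(m_eff, mu_eff, T, gamma=2, kmax=2000.0):
    """Number density, scalar density, energy density and pressure of one
    baryon species at temperature T (all inputs in MeV); momenta are
    integrated up to kmax (MeV). Returns (n, n_s) in fm^-3 and (eps, P)
    in MeV fm^-3."""
    pref = gamma / (2.0 * np.pi ** 2)

    def E(k):
        return np.sqrt(k * k + m_eff * m_eff)

    fp = lambda k: fermi(E(k), mu_eff, T)      # particles
    fm = lambda k: fermi(E(k), -mu_eff, T)     # antiparticles

    n   = pref * quad(lambda k: k * k * (fp(k) - fm(k)), 0, kmax)[0]
    n_s = pref * quad(lambda k: k * k * m_eff / E(k) * (fp(k) + fm(k)), 0, kmax)[0]
    eps = pref * quad(lambda k: k * k * E(k) * (fp(k) + fm(k)), 0, kmax)[0]
    P   = pref / 3.0 * quad(lambda k: k ** 4 / E(k) * (fp(k) + fm(k)), 0, kmax)[0]

    hc3 = HBARC ** 3
    return n / hc3, n_s / hc3, eps / hc3, P / hc3

# Example: a nucleon-like species with m* = 700 MeV, mu* = 950 MeV at T = 30 MeV.
print(species_thermo(700.0, 950.0, 30.0))
```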
### The Equilibrium Conditions We implement numerical codes to solve the equations of motion for the meson fields, the scalar and baryon densities, and the temperature profile by fixing \(S/n_{B}\) and \(Y_{L,e}\) for the study of PNSs. Here, \(Y_{L,e}=(n_{e}+n_{\nu e})/n_{B}\), where \(n_{e}\) and \(n_{\nu e}\) are the electron and electron neutrino number densities respectively. A newly born PNS contains trapped neutrinos, so it is standard to consider the electron and the muon lepton numbers as fixed. In our calculations in the neutrino-trapped regime, we fix the electron lepton number \(Y_{L,e}=Y_{e}+Y_{\nu e}\) and ignore the contribution of the muon and muon neutrino, \(Y_{L,\mu}=Y_{\mu}+Y_{\nu_{\mu}}=(n_{\mu}+n_{\nu\mu})/n_{B}\approx 0\), in accordance with supernova physics (Malfatti et al., 2019), with \(n_{\mu}\) and \(n_{\nu\mu}\) being the muon and muon neutrino number densities respectively. We consider different values of \(S/n_{B}\) for different \(Y_{L,e}\) in accordance with the various stages of PNS evolution (Nakazato et al., 2022; Raduta et al., 2020): for the newly born neutron star (at \(t=0\,\mathrm{s}\)) we consider \(S/n_{B}=1\) and \(Y_{L,e}=0.4\), but a few seconds (\(\sim 0.5-1.0\,\mathrm{s}\)) after the star is born it starts heating, so the entropy increases (\(1\,<\,S/n_{B}\,<3\)) and the lepton number concentration decreases; thus, we consider \(S/n_{B}=2\) and \(Y_{L,e}=0.2\) at this stage. Further discussions on fixed entropy calculations can be found in Raduta (2021). The star gets maximally heated and becomes neutrino-free (\(Y_{\nu e}=0\)) with \(S/n_{B}=2\), and finally, it shrinks to a catalyzed cold neutron star at \(T=0\) (Steiner et al., 2000; Shao, 2011; Reddy et al., 1998). For the neutrino-free region, we consider both electrons and muons in the calculation. A snapshot of each stage is discussed in detail below in Fig. 1. We consider the matter to be in \(\beta\)-equilibrium during all the stages, and use the following relations for the chemical potentials: \[\mu_{\Lambda}=\mu_{\Sigma^{0}}=\mu_{\Xi^{0}}=\mu_{\Delta^{0}}=\mu_{n}=\mu_{B}, \tag{26}\] \[\mu_{\Sigma^{-}}=\mu_{\Xi^{-}}=\mu_{\Delta^{-}}=\mu_{B}-\mu_{Q},\] (27) \[\mu_{\Sigma^{+}}=\mu_{\Delta^{+}}=\mu_{p}=\mu_{B}+\mu_{Q},\] (28) \[\mu_{\Delta^{++}}=\mu_{B}+2\mu_{Q}, \tag{29}\] with \(\mu_{B}\) the baryon chemical potential and \(\mu_{Q}=\mu_{p}-\mu_{n}\) the charge chemical potential. In the neutrino-trapped region, the charge chemical potential can be expressed in terms of the lepton and neutrino chemical potentials as \[\mu_{Q}=\mu_{\nu l}-\mu_{l}, \tag{30}\] where \(l\) is a lepton, i.e., either the electron \(e\) or the muon \(\mu\), and \(\mu_{\nu l}\) is the corresponding neutrino chemical potential. In the neutrino transparent region, the chemical potential of the neutrinos vanishes and the lepton chemical potential is related to the charge chemical potential as \[\mu_{Q}=-\mu_{l}. \tag{31}\] Also, the lepton fractions are conserved in the neutrino-trapped matter, \(Y_{L,l}=Y_{l}+Y_{\nu l}\). The system is charge neutral, so baryon and lepton charges must cancel out: \[\begin{split} n_{p}+& n_{\Sigma^{+}}+2n_{\Delta^{++}}+n_{\Delta^{+}}\\ &-(n_{\Sigma^{-}}+n_{\Xi^{-}}+n_{\Delta^{-}})=n_{e}+n_{\mu}.\end{split} \tag{32}\] ## 3 Results and Analysis In Fig. 1 we present the results for the particle composition of nucleonic (\(N\)), hyperonic (\(H\)), and \(\Delta\)-resonance (\(\Delta\)) admixed hypernuclear matter in the PNS core during its evolution till it catalyzes into a _neutron star_.
The quantity \(Y_{i}\) is the particle fraction, which can be expressed explicitly as \(Y_{i}=n_{i}/n_{B}\), where \(i\) represents the different particles in the system. In the upper panels from left to right, we observe that the ratio of the proton fraction (\(Y_{p}\)) to the neutron fraction (\(Y_{n}\)), \(Y_{p}/Y_{n}\), decreases across the panels. The asymmetry (\(\delta\)) between protons and neutrons in the system is given by \(\delta=(n_{n}-n_{p})/n\), with \(n=n_{n}+n_{p}\), where \(n_{n}\) and \(n_{p}\) are the neutron and proton number densities respectively. Therefore, a decrease in \(Y_{p}/Y_{n}\) results in larger values of \(\delta\), making the system even more asymmetric across the panels from left to right. This is also true for the neutrino-free regime in the lower panels from left to right. Comparing the particle fractions during the star's evolution in the neutrino-trapped matter, we observe two main effects: the neutrino abundance affects \(Y_{p}/Y_{n}\), and it affects the appearance of particles at low baryon densities. Trapped neutrinos delay the appearance of heavy baryons in general, at low densities, and further delay the appearance of strange matter constituents to higher densities. Comparing the upper panels (ambient conditions at core birth at various stages) and the lower panels (ambient conditions after deleptonization at various stages), we observe that neutrino trapping increases the proton and electron concentrations in the stellar matter. At \(S/n_{B}=1\), \(Y_{L,e}=0.4\), the \(\Delta\)-resonances start appearing at densities equivalent to the saturation density, firstly the \(\Delta^{-}\), and then the \(\Delta^{0}\), before the appearance of the first particle with strangeness, the \(\Lambda\). Subsequently, heavy baryons start appearing at relatively low densities during deleptonization: at densities lower than the saturation density. When \(S/n_{B}=2\), \(Y_{L,e}=0.2\), the baryonic composition of the star up to \(n_{B}\sim 2n_{0}\) is \(\Delta^{-}\), \(\Delta^{0}\), \(\Lambda\), \(\Sigma^{-}\), \(\Sigma^{0}\) and \(\Xi^{-}\). We can infer that during the early stages of PNS evolution, the stellar matter is mostly composed of non-strange baryons, while strange matter constituents are found at higher densities, towards the center of the star. Additionally, in the neutrino-free phase of the star's evolution, the bottom panels from left to right, the strange matter population at densities lower than \(2n_{0}\) decreases as the star cools down. The heavy baryon content of the stellar matter at \(n_{B}\sim 2n_{0}\), when it cools down to \(T=0\), is \(\Delta^{-}\). The \(\Lambda\) appears slightly beyond \(2n_{0}\), and the \(\Xi^{-}\) appears around \(3n_{0}\). In sum, the strange matter constituents are suppressed to higher densities when the entropy in the core is low (Prakash et al., 2001). In general, the threshold density for the emergence of the hyperons decreases with increasing entropy and decreasing lepton number density. This implies that higher temperatures favor the appearance of the hyperons at low densities, since an increase in entropy is accompanied by an increase in temperature -- see Fig. 3. In the bottom panel with \(S/n_{B}=2\), the star is lepton-poor but still hot; indeed, the star is expected to reach its maximum temperature when \(S/n_{B}=2\), \(Y_{\nu e}=0\), before it starts cooling. This can be seen in Fig. 3 below. The star then continues cooling until it forms a cold _neutron star_ at \(T=0\). In Fig.
2 we present the results for the EoS of hot star matter at various stages of evolution for \(N\), \(H\), and \(\Delta\)-resonance admixed hypernuclear matter in \(\beta\)-equilibrium at a fixed entropy. We show the results for the pressure \(P\) as a function of the total energy density \(\varepsilon\). The figure in the top panel represents the EoS for neutrino-trapped matter and the bottom panel represents neutrino-free matter. The EoS for \(N\), \(NH\), and \(NH\Delta\) hypernuclear matter becomes stiffer with decreasing \(Y_{L,e}\) (\(0\leq Y_{L,e}\leq 0.4\)) and \(S/n_{B}\) (\(1\leq S/n_{B}\leq 2\)). This behavior is accompanied by a decrease in \(Y_{p}/Y_{n}\) as the primary attribute, which makes the system more asymmetric, thereby increasing the symmetry energy. The inclusion of hyperons in hypernuclear matter generally softens the EoS, while the \(\Delta\)-resonances soften the EoS at low to intermediate densities and stiffen it at higher densities. This observation is well established in zero-temperature studies of neutron stars (Schürhoff et al., 2010; Drago et al., 2014; Cai et al., 2015; Ribes et al., 2019; Sahoo et al., 2018). We see from Fig. 1 that the presence of a large electron neutrino fraction delays the appearance of the hyperons to higher densities, while the low electron content at higher entropies enhances the appearance of the heavy baryons at low densities. The appearance of heavy baryons at low densities significantly softens the EoS, both for neutrino-free and neutrino-trapped stellar matter, which is a well-known result also related to the hyperon puzzle (Menezes, 2021). In Fig. 3, we present the results for temperature as a function of baryon density for hot hypernuclear matter composed of nucleons, nucleons and hyperons, and nucleons, hyperons and \(\Delta\)-resonances. The diagram in the upper panel is the temperature profile in the neutrino-trapped hot star for different \(S/n_{B}\) and \(Y_{L,e}\), while the lower panel represents the temperature profile for neutrino-transparent matter for \(S/n_{B}=2\). Generally, the changes in the slope of the figures are attributed to the appearance of heavy baryons. For pure nucleonic matter, the temperature increases steadily with baryon density. When hyperons and \(\Delta\)-resonances are introduced into the hypernuclear matter they decrease the temperature significantly, and we start seeing a departure between the \(N,\,NH,\) and \(NH\Delta\) curves at baryon densities between \(n_{0}\) and \(2n_{0}\). It is worth mentioning that introducing additional particles into the system increases its entropy per baryon, which, at fixed entropy, is compensated by a drop in temperature. Thus, the temperature profile for nucleon-only stellar matter is higher than that of the nucleon plus hyperon admixture, which is, in turn, higher than that of the nucleon plus hyperon plus \(\Delta\)-resonance admixture. Figure 1: The figures show the particle fraction (\(Y_{i}\)) as a function of baryon density at various stages of PNS evolution. The upper panels show the results for the neutrino-trapped region; the first panel from the left (\(S/n_{B}=1,\,Y_{L,e}=0.4\)) shows when the star was born, and the second panel (\(S/n_{B}=2,\,Y_{L,e}=0.2\)) shows when the star starts deleptonization following neutrino diffusion. The lower panels show the neutrino-free regime of the star; the first panel (\(S/n_{B}=2,\,Y_{\nu e}=0\)) shows a maximally heated star and the right panel (\(T=0\)) shows the stage where a cold _neutron star_ is born.
This observation is in agreement with the discussions in Oertel et al. (2016) and Raduta et al. (2020), which argue that the entropy of a hypernuclear system increases with the number of constituent particles. In that regard, in a system with fixed entropy, an increase in constituent particles leads to an increase in the specific heat of the system, which favors a temperature decrease. That notwithstanding, it has been argued in Mayle et al. (1993) that the introduction of negatively charged particles other than electrons into hypernuclear matter reduces the net electron number density, releasing electron degeneracy energy and resulting in a high-temperature supernova core. Comparing the temperature profiles to the particle abundances, we observe that the temperature profiles for \(N\) and \(NH\Delta\) start departing from each other at the density at which the first \(\Delta\)-resonance baryon, i.e. the \(\Delta^{-}\), appears in the matter for each system. Likewise, the temperature profile for \(NH\) departs from \(N\) at the density at which the first strange particle, mostly the \(\Lambda\), appears in the matter. Moreover, hyperons and \(\Delta\)-resonances appear at relatively lower densities for the higher-entropy \(S/n_{B}=2\) matter, as in Fig. 1; this is reflected in the temperature profiles as well. The characteristics of the temperature profile are attributed to the appearance of new particles introducing new degrees of freedom and altering the specific heat of the system, which is compensated by the change in temperature to keep the entropy fixed -- see Sedrakian & Harutyunyan (2022); Raduta et al. (2020) for more discussion. Figure 2: We present the EoS composed of \(N\), \(NH\), and \(NH\Delta\) at various stages of PNS evolution. The upper panel shows the EoS for neutrino-trapped matter for different \(S/n_{B}\) and \(Y_{L,e}\), and the lower panel shows the EoS for the neutrino-free region for \(S/n_{B}=2\) and \(T=0\). Figure 3: The figures show the temperature profiles in a PNS at different stages of its evolution. The upper panel shows the temperature profiles of the neutrino-trapped region and the lower panel shows the temperature profiles of the neutrino-free region of the evolution. In Fig. 4 we show the results for the gravitational mass of stars as a function of their radii at different stages of their evolution for baryonic matter composed of \(N\), \(NH\), and \(NH\Delta\). The onset of new degrees of freedom is distinctively represented by different curves with different slopes. The top panel shows the regime in which the neutrinos are trapped inside the star at different \(S/n_{B}\) and \(Y_{L,e}\), while the bottom panel shows the results for the neutrino-transparent region of the star for \(S/n_{B}=2\) and \(T=0\). Generally, the presence of hyperons and \(\Delta\)s is expected to reduce the maximum mass of the star, preventing it from reaching the maximum observable mass (Antoniadis et al., 2013; Demorest et al., 2010). One way of dealing with this problem is through a consistent definition of the baryon-meson couplings. That notwithstanding, the model under investigation is compatible with the \(2M_{\odot}\) constraint. We observe from Tab. 3 and the figures that the radius of the star increases with increasing \(S/n_{B}\) and decreasing \(Y_{L,e}\). This is because at higher entropies the star gets heated and expands, while its mass also decreases due to neutrino diffusion.
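The mass-radius relations shown in Fig. 4 follow from integrating the Tolman-Oppenheimer-Volkoff (TOV) structure equations over the EoS of each composition. As an illustration of that step only, here is a minimal sketch in geometrized units (\(G=c=1\), lengths in km) with a toy \(\Gamma=2\) polytrope standing in for the tabulated finite-entropy EoS; the function names, unit choices, and parameter values are assumptions for the example and not the code used in this work.

```python
import numpy as np
from scipy.integrate import solve_ivp

MSUN_KM = 1.4766  # GM_sun / c^2 expressed in km

def tov_star(eps_c, eos_p, eos_e, r_max=50.0):
    """Integrate the TOV equations outward from a central energy density eps_c.
    eos_p(eps) gives P(eps) and eos_e(p) its inverse; all quantities are in
    geometrized units (eps and P in km^-2). Returns (R in km, M in Msun)."""
    p_c = eos_p(eps_c)

    def rhs(r, y):
        p, m = y
        eps = eos_e(max(p, 0.0))               # guard against tiny negative p
        dpdr = -(eps + p) * (m + 4.0 * np.pi * r ** 3 * p) / (r * (r - 2.0 * m))
        dmdr = 4.0 * np.pi * r ** 2 * eps
        return [dpdr, dmdr]

    # Stop the integration where the pressure has dropped to ~0 (the surface).
    surface = lambda r, y: y[0] - 1e-8 * p_c
    surface.terminal, surface.direction = True, -1

    sol = solve_ivp(rhs, (1e-6, r_max), [p_c, 0.0], events=surface,
                    rtol=1e-8, atol=1e-12)
    R = sol.t_events[0][0]
    M = sol.y_events[0][0][1]
    return R, M / MSUN_KM

# Toy Gamma = 2 polytrope standing in for a tabulated finite-entropy EoS.
K, Gamma = 100.0, 2.0                          # K in km^2 for this toy choice
eos_p = lambda eps: K * eps ** Gamma
eos_e = lambda p: (p / K) ** (1.0 / Gamma)

for eps_c in (5e-4, 1e-3, 2e-3, 3e-3):         # central energy densities in km^-2
    R, M = tov_star(eps_c, eos_p, eos_e)
    print(f"eps_c = {eps_c:.1e} km^-2  ->  R = {R:5.2f} km, M = {M:4.2f} Msun")
```

Sweeping the central density and recording (R, M) for each EoS table is what produces curves like those in Fig. 4.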
Aside from the discussions above, we employ different couplings and carry out the study at fixed lepton number density and entropy. This makes our results for \(S/n_{B}=1\) and \(2\) different from those of cold \(\beta\)-equilibrated neutron stars. As can be observed in Fig. 4, in the mass-radius diagrams for hot non-rotating spherically symmetric neutron stars, both the intermediate-mass configurations and the maximum-mass configurations presented in Table 3 have relatively large radii for both neutrino-trapped and neutrino-transparent matter compared to a cold _neutron star_. This is attributed to the hot nature of the stars under study in those stages. ## 4 Conclusions We investigated the presence of exotic baryon content in a neutron star from its birth through a supernova explosion until it catalyzes to form a cold neutron star. A relativistic model within a mean-field approximation was used for this work. The snapshots of the particle abundances at various stages of the star's evolution are presented in Fig. 1. We examined the EoS for \(N\), \(NH\), and \(NH\Delta\) matter and observed that the inclusion of hyperons and \(\Delta\)-resonances softens the EoS, as is well known; the results are presented in Fig. 2. The temperature profiles during the evolution of the star were also studied. The inclusion of new particles, such as the hyperons, reduces the temperature below that of the nucleon-only stellar matter, and the further addition of \(\Delta\)-resonances to the nucleon plus hyperon matter decreases the temperature further. Consequently, the presence of hyperons and \(\Delta\)-resonances increases the specific heat, leading to a decrease in the temperature gradient. The mass-radius diagram was also studied and the results are presented in Fig. 4. The evolution stages of the star are summarized below: * First stage: \(S/n_{B}=1\), \(Y_{L,e}=0.4\); this is the neutrino-trapped regime at the early stages of the evolution. Here, the heavy baryons appear at densities greater than the saturation density, \(n_{B}>n_{0}\). The particle content up to \(n_{B}\sim 2n_{0}\) is, in order, \(\Delta^{-},\ \Delta^{0},\ \text{and}\ \Lambda\). The temperature profile of the stellar matter at this stage is lower than in the neutrino diffusion stage, and the matter has a relatively stiffer EoS and smaller radii. * Second stage: \(S/n_{B}=2\), \(Y_{L,e}=0.2\); this is the deleptonization stage, where the star gets heated and expands due to neutrino diffusion. The temperature profile at this stage is higher than in the first stage and the EoS softens, with relatively large stellar radii. At this stage, the heavy baryons shift more towards lower baryon densities (less than the saturation density), and the order of particle appearance up to \(n_{B}\sim 2n_{0}\) is \(\Delta^{-},\ \Delta^{0},\ \Lambda,\ \Sigma^{-},\ \Sigma^{0},\) and \(\Xi^{-}\). Thus, comparing this stage with the first, the neutrino abundance suppresses the appearance of the heavy baryons and delays the strange matter particles to higher baryon densities. * Third stage: \(S/n_{B}=2\), \(Y_{\nu e}=0\); here the star is maximally heated, neutrino-transparent, and cooling through the emission of neutrino pairs. The temperature of the stellar matter here is higher than in the two stages described above, with a softer EoS and larger radii. The heavy particles are shifted towards lower baryon densities, in the order
\(\Delta^{-},\ \Delta^{0},\ \Lambda,\ \Sigma^{-},\ \Sigma^{0},\ \Xi^{-},\ \Xi^{0},\) and \(\Sigma^{+}\); almost all the particles appear before or at \(n_{B}=2n_{0}\). Here, more strange matter constituents appear at lower densities compared with the previous stage. * Final stage: \(T=0\); at this stage the star is neutrino-transparent and in a catalyzed configuration. The star shrinks, with a stiffer EoS and smaller radii. The heavy baryons shift towards higher baryon densities, \(n_{B}>n_{0}\). The heavy baryon content of the stellar matter up to \(n_{B}\sim 2n_{0}\) is \(\Delta^{-}\); only one heavy baryon appears in this density range at this stage, and it is non-strange. Figure 4: Gravitational mass \(M\) of a PNS as a function of radius \(R\) for non-rotating spherically symmetric stars. The top panel shows the results for neutrino-trapped \(\beta\)-equilibrated star matter at different stages of the star's evolution with different \(S/n_{B}\) and \(Y_{L,e}\). The bottom panel shows a neutrino-transparent star for \(S/n_{B}=2\) and \(T=0\). \begin{table} \begin{tabular}{l l l l} \(S/n_{B}\); \(Y_{L,e}\) & Matter content & \(M_{\rm max}/M_{\odot}\) & \(R/km\) \\ \hline & \(N\) & 2.44 & 12.34 \\ 1; 0.4 & \(NH\) & 2.32 & 12.41 \\ & \(NH\Delta\) & 2.32 & 12.41 \\ \hline & \(N\) & 2.49 & 12.83 \\ 2; 0.2 & \(NH\) & 2.29 & 12.59 \\ & \(NH\Delta\) & 2.29 & 12.56 \\ \hline & \(N\) & 2.49 & 12.87 \\ 2; \(Y_{\nu e}=0\) & \(NH\) & 2.24 & 12.51 \\ & \(NH\Delta\) & 2.24 & 12.41 \\ \hline & \(N\) & 2.48 & 12.03 \\ \(T=0\) & \(NH\) & 2.26 & 11.96 \\ & \(NH\Delta\) & 2.26 & 11.91 \\ \hline \end{tabular} \end{table} Table 3: Maximum masses (\(M_{\rm max}\)) and radii (\(R\)) of stellar matter. Comparing the three stages above, the heavy baryons shift gradually towards higher densities as the star cools. The results qualitatively agree with those in Raduta et al. (2020); Sedrakian & Harutyunyan (2022); Malfatti et al. (2019); Sedrakian & Harutyunyan (2021); and Pons et al. (1999) in terms of \(\Lambda\) being the most abundant heavy baryon, the softening and hardening of the EoS, the temperature profile of the stellar matter, and the hierarchy of the mass-radius diagram. The evolution stages of the PNS, its structure, and composition are also discussed in Prakash et al. (1997). We observed that the presence of higher temperatures inside the star favors the appearance of heavy baryons at lower baryon densities, and vice versa. We can draw a relation between the entropy increase and the softening of the EoS, because the appearance of heavy baryons at lower densities means a softer EoS. On average, the most abundant heavy baryon in the star at all stages of its evolution is the \(\Lambda^{0}\), which constitutes more than 18% of the matter content, followed by the \(\Delta^{-}\), which constitutes about 10%, followed by the \(\Xi^{-}\) (the \(\Delta^{0}\) surpasses it only at the first stage), which constitutes about 7% of the matter content, before the next \(\Delta\)-resonance (\(\Delta^{0}\)), forming about 4% of the matter content. ## Acknowledgements This work is a part of the project INCT-FNA Proc. No. 464898/2014-5. D.P.M. was partially supported by Conselho Nacional de Desenvolvimento Cientifico e Tecnologico (CNPq/Brazil) under grant 303490-2021-7. A.I. and K.D.M. were also supported by CNPq/Brazil under grants 168546/2021-3 and 150751/2022-2, respectively. M.R.P. is supported by Conselho Nacional de Desenvolvimento Cientifico e Tecnologico - Brasil (CNPq) and Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior (Capes/Brazil) with scholarships.
## Data availability The datasets generated during and/or analyzed during the current study are available from the corresponding author upon reasonable request.
2310.14366
Bi-Encoders based Species Normalization -- Pairwise Sentence Learning to Rank
Motivation: Biomedical named-entity normalization involves connecting biomedical entities with distinct database identifiers in order to facilitate data integration across various fields of biology. Existing systems for biomedical named entity normalization heavily rely on dictionaries, manually created rules, and high-quality representative features such as lexical or morphological characteristics. However, recent research has investigated the use of neural network-based models to reduce dependence on dictionaries, manually crafted rules, and features. Despite these advancements, the performance of these models is still limited due to the lack of sufficiently large training datasets. These models have a tendency to overfit small training corpora and exhibit poor generalization when faced with previously unseen entities, necessitating the redesign of rules and features. Contribution: We present a novel deep learning approach for named entity normalization, treating it as a pair-wise learning to rank problem. Our method utilizes the widely-used information retrieval algorithm Best Matching 25 to generate candidate concepts, followed by the application of bi-directional encoder representation from the encoder (BERT) to re-rank the candidate list. Notably, our approach eliminates the need for feature-engineering or rule creation. We conduct experiments on species entity types and evaluate our method against state-of-the-art techniques using LINNAEUS and S800 biomedical corpora. Our proposed approach surpasses existing methods in linking entities to the NCBI taxonomy. To the best of our knowledge, there is no existing neural network-based approach for species normalization in the literature.
Zainab Awan, Tim Kahlke, Peter Ralph, Paul Kennedy
2023-10-22T17:30:16Z
http://arxiv.org/abs/2310.14366v1
# Bi-Encoders based Species Normalization - Pairwise Sentence Learning to Rank ###### Abstract **Motivation:** Biomedical named-entity normalization involves connecting biomedical entities with distinct database identifiers in order to facilitate data integration across various fields of biology. Existing systems for biomedical named entity normalization heavily rely on dictionaries, manually created rules, and high-quality representative features such as lexical or morphological characteristics. However, recent research has investigated the use of neural network-based models to reduce dependence on dictionaries, manually crafted rules, and features. Despite these advancements, the performance of these models is still limited due to the lack of sufficiently large training datasets. These models have a tendency to overfit small training corpora and exhibit poor generalization when faced with previously unseen entities, necessitating the redesign of rules and features. **Contribution:** We present a novel deep learning approach for named entity normalization, treating it as a pair-wise learning to rank problem. Our method utilizes the widely-used information retrieval algorithm Best Matching 25 to generate candidate concepts, followed by the application of bidirectional encoder representations from Transformers (BERT) to re-rank the candidate list. Notably, our approach eliminates the need for feature engineering or rule creation. We conduct experiments on species entity types and evaluate our method against state-of-the-art techniques using the LINNAEUS and S800 biomedical corpora. Our proposed approach surpasses existing methods in linking entities to the NCBI taxonomy. To the best of our knowledge, there is no existing neural network-based approach for species normalization in the literature.1 Footnote 1: [https://www.ncbi.nlm.nih.gov/taxonomy/?term=human](https://www.ncbi.nlm.nih.gov/taxonomy/?term=human) entity normalization; deep learning; NCBI Taxonomy; species ## Introduction Biomedical named-entity normalization (NEN) is the process of assigning a unique identifier to a biomedical entity. For example, _Homo sapiens_, or human, is a species that is assigned the identifier 9606 in the NCBI taxonomy [1]. Biomedical named-entity recognition identifies the type of a biomedical entity, such as gene, protein, cell-line, disease, chemical, drug or species. Once identified, an entity has to be linked with a standard knowledge base such as the NCBI taxonomy [1] for species, PubChem for chemicals, Gene Ontology [2] for genes, UniProt [3] for proteins and the CTD (comparative toxicogenomics database) [4] for diseases. In the literature, NEN is also referred to as entity linking or entity disambiguation. Entity normalization, defined as linking named entities to standard database identifiers, is a primary task in any biomedical information extraction pipeline: it comes after named entity recognition (NER) and before relation extraction [5] and knowledge base construction [6]. Entity linking is essential to such a pipeline, as its applications include the semantic web, information retrieval, knowledge base construction, and recommender systems, to name a few. Entity linking poses challenges including ambiguity and term variation. Ambiguity arises when an entity can be linked to more than one identifier; for example, perenns can be linked to the taxa Bellis perenns or Monopera perenns with the taxonomy IDs 41492 or 2584082, respectively, depending on the context.
"Term variation" is when multiple syntactically different terms could be mapped to the same identifier. For instance, two morphologically different terms Drosophila melanogaster and fly are mapped to the same unique identifier 7227 in the NCBI taxonomy. Term variation is a major challenge in biomedical entity linking, whereas ambiguity is more prevalent in the general domain. In this paper, we investigate the issue of species named entity normalization, wherein a method is employed to assign a distinct identifier from the NCBI taxonomy to a given set of species. The recognition of species and their linkage to the NCBI taxonomy hold significant significance as they aid scientists in species identification, computational knowledge inference, and extraction. Our proposed approach draws inspiration from a recently introduced model in the field of medical entity normalization, aiming to reduce the reliance on feature engineering and rule creation [7]. In the next section we discuss related work, then our proposed method. Next we describe the experimental setup, results and discussions and finally we conclude our paper. ### Contribution We apply BERT-based ranking to species normalization to the NCBI taxonomy and benchmark our approach on LINNAEUS and S800 corpus. We compare our approach with two competitive baselines and demonstrate that those baseline methods cannot capture species' semantics and hence underperform. ### Related Work Biomedical named-entity normalization has been approached with dictionary-based as well as machine learning-based approaches. Also, most of the existing works for species focus on species recognition and normalization (linking to the NCBI) as a subsequent step. For chemical entity normalization tmChem [8] utilize a dictionary-based approach after performing chemical entity recognition. The chemical entities were collected from MeSH [2] and ChEBI[3]. Sieve-based entity linking [9] is an approach that uses ten sieves or rules to assign an identifier to disorder mentions. Sieves are rules such as an exact match or partial match. If a disorder mention does not pass any sieve, it is assigned the identifier "unlinkable". OrganismTagger [10] is a rule-based system for recognizing and normalizing organisms to the NCBI identifiers. Dictionary-based methods require the dictionaries to be comprehensive and re-processed every time a new organism is added to the database. Hence, speed is a concern for dictionary-based approaches. Kate [11] proposed an edit distance-based method for disease and disorder mention normalization, capable of automatically learning term variations/rules. The method, however, does not understand the semantics of mentions. Li and colleagues [12] apply a convolutional neural network (CNN) based network to disease normalization which uses a sieve-based approach [9] to generate candidate concepts for each disease mention and then maximize the similarity between mention and candidate concept pair. Cho et al. [13] use word2vec embeddings to map the plant entity mentions and concepts form knowledge bases on vector space and maximize the cosine similarity between the pair for diseases and plant names. Kaewphan et al. [14] propose a normalization method for multiple biomedical entity types based on fuzzy string matching. Entity mention and concept are mapped to vectors using character n-gram frequencies and cosine similarity is used as a scoring function to maximize the similarity between entity mentions and concept terms. 
DNorm [15] applies pair-wise learning to rank approach to the disease normalization task. The entities and concepts from knowledge base are represented as term frequency-inverse document frequency. A scoring function is learnt from training data that maximizes the similarity between entity mention and concepts. Furrer et al. address normalization of the biomedical entities [16] as a sequence labelling problem using a bi-directional long short-term memory network (Bi-LSTM). However, in a sequence labelling problem, a model can normalize only those entities seen in the training set, which is a major limitation of such systems. Zhou et al. propose knowledge enhanced normalization for genes and proteins [17]. Their method employs embedded language models (ELMo) and structural knowledge from NCBI and UniProt ontologies for protein and gene normalization with the performance of 44.5% F1-measure. NormCo [18] is a deep coherence model for disease normalization which combines and entity phrase model to capture a semantic model and a topical coherence model to learn about other disease mentions in a document. The final model is composed of combining these two models and are trained jointly. TaggerOne [19] jointly trains for NER and NEN using semi Markov models. It uses a rich feature set for NER and tf-idf weights for NEN. Deng and colleagues [20] propose a two-step convolutional neural network ensemble to normalize microbiology related entities. Ferre et al. [21] propose a hybrid of word embeddings, ontological information and weak supervision method for normalizing Habitat and Phenotype entities to OntoBiotope ontology [22]. Recently, bidirectional encoder representation from Transformers (BERT) [23] has been applied to biomedical/clinical entity normalization. BERT has been applied to clinical NEN in two different ways. Firstly, Li and colleagues [24] apply BERT to electronic health records to normalize disorder mentions to knowledge bases by treating normalization as a token level task, and a number of disorder mentions in the knowledge base are the number of classes. Secondly, Ji and colleagues [7] apply BERT to disease NEN by approaching it as a ranking problem which is a sequence level task. They use the BM25 algorithm to generate candidate concepts and rerank the candidates by learning semantic similarity between entity and concept pairs. Research Gaps Species normalization has mostly been addressed as a dictionary matching problem-as a subsequent step of named entity recognition. The issue with these approaches is that they do not capture the context and semantics of species names. The addition of a new species may require additional rules or features, and species may be linked to concepts that are lexically similar but semantically different. Here we apply 'pair-wise learning to rank' using pre-trained BERT [23] based language models for species normalization, which do not require any rules or features. The corpus can be easily constructed from the NCBI taxonomy with a simple script, and new entities can be added without changing rules or features. Although similar works have been done for the drugs and disease normalization, no work has been reported on linking taxonomic units to the NCBI identifiers. State-of-the-art NLP pipelines employ deep learning models for genes, disease, chemicals, drugs and mutations normalization and still rely on dictionary lookup approaches for species [25]. There are no existing neural models for species normalization. 
### Baselines We used OrganismTagger [10] and ORGANISMS web resource [26] as baselines. OrganismTagger [10] is a hybrid of machine learning and rule-based techniques. It uses pre-processing modules from the General Architecture for Text Engineering (GATE) framework to pre-process text documents and the NCBI taxonomy database for assigning a unique identifier to identified mentions in documents. We used GATE developer4 for tagging species with their unique identifier. In Figure 1, we show the GUI of OrganismTagger, where species highlighted in green colour have been recognized and assigned a unique identifier. The species that are not highlighted have not been recognized and hence not linked with any identifier. Footnote 4: [https://gate.ac.uk/family/developer.html](https://gate.ac.uk/family/developer.html) ORGANISMS [26] is a dictionary-based web resource for taxonomy-based retrieval of documents. It uses a dictionary that has taxonomic units from the NCBI taxonomy. For a given input mention, it retrieves a matched name, primary name, type and identifier. For instance, for the given query "bananas", it retrieves two matches, out of which the first match is considered only as in Figure 2. ### Proposed Method This section presents the problem definition and our proposed method to perform named entity normalization of species. #### Problem Definition We approach species normalization as a linking problem, where a given named entity \(NE\) should be linked to a candidate concept (CC) from a knowledge base (KB), \(\text{NE}\rightarrow\text{CC}\). The problem here is addressed as pair-wise learning to the rank problem (pLTR)[27]. In pLTR, named entities are considered queries and candidate concepts from the knowledge base form pairs with queries. During the prediction phase, a binary classifier assigns a score to each \((NE,CC)\) pair and the concept with the highest score is selected. The score is the probability of the classification label computed by the \(softmax\) function. ### Methodology First the corpora was downloaded with manually tagged and normalized species. Then the named entities were extracted from annotation files of all the articles/abstracts and duplicate entities were discarded. For example, if an entity "mouse" appears multiple times in several different articles, we consider it only once for training and testing. After deduplication, we resolve acronyms manually by searching the given identifier in the NCBI taxonomy. Figure 3 shows the architecture used for species named entity normalization. Our method has three main steps; pre-processing, candidate generation and candidate ranking. Figure 1: OrganismTagger - GATE based framework for organisms NER and NEN Figure 2: ORGANISMS - web-based resource for taxonomic names identification Pre-ProcessingIn the first step, we extract the named entities from.ann files and apply the following pre-processing steps. We convert them to lower case ASCII, remove punctuation marks and resolve acronyms to long-form manually. These named entities (species) will serve as queries to the BM25 algorithm [28]. We use the spaCy toolkit5 to pre-process documents. Footnote 5: [https://spacy.io/](https://spacy.io/) NCBI Taxonomy DictionaryWe downloaded names and node files from the NCBI Taxonomy website6 on 11 March, 2020. We pre-process the names file based on the information from the nodes file. We created separate dictionaries for strains, species, phylum, order, family and genus. 
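As an illustration of this dictionary-building step, the following minimal sketch parses the names and nodes dump files into per-rank dictionaries keyed by taxonomy identifier. The file names, field positions and rank labels assume the standard tab-and-pipe delimited names.dmp and nodes.dmp format; this is an illustrative sketch rather than the exact script used in this work.

```python
from collections import defaultdict

def load_ranks(nodes_path="nodes.dmp"):
    """Map tax_id -> rank (e.g. 'species', 'genus', 'phylum') from nodes.dmp."""
    ranks = {}
    with open(nodes_path, encoding="utf-8") as fh:
        for line in fh:
            fields = [f.strip() for f in line.split("|")]
            ranks[fields[0]] = fields[2]  # tax_id | parent tax_id | rank | ...
    return ranks

def build_dictionaries(names_path="names.dmp", nodes_path="nodes.dmp",
                       keep_ranks=("strain", "species", "phylum",
                                   "order", "family", "genus")):
    """Return {rank: {tax_id: [name, synonym, ...]}} restricted to the wanted ranks."""
    ranks = load_ranks(nodes_path)
    dictionaries = {rank: defaultdict(list) for rank in keep_ranks}
    with open(names_path, encoding="utf-8") as fh:
        for line in fh:
            fields = [f.strip() for f in line.split("|")]
            tax_id, name = fields[0], fields[1].lower()  # tax_id | name_txt | ...
            rank = ranks.get(tax_id)
            if rank in dictionaries:
                dictionaries[rank][tax_id].append(name)
    return dictionaries

if __name__ == "__main__":
    dicts = build_dictionaries()
    # e.g. the species entry indexed by tax_id 7 collects the names and synonyms
    # of Azorhizobium caulinodans, as in Figure 4
    print(list(dicts["species"].items())[:1])
```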
The identifiers of strains, species, family, genus and phylum serve as an index in the dictionary, and the actual name/phrase serves as the content of that particular index. For instance, species dictionary index #7 contains azorhizobium caulindoans, azotirhizobium caulindoans as its contents as shown in Figure 4. Footnote 6: [ftp://ftp.ncbi.nlm.nih.gov/pub/taxonomy/](ftp://ftp.ncbi.nlm.nih.gov/pub/taxonomy/) Figure 4: A snippet of NCBI taxonomy transformed to a corpus for BM25 Figure 3: Normalization framework for linking species with NCBI taxonomy identifiers Candidate GenerationIn this step, we give queries (species) as input to the BM25 algorithm and retrieve top \(k=10,3\) candidates for each query. Okapi BM25, a commonly used information retrieval algorithm by search engines [29, 27, 30], is a probabilistic framework which retrieves a ranked list of documents for a given query. We choose the default values of parameters as \(k1=1.2\), \(k2=100\) and \(b=0.75\)[31]. Candidate RankingReranking of the retrieved list is treated as a sentence-pair classification task. A query and its candidates make sentence pairs. Each query has at most ten query-candidate pairs. An example of such a pair is shown in Table 1, where aspergillus nidulans is the named entity that serves as the query to the BM25 algorithm which in turn retrieves \(k=10\) candidate concepts from the NCBI Taxonomy. The objective is to rerank the list and bring the correct candidate to the top of the list. The Label column is 1 if the retrieved candidate is correct and 0 if the retrieved candidate is incorrect. The sentence classification task will maximize semantic equivalence between query and candidate concept. We use pre-trained BioBERT [32] and Bert-base-uncased 1 for fine-tuning on the LINNAEUS and S800 corpora. The probability of \(label=1\) is used as the score for reranking the list. The pair with the highest probability(score) is assigned to the input entity, where identifier = arg max \(P(label=1|NE,CC)\). Footnote 1: [https://huggingface.co/bert-base-uncased](https://huggingface.co/bert-base-uncased) ### Corpora In this section, we explain the two corpora used for evaluation purposes. LinnaeusThe Linnaeus corpus [33] has 100 full text articles randomly chosen from Pubmed Central Open Access (PMC-OA) subset. The corpus has been annotated manually for species and has been mapped to NCBI taxonomy identifiers. 
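To make the two-stage pipeline concrete, the sketch below strings together BM25 candidate generation and BERT-based pair scoring on a toy query. The choice of the rank_bm25 package, the dmis-lab/biobert-base-cased-v1.1 checkpoint and the toy concept list are illustrative assumptions; for the scores to be meaningful, the classification head must first be fine-tuned on the query-candidate pairs described above.

```python
import torch
from rank_bm25 import BM25Okapi
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Toy corpus: concept_texts[i] holds the concatenated names/synonyms for one
# NCBI taxonomy identifier (the entries here are illustrative only).
concept_ids = ["162425", "41734", "463277"]
concept_texts = [
    "aspergillus nidulans emericella nidulans",
    "aspergillus latus aspergillus nidulans var latus",
    "synechococcus nidulans",
]

# --- Candidate generation with BM25 (k1 and b as in the text) ---------------
bm25 = BM25Okapi([t.split() for t in concept_texts], k1=1.2, b=0.75)
query = "aspergillus nidulans"
top_k = bm25.get_top_n(query.split(), list(range(len(concept_texts))), n=10)

# --- Candidate re-ranking as sentence-pair classification -------------------
# Public BioBERT base model; in practice a head fine-tuned on the pairs is used.
name = "dmis-lab/biobert-base-cased-v1.1"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)
model.eval()

def score(entity, concept):
    """P(label=1) for the pair encoded as [CLS] entity [SEP] concept [SEP]."""
    enc = tokenizer(entity, concept, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

ranked = sorted(top_k, key=lambda i: score(query, concept_texts[i]), reverse=True)
print("predicted identifier:", concept_ids[ranked[0]])
```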
\begin{table} \begin{tabular}{l|l|l} \hline \hline **Candidate Concept** & **Query Identifier** & **Label** \\ \hline aspergillus nidulans aspergillus nidulans americella nidulans (162425) & aspergillus nidulans (162425) & 1 \\ aspergillus latus aspergillus nidulans var latus aspergillus sp jac 2016b emericella nidulans var lata (41734) & aspergillus nidulans (162425) & 0 \\ aspergillus delacroixi aspergillus delacroixi aspergillus pelibulas nidulans (162425) & 0 \\ aspergillus nidulans var echninulaus aspergillus spinulos emericella nidulans var echninulata emericella & \\ nidulans var echninulata (1810908) & aspergillus nidulans (162425) & 0 \\ synechococcus nidulans(463277) & aspergillus nidulans (162425) & 0 \\ mecopus nidulans (1898863) & aspergillus nidulans (162425) & 0 \\ phyllotopsis nidulans(38812) & aspergillus nidulans (162425) & 0 \\ nassella nidulans (523898) & aspergillus nidulans (162425) & 0 \\ aphanothec nidulans (202207) & aspergillus nidulans (162425) & 0 \\ olgaea nidulans (591996) & aspergillus nidulans (162425) & 0 \\ oxalis nidulans (245251) & aspergillus nidulans (162425) & 0 \\ \hline \hline \end{tabular} \end{table} Table 1: Named entity (Query)- Candidate Concepts pairs example S800The S800 corpus [26] has 800 abstracts from PubMed from diverse taxonomic classes. It has eight taxonomic units from bacteriology, botany, entomology, medicine, mycology, protistology, virology and zoology. The corpus is manually annotated for species and normalized to NCBI taxonomy identifiers. ### Experimental Setup In this section, we discuss how the data has been split for training and evaluation and the configurations of the BERT network used for pair-wise learning to rank. Table 2 shows that we have used LINNAEUS and S800 corpora for training and evaluation where \(80\%-10\%-10\%\) of the documents go into training, development and test subsets, respectively. Each set had duplicates of named entities, as a named entity may appear more than once in an abstract or full-text document. Hence, we applied a deduplication script to get rid of duplicate named entities. For instance, if an entity "rat" appears multiple times in several abstracts, it is considered only once in the training subset. We use PyTorch's transformers library by HuggingFace[34] to finetune BERT. We implement pair-wise learning to rank model by adding a linear layer with softmax activation on \([CLS]\) token. We encode the input sequences as [CLS]named entity[SEP]candidate concept[SEP]. We use Bert-base-uncased and BioBert [32] pre-trained models with 10 epochs, batch size \(16\) and \(3\times 10^{-5}\) learning rate, and remaining hyper-parameters kept the same as in the pre-trained models. ### Evaluation and Discussion In this section, we evaluate our method on two corpora, LINNAEUS and S800. We report our findings and compare with OrganismTagger [10] and ORGANISMS [26]. We use accuracy as an evaluation metric. If the correctly linked identifier is ranked first (highest score) out of 10 candidates for a named entity, we consider it correct. We considered top-10 and top-3 candidates. When we used top-3 candidates, it resulted in a slightly lower performance. Hence, we used top-10 candidates only. Accuracy is measured as the number of correctly normalized entities divided by the total number of entities. For evaluation, we consider only those entities for which at least one correct candidate is generated in the candidate generation step. 
If no correct candidate is generated in the candidate-generation step, then there is no purpose of re-ranking the incorrect candidates. We perform five sets of experiments to evaluate our test data on OrganismTagger, ORGANISMS, BM25, BM25+Bert-base-uncased and BM25+BioBERT. OrganismTagger and ORGANISMS are the two state-of-the-art baseline methods for species normalization to the NCBI taxonomy. BM25 algorithm is used to generate candidate identifiers for the named entities(queries) in the test set. BM25+bert-base-uncased and BM25+BioBERT are used to re-rank the candidates generated from the BM25 \begin{table} \begin{tabular}{l c|c|c|c|c} \hline \hline & \multicolumn{2}{c|}{**LINNAEUS**} & \multicolumn{2}{c}{**S800**} \\ \cline{2-6} & train & dev & test & train & dev & test \\ \hline \# of documents & 80 & 10 & 10 & 640 & 80 & 80 \\ \hline \# of unique mentions & 257 & 102 & 44 & 1016 & 133 & 134 \\ \hline \hline \end{tabular} \end{table} Table 2: Corpora statistics algorithm. We report the accuracy of BM25 alone, where we consider the top-most candidate and do not consider the rest of the candidates. We then report accuracy after re-ranking the top-10 candidates generated from the BM25 generator. Table 3 shows that the BM25 generator combined with BioBERT outperforms OrganismTagger, ORGANISMS, BM25, and BM25+BERT-base-uncased. For the LINNAEUS corpus, when the BM25 generator was used alone, we see that only 50% of the entities were normalized correctly. However, after applying BERT based re-ranker, the accuracy is improved to 80% and 86.66% for BM25+Bert-base-uncased BM25+BioBERT methods, respectively. For the S800 corpus, Table 3 shows an accuracy of 71.76% for OrganismTagger and 72.41% for ORGANISMS BM25+BERT-base-uncased; the number of correctly normalized entities were the same for each of these but, the entities were different. The BM25 generator retrieves 58.6% correct identifiers for entities. Re-ranking improves the accuracy from 58.6% to 79.31% for the BM25+BioBert model. One limitation of this work is that for evaluation purpose, we consider only those entities for which BM25 generates at least one correct candidate out of the top-10 candidates. We discard those named entities for which no correct candidate was generated as reranking will not help. In this case, after generating candidates, we have 30 entities in LINNAEUS and 87 entities in the S800 test set. We show interesting use cases in Table 4, where we observe that for named entities child[8], children[9], fire ant[10] and Asian rice[11] ORGANISMS [26] assign identifiers based on the lexical overlap and ignores the semantic context of entities. For all of these named entities our proposed framework was able to understand semantics and hence assigned the correct identifiers. For instance, our approach assigned 9606 to children instead of assigning 525814 which is Childrena, a genus of butterflies. ## Conclusion In this research paper, we propose a comprehensive methodology for normalizing species entities to their corresponding NCBI taxonomy identifiers. 
Our approach \begin{table} \begin{tabular}{l c c} \hline \hline & LINNAEUS & S800 \\ \cline{2-3} & Accuracy & Accuracy \\ \hline \hline OrganismTagger [10] & 44.57\(\pm\)5.12 & 78.66\(\pm\)4.59 \\ ORGANISMS [26] & 50.74\(\pm\)11.2 & 74.62\(\pm\)3.27 \\ BM25 & 59.58\(\pm\)13.56 & 67.81\(\pm\)7.72 \\ BM25+bert-base-uncased & 82.46\(\pm\)3.67 & 84.16\(\pm\)7.51 \\ BM25+BioBERT & 88.56\(\pm\)5.12 & 86.86\(\pm\)4.96 \\ \hline \hline \end{tabular} \end{table} Table 3: Evaluation on LINNAEUS and S800 corpora for the test set consists of three key steps: the creation of a dictionary or corpus from the NCBI Taxonomy, candidate generation using the probabilistic information retrieval framework BM25, and re-ranking based on Bidirectional Encoder Representations from Transformers (BERT). We evaluate the effectiveness of our proposed pipeline on two widely used benchmark corpora, LINNAEUS and S800, specifically focusing on species entities. Our experimental results demonstrate that the BERT-based re-ranking significantly improves the accuracy of normalization by assigning higher ranks to the correct identifiers within the candidate list. However, the quality of candidates generated by the BM25 framework plays a crucial role in the success of the re-ranking process. Depending on the size and nature of the ontology to which entities need to be linked, rule-based, probabilistic, and semantic search-based candidate generators can be employed. Future research directions include exploring better candidate generation techniques to fully leverage the potential of BERT-based re-ranking. Additionally, the use of a joint model based on BERT for both species recognition and normalization could be investigated to minimize error propagation from the recognition stage to the normalization stage. ## Competing interests None declared. **Author details** \({}^{1}\)Australian Artificial Intelligence Institute, UTS, NSW, Sydney, Australia. \({}^{2}\)Climate Change Cluster, UTS, NSW, Sydney, Australia.
2301.02245
Comments on the Bell-Clauser-Horne-Shimony-Holt inequality
We discuss the relationship between the Bogoliubov transformations, squeezed states, entanglement and maximum violation of the Bell-CHSH inequality. In particular, we point out that the construction of the four bounded operators entering the Bell-CHSH inequality can be worked out in a simple and general way, covering a large variety of models, ranging from Quantum Mechanics to relativistic Quantum Field Theories. Various examples are employed to illustrate the above-mentioned framework. We start by considering a pair of entangled spin 1 particles and a squeezed oscillator in Quantum Mechanics, moving then to the relativistic complex quantum scalar field and to the analysis of the vacuum state in Minkowski space-time in terms of the left and right Rindler modes. In the latter case, the Bell-CHSH inequality turns out to be parametrized by the Unruh temperature.
S. P. Sorella
2023-01-05T05:17:38Z
http://arxiv.org/abs/2301.02245v2
# Comments on the Bell-Clauser-Horne-Shimony-Holt inequality ###### Abstract We discuss the relationship between the Bogoliubov transformations, squeezed states, entanglement and maximum violation of the Bell-CHSH inequality. In particular, we point out that the construction of the four bounded operators entering the Bell-CHSH inequality can be worked out in a simple and general way, covering a large variety of models, ranging from Quantum Mechanism to relativistic Quantum Field Theories. Various examples are employed to illustrate the above mentioned framework. We start by considering a pair of entangled spin 1 particles and a squeezed oscillator in Quantum Mechanics, moving then to the relativistic complex quantum scalar field and to the analysis of the vacuum state in Minkowski space-time in terms of the left and right Rindler modes. In the latter case, the Bell-CHSH inequality turns out to be parametrized by the Unruh temperature. ## 1 Introduction The aim of this work is that of pointing out that the construction of the four Hermitian operators \(A_{i}\), \(B_{i}\), \(i=1,2\) \[A_{i}^{2}=B_{i}^{2}=1\;,\qquad[A_{i},B_{k}]=0\;. \tag{1}\] characterizing the quantum violation of the Bell-Clauser-Horne-Shimony-Holt inequality [1, 2, 3], _i.e._ \[|\langle\psi|{\cal C}_{CHSH}|\psi\rangle|=|\langle\psi|(A_{1}+A_{2})B_{1}+(A_{ 1}-A_{2})B_{2}|\psi\rangle|>2\;, \tag{2}\] can be achieved in a rather simple and elegant way, which turns out to have general applicability, covering models ranging from Quantum Mechanics to more sophisticated examples such as: relativistic Quantum Field Theories. In order to illustrate how the setup works, we proceed first by discussing two examples from Quantum Mechanics. Entangled pair of spin 1 particles Let us start by considering a pair of entangled spin 1 particles. As entangled state \(|\psi_{s}\rangle\), we take the singlet state: \[|\psi_{s}\rangle=\left(\frac{|1\rangle_{A}|-1\rangle_{B}-|0\rangle_{A}|0\rangle_ {B}+|-1\rangle_{A}|1\rangle_{B}}{\sqrt{3}}\right)\;. \tag{3}\] It is easily checked that expression (3) can be thought as the vacuum state of the spin Hamiltonian \[H=\vec{S}_{A}\cdot\vec{S}_{B}=\frac{1}{2}\left(\vec{S}_{A}+\vec{S}_{B}\right)^ {2}-2\;. \tag{4}\] with \(\left(\vec{S}_{A},\vec{S}_{B}\right)\) denoting the spin 1 matrices. For the operators \((A_{i},B_{i})\) we write \[A_{i}|-1\rangle_{A} = e^{i\alpha_{i}}|0\rangle_{A}\;,\qquad A_{i}|0\rangle_{A}=e^{-i \alpha_{i}}|-1\rangle_{A}\;,\qquad A_{i}|1\rangle_{A}=|1\rangle_{A}\;,\] \[B_{i}|1\rangle_{B} = e^{i\beta_{i}}|0\rangle_{B}\;,\qquad B_{i}|0\rangle_{B}=e^{-i \beta_{i}}|1\rangle_{B}\;,\qquad B_{i}|-1\rangle_{B}=|-1\rangle_{B}\;, \tag{5}\] where \((\alpha_{i},\beta_{i})\) are arbitrary real coefficients. The Hermitian operators \(A_{i}\), \(B_{i}\), \(i=1,2\) fulfill the requirement (1). For the Bell-CHSH correlator we obtain \[\langle\psi_{s}|{\cal C}_{CHSH}|\psi_{s}\rangle=\frac{2}{3}\left(1-cos(\alpha _{1}+\beta_{1})-cos(\alpha_{2}+\beta_{1})-cos(\alpha_{1}+\beta_{2})+cos(\alpha _{2}+\beta_{2})\right)\;. \tag{6}\] Choosing, for example, \(\alpha_{2}=\beta_{2}=0\), \(\alpha_{1}=\frac{\pi}{2}\), \(\beta_{1}=\frac{3\pi}{4}\), one gets the violation \[|\langle\Omega|{\cal C}_{CHSH}|\Omega\rangle|=\frac{2(2+\sqrt{2})}{3}\approx 2.27\;, \tag{7}\] which compares well with the value reported by [4]. The same reasoning applies to the Bell spin 1/2 singlet state \[|\psi_{s}\rangle=\left(\frac{|+\rangle_{A}|-\rangle_{B}-|-\rangle_{A}|+ \rangle_{B}}{\sqrt{2}}\right)\;. 
\tag{8}\] For the operators \((A_{i},B_{i})\) we have now \[A_{i}|+\rangle_{A} = e^{i\alpha_{i}}|-\rangle_{A}\;,\qquad A_{i}|-\rangle_{A}=e^{-i \alpha_{i}}|+\rangle_{A}\;,\;,\] \[B_{i}|+\rangle_{B} = e^{i\beta_{i}}|-\rangle_{B}\;,\qquad B_{i}|-\rangle_{B}=e^{-i \beta_{i}}|+\rangle_{B}\;. \tag{9}\] Setting \(\alpha_{1}=0\), \(\alpha_{2}=\frac{\pi}{2}\), \(\beta_{1}=\frac{\pi}{4}\), \(\beta_{2}=-\frac{\pi}{4}\), one recovers Tsirelson's bound [3], namely \[|\langle\psi_{s}|{\cal C}_{CHSH}|\psi_{s}\rangle|=2\sqrt{2}\;. \tag{10}\] Two degrees of freedom squeezed oscillator In this second example we consider a two degrees of freedom squeezed oscillator. Let us begin by introducing two annihilation operators \((a,b)\) satisfying \[\left[a,a^{\dagger}\right] = 1\;,\qquad[a^{\dagger},a^{\dagger}]=0\;,\qquad[a,a]=0\;,\] \[\left[b,b^{\dagger}\right] = 1\;,\qquad[b^{\dagger},b^{\dagger}]=0\;,\qquad[b,b]=0\;,\] \[\left[a,b\right] = 0\;,\qquad[a,b^{\dagger}]=0\;. \tag{11}\] The operators \((a,b)\) annihilate the state \(|0\rangle\), which is not the vacuum state of the system, _i.e._ \[a|0\rangle=b|0\rangle=0\;. \tag{12}\] Let us introduce the new operators \((\alpha,\beta)\), obtained through the Bogoliubov transformations \[\alpha=\frac{(a-\eta b^{\dagger})}{\sqrt{1-\eta^{2}}}\;,\qquad\beta=\frac{(b- \eta a^{\dagger})}{\sqrt{1-\eta^{2}}}\;, \tag{13}\] where \(\eta\) is a real parameter, \(0<\eta<1\). It is easily checked that the operators \((\alpha,\beta)\) fulfill the same commutation relations of eq.(11), namely \[\left[\alpha,\alpha^{\dagger}\right] = 1\;,\qquad[\alpha^{\dagger},\alpha^{\dagger}]=0\;,\qquad[\alpha, \alpha]=0\;,\] \[\left[\beta,\beta^{\dagger}\right] = 1\;,\qquad[\beta^{\dagger},\beta^{\dagger}]=0\;,\qquad[\beta, \beta]=0\;,\] \[\left[\alpha,\beta\right] = 0\;,\qquad[\alpha,\beta^{\dagger}]=0\;. \tag{14}\] We consider now the normalized squeezed state \[|\eta\rangle=\sqrt{(1-\eta^{2})}\;e^{\eta a^{\dagger}b^{\dagger}}|0\rangle\;, \qquad\langle\eta|\eta\rangle=1\;. \tag{15}\] Let us show that the state \(|\eta\rangle\) is annihilated by the Bogoliubov operators \((\alpha,\beta)\): \[\alpha|\eta\rangle=\beta|\eta\rangle=0\;. \tag{16}\] In fact, we have \[(a-\eta b^{\dagger})e^{\eta a^{\dagger}b^{\dagger}}|0\rangle = \left[a,e^{\eta a^{\dagger}b^{\dagger}}\right]|0\rangle-\eta b^{ \dagger}e^{\eta a^{\dagger}b^{\dagger}}|0\rangle \tag{17}\] \[= \sum_{n=1}^{\infty}\frac{\eta^{n}}{n!}(b^{\dagger})^{n}\left[a,(a ^{\dagger})^{n}\right]|0\rangle-\eta b^{\dagger}e^{\eta a^{\dagger}b^{\dagger }}|0\rangle\] \[= \eta b^{\dagger}\sum_{n=1}^{\infty}\frac{\eta^{n-1}}{(n-1)!}(b^{ \dagger})^{(n-1)}(a^{\dagger})^{(n-1)}|0\rangle-\eta b^{\dagger}e^{\eta a^{ \dagger}b^{\dagger}}|0\rangle=0\;.\] Similarly for the operator \(\beta\). As a consequence, the squeezed state \(|\eta\rangle\) turns out to be the vacuum state of the Hamiltonian \[H = \alpha^{\dagger}\alpha+\beta^{\dagger}\beta=\frac{(1+\eta^{2})}{( 1-\eta^{2})}\left(a^{\dagger}a+b^{\dagger}b\right)-\frac{2\eta}{(1-\eta^{2})} \left(a^{\dagger}b^{\dagger}+ab\right)+2\eta^{2}.\] \[H|\eta\rangle = 0\;. \tag{18}\] This expression is nothing but the Hamiltonian worked out in [5] in order to establish the violation of the Bell-CHSH inequality in relativistic free Quantum Field Theory. It becomes apparent now that the role of the parameter \(\eta\), \(0<\eta<1\), is that of a coupling constant. ### Maximum violation of the Bell-CHSH inequality We are ready now to discuss the violation of the Bell-CHSH exhibited by this model. 
To introduce the four Hermitian operators \(A_{i}\), \(B_{i}\), \(i=1,2\) \[A_{i}^{2}=B_{i}^{2}=1\;,\qquad[A_{i},B_{k}]=0\;, \tag{19}\] we proceed in exactly the same way as done in the case of the spin 1 example. We write the squeezed state \(|\eta\rangle\) as \[|\eta\rangle = \sqrt{(1-\eta^{2})}\;\sum_{n=0}^{\infty}\eta^{n}\frac{(a^{\dagger }b^{\dagger})^{n}}{n!}|0\rangle \tag{20}\] \[= \sqrt{(1-\eta^{2})}\;\sum_{n=0}^{\infty}\left(\eta^{(2n)}|(2n_{a })(2n_{b})\rangle+\eta^{(2n+1)}|(2n_{a}+1)(2n_{b}+1)\rangle\right)\;,\] where \(|m_{a}m_{b}\rangle\) stands for the normalized state \[|m_{a}m_{b}\rangle=\frac{(a^{\dagger})^{m}(b^{\dagger})^{m}}{m!}|0\rangle\;. \tag{21}\] For the operators \(A_{i}\), \(B_{i}\), \(i=1,2\), we have \[A_{i}|(2n_{a})\;m_{b}\rangle = e^{i\alpha_{i}}|(2n_{a}+1)\;m_{b}\rangle\;,\qquad A_{i}|(2n_{a}+1 )\;m_{b}\rangle=e^{-i\alpha_{i}}|(2n_{a})\;m_{b}\rangle\;,\] \[B_{i}|m_{a}\;(2n_{b})\rangle = e^{i\beta_{i}}|m_{a}\;(2n_{b}+1)\rangle\;,\qquad B_{i}|m_{a}\;(2 n_{b}+1)\rangle=e^{-i\beta_{i}}|m_{a}\;(2n_{b})\rangle\;. \tag{22}\] The operator \(A_{i}\) acts only on the first entry, while \(B_{i}\) only on the second one. A quick calculation gives \[\langle\eta|\;A_{k}B_{i}\;|\eta\rangle=\frac{2\eta}{1+\eta^{2}}\;\cos(\alpha_ {k}+\beta_{i})\;. \tag{23}\] Therefore, for the Bell-CHSH correlator, one finds \[\langle\eta|{\cal C}_{CHSH}|\eta\rangle = \langle\eta|(A_{1}+A_{2})B_{1}+(A_{1}-A_{2})B_{2}|\eta\rangle \tag{24}\] \[= \frac{2\eta}{1+\eta^{2}}\left(\cos(\alpha_{1}+\beta_{1})+\cos( \alpha_{2}+\beta_{1})+\cos(\alpha_{1}+\beta_{2})-\cos(\alpha_{2}+\beta_{2}) \right)\;.\] Setting [5] \[\alpha_{1}=0\;,\qquad\beta_{1}=-\frac{\pi}{4}\;,\qquad\alpha_{2}=\frac{\pi}{2 }\;,\qquad\beta_{2}=\frac{\pi}{4}\;, \tag{25}\] expression (24) becomes \[\langle\eta|{\cal C}_{CHSH}|\eta\rangle=2\;\frac{2\sqrt{2}\eta}{1+\eta^{2}}\;, \tag{26}\] implying in a violation of the Bell-CHSH inequality whenever \[\sqrt{2}-1<\eta<1\;. \tag{27}\] In particular, the maximum violation is attained for values of \(\eta\approx 1\), namely \[\langle\eta|{\cal C}_{CHSH}|\eta\rangle\approx 2\sqrt{2}\;. \tag{28}\] ## 4 The relativistic complex scalar quantum Klein-Gordon field Let us move now to Quantum Field Theory, by considering the example of a free complex massive scalar Klein-Gordon field, _i.e._ \[{\cal L}=\left(\partial^{\mu}\varphi^{\dagger}\partial_{\mu}\varphi-m^{2} \varphi^{\dagger}\varphi\right)\;. \tag{29}\] Expanding \(\varphi\) in terms of annihiliation and creation operators, one gets \[\varphi(t,\vec{x})=\int\frac{d^{3}\vec{k}}{(2\pi)^{3}}\frac{1}{2\omega_{k}} \left(e^{-ikx}a_{k}+e^{ikx}b_{k}^{\dagger}\right)\;,\qquad k^{0}=\omega_{k}= \sqrt{\vec{k}^{2}+m^{2}}\;, \tag{30}\] where \[[a_{k},a_{q}^{\dagger}]=[b_{k},b_{q}^{\dagger}]=(2\pi)^{3}2\omega_{k}\delta^ {3}(\vec{k}-\vec{q})\;, \tag{31}\] are the non vanishing canonical commutation relations. Expression (30) is a too singular object, being in fact an operator valued distribution [6]. In order to give a well defined meaning to eq.(30), one introduces the smeared operators \[a_{f}=\int\frac{d^{3}\vec{k}}{(2\pi)^{3}}\frac{1}{2\omega_{k}}\hat{f}(\omega_ {k},\vec{k})\;a_{k}\;,\qquad b_{g}=\int\frac{d^{3}\vec{k}}{(2\pi)^{3}}\frac{1 }{2\omega_{k}}\hat{g}(\omega_{k},\vec{k})\;b_{k}\;, \tag{32}\] with \[\hat{f}(p)=\int d^{4}x\;e^{-ipx}f(x)\;,\qquad\hat{g}(p)=\int d^{4}x\;e^{-ipx} g(x).\;, \tag{33}\] where \((f(x),g(x))\) are test functions belonging to the space of compactly supported smooth functions \({\cal C}_{0}^{\infty}(\mathbb{R}^{4})\). 
The support of \(f(x)\), \(supp_{h}\), is the region in which the test function \(f(x)\) is non-vanishing. When rewritten in terms of the operators \((a_{f},b_{g})\), the canonical commutation relations (31) read \[\left[a_{h},a_{h^{\prime}}^{\dagger}\right]=\left[b_{h},b_{h^{\prime}}^{ \dagger}\right]=\langle h|h^{\prime}\rangle\;, \tag{34}\] where \(\langle h|h^{\prime}\rangle\) denotes the Lorentz invariant scalar product between the test functions \(h\) and \(h^{\prime}\). _i.e._ \[\langle h|h^{\prime}\rangle=\int\frac{d^{3}\vec{k}}{(2\pi)^{3}}\frac{1}{2 \omega_{k}}\hat{h}(\omega_{k},\vec{k})\hat{h}^{{}^{\prime}\ast}(\omega_{k}, \vec{k})=\int\frac{d^{4}k}{(2\pi)^{4}}2\pi\;\theta(k^{0})\delta(k^{2}-m^{2}) \hat{h}(k)\hat{h}^{{}^{\prime}\ast}(k)\;. \tag{35}\] In particular, from eq.(34), it follows that \[\left[a_{f},a_{f}^{\dagger}\right]=\left[b_{f},b_{f}^{\dagger}\right]=||f||^{ 2}\;, \tag{36}\] where \(||f||^{2}\) is the norm of the test function \(f\), _i.e._ \[||f||^{2}=\int\frac{d^{4}k}{(2\pi)^{4}}2\pi\;\theta(k^{0})\delta(k^{2}-m^{2})\hat {f}(k)\hat{f}^{*}(k)\;. \tag{37}\] Let us remark that, by a simple rescaling, the test functions can be taken to be normalized to 1, namely \[f(x)\rightarrow\frac{1}{||f||}f(x)\;,\qquad||f||=1\;. \tag{38}\] Therefore, expression (36) becomes \[\left[a_{f},a_{f}^{\dagger}\right]=\left[b_{f},b_{f}^{\dagger}\right]=||f||^{ 2}=1\;. \tag{39}\] The vacuum state \(|0\rangle\) of the theory is defined as \[a_{f}|0\rangle=b_{g}|0\rangle=0\qquad\forall\;(f,g)\in{\cal C}_{0}^{\infty}( \mathbb{R}^{4})\;. \tag{40}\] The scalar complex KG field \(\varphi\), eq.(29), enables us to introduce a squeezed state very similar to that introduced in the previous section, \[|\sigma\rangle=\sqrt{(1-\sigma^{2})}\;e^{\sigma a_{f}^{\dagger}b_{g}^{\dagger }}|0\rangle\;,\qquad\langle\sigma|\sigma\rangle=1\;,\qquad 0<\sigma<1\;. \tag{41}\] We write thus \[|\sigma\rangle = \sqrt{(1-\sigma^{2})}\;\sum_{n=0}^{\infty}\sigma^{n}\;\frac{(a_{ f}^{\dagger}b_{g}^{\dagger})^{n}}{n!}|0\rangle \tag{42}\] \[= \sqrt{(1-\sigma^{2})}\;\sum_{n=0}^{\infty}\left(\sigma^{(2n)}|(2 n_{f})(2n_{g})\rangle+\sigma^{(2n+1)}|(2n_{f}+1)(2n_{g}+1)\rangle\right)\;,\] where \(|m_{f}m_{g}\rangle\) stands for the normalized state \[|m_{f}m_{g}\rangle=\frac{(a_{f}^{\dagger})^{m}(b_{g}^{\dagger})^{m}}{m!}|0 \rangle\;. \tag{43}\] Proceeding as in the previous example, the operators \(A_{i}\), \(B_{i}\), \(i=1,2\), are given by \[A_{i}|(2n_{f})\;m_{g}\rangle = e^{i\alpha_{i}}|(2n_{f}+1)\;m_{g}\rangle\;,\qquad A_{i}|(2n_{f} +1)\;m_{g}\rangle=e^{-i\alpha_{i}}|(2n_{f})\;m_{g}\rangle\;,\] \[B_{i}|m_{f}\;(2n_{g})\rangle = e^{i\beta_{i}}|m_{f}\;(2n_{g}+1)\rangle\;,\qquad B_{i}|m_{f}\;( 2n_{g}+1)\rangle=e^{-i\beta_{i}}|m_{f}\;(2n_{g})\rangle\;. \tag{44}\] Again, the operator \(A_{i}\) acts only on the first entry, while \(B_{i}\) only on the second one. 
It turns out that \[\langle\sigma|\;A_{k}B_{i}\;|\sigma\rangle=\frac{2\sigma}{1+\sigma^{2}}\;\cos(\alpha_{k}+\beta_{i})\;, \tag{45}\] so that, for the Bell-CHSH correlator, one has precisely the expression obtained in the case of the squeezed oscillator, namely \[\langle\sigma|{\cal C}_{CHSH}|\sigma\rangle = \langle\sigma|(A_{1}+A_{2})B_{1}+(A_{1}-A_{2})B_{2}|\sigma\rangle \tag{46}\] \[= \frac{2\sigma}{1+\sigma^{2}}\left(\cos(\alpha_{1}+\beta_{1})+\cos(\alpha_{2}+\beta_{1})+\cos(\alpha_{1}+\beta_{2})-\cos(\alpha_{2}+\beta_{2})\right)\;.\] Therefore, using \[\alpha_{1}=0\;,\qquad\beta_{1}=-\frac{\pi}{4}\;,\qquad\alpha_{2}=\frac{\pi}{2}\;,\qquad\beta_{2}=\frac{\pi}{4}\;, \tag{47}\] expression (46) becomes \[\langle\sigma|{\cal C}_{CHSH}|\sigma\rangle=2\;\frac{2\sqrt{2}\sigma}{1+\sigma^{2}}\;, \tag{48}\] implying a violation of the Bell-CHSH inequality for \[\sqrt{2}-1<\sigma<1\;. \tag{49}\] The maximum violation is attained for values of \(\sigma\approx 1\): \[\langle\sigma|{\cal C}_{CHSH}|\sigma\rangle\approx 2\sqrt{2}\;. \tag{50}\] It is worth reminding here that the violation of the Bell-CHSH inequality in free Quantum Field Theory was established several decades ago by using Algebraic Quantum Field Theory techniques, see [5] and refs. therein. See also [7] for a more recent discussion. ## 5 Some remarks on the Minkowski vacuum and the Rindler wedges Let us conclude with a few remarks on the origin of the violation of the Bell-CHSH inequality in relativistic Quantum Field Theory. We consider here the case of a real scalar KG field. It can be argued that the violation of the Bell-CHSH inequality can be understood in a simple way as a consequence of the entanglement properties of the vacuum state \(|0\rangle\), which can be expressed as a squeezed state in terms of left and right Rindler modes [8, 9], _i.e._ \[|0\rangle=\prod_{i}\left((1-e^{-\frac{2\pi\omega_{i}}{a}})^{\frac{1}{2}}\sum_{n_{i}=0}^{\infty}\;e^{-\frac{\pi n_{i}\omega_{i}}{a}}\;|n_{i}\rangle_{L}|n_{i}\rangle_{R}\right)\;, \tag{51}\] where \((|n_{i}\rangle_{L},|n_{i}\rangle_{R})\) are the left and right Rindler modes and \(T=\frac{a}{2\pi}\) is the Unruh temperature. The relation (51) follows from the use of a Bogoliubov transformation applied to the quantization of the real scalar KG field in the Rindler wedges [8, 9]. Proceeding as in the previous cases, for the four operators \((A_{k},B_{k}),k=1,2\), we have \[A_{k}|2n_{i}\rangle_{L} = e^{i\alpha_{k}}|2n_{i}+1\rangle_{L}\;,\qquad A_{k}|2n_{i}+1\rangle_{L}=e^{-i\alpha_{k}}|2n_{i}\rangle_{L}\;, \tag{52}\] \[B_{k}|2n_{i}\rangle_{R} = e^{i\beta_{k}}|2n_{i}+1\rangle_{R}\;,\qquad B_{k}|2n_{i}+1\rangle_{R}=e^{-i\beta_{k}}|2n_{i}\rangle_{R}\;. \tag{53}\] As a consequence, setting again \[\alpha_{1}=0\;,\qquad\alpha_{2}=\frac{\pi}{2}\;,\qquad\beta_{1}=-\frac{\pi}{4}\;,\qquad\beta_{2}=\frac{\pi}{4}\;, \tag{54}\] the Bell-CHSH inequality can be parametrized as \[|\langle\Omega|{\cal C}_{CHSH}|\Omega\rangle|=2\sqrt{2}\;\tau(T)\;, \tag{55}\] where the form factor \(\tau\) reads \[\tau(T)=2\sum_{i}\frac{\left(e^{\frac{\pi\omega_{i}}{a}}-e^{-\frac{\pi\omega_{i}}{a}}\right)}{\left(e^{\frac{2\pi\omega_{i}}{a}}-e^{-\frac{2\pi\omega_{i}}{a}}\right)}=\sum_{i}\frac{1}{\cosh(\frac{\omega_{i}}{2T})}\;, \tag{56}\] from which it follows that the violation of the Bell-CHSH inequality in relativistic Quantum Field Theory can be analyzed in terms of the Unruh temperature [10].
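As a simple numerical cross-check of eqs. (45)-(49), one may evaluate the correlator directly from the truncated Fock expansion of the squeezed state and compare it with the closed form. The short script below is an illustrative aside rather than part of the derivation; it reproduces the onset of the violation at \(\sigma=\sqrt{2}-1\) and the approach to Tsirelson's bound as \(\sigma\to 1\).

```python
import numpy as np

def correlator(sigma, alpha, beta, n_max=1000):
    """<sigma| A_k B_i |sigma> from the Fock expansion truncated at n_max pairs."""
    theta = alpha + beta
    c = np.sqrt(1.0 - sigma**2) * sigma ** np.arange(n_max)  # coefficient of |n,n>
    val = 0.0 + 0.0j
    for m in range(n_max):
        if m % 2 == 0 and m + 1 < n_max:   # A_k B_i |m,m> = e^{+i theta} |m+1,m+1>
            val += c[m] * c[m + 1] * np.exp(1j * theta)
        elif m % 2 == 1:                   # A_k B_i |m,m> = e^{-i theta} |m-1,m-1>
            val += c[m] * c[m - 1] * np.exp(-1j * theta)
    return val.real

def chsh(sigma, a1=0.0, b1=-np.pi / 4, a2=np.pi / 2, b2=np.pi / 4):
    """Bell-CHSH correlator at the angles of eq. (47)."""
    return (correlator(sigma, a1, b1) + correlator(sigma, a2, b1)
            + correlator(sigma, a1, b2) - correlator(sigma, a2, b2))

for sigma in (0.2, np.sqrt(2) - 1, 0.6, 0.95):
    print(f"sigma={sigma:.4f}  CHSH={chsh(sigma):.4f}  "
          f"closed form={4 * np.sqrt(2) * sigma / (1 + sigma**2):.4f}")
# CHSH exceeds 2 precisely for sigma > sqrt(2) - 1 and tends to 2*sqrt(2) as sigma -> 1.
```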
## 6 Conclusion In this work we have pointed out that the four operators \((A_{i},B_{i}),i=1,2\) entering the Bell-CHSH inequality can be constructed in a simple and elegant way. The setup turns out to be of general applicability, ranging from Quantum Mechanics examples to relativistic Quantum Field Theory. We have argued that the violation of the Bell-CHSH inequality in relativistic Quantum Field Theory can be understood as a direct consequence of the decomposition of the vacuum state \(|0\rangle\) as a deeply entangled squeezed state in terms of left and right Rindler modes. These considerations might result in applications in several Quantum Field Theory models, including Abelian and non-Abelian gauge theories [10]. ## Acknowledgements The authors would like to thank the Brazilian agencies CNPq and FAPERJ for financial support. S.P. Sorella is a level 1 CNPq researcher under the contract 301030/2019-7.
2302.14177
Soft-Search: Two Datasets to Study the Identification and Production of Research Software
Software is an important tool for scholarly work, but software produced for research is in many cases not easily identifiable or discoverable. A potential first step in linking research and software is software identification. In this paper we present two datasets to study the identification and production of research software. The first dataset contains almost 1000 human labeled annotations of software production from National Science Foundation (NSF) awarded research projects. We use this dataset to train models that predict software production. Our second dataset is created by applying the trained predictive models across the abstracts and project outcomes reports for all NSF funded projects between the years of 2010 and 2023. The result is an inferred dataset of software production for over 150,000 NSF awards. We release the Soft-Search dataset to aid in identifying and understanding research software production: https://github.com/si2-urssi/eager
Eva Maxfield Brown, Lindsey Schwartz, Richard Lewei Huang, Nicholas Weber
2023-02-27T22:31:15Z
http://arxiv.org/abs/2302.14177v1
# Soft-Search: Two Datasets to Study the Identification and Production of Research Software ###### Abstract. Software is an important tool for scholarly work, but software produced for research is in many cases not easily identifiable or discoverable. A potential first step in linking research and software is software identification. In this paper we present two datasets to study the identification and production of research software. The first dataset contains almost 1000 human labeled annotations of software production from National Science Foundation (NSF) awarded research projects. We use this dataset to train models that predict software production. Our second dataset is created by applying the trained predictive models across the abstracts and project outcomes reports for all NSF funded projects between the years of 2010 and 2023. The result is an inferred dataset of software production for over 150,000 NSF awards. We release the Soft-Search dataset to aid in identifying and understanding research software production: [https://github.com/si2-urssi/eager](https://github.com/si2-urssi/eager) datasets, text classification, research software + Footnote †: ccs: Software and its engineering Software creation and management Data Collection and Annotation ### Finding Software Produced by NSF Awards The first step in our data collection process was to find software outputs from National Science Foundation (NSF) funded research. This step has two potential approaches. The first approach is a manual search for references and promises of software production within NSF award abstracts, project outcome reports, and papers supported by each award. This first approach is labor intensive and may be prone to labeling errors because while there may be a promise of software production in these documents, it may not be possible to verify such software was ultimately produced. The other approach is to predict software production using a trained model. We pursue this approach with the caveat that there are also potential label errors. To gather examples of verifiable software production, we created a Python script which used the GitHub API to search for repositories which included reference to financial support from an NSF award in the repositories README.md file. Specifically our script queried for README.md files which contained any of the following text snippets: 'National Science Foundation', 'NSF Award', 'NSF Grant', 'Supported by NSF', or 'Supported by the NSF'. GitHub was selected as the basis for our search because of its widespread adoption and mention in scholarly publication (Bowman et al., 2017). This search found 1520 unique repositories which contained a reference to the NSF in the repository's README.md file. ### Software Production Annotation The next step in our data collection process was to annotate each of the GitHub repositories found as either "software" or "not software." In our initial review of the repositories we had collected, we found that the content of repositories ranged from documentation, experimental notes, course materials, collections of one-off scripts written during a research project, to more typical software libraries with installation instructions, testing, and community support and use. Using existing definitions of what constitutes research software to form the basis of our annotation criteria (Bowman et al., 2017; Bowman et al., 2018), we conducted multiple rounds of trial coding on samples of the data. 
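A minimal sketch of the repository search described above is shown below; it queries the GitHub REST search API with the in:readme qualifier for each phrase. The use of the requests package, the pagination limits and the GITHUB_TOKEN environment variable are illustrative assumptions and do not reproduce the exact script.

```python
import os
import time
import requests

PHRASES = ["National Science Foundation", "NSF Award", "NSF Grant",
           "Supported by NSF", "Supported by the NSF"]
HEADERS = {"Authorization": f"token {os.environ.get('GITHUB_TOKEN', '')}",
           "Accept": "application/vnd.github+json"}

def search_readmes(phrase, max_pages=5):
    """Collect full names of repositories whose README mentions the phrase."""
    repos = set()
    for page in range(1, max_pages + 1):
        resp = requests.get(
            "https://api.github.com/search/repositories",
            params={"q": f'"{phrase}" in:readme', "per_page": 100, "page": page},
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()
        items = resp.json().get("items", [])
        repos.update(item["full_name"] for item in items)
        if len(items) < 100:  # no more result pages
            break
        time.sleep(2)  # stay well under the search API rate limit
    return repos

all_repos = set()
for phrase in PHRASES:
    all_repos |= search_readmes(phrase)
print(len(all_repos), "candidate repositories")
```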
Fleiss' kappa was used to determine if there was agreement between our research team on whether ten GitHub repositories contained'software' or not. On each round of trial coding ten GitHub repositories were randomly selected from our dataset for each member of our research team to annotate independently. When assessing a repository, members of the research team were allowed to use any information in the repository to determine their annotation (i.e. the content of the README.md file, the repository activity, documentation availability, etc.) Our final round of trial coding showed that there was near perfect agreement between the research team (K=0.892) (Krause et al., 2017). Our final annotation criteria was generally inclusive of labeling repositories as software, rather there were specific exclusion criteria that resulted in a repository being labeled as "not software". Specifically repositories were labeled as "not software" when a repository primarily consisted of: 1. project documentation or research notes 2. teaching materials for a workshop or course 3. the source code for a project or research lab website 4. collections of scripts specific to the analysis of a single experiment without regard to further generalizability 5. utility functions for accessing data without providing any additional processing capacity We then annotated all GitHub repositories in our dataset as either "software" or "not software" according to our agreed upon annotation criteria. ### Linking GitHub Repositories to NSF Awards Our final step in the data collection process was to link the annotated GitHub repositories back to specific NSF awards. To do so, we created a script which would load the webpage for each GitHub repository, scrape the content of the repository's README and find the specific NSF award ID number(s) referenced. While annotating the dataset, and with this script, our dataset size was reduced as we found some repositories were returned in the initial search because of the "NSF" acronym being used by other, non-United-States governmental agencies which also fund research. When processing each repository, our Python script would load the README content, search for NSF Award ID patterns with regular expressions, and then verify that each NSF award ID found was valid by requesting metadata for the award from the NSF award API. We then retrieved the text for each award's abstract and project outcomes report. This was the final step of our data collection process and allowed us to create a dataset of 446 unique NSF awards labeled as 'produced software' and 471 unique NSF awards labeled as 'did not produce software'. ## 3. Predictive Models Using the compiled Soft-Search Training dataset, we trained three different models using the text from either the award abstract or project outcomes report. The models trained include a logistic regression model trained with TF-IDF word embeddings (tfidf-logit), a logistic regression model trained with semantic embeddings (semantic-logit), and a fine-tuned transformer (transformer). The semantic embeddings and the base model from which we finetuned our own transformer model was the 'distilbert-base-uncased-finetuned-sst-2-english' model (Bowman et al., 2017). Each model was trained with 80% of the Soft-Search Training dataset. We then test each of the models and use F1 to rank each model's performance. Table 1 reports the results from training using the abstract text as input. The best performing model was the tfidf-logit which achieved an F1 of 0.673. 
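For reference, a minimal sketch of the tfidf-logit baseline using scikit-learn is shown below; the load_soft_search_training helper is a placeholder rather than a real API, and the vectorizer and classifier settings are assumed defaults rather than the tuned configuration in the soft-search package.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# texts: award abstracts (or project outcomes reports)
# labels: 1 = "produced software", 0 = "did not produce software"
texts, labels = load_soft_search_training()  # placeholder loader, not a real API

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=0, stratify=labels)

tfidf_logit = make_pipeline(
    TfidfVectorizer(lowercase=True, stop_words="english"),  # TF-IDF word embeddings
    LogisticRegression(max_iter=1000),                       # logistic regression head
)
tfidf_logit.fit(X_train, y_train)
print("F1:", f1_score(y_test, tfidf_logit.predict(X_test)))
```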
\begin{table} \begin{tabular}{l l r r r} \hline \hline & model & accuracy & precision & recall & f1 \\ \hline 0 & tfidf-logit & 0.674 & 0.674 & 0.674 & 0.673 \\ 1 & transformer & 0.636 & 0.608 & 0.697 & 0.649 \\ 2 & semantic-logit & 0.630 & 0.630 & 0.630 & 0.630 \\ 3 & regex & 0.516 & 0.515 & 0.516 & 0.514 \\ \hline \hline \end{tabular} \end{table} Table 1. Predictive Model Results (Trained with Abstract Text) Table 2 reports the results from training using the project outcomes reports as input. The best performing model was the tfidf-logit which achieved an F1 of 0.745. While the models trained with the project outcomes reports were trained with less data, the best model of the group achieved a higher F1 than any of the models trained with the abstracts. While we have not investigated further, we believe this to be because the project outcomes reports contain more direct citation of produced software rather than an abstract's promise of software production. The data used for training, and functions to reproduce these models, are made available via our Python package: soft-search. ## 4. The Soft-Search Dataset Using the predictive models, we compile the Soft-Search Inferred dataset which contains the metadata, abstract text, and project outcomes report text, for all NSF awarded projects during the 2010-2023 period. The Soft-Search Inferred dataset additionally contains our predictions for software production using both texts respectively. ### Trends and Observations Using the Soft-Search Inferred dataset we can observe trends in software production over time. Figure 1 plots the percent of awards which we predict to have produced software (using the award's abstract) over time. While there are minor year-to-year deviations in predicted software production, we observe the "Math and Physical Sciences" (MPS) funding program as funding the most awards which we predict to produce software, with "Computer and Information Science and Engineering" (CISE), and "Engineering" (ENG) close behind. We can additionally observe trends in software production as award duration increases. Figure 2 plots the percent of awards which we predict to have produced software (using the award's abstract) grouped by the award duration in years. We note that as award duration increases, the percentage of awards which are predicted to have produced software also tends to increase. ## 5. Conclusion We introduce Soft-Search, a pair of novel datasets for studying software production from NSF funded projects. The Soft-Search Training dataset is a human-labeled dataset with almost 1000 examples used to train models which predict software production from either the NSF award abstract text or the project outcomes report text. We used these models to generate the Soft-Search Inferred dataset. The Soft-Search Inferred dataset includes project metadata, the awards abstract and project outcomes report, and predictions of software production for each NSF funded project between 2010 and 2023. We hope that Soft-Search helps further new studies and findings in understanding the role software development plays in scholarly publication. \begin{table} \begin{tabular}{l l r r r} \hline \hline & model & accuracy & precision & recall & f1 \\ \hline 0 & tfidf-logit & 0.745 & 0.745 & 0.745 & 0.745 \\ 1 & transformer & 0.673 & 0.638 & 0.771 & 0.698 \\ 2 & semantic-logit & 0.633 & 0.633 & 0.633 & 0.632 \\ 3 & regex & 0.510 & 0.507 & 0.510 & 0.482 \\ \hline \hline \end{tabular} \end{table} Table 2. 
Predictive Model Results (Trained with Project Outcomes Report Text) Figure 1. Software Production Over Time (Using Predictions from Abstracts) \begin{table} \begin{tabular}{l l r r r} \hline \hline & Program & \# Awards & \# Software & \% Software \\ \hline 0 & MPS & 32885 & 19178 & 0.583184 \\ 1 & CISE & 24633 & 13274 & 0.538871 \\ 2 & ENG & 22900 & 11242 & 0.490917 \\ 3 & GEO & 17822 & 5142 & 0.288520 \\ 4 & BIO & 16990 & 6013 & 0.353914 \\ 5 & EHR & 13703 & 575 & 0.041962 \\ 6 & SBE & 13318 & 1966 & 0.147620 \\ 7 & TIP & 8597 & 4501 & 0.523555 \\ 8 & OISE & 2329 & 636 & 0.273079 \\ 9 & OIA & 498 & 123 & 0.246988 \\ \hline \hline \end{tabular} \end{table} Table 3. Composition of the NSF Soft Search Dataset Figure 2. Software Production Grouped By Award Duration (Using Predictions from Abstracts) All datasets and predictive models produced by this work are available from our GitHub repository: si2-urssi/eager. ### Limitations As discussed in Section 2, the Soft-Search Training dataset was entirely composed of NSF awards which ultimately released or hosted software (and other research products) on GitHub. Due to our data collection strategy, it is possible that each of the predictive models learned not to predict if an NSF award would produce software, but rather, if an NSF award would produce software hosted on GitHub. ### Future Work As discussed in Section 2.1, our initial method for attempting to find research software produced from NSF supported awards was to search for references and promises of software production in the abstract, project outcomes report, and attached papers of each award. While attempting this approach to create the dataset, we found that many awards and papers that reference computational methods do not provide a reference web link to their code repositories or websites. In some cases, we found repositories related to an award or paper via Google and GitHub search ourselves. While we support including references to code repositories in award abstracts, outcomes reports, and papers, future research should be conducted on how to enable automatic reconnection of papers and their software outputs. ## 6. Acknowledgements We thank the URSSI team, especially Karthik Ram for their input. This material is based upon work supported by the National Science Foundation under Grant 2211275.
2306.06084
Machine Vision Using Cellphone Camera: A Comparison of deep networks for classifying three challenging denominations of Indian Coins
Indian currency coins come in a variety of denominations. Of all the varieties, Rs.1, Rs.2, and Rs.5 have similar diameters. The majority of coin styles in market circulation for the Rs.1 and Rs.2 denominations are nearly the same except for the numerals on their reverse side. If a coin is resting on its obverse side, the correct denomination is not distinguishable by humans. Therefore, it was hypothesized that a digital image of a coin resting on either side could be classified into its correct denomination by training a deep neural network model. The digital images were generated by using cheap cell phone cameras. To find the most suitable deep neural network architecture, four were selected based on the preliminary analysis carried out for comparison. The results confirm that two of the four deep neural network models can classify the correct denomination from either side of a coin with an accuracy of 97%.
Keyur D. Joshi, Dhruv Shah, Varshil Shah, Nilay Gandhi, Sanket J. Shah, Sanket B. Shah
2023-05-12T04:43:51Z
http://arxiv.org/abs/2306.06084v1
Machine Vision Using Cellphone Camera: A Comparison of deep networks for classifying three challenging denominations of Indian Coins ###### Abstract Indian currency coins come in a variety of denominations. Off all the varieties $1, $2 and $5 have similar diameters. Majority of the coin styles in market circulation for denominations of $1 and $2 coins are nearly the same except numerals on its reverse side. If a coin is resting on its obverse side, the correct denomination is not distinguishable by humans. Therefore, it was hypothesized that a digital image of a coin resting on its either size could be classified into its correct denomination by training a deep neural network model. The digital images were generated by using cheap cell phone cameras. To find the most suitable deep neural network architecture, four were selected based on the preliminary analysis carried out for comparison. The results confirm that two of the four deep neural network models can classify correct denomination from either side of a coin with accuracy of 97%. deep neural networks, obverse and reverse sides, coins classification, database generation, performance comparison ## I Introduction and Background Machine Vision usually involves object detection and classification based on available images. Coins form a major part in the Indian currency system. With the increase in computational power, coin classification is one of the emerging field of study. Since the manual classification is time consuming, tedious, and prone to errors, there is a need to have an efficient, robust and automatic coin classification system. Moreover, such systems are useful in unmanned stores with automated checkout facility [1] and securing cash circulation [2]. Indian coins have a variety of styles for all the denominations in circulation. A number of studies on coin recognition/classification systems using neural networks are reported in literature. [3] developed a five convolutional layer-based deep neural network architecture for coin classification with 90% validation accuracy. [4] used cell phones to classify Indian currency notes with 93% validation accuracy. The authors in [5] have classified ancient Roman Republican Coins using the motifs minted on the reverse side of the coins. In [6], the authors majorly focus on the edge detection problem in coin recognition and suggest that dynamic statistical colour threshold method is the best suitable aid for the problem. Later they have used multi-channel Gabor filter for feature extraction and back propagation neural network for numeral recognition. The authors in [7] evaluated the effect on accuracy due to the design parameters which included size of the database, number of classes, quality of images, type of network, nature of training and testing strategy. A set of guidelines for part recognition tasks is presented. [8] presented a novel Multi-column Deep Neural Network for image classification. Their architecture outperformed the existing traditional models in terms of recognition accuracy on data-set such as MNIST and NORB. [9] focused on optimizing the task of Indian coins recognition at high speed for a camera-based moving coin system. [10] compared accuracy and performance of three different techniques for Indian coin classification namely particle classification, pattern matching and recognition matching. The authors reached the conclusion that none of the techniques reached the goal of 95% accuracy at 1000 coins/min. 
[11] proposed a coin recognition system design which consists of the Roberts edge detection method, the Laplacian of Gaussian edge detection method, the Canny edge detection method and a Multi-Level Counter Propagation Neural Network (ML-CPNN), yielding 93%, 95%, 97.25% and 99.47% recognition rates respectively. Noise in the form of shadows or from improper positioning of the coin/camera may lead to false classification. Classification accuracy is the major distinguishing performance measure for a developed classification model. However, a value of 100% accuracy is often not good because of the model's poor generalizability, mainly as a result of over-fitting [12]. The motivation of this work is to develop a well-generalizable classification model that can classify Indian coins from images captured using cell phones. Prior to designing such a system, a reliable database is required so that inferences can be derived and the underlying pattern can be represented in a clear manner. Indian coin denominations such as ₹1 and ₹2 have similar styles, so it is difficult to identify the denomination from a single side of the coin [10]. The absence of a good dataset for Indian coins in this context gave rise to the need to create such a dataset from scratch. This work involved the creation of a database of Indian coin images acquired in a systematic manner. The images in the database were processed to make them more amenable to data-driven classification methods such as deep neural networks. We employ selected renowned deep learning architectures based on convolutional neural networks and analyse their performance for this application. In this paper we refer to the two sides of the coin as obverse and reverse, where the reverse side contains the numerical value of the coin. Sections II and III discuss image database creation and image processing. Section IV provides details about the deep neural networks. Results and the conclusion with future work are discussed in sections V and VI, respectively. ## II Database Formation ### _Available Database_ The open-source database [13] consisted of 656 images of the coins currently used in India, comprising 194, 197, 191 and 74 images of ₹1, ₹2, ₹5 and ₹10 coins respectively. Here, the individual denominations have been properly segregated into different datasets. However, this database had a few demerits. First, the images have a lot of noise and are not suitable for direct use in feature extraction. Second, the background of the images was random across the entire database, reducing uniformity of the data. Third, the distance between a coin and the camera used for capturing the image was non-uniform. In a nutshell, a structured process was not followed for capturing the images of the coins. ### _Generated Database_ A database was therefore generated with a structured process. Fig. 1 shows two views of a sample setup. The distance between the coin and the camera was kept at 10 cm after a set of experiments with higher and lower values. Three cell phones were used to capture the coin images; Table 1 lists their major features. Features of the generated database are: 1) three different datasets to accommodate the three challenging denominations of ₹1, ₹2 and ₹5 (Fig. 2 shows only the reverse side of the coin styles considered in this work);
2) the background of the images was fixed to be dark and uniform without any granularity; 3) the physical coins were manually rotated by 90, 180, 270 and 360 degrees; in total 121 physical coins were used, as shown in Fig. 3. The total number of images, listed under 'column A' in Fig. 3, is 967 considering both sides. These images are then processed as discussed in the next section. \begin{table} \begin{tabular}{c c c c c} \hline **Cellphone** & \multicolumn{4}{c}{**Major Features**} \\ \cline{2-5} & **Model number** & **Shutter speed** & **Aperture** & **ISO** \\ \hline Redmi Note 9 Pro & M2003J6A11 & 1/17 second & f1.79 & 6500 \\ & SM-A315F & 1/20 second & f2.0 & 3200 \\ & MO4922IL/A & 1/4 second & f1.8 & 2000 \\ \hline \end{tabular} \end{table} Table 1: Major features of the three cellphones used for data acquisition. Fig. 1: Setup view from an angle (left) and from a side (right). Fig. 3: Database generation: from 121 physical coins (image acquisition) to the database of 37713 images (dataset augmentation). Fig. 2: Reverse side of the sample coins for ₹1: a total of 6, 20 and 17 physical coins for styles 1, 2 and 3 (top); ₹2: a total of 2, 26, 5 and 9 physical coins for styles 1, 2, 3 and 4 (middle); and ₹5: a total of 1, 15, 16, 3, and 1 physical coins for styles 1, 2, 3, 4, and 5 (bottom). For obverse sides, refer to [10]. ## III Processing the Datasets The generated images cannot be used directly due to reasons such as redundancy of image space and inconsistency in coin positions. Therefore these 967 images were processed such that for each input image there were 38 output images. The total number of images in the database, including the processed input images, is 37713, shown under 'column B' in Fig. 3. The image processing involved data cleaning and augmentation. ### _Data cleaning_ Self-optimizing architectures depend on the training data for the extraction of knowledge and on the testing data to ensure the absorption of that knowledge. To speed up the procedure, removal of redundant data is important, without which the learning process may be misguided. On the other hand, important data must not be lost or performance would suffer. Cleaning involved centering, cropping, resizing and grey-scaling. For centering and cropping, the HoughCircles method provided by the computer vision library OpenCV was used. The images were resized to 150x150x3 and then converted to greyscale, because in a separate set of experiments the colour feature was not found to significantly improve performance.
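To illustrate the cleaning step just described, the sketch below uses OpenCV's Hough circle transform to centre and crop the coin before resizing and grey-scaling. It is a minimal reconstruction, not the authors' code: the file handling, blur kernel and Hough parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def clean_coin_image(path, out_size=150):
    """Centre-crop a coin using Hough circle detection, then resize and grey-scale.
    Parameter values (blur kernel, Hough thresholds) are illustrative only."""
    img = cv2.imread(path)
    grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blurred = cv2.medianBlur(grey, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=1000, param1=100, param2=40,
                               minRadius=50, maxRadius=0)
    if circles is None:
        return None  # no coin detected; handle this image manually
    x, y, r = np.round(circles[0, 0]).astype(int)
    # Crop a square region around the detected coin, clipped to the frame.
    top, left = max(y - r, 0), max(x - r, 0)
    crop = img[top:y + r, left:x + r]
    crop = cv2.resize(crop, (out_size, out_size))
    return cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
```

In practice the Hough parameters would be tuned to the fixed 10 cm acquisition distance, since the coin radius in pixels is then roughly constant.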
### _Data Augmentation_ Deep learning models can solve complex problems as they are capable of identifying various hidden features and defining relationships between these interdependent features. To learn such intricate hidden patterns and relations, they need to identify features themselves, which requires a huge amount of data. Data augmentation synthetically increases the number of images by applying various transformations. The output images might seem similar to us, but they provide useful extra information to the machine learning model. Brightness and rotational augmentation were selected. In brightness augmentation, the images were darkened and brightened by 33% of the original image, as beyond this the change was not identifiable to a human eye. For rotational augmentation, the original image was digitally rotated in steps of 10 degrees. The rotation moves pixels out of the image frame and leaves corner areas of the frame with no pixel data that must be filled in; these corner areas were filled with the average of the four corner values of the original image. After applying both augmentations, 38 new images (36 from rotation augmentation and the remaining 2 from brightness augmentation) were created for each input image. The resulting database of 37713 images was then used for training and testing deep neural networks (DNNs). ## IV Experiments with Deep Neural Networks Four DNN architectures were selected for performance comparison on this application after a set of experiments with the existing dataset [13]. ### _Selecting deep network architectures_ Initially seven DNNs were tested with the database [13]: 1) AlexNet, 2) ResNet50, 3) Inception V3, 4) MobileNet V2, 5) GoogLeNet, 6) VGG16 and 7) LeNet. Considering four classes and a 75%-25% train-test data distribution, the accuracy was determined using the Adam optimizer with a cross-entropy loss. After 25 epochs the training was stopped and four architectures were selected for comparative performance analysis: 1) ResNet50, 2) Inception V3, 3) MobileNet V2, and 4) VGG16. ResNet50 has 50 layers of residual networks; the residual connections mitigate the problem of vanishing gradients by providing an alternative shortcut for the gradient to flow [14]. Inception V3 is an updated DNN architecture that uses aggressive regularization and suitably factorized convolutions for computational efficiency [15]. The VGG16 architecture is made up of very small convolutional filters and a depth of 16 weight layers [16]. MobileNetV2 is a convolutional neural network designed to work on mobile devices. It is based on an inverted residual structure, with bottleneck layers connected by residual connections. The design of MobileNetV2 contains a fully convolutional layer with 32 filters as well as 19 residual bottleneck layers [17]. ### _Methodology for model development_ The generated database was divided into training and testing datasets with a 67%-33% distribution. A total of 6 classes was prepared, considering the obverse and reverse sides of each denomination as two separate classes; this helped in inferring which side is classification friendly. Nevertheless, it is the denomination which is important, not a particular side. Therefore, the six classes were merged into 3 classes for the respective observations; a minimal code sketch of this fine-tuning setup is given below. The performance analysis of the DNN architectures and the results are discussed in the next section. ## V Results and Discussion The total number of images used for testing was 12446. Fig. 4 shows the performance plot of test accuracy vs epochs; the testing accuracy was observed for up to 50 epochs. The value of the testing accuracy after completion of the 30th epoch, indicated by the vertical line in Fig. 4, seemed well generalizable for all the DNN architectures under consideration. \begin{table} \begin{tabular}{c c c c c} \multirow{2}{*}{**Deep network architectures**} & \multicolumn{4}{c}{**Test Accuracy (in \%)**} \\ \cline{2-5} & **Obverse** & **Reverse** & **Both** & **Both (floor)** \\ \hline MobileNet V2 & 73.78 & 70.75 & 77.03 & 77 \\ ResNet50 & 92.68 & 93.08 & 94.97 & 94 \\ Inception V3 & 96.28 & 95.92 & 97.57 & 97 \\ VGG16 & 99.81 & 99.8 & 97.87 & 97 \\ \hline \end{tabular} \end{table} Table 2: DNN architectures' performance after completion of the 30\({}^{\text{th}}\) epoch. Figure 4: Performance plot of the four selected DNN architectures.
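As a concrete illustration of the model-development methodology above — an ImageNet-pretrained backbone with its final layer replaced by a six-class head, a 67%-33% split, and the Adam optimizer with cross-entropy loss — the sketch below uses PyTorch and torchvision, with ResNet50 standing in for any of the four compared architectures. The folder layout, batch size, learning rate and epoch count are illustrative assumptions, not the authors' exact settings.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # greyscale coins fed as 3-channel input
    transforms.Resize((150, 150)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("coin_db/", transform=tfm)  # hypothetical folder layout with 6 classes
n_train = int(0.67 * len(data))
train_set, test_set = random_split(data, [n_train, len(data) - n_train])
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights="IMAGENET1K_V1")   # torchvision >= 0.13 weight spec
model.fc = nn.Linear(model.fc.in_features, 6)      # six side-specific coin classes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(30):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```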
Table 2 shows the testing accuracy for the obverse side, the reverse side and both sides combined at the end of the 30th epoch. From Table 2 it is clear that InceptionNet V3 and VGG16 provided similar performance, while ResNet50 performed worse and MobileNet V2 performed the worst. Let us examine the individual confusion matrices obtained using the DNN architectures. ### _Results with InceptionNet V3_ Table 3 shows two confusion matrices for assessing the performance of InceptionNet V3: the left one is for 6 classes and the right one for 3 classes. Misclassification between two denominations is undesirable compared with misclassification between the two sides of the same denomination. Misclassifications between the ₹1 obverse side and the ₹2 obverse side are highlighted using a red dotted oval: ₹1 was classified as ₹2 for 60 observations and ₹2 was classified as ₹1 for 25 observations. This contributed the majority of the misclassifications in the overall confusion matrix shown on the right side of Table 3. The reason is the high similarity between the obverse sides of the ₹1 and ₹2 coins. Although there were some misclassifications, the reverse side does not seem to have this problem. There were in total 303 misclassifications out of 12446 observations in the overall confusion matrix shown on the right side of Table 3. ### _Results with VGG16_ Table 4 shows two confusion matrices, for 6 classes on the left and 3 classes on the right. Here the difference between the classification of ₹1 and ₹2 is more pronounced, as the highlighted values are larger than any other neighbouring misclassifications in the confusion matrix. This architecture registered a total of 264 misclassifications out of 12446 observations in the overall confusion matrix shown on the right side of Table 4. ### _Results with ResNet50_ The confusion matrices for 6 and 3 classes obtained from the ResNet50 architecture are shown on the left and right side of Table 5, respectively. A sum of 127 images of the ₹2 reverse side were classified as ₹1 reverse side images. Ignoring a few misclassifications, this was not the case with InceptionNet V3 and VGG16, which shows that ResNet50 does not perform as well as the previous two architectures. Interestingly, 95 images of the ₹2 obverse side were classified as the ₹1 obverse side, which is in line with the results obtained with InceptionNet V3 and VGG16. The same is highlighted in the confusion matrix shown on the left side of Table 5.
The results show 625 misclassifications out of 12446 observations in the overall confusion matrix. ### _Results with MobileNet V2_ Table 6 shows two confusion matrices following the convention of the previous three tables. Two highlighted values in dotted red ovals represent misclassification between ₹1 and ₹2 on both sides. In particular, 361 ₹1 reverse side images were classified as ₹2 reverse side images, while 255 ₹1 obverse side images were classified as ₹2 obverse side images. Here, the problem with the reverse side is more pronounced than with ResNet50. This problem was almost non-existent in the performance of the VGG16 and InceptionNet V3 architectures. The left confusion matrix in Table 6 shows further misclassifications, such as 496 for ₹1, but these are of less importance as they are between the two sides of the same denomination. A total of 2859 misclassifications were registered out of 12446 observations in the overall confusion matrix shown on the right of Table 6. ### _Results comparison_ Table 2 suggests that, considering the floored accuracy values for comparison, InceptionNet V3 and VGG16 provided similar performance. While VGG16 has an estimated 138 million trainable parameters, InceptionNet V3 has roughly 5 million trainable parameters. Therefore, in terms of lower computational overhead for this application, InceptionNet V3 is the most suitable DNN architecture. ResNet50 has roughly 23 million parameters but was unable to provide performance close to that of Inception V3 and VGG16. MobileNet V2's performance was not up to the mark although it was developed for applications involving mobile vision such as this one. ## VI Conclusion and Future work For the application of classifying three challenging denominations of Indian coins, four DNN architectures were compared: Inception V3, ResNet50, VGG16 and MobileNet V2. The computationally efficient and higher-performing architecture was InceptionNet V3. MobileNet V2 could not provide satisfactory performance. There is a plan to develop a cell phone application to identify the challenging denominations of Indian coins and to investigate MobileNet V2 further to make it readily adoptable for this application in the future. ## Acknowledgment The authors acknowledge the help and support received from Mr. Nisarg Patel, Mr. Devarsh Suthar and Mr. Tejas Chauhan for this research work.
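For readers reproducing the evaluation above, the snippet below sketches how the six side-specific predictions can be merged into the three denomination classes and tabulated as confusion matrices, as done for Tables 3-6. It assumes scikit-learn and an integer class ordering in which consecutive indices are the obverse and reverse of the same denomination; this ordering is an illustrative assumption, not necessarily the authors' convention.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Assumed class ordering (index // 2 gives the denomination index).
CLASSES_6 = ["Rs1_obverse", "Rs1_reverse", "Rs2_obverse", "Rs2_reverse",
             "Rs5_obverse", "Rs5_reverse"]

def evaluate(y_true6, y_pred6):
    """Return the 6-class and merged 3-class confusion matrices and the 3-class accuracy."""
    y_true6 = np.asarray(y_true6)
    y_pred6 = np.asarray(y_pred6)
    cm6 = confusion_matrix(y_true6, y_pred6, labels=range(6))
    # Merge obverse/reverse of the same denomination into a single class.
    cm3 = confusion_matrix(y_true6 // 2, y_pred6 // 2, labels=range(3))
    acc3 = np.trace(cm3) / cm3.sum()
    return cm6, cm3, acc3
```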
2307.00904
An open-source deep learning algorithm for efficient and fully-automatic analysis of the choroid in optical coherence tomography
Purpose: To develop an open-source, fully-automatic deep learning algorithm, DeepGPET, for choroid region segmentation in optical coherence tomography (OCT) data. Methods: We used a dataset of 715 OCT B-scans (82 subjects, 115 eyes) from 3 clinical studies related to systemic disease. Ground truth segmentations were generated using a clinically validated, semi-automatic choroid segmentation method, Gaussian Process Edge Tracing (GPET). We finetuned a UNet with MobileNetV3 backbone pre-trained on ImageNet. Standard segmentation agreement metrics, as well as derived measures of choroidal thickness and area, were used to evaluate DeepGPET, alongside qualitative evaluation from a clinical ophthalmologist. Results: DeepGPET achieves excellent agreement with GPET on data from 3 clinical studies (AUC=0.9994, Dice=0.9664; Pearson correlation of 0.8908 for choroidal thickness and 0.9082 for choroidal area), while reducing the mean processing time per image on a standard laptop CPU from 34.49s ($\pm$15.09) using GPET to 1.25s ($\pm$0.10) using DeepGPET. Both methods performed similarly according to a clinical ophthalmologist, who qualitatively judged a subset of segmentations by GPET and DeepGPET, based on smoothness and accuracy of segmentations. Conclusions: DeepGPET, a fully-automatic, open-source algorithm for choroidal segmentation, will enable researchers to efficiently extract choroidal measurements, even for large datasets. As no manual interventions are required, DeepGPET is less subjective than semi-automatic methods and could be deployed in clinical practice without necessitating a trained operator.
Jamie Burke, Justin Engelmann, Charlene Hamid, Megan Reid-Schachter, Tom Pearson, Dan Pugh, Neeraj Dhaun, Stuart King, Tom MacGillivray, Miguel O. Bernabeu, Amos Storkey, Ian J. C. MacCormick
2023-07-03T10:01:36Z
http://arxiv.org/abs/2307.00904v3
An open-source deep learning algorithm for efficient and fully-automatic analysis of the choroid in optical coherence tomography ###### Abstract Purpose: To develop an open-source, fully-automatic deep learning algorithm, DeepGPET, for choroid region segmentation in optical coherence tomography (OCT) data. Methods: We used a dataset of 715 OCT B-scans (82 subjects, 115 eyes) from 3 clinical studies related to systemic disease. Ground truth segmentations were generated using a clinically validated, semi-automatic choroid segmentation method, Gaussian Process Edge Tracing (GPET). We finetuned a UNet with MobileNetV3 backbone pre-trained on ImageNet. Standard segmentation agreement metrics, as well as derived measures of choroidal thickness and area, were used to evaluate DeepGPET, alongside qualitative evaluation from a clinical ophthalmologist. Results: DeepGPET achieves excellent agreement with GPET on data from 3 clinical studies (AUC=0.9994, Dice=0.9664; Pearson correlation of 0.8908 for choroidal thickness and 0.9082 for choroidal area), while reducing the mean processing time per image on a standard laptop CPU from 34.49s (\(\pm\)15.09) using GPET to 1.25s (\(\pm\)0.10) using DeepGPET. Both methods performed similarly according to a clinical ophthalmologist, who qualitatively judged a subset of segmentations by GPET and DeepGPET, based on smoothness and accuracy of segmentations. Conclusions: DeepGPET, a fully-automatic, open-source algorithm for choroidal segmentation, will enable researchers to efficiently extract choroidal measurements, even for large datasets. As no manual interventions are required, DeepGPET is less subjective than semi-automatic methods and could be deployed in clinical practice without necessitating a trained operator. ## 1 Introduction The retinal choroid is a complex, extensively interconnected vessel network positioned between the retina and the sclera. The choroid holds the majority of the vasculature in the eye and plays a pivotal role in nourishing the retina. Optical coherence tomography (OCT) is an ocular imaging modality that uses low-coherence light to construct a three-dimensional map of chorioretinal structures at the back of the eye. Standard OCT imaging does not visualise the deeper choroidal tissue well as it sits beneath the hyperreflective retinal pigment epithelium layer of the retina. Enhanced Depth Imaging OCT (EDI-OCT) overcomes this problem and offers improved visualisation of the choroid, thus providing a unique window into the microvascular network which not only resides closest to the brain embryologically, but also carries the highest volumetric flow per unit tissue weight compared to any other organ in the body. Since the advent of OCT, interest in the role played by the choroid in systemic health has been growing [1], as non-invasive imaging of the choroidal microvasculature may provide a novel location to detect systemic, microvascular changes early. Indeed, changes in choroidal blood flow, thickness and other markers have been shown to correspond with patient health, such as choroidal thickness in chronic kidney disease [2] and choroidal area and vascularity in Alzheimer's dementia [3]. Quantification of the choroid in EDI-OCT imaging requires segmentation of the choroidal space. However, this is a harder problem than retinal layer segmentation due to poor signal penetration from the device -- and thus lower signal-to-noise ratio -- and shadows cast by superficial retinal vessels and choroidal stroma tissue.
This results in poor intra- and inter-rater agreement even with manual segmentation by experienced clinicians, and manual segmentation is too labour intensive and subjective to be practical for analysing large scale datasets. Semi-automated algorithms improve on this slightly but are typically multi-stage procedures, requiring traditional image processing techniques to prepare the images for downstream segmentation [4]. Methods based on graph theory such as Dijkstra's algorithm [5, 6] or graph cut [7], as well as on statistical techniques including level sets [8, 9], contour evolution [10], and Gaussian mixture models [11] have been proposed previously. Concurrently, deep learning (DL)-based approaches have emerged. [12] used a DL model for choroid layer segmentation, but with traditional contour tracing as a post-processing step. Other DL-based approaches, too, combine traditional image processing techniques as pre- or post-processing steps [13, 14, 15], whereas others are fully DL-based [16, 17], the latter of which is in a similar vein to the proposed method. More recently, DL has been used to distil existing semi-automatic traditional image processing pipelines into a fully-automatic method [18]. Gaussian Process Edge Tracing (GPET), based on Bayesian machine learning [19], is a particularly promising method for choroid layer segmentation that has been clinically and quantitatively validated [20]. Gaussian process (GP) regression is used to model the upper and lower boundaries of the choroid from OCT scans. For each boundary, a recursive Bayesian scheme is employed to iteratively detect boundary pixels based on the image gradient and the GP regressor's distribution of candidate boundaries. However, GPET is semi-automatic and thus requires time-consuming manual interventions by specifically trained personnel, which introduces subjectivity and limits the potential for analysing larger datasets or deploying GPET into clinical practice. There are currently no accessible, open-source algorithms for fully-automatic choroidal segmentation. All available algorithms fall into one of three categories: First, semi-automatic methods [21, 22] that require human supervision and thus require training and introduce subjectivity. Second, fully-automatic DL-based methods that are not openly accessible, either only providing the code but not the trained model necessary to use the method [23, 24] or not providing any access at the time of writing [25]. Third, fully-automatic methods comprising many steps, requiring a good understanding of image processing techniques and a license for proprietary software (MATLAB) [26, 27]. We aim to develop and release an open-source, raw image-to-measurement, fully-automatic method for choroid region segmentation that can be easily used without special training and does not require licenses for proprietary software (Fig. 1). Importantly, we intend not only to make our method available to the research community, but to do so in a frictionless way that allows other researchers to download and use our method without seeking our approval. We distil GPET into a deep learning algorithm, DeepGPET, which can process images without supervision in a fraction of the time -- permitting analysis of large scale datasets and potential deployment into clinical care and research practice without prior training in image processing.
## Methods ### Study population We used 715 OCT B-scans belonging to 82 subjects from three studies: **OCTANE**[28], a study looking at renal function and impairment in chronic kidney disease patients. **i-Test**, a study recruiting pregnant women of any gestation or those who have delivered a baby within 6 months, including controls and individuals at high risk of complications. **Normative**, data from 30 healthy volunteers as a control group [29]. All studies conformed with the Declaration of Helsinki and received relevant ethical approval and informed consent from all subjects. Table 1 provides an overview of basic population characteristics and number of subjects/images of these studies. Two Heidelberg spectral domain OCT SPECTRALIS devices were used for image acquisition: the Standard Module (OCT1 system) and FLEX Module (OCT2 system). The FLEX is a portable version that enables imaging of patients in a ward environment. Both machines imaged a 30\({}^{\circ}\) (8.7 mm) region, generating a macular, cross-sectional OCT B-scan at 768 \(\times\) 768 pixel resolution. Notably, 14% of the OCT B-scans were non-EDI and thus present more challenging images with lower signal-to-noise ratio in the choroidal part of the OCT. Horizontal line and vertical scans were centred at the fovea with active eye tracking, using an Automatic Real Time (ART) value of 100. Posterior pole macular scans covered a 30-degree by 25-degree region, using EDI mode. We split the data into training (603 B-scans, 64 subjects), validation (58 B-scans, 9 subjects) and test sets (54 B-scans, 7 subjects) at the patient-level, stratified on scan type (EDI/non-EDI), cohort, and image quality. ### DeepGPET Our approach is to distil GPET into a DL model to obtain an efficient and fully-automatic method. We fine-tune a UNet [30] with MobileNetV3 [31] backbone pre-trained on ImageNet for 60 epochs with batch size 16 using AdamW [32] (\(\mathrm{lr}=10^{-3}\), \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), weight decay \(=10^{-2}\)). After epoch 30, we maintain an exponential moving average (EMA) of model weights which we then use as our final model. We use the following data augmentations: brightness and contrast changes, horizontal flipping, and simulated OCT speckle noise by applying Gaussian noise followed by multiplicative noise (all \(p=0.5\)); Gaussian blur and random affine transforms (both \(p=0.25\)). To reduce memory-load, we crop the black space above and below the OCT B-scan and process images at a resolution of \(544\times 768\) pixels. Images are standardised by subtracting \(0.1\) and dividing by \(0.2\), and no further pre-processing is done. We used Python 3.11, PyTorch 2.0, Segmentation Models PyTorch [33] and the timm library [34]. Our code is available here [to be added upon publication]. ### Statistical analysis We used Dice coefficient and Area Under the ROC Curve (AUC) for evaluating agreement in segmentations, as well as the Pearson correlation \(r\) and Mean Absolute Error (MAE) for segmentation-derived choroid thickness and area. The calculation of thickness and area from the segmentation is described in more detail in [20]. Briefly, for thickness the average of 3 measures is used, taken at the fovea and 2,000 microns from it in either direction by drawing a perpendicular line from the upper boundary to the lower boundary to account for choroidal curvature. 
For area, pixels are counted in a region of interest 3,000 microns around the fovea, which corresponds to the commonly used Early Treatment Diabetic Retinopathy Study (ETDRS) macular area of \(6,000\times 6,000\) microns [35]. We compare DeepGPET's agreement with GPET's segmentations against the repeatability of GPET itself. The creator of GPET, J.B., made both the original and repeated segmentations with GPET. Since both segmentations were done by the same person, there is no inter-rater subjectivity at play here. Thus, the intra-rater agreement measured here is a best case scenario and forms an upper-bound for agreement with the original segmentation. In addition to quantitative evaluations, we also compared segmentations by GPET and DeepGPET for 20 test set OCT images qualitatively by having them rated by I.M., an experienced clinical ophthalmologist. We selected 7 examples with the highest disagreement in thickness and area, 7 examples with disagreement closest to the median, and 6 examples with the lowest disagreement. Thus, these 20 examples cover cases where both methods are very different, cases of typical disagreement, and cases where both methods are very similar. In each instance, I.M. was shown the segmentations of both methods overlaid on the OCT -- blinded to which method produced which segmentation -- and also provided with the raw, full-resolution OCT, and was then asked to rate each one along three dimensions: quality of the upper boundary, quality of the lower boundary and overall smoothness, using an ordinal scale: "Very bad", "Bad", "Okay", "Good", "Very good". ## Results ### Quantitative Table 2 shows the results for DeepGPET and a repeat GPET, compared to the initial GPET segmentation as "ground-truth". \begin{table} \begin{tabular}{l c c c|c} \hline \hline & OCTANE & i-Test & Normative & Total \\ \hline Subjects & 47 & 5 & 30 & 82 \\ Male/Female & 24 / 23 & 0 / 5 & 20 / 10 & 44 / 38 \\ Right/Left eyes & 47 / 0 & 5 / 5 & 30 / 28 & 82 / 33 \\ Age (mean (SD)) & 48.8 (12.9) & 34.4 (3.4) & 49.1 (7.0) & 48.0 (11.2) \\ Machine & Standard & FLEX & Standard & Both \\ Horizontal/Vertical scans & 166 / 0 & 16 / 16 & 57 / 54 & 239 / 65 \\ Volume scans & 174 & 186 & 46 & 406 \\ Total B scans & 340 & 218 & 157 & 715 \\ \hline \hline \end{tabular} \end{table} Table 1: Overview of population characteristics. EDI, enhanced depth imaging; SD, standard deviation. Figure 1: Comparison between the semi-automatic GPET [19, 20] (top) and fully-automatic DeepGPET (bottom). #### Agreement in segmentation Both methods have excellent agreement with the original segmentations. DeepGPET's agreement is comparable to the repeatability of GPET itself, with DeepGPET's AUC being slightly higher (0.9994 vs 0.9812) and Dice coefficient slightly lower (0.9664 vs 0.9672). DeepGPET performing better in terms of AUC but worse in terms of Dice suggests that for pixels where it disagrees with GPET after thresholding, the confidence is lower than for ones where it agrees with GPET. This in turn suggests that DeepGPET is well-calibrated based on the raw predictions made for each pixel. #### Processing speed and manual interventions Both methods were compared on the same standard laptop CPU. DeepGPET processed the images in only 3.6% of the time that GPET needed. DeepGPET ran fully automatically and successfully segmented all images, whereas GPET required 1.27 manual interventions on average, including selecting initial pixels and manual adjustment of GPET parameters when the initial segmentation failed.
This results in massive time savings: a standard OCT volume scan consists of 61 B-scans. With GPET, processing such a volume for a single eye takes about 35 minutes, during which a person has to select initial pixels to guide tracing (for all images) and adjust parameters if GPET initially failed (for about 25% of images). In contrast, DeepGPET could do the same processing in about 76 seconds on the same hardware, during which no manual input is needed. DeepGPET could even be GPU-accelerated to cut the processing time by another order of magnitude. The lack of manual interventions means that, unlike GPET, DeepGPET introduces no subjectivity, particularly when used by different people. Additionally, DeepGPET does not require specifically trained analysts and could be used fully automatically in clinical practice. #### Agreement in choroid area and thickness GPET showed very high repeatability for thickness (Pearson \(r\)=0.9527, MAE=10.4074 \(\mu\)m) and area (Pearson \(r\)=0.9726, MAE=0.0486 mm\({}^{2}\)). DeepGPET achieved slightly lower, yet also very high agreement for both thickness (Pearson \(r\)=0.8908, MAE=13.3086 \(\mu\)m) and area (Pearson \(r\)=0.9082, MAE=0.0699 mm\({}^{2}\)). Fig. 2 shows correlation plots for thickness and area. DeepGPET's agreement with GPET does not quite reach the repeatability of GPET itself, when used by the same experienced analyst, but it is quite comparable and high in absolute terms. Especially noteworthy is that the MAE for repeated GPET is only 21% lower for thickness and 30% lower for area than for DeepGPET. Thus, DeepGPET comes quite close to optimal performance, i.e. best case repeatability where the same experienced analyst did both sets of annotation. Furthermore, the regression fits in both derived measures for DeepGPET are closer to the identity line than for the repeated GPET measurements. For CT, the linear fit estimated a slope value of 1.043 (95% confidence interval of 0.895 to 1.192) and intercept of -7.308 \(\mu\)m (95% confidence interval of -48.967 \(\mu\)m to 34.350 \(\mu\)m). For CA, the linear fit estimated a slope value of 1.01 (95% confidence interval of 0.878 to 1.137) and an intercept of 0.016 mm\({}^{2}\) (95% confidence interval of -0.195 mm\({}^{2}\) to 0.226 mm\({}^{2}\)). All confidence intervals contain 1 and 0 for the slope and intercepts, respectively, suggesting no systematic bias or proportional difference between GPET and DeepGPET [36, 37]. \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{AUC} & \multirow{2}{*}{Dice} & \multirow{2}{*}{Time (s/img)} & \multicolumn{2}{c}{Thickness} & \multicolumn{2}{c}{Area} \\ \cline{5-8} & & & & Pearson \(r\) & MAE (\(\mu\)m) & Pearson \(r\) & MAE (mm\({}^{2}\)) \\ \hline DeepGPET & 0.9994 & 0.9664 & 1.25 \(\pm\) 0.10 & 0.8908 & 13.3086 & 0.9082 & 0.0699 \\ \hline Repeat GPET & 0.9812 & 0.9672 & 34.49 \(\pm\) 15.09 & 0.9527 & 10.4074 & 0.9726 & 0.0486 \\ \hline \hline \end{tabular} \end{table} Table 2: Metrics for DeepGPET and repeated GPET using the initial GPET annotation as “ground-truth”. Time given as mean \(\pm\) standard deviation. Figure 2: Correlation plots comparing derived measures of mean choroid thickness (a) and choroid area (b) using DeepGPET and the re-segmentations using GPET.
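For completeness, the snippet below sketches how the reported agreement metrics — AUC and Dice at the pixel level, Pearson \(r\) and MAE on the derived measures — can be computed from predicted probability maps and reference masks. It is an illustrative reconstruction assuming NumPy, SciPy and scikit-learn; the 0.5 binarisation threshold is an assumption.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

def agreement_metrics(prob_maps, ref_masks, measure_a, measure_b):
    """Segmentation agreement (AUC, Dice) and derived-measure agreement (Pearson r, MAE).
    `prob_maps` are per-pixel probabilities, `ref_masks` binary reference masks,
    `measure_a`/`measure_b` are paired per-image derived measures (e.g. thickness)."""
    y_score = np.concatenate([p.ravel() for p in prob_maps])
    y_true = np.concatenate([m.ravel() for m in ref_masks])
    auc = roc_auc_score(y_true, y_score)
    y_pred = (y_score >= 0.5).astype(np.uint8)          # assumed threshold
    dice = 2 * np.sum(y_pred * y_true) / (np.sum(y_pred) + np.sum(y_true))
    r, _ = pearsonr(measure_a, measure_b)
    mae = np.mean(np.abs(np.asarray(measure_a) - np.asarray(measure_b)))
    return auc, dice, r, mae
```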
### Qualitative Table 3 shows the results of the adjudication between GPET and DeepGPET. The upper boundary was rated as "Very good" for both methods in all 20 cases. For the lower boundary, however, DeepGPET was rated as "Bad" in 2 cases, and in 1 case for smoothness. Otherwise, both methods performed very similarly. Fig. 3 shows some examples. In (a), DeepGPET segments more of the temporal region than GPET does, providing a full-width segmentation which was preferred by the rater. Additionally, both approaches are able to segment a smooth boundary, even in regions with stroma fluid obscuring the lower boundary (red arrow). In (b), the lower boundary for this choroid is very faint and is actually below the majority of the vessels sitting most posterior (red arrow). DeepGPET produced a smooth and concave boundary preferred by the rater, while GPET fell victim to hugging the posterior-most vessels in the subfoveal region. In (c), DeepGPET rejected the true boundary in the low contrast region (red arrow) and opted for a more well-defined one, while GPET segmented the more uncertain path. Since GPET permits human intervention, there is more opportunity to fine-tune its parameters to fit what the analyst believes is the true boundary. Here, the rater preferred GPET, while DeepGPET's under-confidence led to under-segmentation and to a bad rating. In (d) -- the test image with the largest disagreement in thickness and area -- the lower boundary is difficult to delineate due to a thick suprachoroidal space (red arrow) and thus a lack of lower boundary definition. Here, the rater gave a bad rating to DeepGPET and preferred GPET, while remarking that GPET actually under-segmented the choroid by intersecting through posterior vessels. ## Discussion We developed DeepGPET, a fully-automatic and efficient method for choroid layer segmentation, by distilling GPET, a clinically validated semi-automatic method. DeepGPET achieved excellent agreement with GPET on held-out data in terms of segmentation and derived choroidal measurements, approaching the repeatability of GPET itself. Most importantly, DeepGPET does not require specialist training and can process images fully automatically in a fraction of the time, enabling analysis of large scale datasets and potential deployment in clinical practice. While the observed agreement was very high, it was not perfect. However, even higher agreement with GPET would not necessarily produce a better method, as GPET itself is not perfect and even conceptually there is debate around the exact location of the choroid-scleral interface (CSI), i.e. the lower choroid boundary in an OCT B-scan. The CSI is commonly defined, e.g. by the original authors behind EDI-OCT [38], as the smooth inner boundary between the choroid and sclera, or just below the most posterior vessels but excluding the suprachoroidal space. However, even that definition is still debated and can be hard to discern in practice. Not all choroids are smooth, and there are edge cases like vessels passing from the sclera into the choroid, or stroma fluid obscurations that make the boundary even more ambiguous. Figure 3: Four examples from the adjudication. The rater preferred DeepGPET for (a–b) and GPET for (c–d). Top row: green, segmented by both GPET and DeepGPET; red, GPET only; and blue, DeepGPET only. Bottom row: arrows indicate important choroidal features which can make segmentation challenging. (a): no large vessels in nasal region to guide segmentation; (b): lower boundary very faint and below the posterior most vessels; (c): lower boundary noisy and faint; (d): large suprachoroidal space visible.
\begin{table} \begin{tabular}{l c c c} \hline \hline Method & Upper boundary & Lower boundary & Smoothness \\ \hline DeepGPET & Very good: 20 & Very good: 4, Good: 10, Okay: 4, Bad: 2 & Very good: 5, Good: 12, Okay: 2, Bad: 1 \\ GPET & Very good: 20 & Very good: 6, Good: 6, Okay: 8, Bad: 0 & Very good: 6, Good: 13, Okay: 1, Bad: 0 \\ \hline \hline \end{tabular} \end{table} Table 3: Qualitative ratings of 20 test set segmentations along 3 key dimensions. The rater was blinded to the identity of the methods and their order was randomised for every example. These features, coupled with low signal-to-noise ratio and vessel shadowing from superficial retinal vessels, all contribute to the difficult challenge of choroid layer segmentation. For quantitative analysis of choroidal phenotypes, the specific definition of the CSI is secondary to applying the same, consistent definition across and within patients. Here, fully-automatic methods like DeepGPET provide a large benefit by removing the subjectivity present in semi-automatic methods. In GPET, the initial points being selected determine what edge is being traced as the CSI, and thus two analysts with different understandings could produce vastly different segmentations. With DeepGPET, the same image is always segmented in the same way, removing subjectivity. Initial experiments with other types of OCT imaging have positively indicated DeepGPET's ability to generalise to different visualisations of the choroid. Fig. 4 shows a fully segmented retinal nerve fibre layer (RNFL) scan extracted from the Heidelberg Standard Module, showing promise in it's usability in scans different to the regular macular line scan. We hope in future iterations to experiment further with RNFL scans and different imaging devices entirely. In the present work, we used data from three studies, two OCT devices and included both EDI and non-EDI scans. However, we only used data from subjects that were either healthy or had systemic but not eye disease, to which DeepGPET might not be robust to. In future work, we plan to externally validate DeepGPET and include cases of ocular pathologies. A further limitation is that while GPET has been clinically validated, not all segmentations used for training DeepGPET were entirely perfect. Thus, revisiting some of the existing segmentations and manually improving them to a "gold standard" for purposes of training the model could improve DeepGPET. For instance, GPET does not always segment the whole width of the choroid. Interestingly, DeepGPET already is able to do that in some cases (e.g. Fig. 3(a) and Fig. 4), but also does emulate the incomplete segmentations by GPET in other cases. A model trained on enhanced "gold standard" segmentations would produce even better segmentations. Finally, we have focused on segmentation as it is the most important and most time-consuming step of choroidal analysis. However, the location of the fovea on OCT images needs identified to define the region of interest for derived measurements such as thickness, area and volume. Identifying the fovea is less time-consuming or ambiguous than choroid segmentation, and so we plan to extend DeepGPET to output the fovea location. This would make DeepGPET a fast and efficient end-to-end framework capable of converting a raw OCT image to a set of clinically meaningful segmentation-derived measurements. 
For quantitative analysis of choroidal phenotypes, the specific definition of the CSI is secondary to applying the same, consistent definition across and within patients. Here, fully-automatic methods like DeepGPET provide a large benefit by removing the subjectivity present in semi-automatic methods. In GPET, the initial points selected determine which edge is traced as the CSI, and thus two analysts with different understandings could produce vastly different segmentations. With DeepGPET, the same image is always segmented in the same way, removing subjectivity. Initial experiments with other types of OCT imaging have positively indicated DeepGPET's ability to generalise to different visualisations of the choroid. Fig. 4 shows a fully segmented retinal nerve fibre layer (RNFL) scan extracted from the Heidelberg Standard Module, showing promise for its usability in scans different to the regular macular line scan. We hope in future iterations to experiment further with RNFL scans and with different imaging devices entirely. In the present work, we used data from three studies and two OCT devices, and included both EDI and non-EDI scans. However, we only used data from subjects that were either healthy or had systemic but not eye disease, and DeepGPET might not be robust to ocular pathology. In future work, we plan to externally validate DeepGPET and include cases of ocular pathologies. A further limitation is that while GPET has been clinically validated, not all segmentations used for training DeepGPET were entirely perfect. Thus, revisiting some of the existing segmentations and manually improving them to a "gold standard" for purposes of training the model could improve DeepGPET. For instance, GPET does not always segment the whole width of the choroid. Interestingly, DeepGPET already is able to do that in some cases (e.g. Fig. 3(a) and Fig. 4), but also emulates the incomplete segmentations by GPET in other cases. A model trained on enhanced "gold standard" segmentations would produce even better segmentations. Finally, we have focused on segmentation as it is the most important and most time-consuming step of choroidal analysis. However, the location of the fovea on OCT images needs to be identified to define the region of interest for derived measurements such as thickness, area and volume. Identifying the fovea is less time-consuming and less ambiguous than choroid segmentation, and so we plan to extend DeepGPET to output the fovea location. This would make DeepGPET a fast and efficient end-to-end framework capable of converting a raw OCT image to a set of clinically meaningful segmentation-derived measurements. Likewise, segmenting the choroidal vessels is a very challenging task even for humans and would be prohibitively time-consuming to do manually, but in the future we aim to explore whether DeepGPET can automatically segment the vasculature within the choroid as well. ## Conclusion Choroid segmentation is a key step in calculating choroidal measurements like thickness and area. Currently, this is commonly done manually, which is labour intensive and introduces subjectivity. Semi-automatic methods only partially alleviate both of these problems, and previous fully-automatic methods were not easily accessible for researchers. DeepGPET addresses this gap as a fully-automatic, end-to-end algorithm that does not require manual interventions. DeepGPET provides similar performance to the previously clinically validated, semi-automatic GPET, while being fully-automatic and an order of magnitude faster. This enables the analysis of large scale datasets and potential deployment in clinical practice without necessitating a trained operator. Although the definition of the lower choroid boundary is still subject to debate - especially when it comes to supra-choroidal spaces - the most important consideration is to have a method that consistently applies the same definition across subjects and studies, which DeepGPET as a fully-automatic method provides. As an easily accessible, open-source algorithm for choroid segmentation, DeepGPET will enable researchers to easily calculate choroidal measurements much faster and with less subjectivity than before. ## Acknowledgements J.B. was supported by the Medical Research Council (grant MR/N013166/1) as part of the Doctoral Training Programme in Precision Medicine at the Usher Institute, University of Edinburgh. J.E. was supported by UK Research and Innovation (grant EP/S02431X/1) as part of the Centre of Doctoral Training in Biomedical AI at the School of Informatics, University of Edinburgh. For the purpose of open access, the authors have applied a creative commons attribution (CC BY) licence to any author accepted manuscript version arising. The authors would also like to thank all participants in the studies used in this paper, as well as all staff at the Edinburgh Imaging Facility who contributed to image acquisition for this study. Figure 4: An example RNFL scan from Heidelberg’s Standard Module, automatically segmented by DeepGPET without manual intervention. ## Conflicts of Interest The authors declare no conflicts of interest.
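For readers looking for a starting point, the following is a minimal sketch of the model and optimiser configuration described in the Methods (a UNet with an ImageNet-pretrained MobileNetV3 encoder via Segmentation Models PyTorch, AdamW, and an exponential moving average of the weights maintained after epoch 30). It is an illustrative reconstruction under stated assumptions, not the released DeepGPET code: the encoder name, loss function, EMA decay, and data loading are assumptions, and augmentation is omitted.

```python
import copy
import torch
import segmentation_models_pytorch as smp

# Assumed encoder name; SMP exposes MobileNetV3 backbones via timm.
model = smp.Unet(encoder_name="timm-mobilenetv3_large_100",
                 encoder_weights="imagenet", in_channels=1, classes=1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3,
                              betas=(0.9, 0.999), weight_decay=1e-2)
loss_fn = torch.nn.BCEWithLogitsLoss()   # loss choice is an assumption

ema_model, ema_decay = None, 0.999       # decay value is an assumption

def train_epoch(loader, epoch):
    global ema_model
    model.train()
    for images, masks in loader:         # e.g. images and masks of shape (B, 1, 544, 768)
        images = (images - 0.1) / 0.2    # standardisation stated in the paper
        optimizer.zero_grad()
        loss = loss_fn(model(images), masks)
        loss.backward()
        optimizer.step()
        if epoch >= 30:                  # maintain EMA of weights after epoch 30
            if ema_model is None:
                ema_model = copy.deepcopy(model)
            with torch.no_grad():
                # Simplified EMA over parameters only (buffers ignored).
                for p_ema, p in zip(ema_model.parameters(), model.parameters()):
                    p_ema.mul_(ema_decay).add_(p, alpha=1 - ema_decay)
```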
2310.15078
Convergence of a steepest descent algorithm in shape optimisation using $W^{1,\infty}$ functions
Built upon previous work of the authors in (Deckelnick, Herbert, and Hinze, ESAIM: COCV 28 (2022)), we present a general shape optimisation framework based on the method of mappings in the $W^{1,\infty}$ topology together with a suitable finite element discretisation. For the numerical solution of the respective discrete shape optimisation problems we propose a steepest descent minimisation algorithm with Armijo-Goldstein stepsize rule. We show that the sequence generated by this descent method globally converges, and under mild assumptions also, that every accumulation point of this sequence is a stationary point of the shape functional. Moreover, for the mesh discretisation parameter tending to zero we under mild assumptions prove convergence of the discrete stationary shapes in the Hausdorff complementary metric. To illustrate our approach we present a selection of numerical examples for PDE constrained shape optimisation problems, where we include numerical convergence studies which support our analytical findings.
Klaus Deckelnick, Philip J. Herbert, Michael Hinze
2023-10-23T16:37:07Z
http://arxiv.org/abs/2310.15078v1
# Convergence of a steepest descent algorithm in shape optimisation using \(W^{1,\infty}\) functions ###### Abstract Built upon previous work of the authors in Deckelnick, Herbert, and Hinze, ESAIM: COCV 28 (2022), we present a general shape optimisation framework based on the method of mappings in the \(W^{1,\infty}\) topology together with a suitable finite element discretisation. For the numerical solution of the respective discrete shape optimisation problems we propose a steepest descent minimisation algorithm with Armijo-Goldstein stepsize rule. We show that the sequence generated by this descent method globally converges, and under mild assumptions also, that every accumulation point of this sequence is a stationary point of the shape functional. Moreover, for the mesh discretisation parameter tending to zero we under mild assumptions prove convergence of the discrete stationary shapes in the Hausdorff complementary metric. To illustrate our approach we present a selection of numerical examples for PDE constrained shape optimisation problems, where we include numerical convergence studies which support our analytical findings. _Keywords:_ PDE constrained shape optimisation, \(W^{1,\infty}\)-steepest-descent, global convergence, finite element discretisation _MSC subject classification:_ 35Q93, 49Q10, 49J20 ## 1 Introduction We are interested in the numerical approximation of PDE constrained shape optimisation. Our prototype problem will be of the form \[\min\mathcal{J}(\Omega):=\int_{\Omega}j(\cdot,u,\nabla u)\,\mathrm{dx},\, \Omega\in\mathcal{S}, \tag{1.1}\] where \(j\) is a real-valued function whose properties will be specified in Section 2 and \(u\) weakly solves the Poisson problem \[-\Delta u=f\text{ in }\Omega,\quad u=0\text{ on }\partial\Omega.\] Furthermore, \(\mathcal{S}\) is a collection of admissible domains contained in a given hold-all domain \(D\subset\mathbb{R}^{d}\). We use the method of mappings and assume that each \(\Omega\in\mathcal{S}\) is represented by a bi-Lipschitz mapping \(\Phi\colon D\to D\) as \(\Omega=\Phi(\hat{\Omega})\), where \(\hat{\Omega}\Subset D\) is a fixed reference domain. For domain variations one seeks a mapping \(V^{*}\in W^{1,\infty}_{0}(D,\mathbb{R}^{d})\) which forms a descent direction for the shape derivative, i.e. satisfies \(\mathcal{J}^{\prime}(\Omega)[V^{*}]<0\). The new domain is then obtained as \(\Omega_{\mathrm{new}}=(\mathrm{id}+\alpha V^{*})(\Omega)\) with \(\alpha>0\) chosen suitably to ensure that the map \(\mathrm{id}+\alpha V^{*}\) is bi-Lipschitz. A common approach to determine a descent direction is to work in a Hilbert space \(H\hookrightarrow W^{1,\infty}(D,\mathbb{R}^{d})\) and then to find \(V^{*}\) as the corresponding Riesz representative of \(\mathcal{J}^{\prime}(\Omega)\). Depending on the space dimension this may require the use of Sobolev spaces \(H^{m}(D,\mathbb{R}^{d})\) with a larger \(m\in\mathbb{N}\) making the discretisation of this approach cumbersome. In this work we follow instead the concept introduced in [10], [10] and suggest to work directly in the space \(W^{1,\infty}_{0}(D,\mathbb{R}^{d})\) choosing \[V^{*}\in\arg\min\left\{\mathcal{J}^{\prime}(\Omega)[V]:V\in W^{1,\infty}_{0}( D,\mathbb{R}^{d}),|DV|\leq 1\text{ a.e. in }D\right\} \tag{1.2}\] as descent direction for the shape minimisation problem. In the above, by \(|DV|\), we mean the spectral norm of the matrix \(DV\). 
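To make the descent idea concrete, the following is a minimal, generic sketch (not the authors' implementation) of a steepest-descent loop with Armijo backtracking built around (1.2). Here `J`, `shape_derivative`, `steepest_direction` and `deform` are placeholder callables standing in for the evaluation of the shape functional, the shape derivative \(\mathcal{J}^{\prime}(\Omega)\), the solver of the constrained problem (1.2), and the domain update \(\Omega\mapsto(\mathrm{id}+\alpha V^{*})(\Omega)\); parameter values are illustrative.

```python
def armijo_descent(omega, J, shape_derivative, steepest_direction, deform,
                   sigma=1e-4, beta=0.5, alpha0=1.0, tol=1e-8, max_iter=100):
    """Generic steepest-descent loop with Armijo backtracking (illustrative sketch).

    J(omega)                 -> value of the shape functional,
    shape_derivative(omega)  -> linear functional V |-> J'(omega)[V],
    steepest_direction(dJ)   -> argmin of dJ[V] subject to |DV| <= 1 a.e.,
    deform(omega, V, alpha)  -> (id + alpha * V)(omega).
    """
    for _ in range(max_iter):
        dJ = shape_derivative(omega)
        V = steepest_direction(dJ)       # solves the constrained problem (1.2)
        slope = dJ(V)                    # J'(omega)[V] < 0 for a descent direction
        if abs(slope) < tol:
            break                        # (approximately) stationary shape
        alpha = alpha0                   # alpha also kept small enough that id + alpha*V stays bi-Lipschitz
        # Backtrack until sufficient decrease: J(new) <= J(old) + sigma * alpha * J'(omega)[V].
        while J(deform(omega, V, alpha)) > J(omega) + sigma * alpha * slope:
            alpha *= beta
        omega = deform(omega, V, alpha)
    return omega
```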
In order to approximate (1.1) based on this idea we introduce the functional \[\mathcal{J}_{h}(\Omega_{h}):=\int_{\Omega_{h}}j(\cdot,u_{h},\nabla u_{h})\, \mathrm{dx},\quad\Omega_{h}\in\mathcal{S}_{h}, \tag{1.3}\] where \(u_{h}\) denotes the piecewise linear and continuous finite element function solving the discrete Poisson problem (2.14) and \(\mathcal{S}_{h}\) is a suitable approximation of \(\mathcal{S}\). For the numerical solution of the discrete shape optimisation problem we propose a steepest descent method with Armijo step size rule which is realised in the \(W^{1,\infty}-\) topology as described above. In fact \(\mathcal{S}_{h}\) is built upon piecewise linear and continuous approximations \(\Phi_{h}\) of the mapping \(\Phi\), which in turn are induced by piecewise linear and continuous vector fields \(V^{*}_{h}\) solving the discrete counterpart of (1.2). We here note that the use of piecewise linear and continuous finite elements is perfectly tailored to the numerical treatment of our approach, since they belong to \(W^{1,\infty}\), and both problems (1.2) and (1.3) can be discretised on the same triangulation \(\mathcal{T}_{h}\). It is the purpose of this paper to analyse the resulting numerical method both for a fixed mesh width \(h\) and for the case that \(h\) tends to zero thereby justifying the underlying approach. The main contributions of this work are * Theorem 3.3, where global convergence of the steepest descent method is shown for a fixed discretisation parameter, and under mild assumptions also, that every accumulation point of this sequence is a stationary point of the discrete shape functional; * Theorem 4.4, where it is shown that under suitable conditions a sequence of discrete stationary shapes converges with respect to the Hausdorff complementary metric to a stationary point of (1.1). An important ingredient in the proof of Theorem 4.4 is the continuity of the Dirichlet problem with respect to the Hausdorff complementary metric which is usually expressed in terms of \(\gamma\)-convergence or (equivalently) Mosco-convergence. Our analysis is inspired by the work [13] of Chenais and Zuazua, who obtain the convergence of a sequence of discrete minimal shapes, obtained by some finite element approximation, to a minimum of the continuous problem. In [13], Mosco-convergence is a consequence of the assumption that the complementary sets of the discrete optimal shapes have a uniformly bounded number of connected components. In contrast, in our setting it will be more convenient to work with a uniform capacity density condition, see Theorem 4.1. A convergence result for a shape optimisation problem in the class of convex domains has recently been obtained by Bartels and Wachsmuth, [12] under a condition that will also appear in our work. In special settings a priori estimates for finite element approximations of shape optimisation problems have been proved. Here we refer to the works of Kiniger and Vexler [14] and Fumagalli et al. [15], where graph settings are considered, and of Eppler et al. [1] for star-shaped domains. Another aspect that has been examined from the viewpoint of numerical analysis is the approximation of the shape derivative. In [17] Hiptmair, Paganini, and Sargheini study the finite element approximation of the shape derivative under appropriate regularity assumptions of the state and the adjoint state. In [18] Gong and Zhu propose a finite element approximation to the boundary form of the shape derivative in PDE constrained shape optimisation. 
Zhu and Gao in [19] numerically analyse a mixed finite element approximation of the shape gradient for Stokes flow, and Zhu, Hu and Liao in [22] provide numerical analysis for the finite element approximation of shape derivatives in eigenvalue optimisation for the Poisson problem. For additional information on the subject of shape optimisation we refer the reader to the seminal works of Delfour and Zolesio [11], of Sokolowski and Zolesio [12], and the recent overview article [1] by Allaire, Dapogny, and Jouve, where also a comprehensive bibliography on the topic can be found. Outline:In Section 2 we provide preliminaries for the formulation and the numerical analysis of our PDE constrained shape optimisation problem. In Section 3 we prove global convergence for the steepest descent method applied to problem (1.3), and in Section 4 prove convergence of discrete stationary points to a stationary point for the limit problem (1.1). In Section 5 we provide numerical experiments which support our theoretical findings. ## 2 Preliminaries ### Setting of the problem Let \(D\subset\mathbb{R}^{d}\) be an open, convex, polygonal hold-all domain and \(\hat{\Omega}\Subset D\) a fixed reference domain. We define \[\mathcal{U}:=\{\Phi:\bar{D}\to\bar{D}\,|\,\Phi\text{ is a bilipschitz map},\Phi=\text{id on }\partial D\}\] and our set of admissible shapes as \[\mathcal{S}:=\{\Omega\subset D\,|\,\Omega=\Phi(\hat{\Omega})\text{ for some }\Phi\in\mathcal{U}\}.\] Let us consider the shape optimisation problem \[\min_{\Omega\in\mathcal{S}}\mathcal{J}(\Omega)=\int_{\Omega}j(x,u(x),\nabla u (x))\,\mathrm{dx},\] where \(u\in H^{1}_{0}(\Omega)\) is the unique solution of \[\int_{\Omega}\nabla u\cdot\nabla\eta\,\mathrm{dx}=\langle f,\eta\rangle \qquad\text{ for all }\eta\in H^{1}_{0}(\Omega). \tag{2.1}\] Our definition of \(\mathcal{S}\) allows us to interpret \(\mathcal{U}\) as the set of controls for a PDE-constrained optimisation problem. In what follows we assume that \(f\in H^{1}(D)\) and that \(j\in C^{2}(D\times\mathbb{R}\times\mathbb{R}^{d})\) satisfies \[|j(x,u,z)|+|j_{x}(x,u,z)|+|j_{xx}(x,u,z)| \leq \varphi_{1}(x)+c_{1}\big{(}|u|^{q}+|z|^{2}\big{)}; \tag{2.2}\] \[|j_{u}(x,u,z)|+|j_{xu}(x,u,z)| \leq \varphi_{2}(x)+c_{2}\big{(}|u|^{q-1}+|z|^{2-\frac{q}{q}}\big{)};\] (2.3) \[|j_{z}(x,u,z)|+|j_{zz}(x,u,z)| \leq \varphi_{3}(x)+c_{3}\big{(}|u|^{\frac{q}{2}}+|z|\big{)};\] (2.4) \[|j_{uu}(x,u,z)| \leq \varphi_{4}(x)+c_{4}\big{(}|u|^{q-2}+|z|^{2-\frac{q}{q}}\big{)};\] (2.5) \[|j_{zz}(x,u,z)| \leq \varphi_{5}(x) \tag{2.6}\] for all \((x,u,z)\in D\times\mathbb{R}\times\mathbb{R}^{d}\). Here, \(2\leq q<\infty\) if \(d=2\) and \(q=\frac{2d}{d-2}\) if \(d\geq 3\). Also, \(\varphi_{1},\ldots,\varphi_{5}\) are non-negative with \(\varphi_{1}\in L^{1}(D),\varphi_{2}\in L^{\frac{q}{q-1}}(D),\varphi_{3}\in L^{ 2}(D),\varphi_{4}\in L^{\frac{q}{q-2}}(D)\) and \(\varphi_{5}\in L^{\infty}(D)\). Note that the choice of \(q\) implies the continuous embedding \(H^{1}_{0}(D)\hookrightarrow L^{q}(D)\), so that there exists \(c>0\) with \[\|v\|_{L^{q}}\leq c\|v\|_{H^{1}}\qquad\text{ for all }v\in H^{1}_{0}(D). 
\tag{2.7}\] It is well known that the shape derivative of \(\mathcal{J}\) is given by \[\mathcal{J}^{\prime}(\Omega)[V] = \int_{\Omega}\Bigl{(}j(\cdot,u,\nabla u)\,\mathrm{div}\,V+j_{x} (\cdot,u,\nabla u)\cdot V-j_{z}(\cdot,u,\nabla u)\cdot DV^{\mathsf{T}}\nabla u \Bigr{)}\,\mathrm{dx}\] \[+\int_{\Omega}\Bigl{(}\bigl{(}DV+DV^{\mathsf{T}}-\mathrm{div}\,VI \bigr{)}\nabla u\cdot\nabla p+\mathrm{div}(fV)p\Bigr{)}\,\mathrm{dx}\] for all \(V\in W^{1,\infty}_{0}(D,\mathbb{R}^{d})\). Here, \(p\in H^{1}_{0}(\Omega)\) is the solution of the adjoint problem \[\int_{\Omega}\nabla p\cdot\nabla\eta\,\mathrm{dx}=\int_{\Omega}\bigl{(}j_{u}( \cdot,u,\nabla u)\eta+j_{z}(\cdot,u,\nabla u)\cdot\nabla\eta\bigr{)}\,\mathrm{dx }\quad\text{ for all }\eta\in H^{1}_{0}(\Omega). \tag{2.9}\] We observe that (2.2)-(2.4) together with (2.7) imply that the integrals on the right hand side of (2.8) and (2.9) exist. Finding a global minimiser of \(\mathcal{J}\) is usually a very hard task so that numerical methods aim to approximate stationary points, i.e. sets \(\Omega\in\mathcal{S}\) that satisfy \(\mathcal{J}^{\prime}(\Omega)[V]=0\) for all \(V\in W^{1,\infty}_{0}(D,\mathbb{R}^{d})\). ### Discretisation In order to define a corresponding numerical method we choose an admissible triangulation \(\hat{\mathcal{T}}_{h}\) of \(\bar{D}\) and define \[\hat{\mathcal{U}}_{h}:=\{\Phi_{h}\in C^{0}(\bar{D},\mathbb{R}^{d})\,|\,\Phi_{ h|\hat{T}}\in P^{1}(\hat{T},\mathbb{R}^{d}),\hat{T}\in\hat{\mathcal{T}}_{h}, \Phi_{h}\text{ is injective},\Phi_{h}=\mathrm{id\ on }\partial D\}.\] We start with the following observation. **Lemma 2.1**.: _Let \(\Phi_{h}\in\hat{\mathcal{U}}_{h}\). Then \(\Phi_{h}\) is a bilipschitz map from \(\bar{D}\) onto \(\bar{D}\)._ Proof.: Denoting by \(\deg\) the Brouwer degree and using that \(\Phi_{h}=\mathrm{id}\) on \(\partial D\), we have for every \(p\in D\) that \[\deg(\Phi_{h},D,p)=\deg(\mathrm{id},D,p)=1.\] Hence we deduce from the existence property of the degree that there exists \(x\in D\) with \(p=\Phi_{h}(x)\), and therefore \(D\subset\Phi_{h}(D)\). Next we claim that \(D\) is closed in \(\Phi_{h}(D)\). To see this, let \((p_{n})_{n\in\mathbb{N}}\) be a sequence in \(D\) such that \(p_{n}\to p\) as \(n\to\infty\) for some \(p\in\Phi_{h}(D)\), say \(p=\Phi_{h}(x)\) with \(x\in D\). If \(p\in\partial D\), then \(\Phi_{h}(x)=p=\Phi_{h}(p)\) and hence we obtain in view of the injectivity of \(\Phi_{h}\) that \(x=p\), a contradiction. Hence \(p\in D\). As \(D\) is also open in \(\Phi_{h}(D)\) and \(\Phi_{h}(D)\) is connected we infer that \(D=\Phi_{h}(D)\). Recalling again that \(\Phi_{h}=\mathrm{id}\) on \(\partial D\) we see that \(\Phi_{h}:\bar{D}\to\bar{D}\) is bijective. Finally, using the fact that \(\Phi_{h}\) is piecewise linear and injective together with the convexity of \(D\) it is not difficult to show that there exists a constant \(K>1\) depending on \(\Phi_{h}\) such that \[\frac{1}{K}|x-y|\leq|\Phi_{h}(x)-\Phi_{h}(y)|\leq K|x-y|\qquad\forall x,y\in \bar{D}. \tag{2.10}\] Similarly as in [1, Section 3.2] we shall define our discrete admissible domains via transformations of \(\hat{\Omega}\) from the set \(\hat{\mathcal{U}}_{h}\). In what follows we assume that \(\hat{\Omega}\) is an open polygonal domain such that \(\overline{\hat{\Omega}}=\bigcup_{\hat{T}\in\hat{\mathcal{T}}_{h}^{\text{ \tiny{self}}}}\hat{T}\subset D\), where \(\hat{\mathcal{T}}_{h}^{\text{\tiny{self}}}\subset\hat{\mathcal{T}}_{h}\). 
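In computations it is convenient to monitor the constant \(K\) from (2.10): since \(\Phi_{h}\) is piecewise affine, \(|D\Phi_{h}|\) and \(|(D\Phi_{h})^{-1}|\) are constant on each element and can be read off from the elementwise Jacobians. The following numpy sketch is illustrative only; the arrays `ref_vertices`, `def_vertices` and `triangles` are placeholder names for the vertex coordinates of \(\hat{\mathcal{T}}_{h}\), their images under \(\Phi_{h}\) and the element connectivity, and do not refer to any particular library.

```python
# Illustrative numpy sketch for monitoring the bilipschitz constant K in (2.10) of
# a piecewise linear map Phi_h. The arrays ref_vertices and def_vertices hold the
# vertex coordinates of the reference triangulation and their images under Phi_h,
# and triangles holds the element connectivity; injectivity of Phi_h is taken for
# granted here.
import numpy as np

def bilipschitz_constant(ref_vertices, def_vertices, triangles):
    K = 1.0
    for tri in triangles:
        X = ref_vertices[tri]              # (d+1) x d reference vertex coordinates
        Y = def_vertices[tri]              # (d+1) x d deformed vertex coordinates
        E_ref = (X[1:] - X[0]).T           # edge vectors of the reference element
        E_def = (Y[1:] - Y[0]).T           # edge vectors of the deformed element
        J = E_def @ np.linalg.inv(E_ref)   # constant Jacobian D Phi_h on this element
        s = np.linalg.svd(J, compute_uv=False)
        K = max(K, s.max(), 1.0 / s.min()) # spectral norms of D Phi_h and its inverse
    return K
```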
For later purposes we suppose in addition that \(\hat{\Omega}\) satisfies the following exterior corkscrew condition: \[\exists\lambda\in(0,1)\,\exists s_{0}>0\,\forall\hat{x}\in\partial\hat{ \Omega}\,\forall s\in(0,s_{0})\,\exists\hat{y}\in B_{s}(\hat{x}):\quad B_{ \lambda s}(\hat{y})\subset B_{s}(\hat{x})\cap\complement\hat{\Omega}. \tag{2.11}\] In the above \(\complement\hat{\Omega}\) denotes the complement of \(\hat{\Omega}\). We then define \[\mathcal{S}_{h}:=\{\Omega_{h}\subset D\,|\,\Omega_{h}=\Phi_{h}(\hat{\Omega}) \text{ for some }\Phi_{h}\in\hat{\mathcal{U}}_{h}\}. \tag{2.12}\] Note that in view of Lemma 2.1 sets \(\Omega_{h}\in\mathcal{S}_{h}\) are triangulated in a natural way via \(\mathcal{T}_{\Omega_{h}}=\{\Phi_{h}(\hat{T}),\,\hat{T}\in\hat{\mathcal{T}}_{h} ^{\text{\tiny{self}}}\}\). Given a triangulation of this form we introduce \[X_{\Omega_{h}}:=\{\eta_{h}\in C^{0}(\overline{\Omega_{h}})\,|\,\eta_{h|T}\in P _{1}(T),T\in\mathcal{T}_{\Omega_{h}},\,\eta_{h}=0\text{ on }\partial\Omega_{h}\}.\] Our discrete shape optimisation problem now reads: \[\min\mathcal{J}_{h}(\Omega_{h}):=\int_{\Omega_{h}}j(x,u_{h}(x),\nabla u_{h}(x)) \,\mathrm{dx}, \tag{2.13}\] where \(u_{h}\in X_{\Omega_{h}}\) is the unique solution of \[\int_{\Omega_{h}}\nabla u_{h}\cdot\nabla\eta_{h}\,\mathrm{dx}=\langle f,\eta_{h} \rangle\qquad\text{ for all }\eta_{h}\in X_{\Omega_{h}}. \tag{2.14}\] We remark that we have chosen linear finite elements merely for convenience and that one may take any conforming finite element space in order to approximate the solution of (2.1). Let us fix \(\Omega_{h}=\Phi_{h}(\hat{\Omega})\in\mathcal{S}_{h}\) for some \(\Phi_{h}\in\hat{\mathcal{U}}_{h}\). In order to define a suitable perturbation of \(\Omega_{h}\) we let \[\mathcal{V}_{\Phi_{h}}:=\{V_{h}\in C^{0}(\bar{D},\mathbb{R}^{d})\,|\,V_{h|T} \in P_{1}(T,\mathbb{R}^{d}),T=\Phi_{h}(\hat{T}),\hat{T}\in\hat{\mathcal{T}}_{h },\,V_{h}=0\text{ on }\partial D\}. \tag{2.15}\] Suppose that \(V_{h}\in\mathcal{V}_{\Phi_{h}}\) with \(|DV_{h}|\leq 1\) in \(\bar{D}\). Clearly, \(\Phi_{h}+tV_{h}\circ\Phi_{h}\) belongs to \(\hat{\mathcal{U}}_{h}\) provided that \(|t|<1\). Hence \(\Omega_{h,t}:=(\mathrm{id}+tV_{h})(\Omega_{h})=(\Phi_{h}+tV_{h}\circ\Phi_{h})( \hat{\Omega})\in\mathcal{S}_{h}\) if \(|t|<1\) and we may define \(\mathcal{J}^{\prime}_{h}(\Omega_{h})[V_{h}]:=\frac{d}{dt}\mathcal{J}_{h}( \Omega_{h,t})_{|t=0}\). The formula for \(\mathcal{J}^{\prime}(\Omega_{h})[V_{h}]\) is obtained analogously to the continuous case. As the corresponding arguments will appear in the proof of Lemma 3.2 below we here merely state its form: \[\mathcal{J}^{\prime}_{h}(\Omega_{h})[V_{h}] = \int_{\Omega_{h}}\Bigl{(}j(\cdot,u_{h},\nabla u_{h})\mathrm{div} V_{h}+j_{x}(\cdot,u_{h},\nabla u_{h})\cdot V_{h}-j_{z}(\cdot,u_{h},\nabla u_{h}) \cdot DV_{h}^{\mathsf{T}}\nabla u_{h}\Bigr{)}\,\mathrm{dx} \tag{2.16}\] \[+\int_{\Omega_{h}}\Bigl{(}\bigl{(}DV_{h}+DV_{h}^{\mathsf{T}}- \mathrm{div}V_{h}I\bigr{)}\nabla u_{h}\cdot\nabla p_{h}+\mathrm{div}(fV_{h})p_ {h}\Bigr{)}\,\mathrm{dx},\] where \(p_{h}\in X_{\Omega_{h}}\) solves \[\int_{\Omega_{h}}\nabla p_{h}\cdot\nabla\eta_{h}\,\mathrm{dx}=\int_{\Omega_{h }}\bigl{(}j_{u}(\cdot,u_{h},\nabla u_{h})\eta_{h}+j_{z}(\cdot,u_{h},\nabla u_{ h})\cdot\nabla\eta_{h}\bigr{)}\,\mathrm{dx}\quad\text{ for all }\eta_{h}\in X_{\Omega_{h}}. \tag{2.17}\] ### Descent algorithm With the notation introduced in the previous section we may now formulate a steepest descent method with Armijo search: **Algorithm 2.1** (Steepest descent).: _0. 
Let \(\Omega_{h}^{0}:=\hat{\Omega},\Phi_{h}^{0}=\mathrm{id}\)._ _For k=0,1,2,... :_ 1. _If_ \(\mathcal{J}^{\prime}_{h}(\Omega_{h}^{k})=0\)_, then stop._ 2. _Choose_ \(V_{h}^{k}\in\mathcal{V}_{\Phi_{h}^{k}}\) _such that_ \[V_{h}^{k}=\arg\min\{J^{\prime}_{h}(\Omega_{h}^{k})[W_{h}]\,|\,W_{h}\in\mathcal{ V}_{\Phi_{h}^{k}},\,|DW_{h}|\leq 1\text{ in }\bar{D}\}.\] 3. _Choose the maximum_ \(t_{k}\in\{\frac{1}{2},\frac{1}{4},\ldots\}\) _such that_ \[\mathcal{J}_{h}\bigl{(}(\mathrm{id}+t_{k}V_{h}^{k})(\Omega_{h}^{k})\bigr{)}- \mathcal{J}_{h}(\Omega_{h}^{k})\leq\gamma t_{k}\mathcal{J}^{\prime}_{h}(\Omega _{h}^{k})[V_{h}^{k}].\] 4. _Set_ \(\Phi_{h}^{k+1}:=(id+t_{k}V_{h}^{k})\circ\Phi_{h}^{k}\)_,_ \(\Omega_{h}^{k+1}:=(id+t_{k}V_{h}^{k})(\Omega_{h}^{k})\)_._ Here, \(\gamma\in(0,1)\) is a fixed constant. In view of the remarks after (2.15) the algorithm produces a sequence \((\Phi_{h}^{k})_{k\in\mathbb{N}_{0}}\subset\hat{\mathcal{U}}_{h}\) such that \(\Omega_{h}^{k}=\Phi_{h}^{k}(\hat{\Omega})\in\mathcal{S}_{h},k\in\mathbb{N}_{0}\). Our aim is to show that \[\|\mathcal{J}^{\prime}_{h}(\Omega_{h}^{k})\|:=\sup\{\mathcal{J}^{\prime}_{h}( \Omega_{h}^{k})[W_{h}]\,|\,W_{h}\in\mathcal{V}_{\Phi_{h}^{k}},|DW_{h}|\leq 1 \text{ in }\bar{D}\}\to 0,\quad\text{ as }k\to\infty.\] Convergence of the descent algorithm In the present section we investigate the global convergence of the descent Algorithm (2.1), where the discretisation parameter \(h\) is kept fixed. As a first step we note the following a-priori bounds on the discrete state and its adjoint state. **Lemma 3.1**.: _Let \(\Omega_{h}=\Phi_{h}(\hat{\Omega})\in\mathcal{S}_{h}\) and \(u_{h},p_{h}\in X_{\Omega_{h}}\) the solutions of (2.14), (2.17) respectively. Then_ \[\|u_{h}\|_{H^{1}}\leq c\|f\|_{L^{2}},\quad\|p_{h}\|_{H^{1}}\leq c\big{(}1+\|f\| _{L^{2}}^{q-1}\big{)}, \tag{3.1}\] _where the constant \(c\) only depends on \(d,j\) and \(D\). Here we think of \(u_{h}\) and \(p_{h}\) as being extended by zero to \(D\)._ Proof.: The first estimate is standard. In order to prove the bound on \(p_{h}\) we test (2.17) with \(\eta_{h}=p_{h}\in X_{\Omega_{h}}\) and use (2.3), (2.4), Holder's inequality and (2.7) to obtain \[\int_{D}|\nabla p_{h}|^{2}\,\mathrm{dx}=\int_{\Omega_{h}}|\nabla p _{h}|^{2}\,\mathrm{dx}\] \[\leq \int_{\Omega_{h}}\Big{[}\big{(}\varphi_{2}+c_{2}(|u_{h}|^{q-1}+| \nabla u_{h}|^{\frac{2(q-1)}{q}})\big{)}|p_{h}|+\big{(}\varphi_{3}+c_{3}(|u_{h }|^{\frac{2}{q}}+|\nabla u_{h}|)\big{)}|\nabla p_{h}|\Big{]}\,\,\mathrm{dx}\] \[\leq \big{(}\|\varphi_{2}\|_{L^{\frac{q}{q-1}}}+c_{2}(\|u_{h}\|_{L^{q }}^{q-1}+\|\nabla u_{h}\|_{L^{2}}^{\frac{2(q-1)}{q}})\big{)}\|p_{h}\|_{L^{q}}\] \[+\big{(}\|\varphi_{3}\|_{L^{2}}+c_{3}(\|u_{h}\|_{L^{q}}^{\frac{q} {q}}+\|\nabla u_{h}\|_{L^{2}})\big{)}\|\nabla p_{h}\|_{L^{2}}\] \[\leq c\big{(}1+\|u_{h}\|_{H^{1}}^{q-1}\big{)}\|p_{h}\|_{H^{1}}\leq c \big{(}1+\|f\|_{L^{2}}^{q-1}\big{)}\|p_{h}\|_{H^{1}}\leq c\big{(}1+\|f\|_{L^{q }}^{q-1}\big{)}\|\nabla p_{h}\|_{L^{2}},\] where we also made use of Poincare's inequality for \(D\) and the bound on \(u_{h}\). The estimate for \(\|p_{h}\|_{L^{2}}\) now follows from another application of Poincare's inequality. In order to establish the convergence of \(\mathcal{J}_{h}^{\prime}(\Omega_{h}^{k})\) we follow the general procedure outlined in Section 2.2.1 of [10]. The following result can be seen as an analogue of Lemma 2.2 in [10], where the uniform continuity of the derivative of the objective functional that is assumed in that result needs to be replaced by suitable arguments. 
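Before turning to the details, we illustrate the structure of Algorithm 2.1 with a short, purely schematic Python sketch. The callables `J`, `dJ`, `direction` and `deform` are placeholders for the discrete functional (2.13), the discrete shape derivative (2.16), the solution of the constrained problem in step 2 (see Section 5.1 for one possible construction) and the update of \(\Phi_{h}\); the names do not correspond to any existing library interface or to the implementation used in Section 5.

```python
# Purely schematic sketch of Algorithm 2.1 (steepest descent with Armijo-Goldstein
# rule). The callables J, dJ, direction and deform are placeholders supplied by the
# surrounding finite element code.
def steepest_descent(Omega_h, J, dJ, direction, deform,
                     gamma=0.1, t_min=2.0**-11, max_iter=100):
    for _ in range(max_iter):
        dJ_k = dJ(Omega_h)                 # linear functional V -> J_h'(Omega_h)[V]
        V_h = direction(Omega_h, dJ_k)     # step 2: minimise dJ_k[W] subject to |DW| <= 1
        slope = dJ_k(V_h)                  # equals -||J_h'(Omega_h)||, cf. (3.23)
        if slope >= 0.0:                   # step 1: stationary (up to round-off)
            return Omega_h
        J_old, t = J(Omega_h), 0.5
        # step 3: largest t in {1/2, 1/4, ...} with sufficient decrease
        while J(deform(Omega_h, V_h, t)) - J_old > gamma * t * slope:
            t *= 0.5
            if t < t_min:                  # step-size safeguard as used in Section 5.2
                return Omega_h
        Omega_h = deform(Omega_h, V_h, t)  # step 4: update Phi_h and Omega_h
    return Omega_h
```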
**Lemma 3.2**.: _Let \(\Omega_{h}=\Phi_{h}(\hat{\Omega})\in\mathcal{S}_{h}\), \(\mathcal{V}_{\Phi_{h}}\) as in (2.15) and \(V_{h}\in\mathcal{V}_{\Phi_{h}}\) such that_ \[V_{h}=\arg\min\{J_{h}^{\prime}(\Omega_{h})[W_{h}]\,|\,W_{h}\in\mathcal{V}_{ \Phi_{h}},|DW_{h}|\leq 1\text{ in }\bar{D}\}.\] _Suppose that \(\mathcal{J}_{h}^{\prime}(\Omega_{h})[V_{h}]\leq-\epsilon\) for some \(\epsilon>0\). Then there exists \(0<\delta<1\) which only depends on \(j,f,D,d,\gamma\) and \(\epsilon\) such that_ \[\mathcal{J}_{h}(\Omega_{h,t})-\mathcal{J}_{h}(\Omega_{h})\leq\gamma t\mathcal{ J}_{h}^{\prime}(\Omega_{h})[V_{h}]\qquad\text{ for all }0\leq t\leq\delta,\] _where \(\Omega_{h,t}=T_{t}(\Omega_{h})\) and \(T_{t}=id+tV_{h}\)._ Proof.: We follow the standard procedure for calculating the shape derivative with special attention on controlling the remainder terms. Recalling the definition of \(\mathcal{J}_{h}\) we have \[\mathcal{J}_{h}(\Omega_{h,t})=\int_{\Omega_{h,t}}j(\cdot,u_{h,t},\nabla u_{h,t })\,\mathrm{dx},\] where \(u_{h,t}\in X_{\Omega_{h,t}}\) solves \[\int_{\Omega_{h,t}}\nabla u_{h,t}\cdot\nabla\eta_{h,t}\,\mathrm{dx}=\int_{ \Omega_{h,t}}f\eta_{h,t}\,\mathrm{dx}\qquad\forall\eta_{h,t}\in X_{\Omega_{h,t}}.\] For \(\eta_{h}\in X_{\Omega_{h}}\) we have that \(\eta_{h,t}:=\eta_{h}\circ T_{t}^{-1}\in X_{\Omega_{h,t}}\) and hence \[\int_{\Omega_{h,t}}\nabla u_{h,t}\cdot\nabla(\eta_{h}\circ T_{t}^{-1})\, \mathrm{dx}=\int_{\Omega_{h,t}}f\eta_{h}\circ T_{t}^{-1}\,\mathrm{dx}\qquad \forall\eta_{h}\in X_{\Omega_{h}}, \tag{3.2}\] from which we infer with the help of the transformation rule \[\int_{\Omega_{h}}\nabla u_{h,t}\circ T_{t}\cdot\nabla(\eta_{h}\circ T _{t}^{-1})\circ T_{t}\,|\mathrm{det}DT_{t}|\,\mathrm{dx}=\int_{\Omega_{h}}f \circ T_{t}\,\eta_{h}\,|\mathrm{det}DT_{t}|\,\mathrm{dx}\qquad\forall\eta_{h} \in X_{\Omega_{h}}. \tag{3.3}\] Since \(|DV_{h}|\leq 1\) in \(\bar{D}\) we have \[\mathrm{det}DT_{t}-1=t\mathrm{div}V_{h}+r_{1},\quad\mbox{ with }|r_{1}|\leq ct^{2}, \tag{3.4}\] where the constant \(c\) only depends on \(d\). In particular there is \(\delta_{1}>0\) so that \(\mathrm{det}DT_{t}>0,0\leq t\leq\delta_{1}\). If we define \(\hat{u}_{h,t}:=u_{h,t}\circ T_{t}\in X_{\Omega_{h}}\) and \(A_{t}:=(DT_{t})^{-1}(DT_{t})^{-\mathrm{T}}\mathrm{det}DT_{t}\) the relation (3.3) can be written in the form \[\int_{\Omega_{h}}A_{t}\nabla\hat{u}_{h,t}\cdot\nabla\eta_{h}\, \mathrm{dx}=\int_{\Omega_{h}}f\circ T_{t}\,\eta_{h}\,\mathrm{det}DT_{t}\, \mathrm{dx}\qquad\mbox{ for all }\eta_{h}\in X_{\Omega_{h}}. 
\tag{3.5}\] Thus we have \[\mathcal{J}_{h}(\Omega_{h,t})-\mathcal{J}_{h}(\Omega_{h}) \tag{3.6}\] \[= \int_{\Omega_{h}}\bigl{(}j(T_{t},\hat{u}_{h,t},DT_{t}^{-\mathsf{T }}\nabla\hat{u}_{h,t})\,\mathrm{det}DT_{t}-j(\cdot,u_{h},\nabla u_{h})\bigr{)} \,\mathrm{dx}\] \[= \int_{\Omega_{h}}j(\cdot,u_{h},\nabla u_{h})(\mathrm{det}DT_{t}- 1)\,\mathrm{dx}+\int_{\Omega_{h}}\bigl{(}j(T_{t},\hat{u}_{h,t},DT_{t}^{- \mathsf{T}}\nabla\hat{u}_{h,t})-j(\cdot,u_{h},\nabla u_{h})\bigr{)}\,\mathrm{dx}\] \[+\int_{\Omega_{h}}\bigl{(}j(T_{t},\hat{u}_{h,t},DT_{t}^{-\mathsf{ T}}\nabla\hat{u}_{h,t})-j(\cdot,u_{h},\nabla u_{h})\bigr{)}(\mathrm{det}DT_{t}- 1)\,\mathrm{dx}\] \[= \sum_{j=1}^{3}T_{j}.\] We deduce with the help of (3.4), (2.2), (2.7) and (3.1) that \[T_{1}=t\int_{\Omega_{h}}j(\cdot,u_{h},\nabla u_{h})\,\mathrm{div }\,V_{h}\,\mathrm{dx}+\int_{\Omega_{h}}j(\cdot,u_{h},\nabla u_{h})\,r_{1}\, \mathrm{dx} \tag{3.7}\] \[\leq t\int_{\Omega_{h}}j(\cdot,u_{h},\nabla u_{h})\,\mathrm{div}\,V_{ h}\,\mathrm{dx}+ct^{2}\int_{\Omega_{h}}(\varphi_{1}+c_{1}|u_{h}|^{q}+c_{1}| \nabla u_{h}|^{2})\,\mathrm{dx}\] \[\leq t\int_{\Omega_{h}}j(\cdot,u_{h},\nabla u_{h})\,\mathrm{div}\,V_{ h}\,\mathrm{dx}+ct^{2}\bigl{(}1+\|u_{h}\|_{H^{1}}^{q}\bigr{)}\leq t\int_{ \Omega_{h}}j(\cdot,u_{h},\nabla u_{h})\,\mathrm{div}\,V_{h}\,\mathrm{dx}+ct^{ 2}.\] In order to treat \(T_{2}\) we use Taylor's formula and write \[j(T_{t},\hat{u}_{h,t},DT_{t}^{-\mathsf{T}}\nabla\hat{u}_{h,t})-j (\cdot,u_{h},\nabla u_{h}) \tag{3.8}\] \[= tj_{x}(\cdot)\cdot V_{h}+j_{u}()(\hat{u}_{h,t}-u_{h})+j_{z}( \cdot)\cdot(DT_{t}^{-\mathsf{T}}\nabla\hat{u}_{h,t}-\nabla u_{h})\] \[\quad+\int_{0}^{1}(1-s)\frac{d^{2}}{ds^{2}}\left[j(\cdot+stV_{h}, s\hat{u}_{h,t}+(1-s)u_{h},sDT_{t}^{-\mathsf{T}}\nabla\hat{u}_{h,t}+(1-s) \nabla u_{h})\right]ds,\] where the first order derivatives of \(j\) are evaluated at \((\cdot,u_{h},\nabla u_{h})\). Thus we have \[T_{2} = t\int_{\Omega_{h}}j_{x}(\cdot,u_{h},\nabla u_{h})\cdot V_{h}\, \mathrm{dx}-t\int_{\Omega_{h}}j_{z}(\cdot,u_{h},\nabla u_{h})\cdot DV_{h}^{ \mathsf{T}}\nabla u_{h} \tag{3.9}\] \[+\int_{\Omega_{h}}j_{z}(\cdot,u_{h},\nabla u_{h})\cdot\bigl{(}(DT _{t}^{-\mathsf{T}}-I+tDV_{h}^{\mathsf{T}})\nabla\hat{u}_{h,t}+tDV_{h}^{\mathsf{ T}}\nabla(u_{h}-\hat{u}_{h,t})\bigr{)}\,\mathrm{dx}\] \[+\int_{\Omega_{h}}\bigl{(}j_{u}(\cdot,u_{h},\nabla u_{h})(\hat{u} _{h,t}-u_{h})+j_{z}(\cdot,u_{h},\nabla u_{h})\cdot\nabla(\hat{u}_{h,t}-u_{h}) \bigr{)}\,\mathrm{dx}\] \[+\int_{\Omega_{h}}\int_{0}^{1}(1-s)\frac{d^{2}}{ds^{2}}\left[j( \cdot+stV_{h},s\hat{u}_{h,t}+(1-s)u_{h},sDT_{t}^{-\mathsf{T}}\nabla\hat{u}_{h,t }+(1-s)\nabla u_{h})\right]ds\,\mathrm{dx}\] \[= \sum_{j=1}^{5}T_{2,j}.\] Let us begin with the term \(T_{2,3}\). 
Observing that \(DT_{t}^{-\mathsf{T}}=(I+tDV_{h})^{-\mathsf{T}}=I-tDV_{h}^{\mathsf{T}}+R_{2}\) with \(|R_{2}|\leq ct^{2}\) we deduce with the help of Holder's inequality, (2.4), (2.7) and (3.1) \[T_{2,3} \leq \int_{\Omega_{h}}|j_{z}(\cdot,u_{h},\nabla u_{h})|\big{(}|R_{2}| \,|\nabla\hat{u}_{h,t}|+t\,|\nabla(\hat{u}_{h,t}-u_{h})|\big{)}\,\mathrm{dx} \tag{3.9}\] \[\leq c\big{(}\|\varphi_{3}\|_{L^{2}}+\|u_{h}\|_{L^{4}}^{\frac{3}{2}}+ \|\nabla u_{h}\|_{L^{2}}\big{)}\big{(}t^{2}\|\hat{u}_{h,t}\|_{H^{1}}+t\|\hat{u }_{h,t}-u_{h}\|_{H^{1}}\big{)}\] \[\leq c\big{(}1+\|u_{h}\|_{H^{1}}^{\frac{3}{2}}\big{)}\big{(}t^{2}+c\| \hat{u}_{h,t}-u_{h}\|_{H^{1}}^{2}\big{)}\leq c\big{(}t^{2}+c\|\hat{u}_{h,t}-u_ {h}\|_{H^{1}}^{2}\big{)}.\] Next, using (2.17), (2.14) and (3.5) we obtain \[T_{2,4} = \int_{\Omega_{h}}\nabla p_{h}\cdot\nabla(\hat{u}_{h,t}-u_{h})\, \mathrm{dx}=\int_{\Omega_{h}}\nabla p_{h}\cdot\nabla\hat{u}_{h,t}\,\mathrm{dx} -\int_{\Omega_{h}}\nabla p_{h}\cdot\nabla u_{h}\,\mathrm{dx}\] \[= \int_{\Omega_{h}}(I-A_{t})\nabla p_{h}\cdot\nabla u_{h}\,\mathrm{ dx}+\int_{\Omega_{h}}(I-A_{t})\nabla p_{h}\cdot\nabla(\hat{u}_{h,t}-u_{h})\, \mathrm{dx}\] \[+\int_{\Omega_{h}}\big{(}f\circ T_{t}\,\mathrm{det}DT_{t}-f\big{)} p_{h}\,\mathrm{dx}=\sum_{k=1}^{3}\tilde{T}_{k}.\] Recalling that \(A_{t}=(DT_{t})^{-1}(DT_{t})^{-\mathsf{T}}\mathrm{det}DT_{t}\) it is not difficult to see that \[I-A_{t}=t\big{(}DV_{h}+DV_{h}^{\mathsf{T}}-\mathrm{div}\,V_{h}I\big{)}+R_{3}, \qquad\mbox{ with }|R_{3}|\leq ct^{2}, \tag{3.10}\] where \(c\) only depends on \(d\). Hence \[\tilde{T}_{1} = t\int_{\Omega_{h}}\big{(}DV_{h}+DV_{h}^{\mathsf{T}}-\mathrm{div} \,V_{h}I\big{)}\nabla u_{h}\cdot\nabla p_{h}\,\mathrm{dx}+\int_{\Omega_{h}}R_{ 3}\nabla u_{h}\cdot\nabla p_{h}\,\mathrm{dx} \tag{3.11}\] \[\leq t\int_{\Omega_{h}}\big{(}DV_{h}+DV_{h}^{\mathsf{T}}-\mathrm{div} \,V_{h}I\big{)}\nabla u_{h}\cdot\nabla p_{h}\,\mathrm{dx}+ct^{2}\] in view of (3.1). Next \[\tilde{T}_{2}\leq ct\|\nabla p_{h}\|_{L^{2}}\|\nabla(\hat{u}_{h,t}-u_{h})\|_{L ^{2}}\leq ct\|\nabla(\hat{u}_{h,t}-u_{h})\|_{L^{2}} \tag{3.12}\] again by (3.1). In order to deal with \(\tilde{T}_{3}\) we write for \(x\in\Omega_{h}\) \[f(T_{t}(x))=f(x)+t\int_{0}^{1}\nabla f(x+stV_{h}(x))\cdot V_{h}(x)\,\mathrm{ds},\] which, combined with (3.4) yields \[f\circ T_{t}\,\mathrm{det}DT_{t}-f = (f\circ T_{t}-f)+tf\circ T_{t}\,\,\mathrm{div}\,V_{h}+r_{1}f\circ T _{t}\] \[= t\nabla f\cdot V_{h}+tf\,\mathrm{div}\,V_{h}+t\int_{0}^{1} \big{(}\nabla f(\cdot+stV_{h})-\nabla f\big{)}\cdot V_{h}\,\mathrm{ds}\] \[+t^{2}\int_{0}^{1}\nabla f(\cdot+stV_{h})\cdot V_{h}\,\mathrm{ds }\,\,\mathrm{div}\,V_{h}+r_{1}f\circ T_{t}.\] This implies together with (3.4) and (3.1) \[\tilde{T}_{3} \leq t\int_{\Omega_{h}}\mathrm{div}(fV_{h})p_{h}\,\mathrm{dx}+ct^{2} \|p_{h}\|_{L^{2}}\|f\|_{H^{1}}+ct\|p_{h}\|_{L^{2}}\sup_{0\leq\sigma\leq t}\| \nabla f\circ T_{\sigma}-\nabla f\|_{L^{2}} \tag{3.14}\] \[\leq t\int_{\Omega_{h}}\mathrm{div}(fV_{h})p_{h}\,\mathrm{dx}+ct^{2} +ct\sup_{0\leq\sigma\leq t}\|\nabla f\circ T_{\sigma}-\nabla f\|_{L^{2}}.\] Here we have also used that \[|V_{h}(x)|\leq\mathrm{diam}(D)\sup_{y\in D}|DV_{h}(y)|\leq\mathrm{diam}(D),\;x \in D, \tag{3.15}\] since \(V_{h}=0\) on \(\partial D\) and \(|DV_{h}|\leq 1\) in \(D\). 
Collecting the above terms we have \[T_{2,4} \leq t\int_{\Omega_{h}}\bigl{(}DV_{h}+DV_{h}^{\mathsf{T}}-\operatorname{ div}V_{h}I\bigr{)}\nabla u_{h}\cdot\nabla p_{h}\operatorname{dx}+t\int_{\Omega_{h}} \operatorname{div}(fV_{h})p_{h}\operatorname{dx} \tag{3.16}\] \[+ct^{2}+c\|\hat{u}_{h,t}-u\|_{H^{1}}^{2}+ct\sup_{0\leq\sigma\leq t} \|\nabla f\circ T_{\sigma}-\nabla f\|_{L^{2}}.\] Finally, the term \(T_{2,5}\) involves a sum of products of second order partial derivatives of \(j\) with \(tV_{h},\hat{u}_{h,t}-u_{h}\) and \(DT_{t}^{-\mathsf{T}}\nabla\hat{u}_{h,t}-\nabla u_{h}\). By way of example we use (2.4) to estimate for \(0\leq s\leq 1\) \[\int_{\Omega_{h}}|j_{xz}(\cdot+stV_{h},s\hat{u}_{h,t}+(1-s)u_{h}, sDT_{t}^{-\mathsf{T}}\nabla\hat{u}_{h,t}+(1-s)\nabla u_{h})|\,t\,|V_{h}|\,|DT_{t}^{- \mathsf{T}}\nabla\hat{u}_{h,t}-\nabla u_{h}|\operatorname{dx}\] \[\leq t\|V_{h}\|_{L^{\infty}}\int_{\Omega_{h}}\bigl{(}\varphi_{3} \circ(\operatorname{id}+stV_{h})+|s\hat{u}_{h,t}+(1-s)u_{h}|^{\frac{s}{2}}\] \[\qquad\qquad\qquad+|sDT_{t}^{-\mathsf{T}}\nabla\hat{u}_{h,t}+(1 -s)\nabla u_{h}|\bigr{)}|DT_{t}^{-\mathsf{T}}\nabla\hat{u}_{h,t}-\nabla u_{h}| \operatorname{dx}\] \[\leq ct\bigl{(}\|\varphi_{3}\|_{L^{2}}+\|\hat{u}_{h,t}\|_{L^{3}}^{ \frac{3}{2}}+\|u_{h}\|_{L^{4}}^{\frac{3}{2}}+\|\nabla\hat{u}_{h,t}\|_{L^{2}}+ \|\nabla u_{h}\|_{L^{2}}\bigr{)}\|DT_{t}^{-\mathsf{T}}\nabla\hat{u}_{h,t}- \nabla u_{h}\|_{L^{2}}\] \[\leq ct\bigl{(}1+\|\hat{u}_{h,t}\|_{H^{1}}^{2}+\|u_{h}\|_{H^{1}}^{2} \bigr{)}\bigl{(}\|DT_{t}^{-\mathsf{T}}-I\|_{L^{\infty}}\|\hat{u}_{h,t}\|_{H^{1} }+\|\hat{u}_{h,t}-u_{h}\|_{H^{1}}\bigr{)}\] \[\leq ct\bigl{(}t+\|\hat{u}_{h,t}-u_{h}\|_{H^{1}}\bigr{)}\leq ct^{2}+c \|\hat{u}_{h,t}-u_{h}\|_{H^{1}}^{2}.\] Arguing in a similar way for the other terms we obtain \[T_{2,5}\leq ct^{2}+c\|\hat{u}_{h,t}-u_{h}\|_{H^{1}}^{2}, \tag{3.17}\] so that in conclusion \[T_{2} \leq t\int_{\Omega_{h}}j_{x}(\cdot,u_{h},\nabla u_{h})\cdot V_{h} \operatorname{dx}-t\int_{\Omega_{h}}j_{z}(\cdot,u_{h},\nabla u_{h})\cdot DV_{h} ^{\mathsf{T}}\nabla u_{h} \tag{3.18}\] \[+t\int_{\Omega_{h}}\bigl{(}DV_{h}+DV_{h}^{\mathsf{T}}- \operatorname{div}V_{h}I\bigr{)}\nabla u_{h}\cdot\nabla p_{h}\operatorname{ dx}+t\int_{\Omega_{h}}\operatorname{div}(fV_{h})p_{h}\operatorname{dx}\] \[+ct^{2}+c\|\hat{u}_{h,t}-u_{h}\|_{H^{1}}^{2}+ct\sup_{0\leq\sigma \leq t}\|\nabla f\circ T_{\sigma}-\nabla f\|_{L^{2}}.\] In order to treat \(T_{3}\) we write \[j(T_{t},\hat{u}_{h,t},DT_{t}^{-\mathsf{T}}\nabla\hat{u}_{h,t})- j(\cdot,u_{h},\nabla u_{h})\] \[= \int_{0}^{1}\frac{d}{ds}j(\cdot+stV_{h},s\hat{u}_{h,t}+(1-s)u_{h },sDT_{t}^{-\mathsf{T}}\nabla\hat{u}_{h,t}+(1-s)\nabla u_{h})ds,\] use the growth assumptions on \(j_{x},j_{u},j_{z}\) as well as (3.4) and derive \[T_{3}\leq ct\bigl{(}t+\|\hat{u}_{h,t}-u_{h}\|_{H^{1}}\bigr{)}\leq ct^{2}+c\| \hat{u}_{h,t}-u_{h}\|_{H^{1}}^{2}. \tag{3.19}\] If we insert the estimates (3.7), (3.18) and (3.19) into (3.6) and recall (2.16) we obtain \[\mathcal{J}_{h}(\Omega_{h,t})-\mathcal{J}_{h}(\Omega_{h})\leq t\mathcal{J}_{h}^{ \prime}(\Omega_{h})[V_{h}]+ct\bigl{(}t+\sup_{0\leq\sigma\leq t}\|\nabla f\circ T _{\sigma}-\nabla f\|_{L^{2}}\bigr{)}+c\|\hat{u}_{h,t}-u_{h}\|_{H^{1}}^{2}. 
\tag{3.20}\] In order to estimate \(\|\hat{u}_{h,t}-u_{h}\|_{H^{1}}\) we combine (2.14) and (3.5) to obtain \[\int_{\Omega_{h}}A_{t}\nabla(\hat{u}_{h,t}-u_{h})\cdot\nabla\eta_{h}\operatorname {dx}=\int_{\Omega_{h}}(I-A_{t})\nabla u_{h}\cdot\nabla\eta_{h}\operatorname{dx}+ \int_{\Omega_{h}}\bigl{(}f\circ T_{t}\operatorname{det}DT_{t}-f\bigr{)}\eta_{h} \operatorname{dx}\qquad\forall\eta_{h}\in X_{\Omega_{h}}.\] In view of (3.10) there exists \(0<\delta_{2}\leq\delta_{1}\) such that \(A_{t}\xi\cdot\xi\geq\frac{1}{2}|\xi|^{2}\) for all \(\xi\in\mathbb{R}^{d}\) and \(0\leq t\leq\delta_{2}\). Inserting \(\eta_{h}=\hat{u}_{h,t}-u_{h}\) into the above relation and using (3.10) as well as (3.13) we infer with the help of Poincare's inequality that \[\frac{1}{2}\int_{D}|\nabla(\hat{u}_{h,t}-u_{h})|^{2}\operatorname{dx} \leq ct\|\nabla u_{h}\|_{L^{2}}\|\nabla(\hat{u}_{h,t}-u_{h})\|_{L^{2}}+ ct\|f\|_{H^{1}}\|\hat{u}_{h,t}-u_{h}\|_{L^{2}}\] \[\leq \frac{1}{4}\|\nabla(\hat{u}_{h,t}-u_{h})\|_{L^{2}}^{2}+ct^{2},\] from which we deduce that \(\|\hat{u}_{h,t}-u_{h}\|_{H^{1}}\leq ct\). If we insert this bound into (3.20) and use that \(\mathcal{J}^{\prime}_{h}(\Omega_{h})[V_{h}]\leq-\epsilon\) we obtain \[\mathcal{J}_{h}(\Omega_{h,t})-\mathcal{J}_{h}(\Omega_{h})\leq t \mathcal{J}^{\prime}_{h}(\Omega_{h})[V_{h}]+ct\big{(}t+\sup_{0\leq\sigma\leq t} \|\nabla f\circ T_{\sigma}-\nabla f\|_{L^{2}}\big{)}\] \[\leq \gamma t\mathcal{J}^{\prime}_{h}(\Omega_{h})[V_{h}]+ct\big{(}t+ \sup_{0\leq\sigma\leq t}\|\nabla f\circ T_{\sigma}-\nabla f\|_{L^{2}}\big{)}-( 1-\gamma)\epsilon t. \tag{3.21}\] There exists \(0<\delta\leq\delta_{2}\) such that \(\sup_{0\leq\sigma\leq t}\|\nabla f\circ T_{\sigma}-\nabla f\|_{L^{2}}\leq\frac {1}{2c}(1-\gamma)\epsilon\) for \(0\leq t\leq\delta\). This can be seen as usual by approximating \(f_{z_{i}}\) by a continuous function \(g_{i}\), using the uniform continuity of \(g_{i}\) on \(\tilde{D}\) and noting that by (3.15) \[|T_{\sigma}(x)-x|=\sigma|V_{h}(x)|\leq\mathrm{diam}(D)t,\quad 0\leq\sigma \leq t,\,x\in D.\] By choosing \(\delta\) smaller if necessary we can achieve in addition that \(\delta\leq\frac{1}{2c}(1-\gamma)\epsilon\). Inserting these bounds into (3.21) we obtain the result of the theorem. We are now in position to prove our first convergence result. **Theorem 3.3**.: _Let \((\Phi_{h}^{k})_{k\in\mathbb{N}_{0}}\subset\hat{\mathcal{U}}_{h}\) and \((\Omega_{h}^{k}=\Phi_{h}^{k}(\hat{\Omega}))_{k\in\mathbb{N}_{0}}\subset \mathcal{S}_{h}\) be the sequences generated by Algorithm 2.1. Then:_ _(i)_ \(\|\mathcal{J}^{\prime}_{h}(\Omega_{h}^{k})\|\to 0\) _as_ \(k\to\infty\)_._ _(ii) If_ \(\sup_{k\in\mathbb{N}_{0}}|(D\Phi_{h}^{k})^{-1}|\leq C\)_, then there exists a subsequence_ \((\Phi_{h}^{k_{\ell}})_{\ell\in\mathbb{N}}\)_, which converges in_ \(W^{1,\infty}(D)\) _to a mapping_ \(\Phi_{h}\in\hat{\mathcal{U}}_{h}\) _and_ \(\Omega_{h}:=\Phi_{h}(\hat{\Omega})\) _is a stationary point of_ \(\mathcal{J}_{h}\)_, i.e. satisfies_ \(\mathcal{J}^{\prime}_{h}(\Omega_{h})[V_{h}]=0\) _for all_ \(V_{h}\in\mathcal{V}_{\Phi_{h}}\)_._ Proof.: (i) Since \(\mathcal{J}_{h}(\Omega_{h}^{k+1})\leq\mathcal{J}_{h}(\Omega_{h}^{k})\) and \[\mathcal{J}_{h}(\Omega_{h}^{k}) \geq -\int_{\Omega_{h}^{k}}|j(\cdot,u_{h}^{k},\nabla u_{h}^{k})|\, \mathrm{dx}\geq-\int_{\Omega_{h}^{k}}(\varphi_{1}+c_{1}\big{(}|u_{h}^{k}|^{q} +|\nabla u_{h}^{k}|^{2}\big{)}\,\mathrm{dx}\] \[\geq -c\big{(}1+\|u_{h}^{k}\|_{H^{1}}^{q}\big{)}\geq-c,\] we infer that \(\lim_{k\to\infty}\mathcal{J}_{h}(\Omega_{h}^{k})=:\beta\in\mathbb{R}\) exists. 
Then \[\sum_{k=0}^{\infty}\bigl{(}\mathcal{J}_{h}(\Omega_{h}^{k})-\mathcal{J}_{h}( \Omega_{h}^{k+1})\bigr{)}=\mathcal{J}_{h}(\Omega_{h}^{0})-\beta<\infty,\] so that \[\mathcal{J}_{h}(\Omega_{h}^{k})-\mathcal{J}_{h}(\Omega_{h}^{k+1})\to 0\quad \text{ as }k\to\infty. \tag{3.22}\] Suppose that \(\|\mathcal{J}^{\prime}_{h}(\Omega_{h}^{k})\|\nrightarrow 0\). Then there exists \(\epsilon>0\) and a subsequence \((\Omega_{h}^{k_{\ell}})_{\ell\in\mathbb{N}}\) such that \(\|\mathcal{J}^{\prime}_{h}(\Omega_{h}^{k_{\ell}})\|\geq\epsilon\) for all \(\ell\in\mathbb{N}\). In view of the definition of \(V_{h}^{k}\) we infer that \[\mathcal{J}^{\prime}_{h}(\Omega_{h}^{k_{\ell}})[V_{h}^{k_{\ell}}]=-\|\mathcal{ J}^{\prime}_{h}(\Omega_{h}^{k_{\ell}})\|\leq-\epsilon\quad\text{ for all }\ell\in\mathbb{N}, \tag{3.23}\] and Lemma 3.2 yields the existence of \(\delta>0\) which is independent of \(\ell\in\mathbb{N}\) such that \[\mathcal{J}_{h}\big{(}(\mathrm{id}+tV_{h}^{k_{\ell}})(\Omega_{h}^{k_{\ell}}) \big{)}-\mathcal{J}_{h}(\Omega_{h}^{k_{\ell}})\leq\gamma t\mathcal{J}^{\prime}_{ h}(\Omega_{h}^{k_{\ell}})[V_{h}^{k_{\ell}}]\qquad\text{ for all }0\leq t\leq\delta.\] Therefore we have that the Armijo step size satisfies \(t_{k_{\ell}}\geq\frac{\delta}{2}\) for all \(\ell\in\mathbb{N}\) from which we deduce with the help of (3.23) that \[\mathcal{J}_{h}(\Omega_{h}^{k_{\ell}})-\mathcal{J}_{h}(\Omega_{h}^{k_{\ell}+1}) \geq-\gamma t_{k_{\ell}}\mathcal{J}^{\prime}_{h}(\Omega_{h}^{k_{\ell}})[V_{h}^{ k_{\ell}}]\geq\gamma t_{k_{\ell}}\epsilon\geq\gamma\frac{\delta}{2}\epsilon \qquad\text{ for all }\ell\in\mathbb{N}\] contradicting (3.22). (ii) Since \(\hat{\mathcal{U}}_{h}\) is a subset of a finite-dimensional space and the sequence \((\Phi_{h}^{k})_{k\in\mathbb{N}_{0}}\) is bounded (recall that \(\Phi_{h}^{k}(\bar{D})=\bar{D}\)), there exists a subsequence, again denoted by \((\Phi_{h}^{k})_{h\in\mathbb{N}_{0}}\) and \(\Phi_{h}\in C^{0}(\bar{D},\mathbb{R}^{d})\) such that \(\Phi_{h}^{k}\to\Phi_{h}\) in \(W^{1,\infty}(D)\). Furthermore, as \(\sup_{k\in\mathbb{N}_{0}}|(D\Phi_{h}^{k})^{-1}|\leq C\) we have \[|x_{1}-x_{2}|\leq C|\Phi_{h}^{k}(x_{1})-\Phi_{h}^{k}(x_{2})|\quad\text{ for all }x_{1},x_{2}\in\bar{D},k\in\mathbb{N}_{0}\] from which we infer that \(\Phi_{h}\) is injective by letting \(k\to\infty\). Thus, \(\Phi_{h}\in\mathcal{U}_{h}\). Let us show that \(\Omega_{h}:=\Phi_{h}(\bar{\Omega})\) is a stationary point of \(\mathcal{J}_{h}\). As most of the necessary arguments have appeared in some form in the proof of Lemma 3.2 we only sketch the main ideas. Let us define \(T_{k}:=\Phi_{h}\circ(\Phi_{h}^{k})^{-1}\). Clearly \(T_{k}\to\mathrm{id}\) in \(W^{1,\infty}(D,\mathbb{R}^{d})\) as \(k\to\infty\). Furthermore, let \(\hat{u}_{h}^{k}:=u_{h}^{k}\circ T_{k}^{-1}\in X_{\Omega_{h}},\hat{p}_{h}^{k} :=p_{h}^{k}\circ T_{k}^{-1}\in X_{\Omega_{h}}\), where \(u_{h},p_{h}\) and \(u_{h}^{k},p_{h}^{k}\) are the discrete state and adjoint state in \(\Omega_{h}\) and \(\Omega_{h}^{k}\) respectively. One can show similarly as above that \(\hat{u}_{h}^{k}\to u_{h},\hat{p}_{h}^{k}\to p_{h}\) in \(H^{1}(\Omega_{h})\). Let us fix \(V_{h}\in\mathcal{V}_{\Phi_{h}}\) and consider the terms that appear in the formula (2.16) for \(\mathcal{J}_{h}^{\prime}(\Omega_{h})[V_{h}]\). 
For the first integral we write \[\int_{\Omega_{h}}j(\cdot,u_{h},\nabla u_{h})\,\mathrm{div}\,V_{h }\,\mathrm{dx}\] \[= \int_{\Omega_{h}}\left(j(\cdot,u_{h},\nabla u_{h})-j(\cdot,\hat{ u}_{h}^{k},\nabla\hat{u}_{h}^{k})\right)\mathrm{div}\,V_{h}\,\mathrm{dx}+\int_{ \Omega_{h}}j(\cdot,\hat{u}_{h}^{k},\nabla\hat{u}_{h}^{k})\,\mathrm{div}\,V_{h }\,\mathrm{dx}\] \[= \int_{\Omega_{h}}\int_{0}^{1}j_{u}(\cdot,su_{h}+(1-s)\hat{u}_{h}^ {k},s\nabla u_{h}+(1-s)\nabla\hat{u}_{h}^{k})(u_{h}-\hat{u}_{h}^{k})\,\mathrm{ ds}\,\mathrm{div}\,V_{h}\,\mathrm{dx}\] \[+\int_{\Omega_{h}^{k}}\int_{0}^{1}j_{z}(\cdot,su_{h}+(1-s)\hat{u} _{h}^{k},s\nabla u_{h}+(1-s)\nabla\hat{u}_{h}^{k})\cdot\nabla(u_{h}-\hat{u}_{h }^{k})\,\mathrm{ds}\,\mathrm{div}\,V_{h}\,\mathrm{dx}\] \[+\int_{\Omega_{h}^{k}}j(\cdot,u_{h}^{k},\nabla u_{h}^{k})( \mathrm{div}\,V_{h})\circ T_{k}\,\mathrm{det}DT_{k}\,\mathrm{dx}\] \[= \int_{\Omega_{h}^{k}}j(\cdot,u_{h}^{k},\nabla u_{h}^{k})\, \mathrm{div}(V_{h}\circ T_{k})\,\mathrm{dx}+o(1)\] since \(\hat{u}_{h}^{k}\to u_{h}\) in \(H^{1}(\Omega_{h})\) and \(T_{k}\to\mathrm{id}\) in \(W^{1,\infty}(D,\mathbb{R}^{d})\). If we argue in a similar way for the other terms in (3.2) we obtain that \[\mathcal{J}_{h}^{\prime}(\Omega_{h})[V_{h}]=\mathcal{J}_{h}^{\prime}(\Omega_{h }^{k})[V_{h}\circ T_{k}]+o(1).\] Observing that \(\|D(V_{h}\circ T_{k})\|_{L^{\infty}}\leq c\) we deduce with the help of (i) that \(\mathcal{J}_{h}^{\prime}(\Omega_{h})[V_{h}]=0\). Since \(V_{h}\in\mathcal{V}_{\Phi_{h}}\) was arbitrary, the result follows. ## 4 Convergence of stationary shapes We now investigate the convergence of stationary shapes when the discretisation parameter \(h\) tends to zero. To begin with we first introduce two appropriate convergence measures for shapes. ### Hausdorff convergence and Mosco-convergence Before we investigate the convergence of a sequence of stationary shapes we introduce two important concepts. The Hausdorff complementary distance of two open sets \(\Omega_{1},\Omega_{2}\subset D\) is defined as \[\rho_{H}^{c}(\Omega_{1},\Omega_{2}):=\max_{x\in\bar{D}}|d_{\bar{\Omega}\Omega_{ 1}}(x)-d_{\bar{\Omega}\Omega_{2}}(x)|,\] where \(d_{\bar{\Omega}\Omega}(x):=\inf\{|x-y|:y\in\bar{D}\setminus\Omega\}\) for all \(x\in D\), and we say that \((\Omega_{k})_{k\in\mathbb{N}}\) converges to \(\Omega\) in the sense of the Hausdorff complementary metric if \(\rho_{H}^{c}(\Omega_{k},\Omega)\to 0,k\to\infty\). Here \(\Omega_{k},\Omega\) are open subsets of \(D\). Since our optimisation problem is constrained by the elliptic boundary value problem (2.1) a stronger convergence concept is required that ensures continuity of (2.1) with respect to \(\Omega\) in an appropriate sense. For an open set \(\Omega\subset D\) we shall view \(H_{0}^{1}(\Omega)\) as a closed subspace of \(H_{0}^{1}(D)\) by associating with each element \(u\in H_{0}^{1}(\Omega)\) its extension by zero \(e_{0}(u)\in H_{0}^{1}(D)\). **Definition 4.1**.: _Let \(\Omega_{k},\Omega\) be open subsets of \(D\). 
We say that \((\Omega_{k})_{k\in\mathbb{N}}\) converges to \(\Omega\) in the sense of Mosco if the following conditions hold:_ _(i) For every \(u\in H^{1}_{0}(\Omega)\) there exists a sequence \((u_{k})_{k\in\mathbb{N}}\) with \(u_{k}\in H^{1}_{0}(\Omega_{k})\) such that \(e_{0}(u_{k})\to e_{0}(u)\) in \(H^{1}_{0}(D)\)._ _(ii) If \((u_{k_{\ell}})_{\ell\in\mathbb{N}}\) is a sequence with \(u_{k_{\ell}}\in H^{1}_{0}(\Omega_{k_{\ell}})\) and \(e_{0}(u_{k_{\ell}})\rightharpoonup v\) in \(H^{1}_{0}(D)\), then \(v\in H^{1}_{0}(\Omega)\)._ In order to formulate a corresponding convergence result we recall that the \(2\)-capacity of a set \(A\subset U\) relative to an open bounded set \(U\) is defined by \[\operatorname{cap}(A,U):=\inf\left\{\int_{U}|\nabla v|^{2}\operatorname{dx}| \,v\in H^{1}_{0}(U),v\geq 1\text{ a.e. in a neighbourhood of }A\right\}.\] **Definition 4.2**.: _Let \(\Omega\subset D\) be open._ _a) We say that_ \(\complement\Omega\) _satisfies a capacity density condition, if there exist_ \(\alpha>0,r_{0}>0\) _such that_ \[\frac{\operatorname{cap}(B_{r}(x)\cap\complement\Omega,B_{2r}(x))}{ \operatorname{cap}(B_{r}(x),B_{2r}(x))}\geq\alpha\qquad\text{ for all }0<r<r_{0}\text{ and all }x\in\partial\Omega. \tag{4.1}\] _We denote by \(\mathcal{O}_{\alpha,r_{0}}\) the collection of all open subsets \(\Omega\subset D\) that satisfy (4.1) with \(\alpha>0,r_{0}>0\)._ **Theorem 4.1**.: _Let \((\Omega_{k})_{k\in\mathbb{N}}\) be a sequence of open subsets of \(D\) belonging to \(\mathcal{O}_{\alpha,r_{0}}\), which converges in the sense of the Hausdorff complementary metric to an open set \(\Omega\). Then \((\Omega_{k})_{k\in\mathbb{N}}\) converges to \(\Omega\) in the sense of Mosco._ Proof.: In view of Theorem 3.4.12 in [10] the sequence \((\Omega_{k})_{k\in\mathbb{N}}\)\(\gamma\)-converges to \(\Omega\). However, according to Proposition 3.5.5 [10]\(\gamma\)-convergence and Mosco convergence are equivalent which implies the result. In order to make use of the above result in the setting considered in our paper we require the following lemma. **Lemma 4.2**.: _Let \(\Omega=\Phi(\hat{\Omega})\) for some bilipschitz map \(\Phi:\bar{D}\to\bar{D}\) satisfying_ \[M^{-1}|x-y|\leq|\Phi(x)-\Phi(y)|\leq M|x-y|\qquad\text{ for all }x,y\in D.\] _Then there exist \(\alpha>0,r_{0}>0\) depending on \(\hat{\Omega},d\) and \(M\) such that \(\Omega\in\mathcal{O}_{\alpha,r_{0}}\)._ Proof.: Let \(r_{0}:=Ms_{0}\) with \(s_{0}\) as in (2.11). For \(x\in\partial\Omega\) there exists \(\hat{x}\in\partial\hat{\Omega}\) such that \(x=\Phi(\hat{x})\). Given \(0<r<r_{0}\), we let \(s=\frac{r}{M}\in(0,s_{0})\) and choose \(\hat{y}\in B_{s}(\hat{x})\) such that \(B_{\lambda s}(\hat{y})\subset B_{s}(\hat{x})\cap\complement\Omega\) according to (2.11). Then \(y=\Phi(\hat{y})\) satisfies \[|y-x|=|\Phi(\hat{y})-\Phi(\hat{x})|\leq M|\hat{y}-\hat{x}|<Ms=r,\] so that \(y\in B_{r}(x)\). We claim that \[B_{\frac{\lambda r}{M^{2}}}(y)\subset B_{r}(x)\cap\complement\Omega. \tag{4.2}\] To see this, let \(z\in B_{\frac{\lambda r}{M^{2}}}(y)\), say \(z=\Phi(\hat{z})\) for some \(\hat{z}\in\bar{D}\). Then, \[|\hat{z}-\hat{y}|\leq M|\Phi(\hat{z})-\Phi(\hat{y})|=M|z-y|<\frac{\lambda r}{M }=\lambda s.\] Hence \(\hat{z}\in B_{\lambda s}(\hat{y})\) and therefore \(\hat{z}\in B_{s}(\hat{x})\cap\complement\Omega\). Then \(z=\Phi(\hat{z})\in\complement\Omega\) with \(|z-x|\leq M|\hat{z}-\hat{x}|<Ms=r\) implying (4.2). 
Since \(B_{2r}(x)\subset B_{3r}(y)\) we deduce from (4.2) \[\frac{\operatorname{cap}(B_{r}(x)\cap\complement\Omega,B_{2r}(x))}{ \operatorname{cap}(B_{r}(x),B_{2r}(x))}\geq\frac{\operatorname{cap}(B_{\frac{ \lambda r}{M^{2}}}(y),B_{3r}(y))}{\operatorname{cap}(B_{r}(x),B_{2r}(x))} \geq\alpha,\] where \(\alpha\) only depends on \(\lambda,M\) and \(d\). Here, the last inequality can be shown as in the proof of Theorem 6.31 in [10]. ### Convergence of discrete stationary shapes Let \((\hat{\mathcal{T}}_{h})_{0<h\leq h_{0}}\) be a regular family of triangulations of \(\bar{D}\) in the sense that there exists \(\sigma>0\) such that \[\frac{h_{\hat{T}}}{\rho_{\hat{T}}}\leq\sigma\quad\forall\hat{T}\in\hat{ \mathcal{T}}_{h},\;0<h\leq h_{0}. \tag{4.3}\] Here \(h_{\hat{T}}\) is the diameter of \(\hat{T}\) and \(\rho_{\hat{T}}\) the diameter of the largest ball contained in \(\hat{T}\). We consider the corresponding sequence of discrete shape functionals \(\mathcal{J}_{h}:\mathcal{S}_{h}\to\mathbb{R}\) given by (2.13). In what follows we assume the existence of a sequence \((\Omega_{h})_{0<h\leq h_{0}}\) such that \(\Omega_{h}=\Phi_{h}(\hat{\Omega})\) for some \(\Phi_{h}\in\hat{\mathcal{U}}_{h}\) and (A1) \(\forall 0<h\leq h_{0}\;\forall V_{h}\in\mathcal{V}_{\Phi_{h}}:\quad\mathcal{J}_{h} ^{\prime}(\Omega_{h})[V_{h}]=0\); (A2) \(\exists M>1\;\forall 0<h\leq h_{0}\;\forall x,y\in D:\quad M^{-1}|x-y|\leq| \Phi_{h}(x)-\Phi_{h}(y)|\leq M|x-y|\). Assumption (A1) states that the sequence \((\Omega_{h})_{0<h\leq h_{0}}\) is a sequence of stationary points, while (A2) can be interpreted as a compactness property of these sets. Such a condition appears in [10], where it occurs in the convergence analysis for a sequence of discrete minima. **Theorem 4.3**.: _Let \(\Omega_{h}=\Phi_{h}(\hat{\Omega})\), where \(\Phi_{h}\in\hat{\mathcal{U}}_{h}\) and \((\Phi_{h})_{0<h\leq h_{0}}\) satisfies (A2). Then there exists a sequence \((h_{k})_{k\in\mathbb{N}}\) with \(\lim_{k\to\infty}h_{k}=0\) and a map \(\Phi\in\mathcal{U}\) such that:_ _(i)_ \(\Phi_{h_{k}}\to\Phi\) _uniformly in_ \(\bar{D}\)_,_ \(\rho_{\hat{T}}^{c}(\Omega_{h_{k}},\Omega)\to 0\) _as_ \(k\to\infty\)_, where_ \(\Omega=\Phi(\hat{\Omega})\)_;_ _(ii)_ \(e_{0}(u_{h_{k}})\to e_{0}(u)\) _in_ \(H_{0}^{1}(D)\)_, where_ \(u_{h_{k}}\in X_{\Omega_{h_{k}}}\) _solves (_2.14_),_ \(u\in H_{0}^{1}(\Omega)\) _solves (_2.1_);_ _(iii)_ \(e_{0}(p_{h_{k}})\to e_{0}(p)\) _in_ \(H_{0}^{1}(D)\)_, where_ \(p_{h_{k}}\in X_{\Omega_{h_{k}}}\) _solves (_2.17_),_ \(p\in H_{0}^{1}(\Omega)\) _solves (_2.9_)._ Proof.: (i) In view of (A2) the sequences \((\Phi_{h})_{0,h\leq h_{0}}\) and \((\Phi_{h}^{-1})_{0<h\leq h_{0}}\) are uniformly bounded and uniformly equicontinuous, so that the Arzela-Ascoli theorem implies that there exists a sequence \(h_{k}\to 0\) and functions \(\Phi,\Psi\in W^{1,\infty}(D,\mathbb{R}^{d})\) such that \(\Phi=\mathrm{id}\) on \(\partial D\) and \[\Phi_{h_{k}}\to\Phi,\quad\Phi_{h_{k}}^{-1}\to\Psi\text{ in }C^{0}(\bar{D}, \mathbb{R}^{d})\;\text{ as }\;k\to\infty.\] Clearly, \(\Phi:\bar{D}\to\bar{D}\) is bilipschitz so that \(\Phi\in\mathcal{U}\). Let \(\Omega=\Phi(\hat{\Omega})\). We claim that \[\rho_{H}^{c}(\Omega_{h_{k}},\Omega)\leq\|\Phi_{h_{k}}-\Phi\|_{C^{0}(\bar{D}, \mathbb{R}^{d})}. \tag{4.4}\] To see this, let \(x\in\bar{D}\) and choose \(y\in\bar{D}\setminus\Omega\) such that \(d_{\mathbbm{C}\Omega}(x)=|x-y|\). In view of the definition of \(\Omega\) there exists \(z\in\bar{D}\setminus\hat{\Omega}\) such that \(y=\Phi(z)\). 
Then, \(y_{k}:=\Phi_{h_{k}}(z)\in\bar{D}\setminus\Omega_{h_{k}}\) and therefore \[d_{\mathbbm{C}\Omega_{h_{k}}}(x)-d_{\mathbbm{C}\Omega}(x)\leq|x-y_{k}|-|x-y| \leq|y_{k}-y|\leq\|\Phi_{h_{k}}-\Phi\|_{C^{0}(\bar{D},\mathbb{R}^{d})}.\] By exchanging the roles of \(\Omega_{h_{k}}\) and \(\Omega\) we deduce (4.4), which implies that \(\rho_{\hat{H}}(\Omega_{h_{k}},\Omega)\to 0\) as \(k\to\infty\). (ii) Our line of argument is similar as in [11]. Since \((e_{0}(u_{h_{k}}))_{k\in\mathbb{N}}\) is bounded in \(H_{0}^{1}(D)\) we may assume after possibly extracting a subsequence that there exists \(u^{*}\in H_{0}^{1}(D)\) such that \(e_{0}(u_{h_{k}})\rightharpoonup u^{*}\) in \(H_{0}^{1}(D)\). In view of (A2) and Lemma 4.2 we may apply Theorem 4.1 so that \((\Omega_{h_{k}})_{k\in\mathbb{N}}\) converges to \(\Omega\) in the sense of Mosco. In particular we infer that \(u^{*}\in H_{0}^{1}(\Omega)\). In order to see that \(u^{*}\) is the solution of (2.1) we fix \(\eta\in C_{0}^{\infty}(\Omega)\) and set \(K:=\text{supp}\eta\). Since \(\rho_{\hat{H}}^{c}(\Omega_{h_{k}},\Omega)\to 0\), Proposition 2.2.17 in [10] implies that there exists \(k_{0}\in\mathbb{N}\) such that \(K\subset\Omega_{h_{k}}\) for all \(k\geq k_{0}\). Let us denote by \(I_{h_{k}}\eta\in X_{\Omega_{h_{k}}}\) the standard Lagrange interpolation of \(\eta\), for which we have \[\|\eta-I_{h_{k}}\eta\|_{H_{0}^{1}(D)}\leq ch_{k}\|\eta\|_{H^{2}(D)}.\] Here we have used the fact that the family of triangulations \(\big{(}\Phi_{h}(\hat{T})\,|\,\hat{T}\in\hat{\mathcal{T}}_{h}\big{)}_{0<h\leq h_{0}}\) is regular with \[\frac{h_{T}}{\rho_{T}}\leq M^{2}\frac{h_{\hat{T}}}{\rho_{\hat{T}}}\leq M^{2} \sigma,\quad T=\Phi_{h}(\hat{T}),\,0<h\leq h_{0}, \tag{4.5}\] where \(\sigma\) appears in (4.3) and we have again applied (A2). Thus \(I_{h_{k}}\eta\to\eta\) in \(H^{1}_{0}(D)\) and by inserting \(I_{h_{k}}\eta\) into (2.14) we obtain \[\int_{\Omega}\nabla u^{*}\cdot\nabla\eta\,\mathrm{dx}=\lim_{k\to\infty}\int_{D} \nabla e_{0}(u_{h_{k}})\cdot\nabla I_{h_{k}}\eta\,\mathrm{dx}=\lim_{k\to\infty }\int_{D}fI_{h_{k}}\eta\,\mathrm{dx}=\int_{\Omega}f\eta\,\mathrm{dx}.\] Hence \(u^{*}=e_{0}(u)\), where \(u\) is the solution of (2.1). A standard argument (see Corollary 3.2.2 in [HP18]) then shows that \(e_{0}(u_{h_{k}})\to e_{0}(u)\) in \(H^{1}_{0}(D)\) as \(k\to\infty\). (iii) In the same way as in (ii) we infer that there exists \(p^{*}\in H^{1}_{0}(\Omega)\) such that \(e_{0}(p_{h_{k}})\rightharpoonup p^{*}\) in \(H^{1}_{0}(D)\) as \(k\to\infty\). After possibly extracting a further subsequence we may assume that \(e_{0}(u_{h_{k}})\to e_{0}(u)\) and \(\nabla e_{0}(u_{h_{k}})\to\nabla e_{0}(u)\) almost everywhere in \(D\). We claim that \[j_{u}(\cdot,e_{0}(u_{h_{k}}),\nabla e_{0}(u_{h,k})) \to j_{u}(\cdot,e_{0}(u),\nabla e_{0}(u))\quad\text{ in }L^{\frac{a}{a+1}}(D), \tag{4.6}\] \[j_{z}(\cdot,e_{0}(u_{h_{k}}),\nabla e_{0}(u_{h,k})) \to j_{z}(\cdot,e_{0}(u),\nabla e_{0}(u))\quad\text{ in }L^{2}(D,\mathbb{R}^{d}). \tag{4.7}\] In order to show (4.7) we set \(f_{k}:=|j_{z}(\cdot,e_{0}(u_{h_{k}}),\nabla e_{0}(u_{h,k}))-j_{z}(\cdot,e_{0} (u),\nabla e_{0}(u))|^{2}\). Clearly, \(f_{k}\to 0\) a.e. in \(D\), while (2.4) implies that \[f_{k}\leq c\big{(}\varphi_{3}^{2}+|u_{h_{k}}|^{q}+|\nabla u_{h_{k}}|^{2}+|u|^{ q}+|\nabla u|^{2}\big{)}=:g_{k}.\] We have that \(g_{k}\to g:=c\big{(}\varphi_{3}^{2}+2|u|^{q}+2|\nabla u|^{2}\big{)}\) a.e. 
in \(D\) as well as \(\int_{D}g_{k}dx\to\int_{D}gdx\) as \(k\to\infty\), so that the generalised Lebesgue dominated convergence theorem yields (4.7). The relation (4.6) is proved in the same way. With the help of (4.6) and (4.7) we obtain similarly as above that \(p^{*}=e_{0}(p)\), where \(p\) is the solution of (2.9) and then again \(e_{0}(p_{h_{k}})\to e_{0}(p)\) in \(H^{1}_{0}(D)\) as \(k\to\infty\). We can now examine the convergence of a sequence of discrete stationary points as \(h\to 0\). **Theorem 4.4**.: _Suppose that \((\Omega_{h})_{0<h\leq h_{0}}\) satisfies (A1) and (A2). Then there exists a sequence \((h_{k})_{k\in\mathbb{N}}\) with \(\lim_{k\to\infty}h_{k}=0\) and an open set \(\Omega\Subset D\) such that \(\rho^{c}_{H}(\Omega_{h_{k}},\Omega)\to 0\) as \(k\to\infty\). Furthermore, \(\Omega\) is a stationary point for \(\mathcal{J}\) on \(\mathcal{S}\)._ Proof.: We infer from Theorem 4.3 that there exists a sequence \((h_{k})_{k\in\mathbb{N}}\) with \(\lim_{k\to\infty}h_{k}=0\) and a bilipschitz map \(\Phi\colon\bar{D}\to\bar{D}\) such that for \(\Omega=\Phi(\hat{\Omega})\) we have \[\rho^{c}_{H}(\Omega_{h_{k}},\Omega)\to 0,\,\,\,e_{0}(u_{h_{k}})\to e_{0}(u)\,\,\text{ in }H^{1}_{0}(D),\,\,\,e_{0}(p_{h_{k}})\to e_{0}(p)\,\,\,\,\text{ in }H^{1}_{0}(D)\text{ as }k\to\infty \tag{4.8}\] where \(u_{h_{k}}\), \(u\), \(p_{h_{k}}\), and \(p\) are as in Theorem 4.3. In order to show that \(\Omega\) is a stationary point for \(\mathcal{J}\) on \(S\) we first claim that \[\chi_{\Omega_{h_{k}}}\to\chi_{\Omega}\quad\text{ a.e. in }D. \tag{4.9}\] Since \(\Phi\) is bilipschitz and \(\partial\hat{\Omega}\) has measure \(0\) we infer that the same is true for \(\partial\Omega=\Phi(\partial\hat{\Omega})\), so that it is sufficient to prove that \(\chi_{\Omega_{h_{k}}}(x)\to\chi_{\Omega}(x)\) for all \(x\in\Omega\cup D\setminus\bar{\Omega}\). To begin, Corollary 1 in Chapter 6, Section 4 of [DZ01] implies that \(\chi_{\Omega_{h_{k}}}(x)\to\chi_{\Omega}(x)\) for all \(x\in\Omega\). Next, let \(x\in D\setminus\bar{\Omega}\). We claim that there exists \(k_{0}\in\mathbb{N}\) such that \(x\in D\setminus\bar{\Omega}_{h_{k}}\) for all \(k\geq k_{0}\). Otherwise there is a subsequence \((k_{\ell})_{\ell\in\mathbb{N}}\) and \(y_{k_{\ell}}\in\bar{\Omega}\) such that \(\Phi_{h_{k_{\ell}}}(y_{k_{\ell}})=x\) for all \(\ell\in\mathbb{N}\). By passing to a further subsequence we may assume that \(y_{k_{\ell}}\to y\) for some \(y\in\bar{\Omega}\), which together with the uniform convergence of \((\Phi_{h_{k}})_{k\in\mathbb{N}}\) to \(\Phi\) implies that \(\Phi(y)=x\), a contradiction. Therefore \(\lim_{k\to\infty}\chi_{\Omega_{h_{k}}}(x)=0=\chi_{\Omega}(x)\) and (4.9) holds. Let us fix \(V\in W^{1,\infty}_{0}(D,\mathbb{R}^{d})\) and set \(V_{k}:=I_{h_{k}}V\in\mathcal{V}_{\Phi_{h_{k}}}\). We may assume after possibly extracting a further subsequence that \[V_{k}\to V\,\,\,\,\text{in }L^{\infty}(D;\mathbb{R}^{d})\,\,\,\,\,\,\,\,\,\text{ and }\,\,\,\,\,\,\,\,DV_{k}\stackrel{{*}}{{\rightharpoonup}}DV\,\,\text{in }L^{\infty}(D;\mathbb{R}^{d\times d}). 
\tag{4.10}\] As \(\Omega_{h_{k}}\) is a stationary point for \(\mathcal{J}_{h_{k}}\) we have \[0 = \mathcal{J}^{\prime}_{h_{k}}(\Omega_{h_{k}})[V_{k}]=\int_{\Omega_{h _{k}}}\Bigl{(}\bigl{(}DV_{k}+DV_{k}^{\mathsf{T}}-\operatorname{div}V_{k}I\bigr{)} \nabla u_{h_{k}}\cdot\nabla p_{h_{k}}+\operatorname{div}(fV_{k})p_{h_{k}} \Bigr{)}\,\mathrm{dx} \tag{4.11}\] \[+\int_{\Omega_{h_{k}}}\Bigl{(}j(\cdot,u_{h_{k}},\nabla u_{h_{k}}) \operatorname{div}V_{k}+j_{x}(\cdot,u_{h_{k}},\nabla u_{h_{k}})\cdot V_{k}-j_{z}( \cdot,u_{h_{k}},\nabla u_{h_{k}})\cdot DV_{k}^{\mathsf{T}}\nabla u_{h_{k}} \Bigr{)}\,\mathrm{dx}\] \[=: A_{k}+B_{k}.\] In view of (4.9) and (4.8) it is not difficult to verify that \[\chi_{\Omega_{h_{k}}}\nabla e_{0}(u_{h_{k}})\cdot\nabla e_{0}(p_{h_{k}})\to\chi_{ \Omega}\nabla e_{0}(u)\cdot\nabla e_{0}(p)\text{ in }L^{1}(D),\] which together with (4.10) yields \[A_{k} = \int_{D}\chi_{\Omega_{h_{k}}}\Big{(}\big{(}DV_{k}+DV_{k}^{\mathsf{ T}}-\operatorname{div}V_{k}I\big{)}\nabla e_{0}(u_{h_{k}})\cdot\nabla e_{0}(p_{h_{k} })+\operatorname{div}(fV_{k})e_{0}(p_{h_{k}})\Big{)}\,\mathrm{dx}\] \[\to \int_{D}\chi_{\Omega}\Big{(}\big{(}DV+DV^{\mathsf{T}}- \operatorname{div}VI\big{)}\nabla e_{0}(u)\cdot\nabla e_{0}(p)+\operatorname{ div}(fV)p\Big{)}\,\mathrm{dx}\] \[= \int_{\Omega}\Bigl{(}\big{(}DV+DV^{\mathsf{T}}-\operatorname{div}VI \big{)}\nabla u\cdot\nabla p+\operatorname{div}(fV)p\Big{)}\,\mathrm{dx}.\] Similarly we have \[B_{k}\to\int_{\Omega}\bigl{(}j(\cdot,u,\nabla u)\operatorname{div}V+j_{x}( \cdot,u,\nabla u)V-j_{z}(\cdot,u,\nabla u)\cdot DV^{\mathsf{T}}\nabla u\bigr{)} \,\mathrm{dx}.\] Passing to the limit in (4.11) we deduce that \(\mathcal{J}^{\prime}(\Omega)[V]=0\). **Remark 4.1**.: _We briefly describe how our analysis can be generalized to a setting in which an additional constraint is imposed on the admissible sets. In particular, we consider a volume constraint, so that_ \[\tilde{\mathcal{S}}=\{\Omega\subset D\,|\,\Omega=\Phi(\hat{\Omega})\text{ for some }\Phi\in\mathcal{U},|\Omega|=m_{0}\},\] _where \(0<m_{0}<|D|\) is a given constant. In this case we consider the modified functional_ \[\tilde{\mathcal{J}}_{h}:\mathcal{S}_{h}\to\mathbb{R},\ \ \tilde{J}_{h}(\Omega_{h}):= \mathcal{J}_{h}(\Omega_{h})+\mathcal{A}_{h}(\Omega_{h}),\text{ where }\mathcal{A}_{h}(\Omega_{h})=\frac{\mu_{h}}{2}\bigl{(}|\Omega_{h}|-m_{0} \bigr{)}^{2},\] _and \((\mu_{h})_{0<h\leq h_{0}}\) is a sequence of real numbers satisfying \(\lim_{h\searrow 0}\mu_{h}=\infty\). It is not difficult to verify that the results of Section 3 still hold for \(\tilde{\mathcal{J}}_{h}\). Next, suppose that \((\Omega_{h})_{0<h\leq h_{0}}\) is a sequence of stationary points of \(\tilde{\mathcal{J}}_{h}\) satisfying (A2). Arguing as in the proof of Theorem 4.4 one obtains a sequence \((\Omega_{h_{k}})_{k\in\mathbb{N}}\) and an open set \(\Omega\Subset D\) such that (4.8) holds. In order to show that \(|\Omega|=m_{0}\) we choose an open set \(U\Subset D\) such that \(\Omega_{h_{k}}\subset U\) for all \(k\in\mathbb{N}\) and a function \(\bar{V}\in C_{0}^{2}(D,\mathbb{R}^{d})\) such that \(\bar{V}(x)=x\) for all \(x\in U\). Then, \(\bar{V}_{k}:=I_{h_{k}}\bar{V}\in\mathcal{V}_{\Phi_{h_{k}}}\) satisfies \(\|\bar{V}_{k}-\bar{V}\|_{L^{\infty}}\leq ch_{k}\), \(\|D\bar{V}_{k}\|_{L^{\infty}}\leq c\) as well as \(\bar{V}_{k}(x)=x\) on \(\Omega_{h_{k}}\). 
Since \(\tilde{\mathcal{J}}_{h_{k}}^{\prime}(\Omega_{h_{k}})[\bar{V}_{k}]=0\) and \(\operatorname{div}\bar{V}_{k}=d\) on \(\Omega_{h_{k}}\) we obtain with the help of (2.16), our assumptions on \(j\) and (3.1) that_ \[\mu_{h_{k}}d|\Omega_{h_{k}}|\,\big{|}\,|\Omega_{h_{k}}|-m_{0}\big{|}=\big{|}\mu _{h_{k}}(|\Omega_{h_{k}}|-m_{0})\int_{\Omega_{h_{k}}}\operatorname{div}\bar{V}_ {k}\,\mathrm{dx}\big{|}=|\mathcal{A}^{\prime}_{h}(\Omega_{h_{k}})[\bar{V}_{k}] |=|-\mathcal{J}^{\prime}_{h}(\Omega_{h_{k}})[\bar{V}_{k}]|\leq c\] _for all \(k\in\mathbb{N}\). Observing that \(\mu_{h_{k}}\to\infty\) and \(|\Omega_{h_{k}}|\to|\Omega|=|\Phi(\hat{\Omega})|>0\) in view of (4.9), we deduce that \(|\Omega|=m_{0}\) so that \(\Omega\in\tilde{\mathcal{S}}\). Furthermore, \(\Omega\) can be shown to be a stationary point of \(\mathcal{J}\) in the sense that \(\mathcal{J}^{\prime}(\Omega)[V]=0\) for all \(V\in W^{1,\infty}_{0}(D)\) with \(\int_{\Omega}\operatorname{div}V\,\mathrm{dx}=0\)._ ## 5 Numerical experiments The numerical experiments we provide here show experimental evidence of the convergence we prove, alongside observing any possible rates of convergence. For the implementation of finite element methods, we will utilise DUNE [1], particularly the python bindings [10, 11]. The initial grid is constructed with pygmsh, [10]. Notice that the Hausdorff complementary metric requires the distance function for our provided shape, this is not so trivial to construct; as such, we make use of the construction in [13] as an approximation. Let \(\{x_{i}\}_{i=1}^{N}\) be the vertices of the triangulation of \(\mathcal{T}_{h}\) and let \(\{y_{i}\}_{i=1}^{n}\) be the vertices which lie on the boundary of \(\Omega_{h}\), then we set \[d^{h}_{\mathbb{G}\Omega_{h}}(x_{i})=\min_{j=1,\ldots,n}|x_{i}-y_{j}|\text{ if }x_{i}\in\Omega_{h},\text{ otherwise }0. \tag{5.1}\] We then calculate our discrete Hausdorff complementary distance to be given by \[\rho_{h}(\Omega_{h},\Omega^{*})=\max_{i=1,\ldots,N}|d_{\mathds{C}\Omega^{*}}(x_{i })-d^{h}_{\mathds{C}\Omega_{h}}(x_{i})|. \tag{5.2}\] In the experiments provided, we will consider shape optimisation problems with known, simple, minimisers. In particular we will know the explicit form of the complementary distance function \(d_{\mathds{C}\Omega^{*}}\). This allows us to measure the quantities of interest and compare. We will also measure the (maximum) radius ratio of the initial and final grids, this appears in [14], for example. On each cell, this quantity is closely related to the left hand side of (4.3). The radius ratio \(\sigma\) of a triangle \(T\) is given by \[\sigma(T):=\frac{r_{T}}{2\rho_{T}}, \tag{5.3}\] where \(r_{T}\) is the radius of the smallest ball which contains \(T\) and \(\rho_{T}\) is the radius of the largest ball contained in \(T\). It holds that \(\sigma\geq 1\) and \(\sigma(\hat{T})=1\) if and only if \(\hat{T}\) is equilateral. For the initial grid, we will measure \(\tilde{\sigma}_{0}:=\max_{\hat{T}\in\hat{\mathcal{T}}_{h}}\sigma(\hat{T})\) and on the final grid, defined by \(\Phi_{h}\), \(\tilde{\sigma}_{f}:=\max_{\hat{T}\in\hat{\mathcal{T}}_{h}}\sigma(\Phi_{h}( \hat{T}))\). ### Direction of steepest descent construction Throughout this work, we have made use of \(V_{h}\), a direction of steepest descent. While it is known that such a direction exists, by compactness in a finite dimensional space, the construction of it is not necessarily trivial. 
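Returning briefly to the error measures above, (5.1) and (5.2) translate directly into a few lines of code. The following numpy sketch is illustrative only; `vertices`, `boundary_vertices`, the boolean mask `in_omega_h` and the exact complementary distance function `d_exact` are assumed to be supplied by the surrounding finite element code and are not names from any library.

```python
# Illustrative numpy sketch of (5.1) and (5.2). vertices holds all grid vertices,
# boundary_vertices the vertices on the boundary of Omega_h, in_omega_h a boolean
# mask marking vertices inside Omega_h, and d_exact the exact complementary
# distance function of the known optimal shape.
import numpy as np

def discrete_complementary_distance(vertices, boundary_vertices, in_omega_h):
    # (5.1): d^h(x_i) = min_j |x_i - y_j| if x_i lies in Omega_h, and 0 otherwise
    diffs = vertices[:, None, :] - boundary_vertices[None, :, :]
    dist_to_boundary = np.linalg.norm(diffs, axis=2).min(axis=1)
    return np.where(in_omega_h, dist_to_boundary, 0.0)

def discrete_hausdorff_complementary(vertices, boundary_vertices, in_omega_h, d_exact):
    # (5.2): rho_h(Omega_h, Omega^*) = max_i |d_{C Omega^*}(x_i) - d^h(x_i)|
    d_h = discrete_complementary_distance(vertices, boundary_vertices, in_omega_h)
    d_star = np.array([d_exact(x) for x in vertices])
    return np.abs(d_star - d_h).max()
```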
As in [11], we will make use of the Alternating Direction Method of Multipliers (ADMM) approach to approximate a solution. For a given \(\Omega_{h}:=\Phi_{h}(\hat{\Omega})\), let \[\mathcal{Q}_{\Phi_{h}}:=\{q_{h}\in L^{2}(D;\mathbb{R}^{d\times d}):q_{h}|_{T} \in P_{0}(T;\mathbb{R}^{d\times d}),\,T=\Phi_{h}(\hat{T}),\,\hat{T}\in\hat{ \mathcal{T}}_{h}\} \tag{5.4}\] be the space of piecewise constant \(d\times d\) matrix valued finite elements subordinate to the triangulation induced by \(\Phi_{h}\). In addition for given \(\tau>0\), we consider the Lagrangian \(\mathcal{L}_{\tau}\colon\mathcal{V}_{\Phi_{h}}\times\mathcal{Q}_{\Phi_{h}} \times\mathcal{Q}_{\Phi_{h}}\rightarrow\mathbb{R}\) given by: \[\mathcal{L}_{\tau}(V_{h},q_{h};\lambda_{h}):=\int_{D}\left(\lambda_{h}:(DV_{h} -q)+\frac{\tau}{2}(DV_{h}-q_{h}):(DV_{h}-q_{h})\right)\,\mathrm{dx}+\mathcal{ J}_{h}^{\prime}(\Omega_{h})[V_{h}]. \tag{5.5}\] Given \(V_{h}^{0}\in\mathcal{V}_{\Phi_{h}}\), \(\lambda_{h}^{0}\in\mathcal{Q}_{\Phi_{h}}\), and \(tol>0\), the algorithm is then given by **Algorithm 5.1** (Admm).: _0. Let \(R=\infty\) For \(k=0,1,2,\ldots\): 1. If \(R<tol\), then stop. 2. Find \(q_{h}^{k+1}=\arg\min\{\mathcal{L}_{\tau}(V_{h}^{k},q_{h};\lambda_{h}^{k}):q_{h }\in\mathcal{Q}_{\Phi_{h}},\,|q_{h}|\leq 1\}\). 3. Find \(V_{h}^{k+1}=\arg\min\{\mathcal{L}_{\tau}(V_{h},q_{h}^{k+1};\lambda_{h}^{k}):V_{h }\in\mathcal{V}_{\Phi_{h}}\}\). 4. Set \(\lambda_{h}^{k+1}=\lambda_{h}^{k}+\tau(DV_{h}^{k+1}-q_{h}^{k+1})\). 5. Update \(R=\left(\|\lambda_{h}^{k+1}-\lambda_{h}^{k}\|_{L^{2}(D;\mathbb{R}^{d\times d}) }^{2}+\|DV_{h}^{k+1}-DV_{h}^{k}\|_{L^{2}(D;\mathbb{R}^{d\times d})}^{2}\right) ^{\frac{1}{2}}\)._ There exist variants in which one may adapt \(\tau\) to reduce the number of steps required to achieve a given tolerance, see [1]. Such an adaptive variant is used in the numerical experiments. ### Experiments For the numerical experiments presented, we will consider a cascading approach. In the experiments, we run the described algorithm for up to 15 steps, or until the Armijo step length satisfies \(t_{k}\leq 2^{-11}\), perform a congruent refinement, and start the algorithm with \(\Phi_{h/2}^{0}=\Phi_{h}^{k^{*}}\), where \(k^{*}=\min(15,\inf\{k\in\mathbb{N}:t_{k-1}\leq 2^{-11}\})\). For the third experiment, we wish to measure some form of convergence, as such it is reasonable to continue to an appropriate convergence criteria, rather than stopping at some ad-hoc number of steps. We will again consider the mesh converged when the Armijo step length satisfies \(t_{k}\leq 2^{-11}\). However, the mesh will be saved at shape \(k^{*}\) as described above, and for the refinement, continue with \(\Phi_{h/2}^{0}=\Phi_{h}^{k^{*}}\). We expect this cascading approach to be useful for the efficient calculation of optimal shapes. We will fix the domain \(D=(-2,2)^{2}\). #### 5.2.1 Experiment 1 For this experiment, we consider \(j(\cdot,u,z):=\frac{1}{2}(u-u_{d})^{2}\), where we choose \(u_{d}(x)=\frac{4}{\pi}-|x|^{2}\), for the data for the Poisson problem, we take \(f=1\). A locally optimal shape is expected to be \(\Omega^{*}:=B(0,\frac{4}{\sqrt{3\pi}})\), the ball of radius \(\frac{4}{\sqrt{3\pi}}\) at the origin, which has energy \(\mathcal{J}(\Omega)=\frac{128}{27\pi^{2}}\). This may be found by assuming symmetry i.e., the solution is a ball of radius \(r>0\). One may then find the above critical radius and energy using calculus in one dimension. We choose \(\hat{\Omega}=(-1,1)^{2}\). 
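The stated critical radius and energy can be reproduced by the one-dimensional calculus mentioned above. The following sympy sketch is only a sanity check (not part of our implementation) and assumes the radially symmetric state \(u=(r^{2}-|x|^{2})/4\) of the Poisson problem with \(f=1\) on a ball \(B(0,r)\).

```python
# Symbolic sanity check (sympy) of the critical radius 4/sqrt(3*pi) and the energy
# 128/(27*pi^2), assuming radial symmetry: on B(0, r) the state of -Laplace(u) = 1
# with u = 0 on the boundary is u = (r^2 - s^2)/4, where s = |x|.
import sympy as sp

r, s = sp.symbols('r s', positive=True)
u = (r**2 - s**2) / 4
u_d = 4 / sp.pi - s**2
# reduced energy J(r) = 1/2 * integral over B(0, r) of (u - u_d)^2, in polar coordinates
J = sp.Rational(1, 2) * sp.integrate((u - u_d)**2 * 2 * sp.pi * s, (s, 0, r))

r_star = 4 / sp.sqrt(3 * sp.pi)
print(sp.simplify(sp.diff(J, r).subs(r, r_star)))                        # -> 0, critical radius
print(sp.simplify(J.subs(r, r_star) - sp.Rational(128, 27) / sp.pi**2))  # -> 0, stated energy
```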
The initial mesh is displayed on the left of Figure 3, with the hold all in blue, and the initial domain in red. In Figure 1, we see the energy and the discrete Hausdorff complementary distance (5.2) for the experiment along the shape iterates. For convergence of our scheme, in Theorem 4.4, we require that \(\|D\Phi_{h}\|_{L^{\infty}}\) and \(\|D\Phi_{h}^{-1}\|_{L^{\infty}}\) are bounded; the values of these along the iterations are found in Figure 2. The final domains are given on the right of Figure 3.

Figure 1: On the left is the energy and on the right the discrete Hausdorff complementary distance along the shape iterates for the experiment in Section 5.2.1. We see that the energy is reducing along the shapes, jumping to a lower energy when the mesh is refined, and the distance decreases on average.

Figure 2: Values for \(\|D\Phi_{h}\|_{L^{\infty}}\) (left) and \(\|D\Phi_{h}^{-1}\|_{L^{\infty}}\) (right) along the shape iterates for the experiment in Section 5.2.1. We see that the values of \(\|D\Phi_{h}\|_{L^{\infty}}\) and \(\|D\Phi_{h}^{-1}\|_{L^{\infty}}\) abruptly jump up at the start, and only slowly increase later on.

#### 5.2.2 Experiment 2

For this experiment, we consider \(j(\cdot,u,z):=\frac{1}{2}(u-u_{d})^{2}\), where we choose \[u_{d}(x)=5\frac{-\pi|x|^{2}\ln(4)+3\ln(|x|^{2})+3\ln(\pi)+\ln(4)}{\pi\ln(256)},\] and, for the data of the Poisson problem, we take \(f=5\). Notice that \(u_{d}\) has zeros at \(|x|=\frac{1}{\sqrt{\pi}}\) and \(|x|=\frac{2}{\sqrt{\pi}}\), and that \(-\Delta u_{d}=f\). The optimal shape is expected to be \(\Omega^{*}:=B(0,\frac{2}{\sqrt{\pi}})\setminus\overline{B(0,\frac{1}{\sqrt{\pi}})}\), with \(\mathcal{J}(\Omega^{*})=0\). As in the first experiment, this is again calculated using axi-symmetric arguments. Notice that this is not a simply connected optimal shape, which may require some topology optimisation. We choose the initial domain given by an approximation of \(\hat{\Omega}=B(0,1.4)\setminus\overline{B(0,0.7)}\). Let us note that, without prior knowledge of the topology of the domain, e.g. when starting with a ball of radius 1, the domain heads towards a non-axi-symmetric shape, which may possibly end up in a degenerate minimiser. A combined shape and topology optimisation may prove useful in such a setting; the development of such a method is work in preparation. The initial mesh is displayed on the left of Figure 6, with the hold all in blue, and the initial domain in red. In Figure 4, we see the energy and the discrete Hausdorff complementary distance (5.2) for the experiment along the iterations. The values of \(\|D\Phi_{h}\|_{L^{\infty}}\) and \(\|D\Phi_{h}^{-1}\|_{L^{\infty}}\) along the iterations are found in Figure 5. The final domains are given on the right of Figure 6.

#### 5.2.3 Experiment 3

For this experiment, we choose \(f=1\) and consider \(j(x,u,z):=\frac{1}{2}|z+\frac{x}{2}|^{2}\) along with a penalty for the volume as in Remark 4.1, where we set \(m_{0}=4\). Without the penalty term and for this choice of \(j\), it holds that for any \(r>0\) the ball \(B_{r}(0)\) would be a minimiser with zero energy. By penalising the volume to be equal to 4, one has that the minimiser will be the ball of radius \(r=\frac{2}{\sqrt{\pi}}\), with \(\mathcal{J}(\Omega^{*})=0\). As remarked at the beginning of this section, this experiment is performed slightly differently to the previous two.
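Before describing this difference, we note that the two structural claims used above, namely that \(-\Delta u_{d}=f\) for the target of Experiment 2 with zeros at \(|x|=1/\sqrt{\pi}\) and \(2/\sqrt{\pi}\), and that every ball \(B_{r}(0)\) annihilates the integrand of Experiment 3, can be checked symbolically. The following is a minimal sketch (not part of our implementation), again assuming the radially symmetric state \(u=(r^{2}-|x|^{2})/4\) of the Poisson problem with \(f=1\) on \(B(0,r)\).

```python
# Symbolic checks (sympy) for the claims of Experiments 2 and 3.
import sympy as sp

x1, x2, r = sp.symbols('x1 x2 r', positive=True)
s2 = x1**2 + x2**2  # |x|^2

# Experiment 2: -Laplace(u_d) = 5 = f, and u_d vanishes at |x| = 1/sqrt(pi) and 2/sqrt(pi)
u_d = 5 * (-sp.pi * s2 * sp.log(4) + 3 * sp.log(s2) + 3 * sp.log(sp.pi) + sp.log(4)) / (sp.pi * sp.log(256))
minus_laplace = -(sp.diff(u_d, x1, 2) + sp.diff(u_d, x2, 2))
print(sp.N(minus_laplace.subs({x1: sp.Rational(1, 3), x2: sp.Rational(1, 2)})))  # -> 5.0000...
for radius in (1 / sp.sqrt(sp.pi), 2 / sp.sqrt(sp.pi)):
    val = u_d.subs({x1: radius, x2: 0})
    print(sp.simplify(sp.expand_log(val, force=True)))                           # -> 0

# Experiment 3: with f = 1 the state on B(0, r) is u = (r^2 - |x|^2)/4, so grad(u) = -x/2
# and the integrand 1/2 |grad(u) + x/2|^2 vanishes on every ball B(0, r), as claimed
u = (r**2 - s2) / 4
residual = sp.Matrix([sp.diff(u, x1) + x1 / 2, sp.diff(u, x2) + x2 / 2])
print(sp.simplify(residual.dot(residual) / 2))                                   # -> 0
```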
While we will cascade to a finer mesh after a maximum of 15 steps, we will also continue with the shape optimisation to provide the first shape which is produced with an Armijo step of \(2^{-11}\). This allows for a fair comparison of how the energy and Hausdorff complementary distance appear when the shape is approximately stationary. When we refine through the cascade, we will increase the penalty parameter so that it scales like \(h^{-\frac{1}{2}}\). Figure 3: Final domains for the experiment in Section 5.2.1, refinements increasing from top left to bottom right. Taking the maximum over all triangles, the radius ratio for the initial grid is \(\tilde{\sigma}_{0}\approx 1.633917\) and for the final, most fine, grid is \(\tilde{\sigma}_{f}\approx 1.651944\). Figure 4: On the left is the energy and on the right discrete Hausdorff complementary distance along the shape iterates for the experiment in Section 5.2.2. We see that the energy is decreasing, jumping up as the mesh is refined. However, the energy recovers only a few steps later to a value smaller than the coarser mesh. The distance is reducing on average along the iterations, rarely increasing For this experiment, we use a symmetric grid, rather than one generated by pygmsh. The initial domain, given by \(\hat{\Omega}=(-1,1)^{2}\), appears in red on the left of Figure 9, with the hold all in blue. In Figure 7, we see the energy and the discrete Hausdorff complementary distance (5.2) for the experiment along the iterations. In Table 1, we tabulate the mesh size of the reference domain for each of the approximately converged shape along with the associated energy and discrete Hausdorff complementary distance. We also provide the _experimental order of convergence_, which for a given functional \(E(h)\), here depending on the size \(h\) of the reference mesh, is defined by \[EOC:=\frac{\ln E(h_{1})-\ln E(h_{2})}{\ln h_{1}-\ln h_{2}}.\] The value of \(\|D\Phi_{h}\|_{L^{\infty}}\) and \(\|D\Phi_{h}^{-1}\|_{L^{\infty}}\) along the iterations is found in Figure 8 The final domains are given on the right of Figure 9. Figure 6: On the left is the initial domain and on the right is a quarter of the final domains for the experiment in Section 5.2.2, refinements increasing from top left to bottom right. Taking the maximum over all triangles, the radius ratio for the initial grid is \(\tilde{\sigma}_{0}\approx 1.787287\) and for the final, most fine, grid is \(\tilde{\sigma}_{f}\approx 1.917235\). Figure 7: On the left is the energy and on the right discrete Hausdorff complementary distance along the shape iterates for the experiment in Section 5.2.3. We see that the energy is reducing along the shapes, jumping to a lower energy when the mesh is refined. The coarser grids require many more shape updates to reach the prescribed convergence criteria than the finer grids. \begin{table} \begin{tabular}{c|c|c|c|c|c} \(h\) & \(\mu_{h}\) & Energy & EOC Energy & HCD & EOC HCD \\ \hline 0.5 & 0.5 & 0.105327 & – & 0.0308699 & – \\ \hline 0.25 & \(\frac{1}{\sqrt{2}}\) & 0.0268579 & 1.97146 & 0.0273133 & 0.176597 \\ \hline 0.125 & 1 & 0.00712922 & 1.91353 & 0.016456 & 0.730993 \\ \hline 0.0625 & \(\sqrt{2}\) & 0.00179752 & 1.98774 & 0.00912092 & 0.851362 \\ \hline 0.03125 & 2 & 0.000493593 & 1.86461 & 0.00451768 & 1.0136 \\ \end{tabular} \end{table} Table 1: Energy and discrete Hausdorff complementary distance for the final domains for each refinement level for the experiment in Section 5.2.3. 
We see a clear decrease in the Energy and the Discrete Hausdorff Complementary Distance as we refine the mesh. Figure 8: Values for \(\|D\Phi_{h}\|_{L^{\infty}}\) (left) and \(\|D\Phi_{h}^{-1}\|_{L^{\infty}}\) (right) along the shape iterates for the experiment in Section 5.2.3. As in the previous experiments, the values of \(\|D\Phi_{h}\|_{L^{\infty}}\) and \(\|D\Phi_{h}^{-1}\|_{L^{\infty}}\) abruptly jump up at the start. We see for the coarser grids that \(\|D\Phi_{h}^{-1}\|_{L^{\infty}}\) increases a large amount relative to the finer grids. For the finer grids, both values appear very stable with minimal increase beyond the initial jump. Figure 9: Final domains for the experiment in Section 5.2.3, refinements increasing from top left to bottom right. The most refined mesh is not shown due to the very small triangles. Taking the maximum over all triangles, the radius ratio for the initial grid is \(\tilde{\sigma}_{0}=\frac{1}{2}+\frac{1}{\sqrt{2}}\approx 1.207107\) and for the final, most fine, grid is \(\tilde{\sigma}_{f}\approx 1.489403\). Conclusions This work presents a numerical finite element solution framework for discrete PDE constrained shape optimisation in the \(W^{1,\infty}\)-topology based on the steepest descent method with Armijo step size rule. In Theorem 3.3, global convergence of this method is shown for a fixed discretisation parameter. Moreover, in Theorem 4.4 it is shown that a sequence of discrete stationary shapes under assumption (A2) converges with respect to the Hausdorff complementary metric to a stationary point of the limit problem (1.1) for the mesh parameter tending to zero. The proof of this result is based on the continuity of the Dirichlet problem with respect to the Hausdorff complementary metric in terms of \(\gamma\)-convergence. In future work our numerical concept and convergence analysis could be extended to the numerical investigation of shape Newton methods in the \(W^{1,\infty}\)-topology, which we addressed in [10], or that of transformations which preserve some geometric quantity, as in [12].
2305.18932
The Information Retrieval Experiment Platform
We integrate ir_datasets, ir_measures, and PyTerrier with TIRA in the Information Retrieval Experiment Platform (TIREx) to promote more standardized, reproducible, scalable, and even blinded retrieval experiments. Standardization is achieved when a retrieval approach implements PyTerrier's interfaces and the input and output of an experiment are compatible with ir_datasets and ir_measures. However, none of this is a must for reproducibility and scalability, as TIRA can run any dockerized software locally or remotely in a cloud-native execution environment. Version control and caching ensure efficient (re)execution. TIRA allows for blind evaluation when an experiment runs on a remote server or cloud not under the control of the experimenter. The test data and ground truth are then hidden from public access, and the retrieval software has to process them in a sandbox that prevents data leaks. We currently host an instance of TIREx with 15 corpora (1.9 billion documents) on which 32 shared retrieval tasks are based. Using Docker images of 50 standard retrieval approaches, we automatically evaluated all approaches on all tasks (50 $\cdot$ 32 = 1,600~runs) in less than a week on a midsize cluster (1,620 CPU cores and 24 GPUs). This instance of TIREx is open for submissions and will be integrated with the IR Anthology, as well as released open source.
Maik Fröbe, Jan Heinrich Reimer, Sean MacAvaney, Niklas Deckers, Simon Reich, Janek Bevendorff, Benno Stein, Matthias Hagen, Martin Potthast
2023-05-30T10:48:50Z
http://arxiv.org/abs/2305.18932v1
# The Information Retrieval Experiment Platform

###### Abstract.

We integrate ir_datasets, ir_measures, and PyTerrier with TIRA in the Information Retrieval Experiment Platform (TIREx) to promote more standardized, reproducible, scalable, and even blinded retrieval experiments. Standardization is achieved when a retrieval approach implements PyTerrier's interfaces and the input and output of an experiment are compatible with ir_datasets and ir_measures. However, none of this is a must for reproducibility and scalability, as TIRA can run any dockerized software locally or remotely in a cloud-native execution environment. Version control and caching ensure efficient (re)execution. TIRA allows for blind evaluation when an experiment runs on a remote server or cloud not under the control of the experimenter. The test data and ground truth are then hidden from public access, and the retrieval software has to process them in a sandbox that prevents data leaks. We currently host an instance of TIREx with 15 corpora (1.9 billion documents) on which 32 shared retrieval tasks are based. Using Docker images of 50 standard retrieval approaches, we automatically evaluated all approaches on all tasks (50 · 32 = 1,600 runs) in less than a week on a midsize cluster (1,620 CPU cores and 24 GPUs). This instance of TIREx is open for submissions and will be integrated with the IR Anthology, as well as released open source.

Retrieval evaluation; Reproducibility; Shared tasks; TIREx

© 2023 Copyright held by the owner/author(s). Publication rights licensed to ACM.
## 1. Introduction

In this paper, we present the Information Retrieval Experiment Platform (TIREx), which combines a set of tools for working with IR data (ir_datasets (Krishan et al., 2017)), for executing retrieval pipelines (PyTerrier (Krishan et al., 2017)), and for evaluating IR systems (ir_measures (Krishan et al., 2017)) with the TIRA Integrated Research Architecture (Krishan et al., 2017), a continuous integration service for reproducible shared tasks and experiments. TIREx is designed for reproducibility through software submissions while keeping an experimenter's or task organizer's workload comparable to run file submissions. On our Betaweb and Gammaweb clusters,4 we have deployed an instance of TIREx that is open for software submissions and experiments. A substantial efficiency boost comes from integrating GPU cores and result caching into the platform to accelerate neural IR approaches. As a proof of concept, we conducted a large-scale evaluation of 50 "standard" retrieval approaches on 32 shared retrieval tasks (based on 15 corpora with a total of 1.9 billion documents). This experiment consists of 1,600 runs and was started by just clicking a button. It finished unattended in less than a week.

Footnote 4: webis.de/facilities.html#hardware

## 2. Background and Related Work

We review ad hoc retrieval experiments in evaluation campaigns, common problems and pitfalls in IR experiments, best practices for leaderboards, existing reproducibility initiatives, and tools to support reproducibility. Insights from all these areas have influenced our implementation decisions for TIREx.

**Ad hoc Retrieval Experiments in Evaluation Campaigns.** Today's shared task-style experiments for ad hoc retrieval evolved from the Cranfield experiments (Trabel et al., 2017).
In the 1960s, the Cranfield experiments (Trabel et al., 2017; Krishan et al., 2017) were conducted on a corpus of 1,400 documents with complete relevance judgments for 225 topics. Since corpus sizes grew substantially, complete judgments became infeasible almost immediately thereafter (Trabel et al., 2017). The current practice at shared tasks in IR thus is to only assess the relevance of per-topic pools of the submitted systems' top-ranked documents (Trabel et al., 2017). Subsequent evaluations on the same corpus usually are based on the assumption that the pools are "essentially complete", i.e., unjudged documents that were not in the pool are non-relevant (Trabel et al., 2017). Although this completeness assumption is reasonable for tasks with a diverse set of submitted runs pooled at high depth (Trabel et al., 2017), recent observations suggest that scenarios with many relevant documents per query (e.g., corpora with many duplicates (Krishan et al., 2017)) or with topics representing broad information needs (Krishan et al., 2017) are rather problematic. Especially for shared tasks that do not attract diverse submissions, TIREx can help to produce a more diverse judgment pool, as a wide range of different baseline retrieval systems is directly available and can be applied to any imported retrieval task.

**Common Problems and Pitfalls in IR Experiments.** Even though the current discussion about how to conduct IR experiments (Krishan et al., 2017; Krishan et al., 2017; Krishan et al., 2017) includes some controversial points (e.g., whether MRR should be abandoned (Krishan et al., 2017) or not (Trabel et al., 2017; Krishan et al., 2017)), there is still a consensus in the IR community on many characteristics of "bad" or "good" experiments. For instance, it is rather undisputed that retrieval studies should be internally valid (conclusions must be supported by the data) and externally valid (repeating an experiment on different but similar data should yield similar observations) (Krishan et al., 2017). Still, external validity of IR experiments remains an open problem (Krishan et al., 2017). TIREx can help to further improve both: the internal validity by archiving all experiments and results on some corpus (e.g., to accurately correct for multiple hypothesis tests), and the external validity by making it simple to run a submitted software on different data. Thakur et al. (Thakur et al., 2017) attempted to address the external validity problem by combining diverse retrieval corpora in the BEIR benchmark for _en masse_ evaluation. However, in practice, running an approach on all corpora in BEIR requires some effort, so that many studies still only report results for a selected subset (e.g., (Krishan et al., 2017; Krishan et al., 2017; Krishan et al., 2017))--often even without clearly justifying the selection. In contrast, a software in TIREx can rather easily be evaluated against many if not all corpora, so that analyzing the improvements and limitations of an approach on diverse data requires little effort. An often criticized practice is that many IR studies compare a new approach against weak or "wrong" baselines (i.e., not the best or most reasonable previous approaches). Any improvements claimed in such studies are not really meaningful (Beng et al., 2017; Lin et al., 2018).

Figure 1. Overview of typical shared task-like IR experiments and how the tools in TIREx support them.
One reason for choosing a wrong baseline could be that neither the researchers nor the reviewers are actually aware of what previous approaches exist for a specific corpus since results are often scattered across multiple publications (Lin and Zhang, 2018). Centralized leaderboards that directly show the effectiveness of diverse approaches for a wide range of tasks would address this problem, but multiple efforts have failed so far (Lin and Zhang, 2018). In TIREx, we include many popular corpora and standard retrieval approaches right from the start so that the TIREx leaderboards can initially gain traction. The more shared tasks (but also researchers) employ TIREx for software submissions, the broader TIREx' coverage will get over time. Maintaining Ongoing LeaderboardsInspired by the observation that many IR studies do not compare a new approach against reasonable baselines (e.g., the most effective TREC runs) (Beng et al., 2017), Armstrong et al. (2017) released EvaluateIR, a public leaderboard accepting run file submissions. Although the concept was highly valuable for the community in helping researchers and reviewers alike to select appropriate baselines, "EvaluateIR never gained traction, and a number of similar efforts following it have also floundered" (Lin and Zhang, 2018). While there is still no centralized general leaderboard for IR, certain task-specific leaderboards are quite popular. For instance, the leaderboard of the recent MIRACL Challenge (Lin and Zhang, 2018) received 25 submissions within one week, and the MS MARCO leaderboard (Lin and Zhang, 2018) has been popular for years. Maintaining such long-running leaderboards comes with some caveats, as they are conceptually turn-based games where every leaderboard submission might leak information from the test set (Lin and Zhang, 2018). Lin et al. (Lin and Zhang, 2018) propose best practices, inspired by previous problems of the Netflix prize.5 Most importantly, Lin et al. note that, while submissions to a leaderboard are open, the retrieval results should not be public, nor should system descriptions or implementations, as this would potentially leak information from the test set and foster "uninteresting" approaches like ensembles of all the top submissions. With TIREx and its blind evaluation, organizers can choose to blind all submissions as long as they need to, with the ability to unblind approaches and submissions as they see fit, so that TIREx supports the best practices recommended by Lin et al. (Lin and Zhang, 2018). Footnote 5: www.netflixprize.com Footnote 6: Examples at ECIR 2023 and SIGIR 2023: ecir20203.org/calls/reproducibility.html and sigir.org/sigir2023/submit/call-for-reproducibility-track-papers. Reproducibility Initiatives in IRReproducibility is a major challenge in research. For instance, a survey among 1,576 researchers revealed that more than 50% failed at least once to reproduce their own experiments (Beng et al., 2017). The IR community makes substantial efforts to foster reproducibility. There are, for instance, dedicated reproducibility tracks at conferences7 and dedicated reproducibility initiatives like OSIRRC (Beng et al., 2017; Lin and Zhang, 2018) or CENTRE (Liu and Zhang, 2018; Liu and Zhang, 2018; Liu and Zhang, 2018; Liu et al., 2018). OSIRRC aims to produce archived versions of retrieval systems that are replicable, while CENTRE runs replicability and reproducibility challenges across IR evaluation campaigns. 
Lin and Zhang (Lin and Zhang, 2018) looked at all the artifacts produced in the OSIRRC 2015 challenge (Beng et al., 2017) to verify which results are still replicable four years after their creation. Out of the seven systems that participated in the challenge, only the results of Terrier (Terrier, 2018) were fully reproducible out of the box, while two other systems could still be fixed by manual adjustments to the code. The main reasons for failure were that external dependencies could not be loaded anymore, or that platform dependencies changed (i.e., the operating system with its packages). To mitigate the problem of changing platform dependencies, the follow-up iteration of OSIRRC (Lin and Zhang, 2018) focused on Docker images that had to implement a strict specification (enforced by the companion tool "jig") that triggered the indexing and subsequent retrieval via Docker hooks. Even though 17 systems have been dockerized to follow the jig specification, the concept has not gained traction. By centering TIREx around shared tasks in the beginning, we hope that we can kick off and maintain the attention of the community. Furthermore, we believe that there are many retrieval scenarios that can not be encapsulated into the two-step index-then-retrieve pipeline that jig imposes (e.g., explicit relevance feedback). We thus minimize the TIREx requirements: just Docker images in which commands are executed without Internet access on read-only mounted data. Footnote 7: codalab.org. eval.ai, stella-project.org, tira.io Tooling for ReproducibilityMany tools have been developed to support shared tasks by reducing the workload of organizers and participants while increasing the reproducibility (Kang et al., 2018; Liu and Zhang, 2018; Liu and Zhang, 2018; Liu and Zhang, 2018; Liu and Zhang, 2018; Liu and Zhang, 2018). For instance, as documenting the metadata of experiments improves reproducibility (Lin and Zhang, 2018), ir_metadata (Liu and Zhang, 2018) simplifies the documentation of IR experiments according to the PRIMAD model (Liu and Zhang, 2018) (platform, research goal, implementation, method, actor, data). There are also platforms that support organizing and running shared tasks, among which four are still active: CodaLab, EvalAI, STELLA, and TIRA.8 They implement the so-called evaluation-as-a-service paradigm in the form of cloud-based web services for evaluations (Terrier, 2018). Of these four systems, STELLA and TIRA are hosted within universities, while CodaLab and EvalAI use Microsoft Azure and Amazon S3, respectively. We use TIRA for TIREx as it supports blinded experimentation and as it is based on (private) git repositories hosted on GitLab or GitHub to versionize shared tasks and to distribute the workloads via runners connected to the corresponding repositories. The computation can thus be done in the cloud but also on private machines. We substantially extend large parts of TIRA as part of TIREx so that it supports the current IR workflows like chaining multiple retrieval stages. 
## 3. The Information Retrieval Experiment Platform

Typical retrieval experiments require standardized access to data, indexing, retrieval, and evaluation, and often involve multi-stage "telescoping" pipelines.8 To address these requirements, TIREx extends TIRA with common IR tools for data access, indexing, retrieval, and evaluation, and implements multi-stage pipelines on top of TIRA's underlying execution protocol. Below, we elaborate on how TIREx supports IR experiments, discuss the interaction between integrated tools, provide examples of using available retrieval approaches in TIREx, and demonstrate how TIREx promotes post-experiment replicability and reproducibility through declarative PyTerrier pipelines.

Footnote 8: For instance, the Mono-Duo-Reranking pipelines (Peters et al., 2019), where a more complex re-ranker improves part of the ranking of a less complex one ahead in the pipeline.

### Experiments in the IR Experiment Platform

As illustrated in Figure 1, TIREx facilitates the entire process of conducting retrieval experiments. It allows shared task organizers and individual experimenters to import data and utilize any pre-existing retrieval software submitted to TIREx as baselines. Following that, submissions of new retrieval approaches for evaluation can be made as software submissions or, if enabled, also as run submissions. Any submission can be accompanied by descriptive annotations and metadata; for instance, run submissions can be grouped to denote that they were generated by the same retrieval approach for multiple retrieval tasks. By providing relevance judgments, organizers or experimenters can directly evaluate all available runs. To incorporate a new corpus and topics into TIREx, they can be easily added to ir_datasets, utilizing a private branch if the data is sensitive. This data can then be imported by TIRA through a Docker image with a matching ir_datasets installation.
Participants submit their software as Docker images as well. TIRA ensures their reproducibility and prevents test data leaks by executing them in a sandbox. Among other things, the sandbox disables Internet connectivity for the running software, which ensures that the software and its dependencies are fully installed and no data is sent to unauthorized third parties. Participants can provide additional data their software needs during execution by uploading it to TIRA. This is particularly useful for non-reproducible elements of a submission, such as manual query reformulations. TIREx also provides a "starter implementation" for five commonly used IR research frameworks, which participants can use as a development base. The simplest starter uses BM25 retrieval, which is implemented using a few lines of declarative PyTerrier code in a Jupyter notebook.9

Footnote 9: github.com/tira-io/ir-experiment-platform/starter-for-pyterrier-in-jupyter

TIREx allows for software submissions to be executed on demand within a cloud-based execution environment, utilizing GitLab or GitHub CI/CD pipelines. In order to meet varying demand, experiment organizers can incorporate additional runners as necessary. TIREx maintains a comprehensive record of every artifact of a retrieval experiment within a specific git repository (Figure 1, right), which can be exported and published. This "archived shared task" is entirely self-contained, enabling the independent re-execution of approaches with identical or differing data using PyTerrier pipelines. The availability of every software that generated a run as part of the repository makes it a key outcome and asset of an experiment. Consequently, TIREx facilitates "always-on" shared tasks for the IR community, along with an extensive variety of ablation studies.

### Reproducible Shared Tasks with TIRA

TIRA has been used to handle software submissions in shared tasks since 2012 (Titra et al., 2013; Titra et al., 2014)--the CLEF labs PAN and Touche being two long-running examples.10 A first version of TIRA provided participants with access to virtual machines to deploy their software. However, this setup required manual overhead on the part of organizers and thus did not scale far beyond these two events. Moreover, software re-execution was possible in principle and has been demonstrated once at scale (Titra et al., 2013), but proved to be error-prone and required manual bug fixing inside the virtual machines, as participant software was not robust against slight data format variations that were in principle supported by the underlying formatting schema. This has also prevented external researchers from reproducing the collected software for a given task at scale.

Footnote 10: pan.webis.de, touche.webis.de

Meanwhile, Docker has gained maturity and widespread adoption and is now supported by many cluster computing frameworks such as Kubernetes. Especially the integration of Docker containers as GitHub and GitLab runners has made automatic deployment widely available. Hence, TIRA was completely redeveloped based on the now industry-standard CI/CD pipelines (continuous integration and deployment) using Git, Docker, and Kubernetes (Git et al., 2014). In the new version of TIRA, participants upload their software implemented in Docker images to a private Docker registry dedicated to their team, ensuring that different teams do not influence each other while a shared task is running--the approaches can remain private until the task ends.
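To give an impression of what such a starter boils down to, a BM25 baseline over an ir_datasets corpus can be written in a few declarative PyTerrier lines. The snippet below is only an illustrative sketch (using the small Vaswani corpus and a local index path as assumptions), not the actual starter shipped with TIREx.

```python
# Illustrative sketch of a BM25 baseline in declarative PyTerrier (not the TIREx starter itself).
import os
import pyterrier as pt

if not pt.started():
    pt.init()

dataset = pt.get_dataset('irds:vaswani')                      # any ir_datasets identifier works here
indexer = pt.IterDictIndexer(os.path.abspath('./index-vaswani'))
index_ref = indexer.index(dataset.get_corpus_iter())          # full-rank step 1: build the index

bm25 = pt.BatchRetrieve(index_ref, wmodel='BM25')             # full-rank step 2: retrieve
run = bm25(dataset.get_topics())                              # run dataframe with qid, docno, rank, score
print(run.head())
```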
For on-demand execution, TIRA presently runs the software on our Kubernetes cluster (1,620 CPU cores, 25.4 TB RAM, 24 GeForce GTX 1080 GPUs). This version of TIRA was first used in two NLP tasks hosted at SemEval 2023, to which 71 of 170 registered teams submitted 647 runs based on software submissions (Titra et al., 2013; Titra et al., 2013). While preparing the TIRA setup for the retrieval-oriented Touche 2023 tasks (Titra et al., 2013), we realized that the new TIRA still had some shortcomings. There was no unified access to IR data, no separation between full-rank and re-rank approaches, no modularization of software components with caching, and typical IR workflows were only realizable inefficiently or via workarounds. For instance, full-rank retrieval in TIRA would have required any software to build an index from scratch, and different re-rank approaches would each have to re-create the baseline rankings. A re-ranking approach for the ClueWeb22-based Task 2 of Touche 2023 (Titra et al., 2013), for example, should have been able to use a ChatNoir baseline ranker (Titra et al., 2013) from within TIRA, but our pilot experiments showed that retrieving the top-1000 ChatNoir results for some set of 50 Touche topics (Titra et al., 2013; Titra et al., 2013; Titra et al., 2013) takes 54 to 134 minutes (ChatNoir requests can fail so that a client has to retry the requests). Blocking GPUs--often required by re-rankers--for such a long time would waste resources, and the baseline's top-1000 results should ideally be cached so that different re-rankers can directly use them. To solve all these problems, we substantially expanded TIRA and redeveloped major parts to integrate ir_datasets, ir_measures, and PyTerrier.

### Standardized Data Access with ir_datasets

The ir_datasets toolkit (Titra et al., 2013) provides a standard interface to access over 200 corpora and over 500 topic sets frequently used in IR experiments. The data is kept up-to-date (e.g., most TREC 2022 tracks are included) and processing documents or topics is possible via a single line of Python code. Thus, ir_datasets already serves as a common data layer in numerous IR frameworks and tools (e.g., Capreolus (Levin et al., 2017), Experimaestro-IR (Kumar et al., 2017), FlexNeuART (Kumar et al., 2018), OpenNIR (Kumar et al., 2018), Patapsco (Peters et al., 2019), PyTerrier (Peters et al., 2020)) and can be easily incorporated by most others (e.g., Anserini (Levin et al., 2019), PISA (Peters et al., 2020)). We integrate ir_datasets into TIRA via Docker images that can import complete corpora (for full-rank approaches) and that can create re-rankings for any given run file (for re-ranking approaches). To configure an IR experiment in TIRA, the experiment organizer only needs to provide an ir_datasets Docker image--standard images are available in TIREx but other images are also possible (e.g., for proprietary data). In the following, we further describe the new 'default_text' fields that we added to ir_datasets to enable re-using single-field retrieval software on corpora with multiple text fields, and we describe how the integration of ir_datasets into Docker images that run on-demand also ensures interchangeability and compatibility of retrieval components in retrieval pipelines.
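For illustration, the one-line data access mentioned above looks as follows when ir_datasets is used directly; the Antique corpus merely serves as an example here, and any other dataset identifier works the same way.

```python
# Minimal ir_datasets usage as described above (Antique is one of the smaller corpora used in TIREx).
import ir_datasets

dataset = ir_datasets.load('antique/test')

for query in dataset.queries_iter():
    print(query.query_id, query.text)         # topic identifier and text
    break

for doc in dataset.docs_iter():
    print(doc.doc_id, doc.text[:80])          # document identifier and (truncated) text
    break
```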
Re-Usable Retrieval Software via default_textWhile some corpora have a single text field for each document (e.g., the MS MARCO passage ranking corpus (Kumar et al., 2017; Kumar et al., 2017; Peters et al., 2020)), others provide rich structural information or metadata (e.g., the Touche corpora (Levin et al., 2017; Levin et al., 2019) with structured arguments or comparison aspects). Similarly, some retrieval tasks have a single text field per topic (e.g., Antique (Levin et al., 2019)), while others provide metadata for each topic and/or multiple fields for versions of a query (e.g., TREC Precision Medicine (Peters et al., 2020; Peters et al., 2020)). Corpora and retrieval tasks with fine-grained structure usually address the development of built-for-purpose retrieval systems that exploit the task-specific setup. For instance, an argument retrieval system submitted to Touche may specifically focus on the argumentative premises contained in a document, and an approach in the Precision Medicine track may use a query's structure to adjust the relevance criteria. Instead, corpora and tasks with single fields for document texts and queries often rather address "general search" scenarios (i.e., retrieval approaches that can be applied in a variety of contexts rather than targeting one specific case). To also enable the evaluation of such general purpose retrieval systems (that expect a single document text field and a single query field) on data with more fields, we created default_text fields for every dataset in ir_datasets. There often is a natural choice for a document's or a query's "default text" (e.g., we simply concatenated the two fields 'title' and 'abstract' of MEDLINE documents as the default document text and we often selected a TREC topic's title as the query text--after a manual review). Still, there also are more difficult cases for which we then carefully tried to select the most important content of the documents or topics--being open to corrective pull requests from the community. The new default_text fields now are part of the ir_datasets package and thus also applicable in TIREx to ensure reusability of single-field retrieval approaches on data originally only available with multiple fields. Ensuring Compatibility of Modularized Retrieval StagesTIREx aims to support experiments in which components for the individual stages of modularized retrieval pipelines can be easily replaced and compared without having to adapt the complete retrieval software each time. Therefore, TIRA distinguishes between two types of retrieval approaches: (1) full-rank approaches with a document corpus and topics as input, and (2) re-rankers with a re-rank file as input (basically, query--document pairs). From any retrieval software's output, a re-rank file can be automatically created and cached in TIREx by the ir_datasets integration. As the structure of these re-rank files always is the same, any re-ranker can easily run on the output of any previous retrieval approach. Note that some data in ir_datasets can not be downloaded from the Web and/or requires license agreements (e.g., the ClueWeb and GOV corpora). 
As we have valid license agreements on our local TIREx instance, we can directly mount such data into the ir_datasets container, but, by default, then only show effectiveness scores for a run and no retrieval results (i.e., participants do not get access to the corpus as their software is executed in a sandbox, and all outputs other than effectiveness scores are not shown on confidential datasets). Table 1 shows the data fields that the ir_datasets integration makes available. For full-rank software, the documents.jsonl.gz file for each document contains an identifier 'docno', the new default_text in the field 'text', and all original structured fields of a document in 'original_document'. The topics.jsonl.gz file for each topic contains an identifier 'qid', the new default_text in the field 'query', and all original structured fields of a topic in 'original_topic'. For re-rankers, the ir_datasets integration creates a file re-rank.jsonl.gz from the output of a previous retrieval stage (i.e., the run file), where each entry contains query-document pairs to be re-ranked along with their score and rank assigned by the previous stage. When relevance judgments exist, the ir_datasets integration can also make them available in a qrels.txt file so that the evaluator software specified by the experiment organizer can automatically evaluate submitted retrieval approaches.

### Sanity-checked Evaluation with ir_measures

TIRA can automatically evaluate run files (created by retrieval software submissions or uploads) via an ir_measures evaluator. First, the evaluator performs a sanity check to test whether a run file can be parsed and warns of potential errors (e.g., score ties, NaN scores, empty result sets, unknown queries, scores contradicting the ranks, etc.). Then, if relevance judgments have been provided, the evaluator derives all specified measures averaged over all queries and per query (suitable for significance tests).

### Reproducible IR Pipelines with TIRA

To improve the efficiency of common IR workflows in TIREx, we redeveloped and extended TIRA's ability to define and run modularized software even spanning multiple Docker images. All software in TIRA is immutable so that outputs of one software (e.g., an index) can be cached and reused by another software.

**Modularized Software with Multiple Components.** Retrieval software in TIRA can have multiple components that form a sequence similar to UNIX pipes or even a directed acyclic graph (DAG). Each component has a Docker image with a command to be executed and can have none, one, or many preceding components, respectively. TIRA passes the corresponding input and output directories to each component via three variables (cf. Table 2). The variable $inputDataset points to the directory that contains the actual input (e.g., re-rank.jsonl.gz for re-ranking software); the remaining variables point to the outputs of a component's predecessors and to the directory in which the component has to store its own output, for the respective version of the component. Immutability enables the implementation of efficient and reliable retrieval pipelines, since their output can both be cached and traced back and replicated by that same version of a component. TIRA disallows the deletion of components or outputs that have been used as inputs by some other component. When any component is requested to produce an output on some data, it is first checked whether that output is already cached, in which case executing the component is not necessary.
This way, retrieval pipelines in TIRA can efficiently re-use components and remain replicable, as the steps to produce a final run are fully tracked and versioned in the experiment repository. ### Local Pipeline Reproduction with PyTerrier When an experiment repository is exported and published by the organizers, by default, the test data is kept private but the run files are published via TIRA and software submissions are uploaded as Docker images to Docker Hub. All possible follow-up studies (e.g., a reproducibility study for a shared task) can be conducted independent of TIRA, as archived experiment repositories are fully self-contained. In the following, we briefly showcase some post-hoc experiments in PyTerrier:11 Footnote 11: Examples available at: github.com/tira-io/ir-experiment-platform/reproducibility Listing 1 shows how a full-rank approach from a TIRA experiment repository can be reproduced with a declarative PyTerrier pipeline. The approach is identified as the <software> submitted by team <user-name> to the shared task <task-name> and is applied to <dataset> (does not need to be the original task data). Internally, the required Docker images are downloaded and run in their required order to obtain the results. These results can then be re-ranked by any PyTerrier re-ranker, allowing for experiments to improve an original submission. Also re-rankers available in some TIRA experiment repository can be used in post-hoc PyTerrier experiments (cf. Listing 2 for an example re-ranking of BM25). Listing 3 shows how run files resulting from some (software) submission can be loaded into PyTerrier. The from submission method allows to access some submitted approach's output without having to re-run it (e.g., this also eases pooling for task organizers). The PyTerrier integration allows easy replicability experiments if the dataset is the same as in the original experiment, and reproducibility experiments if some other dataset is used for retrieval approaches. ## 4. Evaluation To demonstrate the scalability of TIREx, we report about an experiment with 50 retrieval approaches on 32 retrieval tasks based on 15 corpora (1.9 billion documents). The resulting leaderboards are public and new submissions can be made at any time.12 We also describe a repro_eval-based (Krishnan et al., 2018) case study on system preference reproducibility for different tasks. Footnote 12: github.com/tira-io/ir-experiment-platform/submission ### Scalable Retrieval Experiments Table 3 shows the 15 corpora currently available in TIREx. Each has been used for 1 to 4 shared retrieval tasks, consists of 1,400 to 1 billion documents, and comes with the relevance judgments created during the respective shared tasks. Table 4 overviews the 50 retrieval approaches that we imported into TIREx from 5 retrieval frameworks: BEIR (Krishnan et al., 2018), ChatNoir (Chatnoir et al., 2018), Pyserini (Pyserini, 2018) (our import was not ready during the experiments), PyGaggle (Pyserini, 2018), PyTerrier (Pyserini, 2018) (including two PyTerrier plugins for duoT5 (Pyserini, 2018) and ColBERT (Pyserini, 2018)). From BEIR, we use 17 dense retrieval approaches (e.g., ANCE (Pyserini, 2018), DPR (Krishnan et al., 2018), and TAS-B (Pyserini, 2018)) by using the different SBERT (Pyserini, 2018) models available in BEIR. ChatNoir is an Elasticsearch-based BM25F search engine hosting all three ClueWeb corpora. 
It can be accessed from within TIRA to allow retrieval approaches on huge corpora with a REST-API that is kept consistent to ensure reproducibility. From Pyserini, we use the 4 lexical models available trough the SimpleSearcher interface. From PyGaggle, we \begin{table} \begin{tabular}{l c c l} \hline \hline \multicolumn{3}{c}{**Corpus**} & \multicolumn{3}{c}{**Associated Retrieval Tasks**} \\ Name & Docs. & Size & Details & \# \\ \hline Args.me & 0.4 m & 8.3 GB & Touche 2020–2021 (Krishnan et al., 2018; Krishnan et al., 2018) & 2 \\ Antique & 0.4 m & 90.0 MB & QA Benchmark (Pyserini, 2018) & 1 \\ ClueWeb09 & 1.0 b & 4.0 TB & Web tracks 2009–2012 (Krishnan et al., 2018; Krishnan et al., 2018; Krishnan et al., 2018) & 4 \\ ClueWeb12 & 731.7 m & 4.5 TB & Web tracks (Pyserini, 2018; Krishnan et al., 2018), Touché (Krishnan et al., 2018) & 4 \\ ClueWeb22B & 200.0 m & 6.8 TB & Touché (Pyserini, 2018) & 1 \\ CORD-19 & 0.2 m & 7.1 GB & TREC-COVID (Pyserini, 2018) & 1 \\ Cranfield & 1,400 & 0.5 MB & Fully Judged Corpus (Krishnan et al., 2018; Krishnan et al., 2018) & 1 \\ Disks4+5 & 0.5 m & 602.5 GB & TREC-7-8 (Pyserini, 2018; Krishnan et al., 2018), Robust04 (Pyserini, 2018; Krishnan et al., 2018) & 3 \\ GOV & 1.2 m & 4.6 GB & Web tracks 2002–2004 (Krishnan et al., 2018; Krishnan et al., 2018; Krishnan et al., 2018) & 3 \\ GOV2 & 25.2 m & 87.1 GB & TREC TB 2004–2006 (Krishnan et al., 2018; Krishnan et al., 2018) & 3 \\ MEDLINE & 3.7 m & 5.1 GB & TREC Genomics (Krishnan et al., 2018; Krishnan et al., 2018) & 4 \\ MS MARCO & 8.8 m & 2.9 GB & Deep Learning (Krishnan et al., 2018; Krishnan et al., 2018) & 2 \\ NFCorpus & 3,633 & 30.0 MB & Medical LTR Benchmark (Krishnan et al., 2018) & 1 \\ Vaswani & 11,429 & 2.1 MB & Scientific Abstracts & \\ WaPo & 0.6 m & 1.6 GB & TREC Core 2018 & 1 \\ \hline \(\sum\) = 15 corpora & 1.9 b & 15.3 TB & & 32 \\ \hline \hline \end{tabular} \end{table} Table 3. The 15 corpora and the associated 32 retrieval tasks currently available in TIREx (all are open for submissions). use 8 variants of monoBERT (Yang et al., 2019) and monoT5 (Zhu et al., 2020) (including the state-of-the-art monoT5 with 3 billion parameters), and from PyTerrier, we use 20 lexical retrieval models (e.g., BM25, PL2, etc.). From the duoT5 plugin of PyTerrier, we use 3 variants based on different duoT5 models (including the state-of-the-art model with 3 billion parameters). For all retrieval approaches, we keep all parameters at their default values. Almost all approaches use the default_text-based fields that we added to ir_datasets, except for ChatNoit that is a full-rank software for the CluWeb corpora and uses different fields (title, body, etc.). The lexical approaches in PyTerrier and the dense approaches in BEIR can be configured as full-rank software (i.e., a first component building an index and a second component retrieving from the index) or re-rank software--but are just counted as one approach in Table 4. All duoT5 and PyGaggle approaches only work as re-rankers. For ColBERT, we only use the re-rank variant, as ColBERT indices become very large. In TIREx, all of these variants are available. To increase result comparability, however, our analysis fixes the first stage markers to ChatNoir for the CluWeb corpora and PyTerrier BM25 on all other corpora. Their respective results are then handed to the total of 50 available re-ranking approaches mentioned above. Altogether, 50 approaches are executed on all 32 tasks listed in Table 3. 
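In PyTerrier terms, such a fixed first stage followed by a re-ranker corresponds to a declarative pipeline along the lines of the following sketch, shown on the small Vaswani corpus. The pyterrier_t5 plugin import and the choice of monoT5 as the re-ranker are assumptions for illustration; on the ClueWeb corpora the first stage would be ChatNoir rather than a local BM25 index.

```python
import os
import pyterrier as pt

if not pt.started():
    pt.init()

# Small, fully public corpus for illustration; in TIREx the data is mounted instead.
dataset = pt.get_dataset("irds:vaswani")
index_ref = pt.IterDictIndexer(os.path.abspath("./vaswani-index")).index(dataset.get_corpus_iter())

# First stage: lexical BM25 ranking, cut to the top-100 results per query.
bm25 = pt.BatchRetrieve(index_ref, wmodel="BM25") % 100

# Second stage: a cross-encoder re-ranker that consumes the document text of the
# first-stage results (MonoT5ReRanker from the pyterrier_t5 plugin is assumed here).
from pyterrier_t5 import MonoT5ReRanker
mono_t5 = bm25 >> pt.text.get_text(dataset, "text") >> MonoT5ReRanker()

pt.Experiment(
    [bm25, mono_t5],
    dataset.get_topics(),
    dataset.get_qrels(),
    eval_metrics=["ndcg_cut_10"],
    names=["BM25", "BM25 >> monoT5"],
)
```

The reported ndcg_cut_10 corresponds to the nDCG@10 scores discussed below; swapping the re-ranker in the second stage is a one-line change, which is how the different re-ranking approaches can share the same cached first-stage results.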
We executed the lexical approaches using 1 CPU and 10 GB RAM, while all other approaches had additional access to a GeForce GTX 1080 GPU with 8 GB RAM. Some models fail on this GPU as 8 GB of RAM do not suffice: ColBERT and two SBERT models failed on a few tasks, while the 3 billion parameter monoT5 / duoT5 failed on all tasks. To handle these cases, we added two runners with access to an A100 GPU with 40 GB RAM to TIRA, which was sufficient. TIRA manages metadata about the resources used to produce a run, making hardware differences between evaluations transparent. Table 5 shows the aggregated evaluation results on 31 tasks (leaving out ClueWeb22 as there are no judgments yet). We report the effectiveness as nDCG@10 (macro-averaged in case a corpus is associated with multiple tasks) for BM25, ColBERT, TAS-B, all three duoT5 variants, and monoT5 (in its default configuration with its default model), as well as the best, median, and worst approaches from the groups of 20 lexical, 17 bi-encoder, and 8 PyGaggle approaches. All deep learning models were trained on MS MARCO and thus substantially improve upon the lexical models on MS MARCO. However, on other corpora the deep learning models work in a zero-shot manner, so that sometimes a lexical approach achieves the highest effectiveness (Args.me, ClueWeb09, and MEDLINE). Our results further show that BM25 is not always the best lexical ranker (e.g., on Args.me: 0.43 vs. 0.57). The effectiveness gap between the best and the worst model of a group can be substantial on some corpora (e.g., lexical models on Args.me: 0.14 vs. 0.57), while being negligible on others (e.g., lexical models on NFCorpus). The leaderboards of TIREx as aggregated in Table 5 make it easy to select competitive baselines for very different tasks, often much easier than before.

### Case Study: Reproducibility Analysis

As an example of a post-hoc analysis enabled by TIREx, we use repro_eval to analyze the degree to which system preferences from the TREC Deep Learning 2019 task can be reproduced on other tasks. For each preference between approaches on TREC Deep Learning 2019 (e.g., monoT5 with an nDCG@10 of 0.71 compared to BM25's 0.48 induces a clear system preference), we set the approach with the lower effectiveness on TREC Deep Learning 2019 as the "baseline" in repro_eval and the other approach as the "advanced system". We study the reproducibility of the preferences on two dimensions (Han et al., 2019): (1) the effect ratio of the reproduction, and (2) the delta relative improvement of the reproduction. The effect ratio measures the degree to which the advanced system is still better than the baseline on the different task (1 indicates perfect reproducibility, values between 0 and 1 indicate reproducibility with diminished improvements on the different task, and 0 indicates failed reproducibility), while the delta relative improvement measures the relative effectiveness difference of the advanced system to the baseline (0 indicates perfect reproducibility, values between -1 and 0 indicate an increased relative improvement of the advanced system, values between 0 and 1 indicate a smaller relative improvement, and 1 indicates failed reproducibility). Table 6 shows the results of the preference reproducibility analysis. We report the ratio of system preferences with a successful reproduction (i.e., effect ratio > 0) and the 25%, 50%, and 75% quantiles for the effect ratio and the delta relative improvement.
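The two reproducibility measures can be restated as a small sketch that follows the verbal definitions above. The actual analysis uses repro_eval; the formulas below and the scores for the "other task" in the example are simplified assumptions for illustration only.

```python
def effect_ratio(base_orig, adv_orig, base_new, adv_new):
    """Fraction of the original improvement that is still observed on the new task
    (1 = fully reproduced, 0 = no improvement left, < 0 = preference reversed)."""
    return (adv_new - base_new) / (adv_orig - base_orig)


def delta_relative_improvement(base_orig, adv_orig, base_new, adv_new):
    """Difference between the relative improvement on the original task and on the
    new task (0 = identical relative improvement, larger values = weaker reproduction)."""
    return (adv_orig - base_orig) / base_orig - (adv_new - base_new) / base_new


# Preference induced on TREC Deep Learning 2019: monoT5 (nDCG@10 = 0.71) vs. BM25 (0.48).
# The scores on the "other" task below are made up for illustration only.
print(effect_ratio(0.48, 0.71, 0.44, 0.60))                # ~0.70: diminished but reproduced
print(delta_relative_improvement(0.48, 0.71, 0.44, 0.60))  # ~0.12: smaller relative improvement
```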
We order the tasks by the percentage of successfully reproduced preferences and show the top-5 tasks and every fifth lower ranked task. Not that surprising, the reproducibility on the very similar TREC Deep Learning 2020 is very good (88.1%) but declines fast for other tasks (e.g., only 57.8% for the Web track 2003 on rank 15). Analyzing the quantiles yields similar observations (e.g., 50% of the system preferences have an almost perfect effect ratio of 0.90 or higher for TREC Deep Learning 2020, while the Web track 2003 on rank 15 has a median effect ratio of 0.04). ## 5. Discussion Potential Impact of TIRExWe believe that TIREx can have a substantial conceptual impact as we see no alternative to blinded retrieval evaluations in the future (given the practice of training LLMs on basically all available ground truth for IR and NLP tasks (Krizhevsky et al., 2017)). Additionally, the platform eases the organization of reproducible IR experiments with software submissions. Shared task organizers can simply provide the well-documented open-source baselines from TIRA as starting points for the participants and can also use the baselines to ensure some more diverse judgment pools, especially for tasks that attract few participants. For shared tasks that \begin{table} \begin{tabular}{l l l c c} \hline \hline **Framework** & **Type** & **Description** & \multicolumn{2}{c}{**Approaches**} \\ \cline{4-5} & & & Full-rank & Re-rank \\ \hline BEIR (Srivastava et al., 2017) & Bi-encoder & Dense retrieval & 17 & 17 \\ ChatNoir (Chen et al., 2019) & BM25F & Elasticsearch cluster & 1 & 0 \\ ColBERT(Pert, 2019) & Late interaction & PyTerrier plugin & 0 & 1 \\ DuoT56aPT (Zhu et al., 2020) & Cross-encoder & Pairwise transformer & 0 & 3 \\ PyGaggle (Srivastava et al., 2017) & Cross-encoder & Pointwise transformer & 0 & 8 \\ PyTerrier (Srivastava et al., 2017) & Lexical & Traditional baselines & 20 & 20 \\ \hline \hline \end{tabular} * Our import of Pyserini was not ready during the experiments but is now available. \end{table} Table 4. Overview of the retrieval frameworks and the 50 retrieval approaches imported into TIREx. run multiple years on different data, the organizers can automatically re-run all approaches submitted to previous editions to track progress. TIREx combines leaderboards with immutable software, promoting provenance of results, and enabling researchers and reviewers to identify and locally reproduce good baselines. The submission platform TIRA proved robust after its complete redevelopment (Tikora et al., 2018): two NLP tasks used TIRA at SemEval 2023 (Tikora et al., 2018; Tikora et al., 2018) for which 71 of the 171 registered teams created 647 runs with software submissions. Our initial retrieval experiments with TIREx produced another 1,600 runs on standard corpora in less than a week, showing the platform to be robust and to have the potential for scaling up. When adopted by shared tasks and in individual IR experiments, TIREx can become a (federated) hub for IR resources and serve as a reference for reviewers. If a sufficient number of retrieval approaches, corpora, and supplementary data (e.g., manual query reformulations) are available through TIREx, integrating new resources gives direct access to an entire ecosystem, furthering the nascent standardization of IR experiments. 
Future Extensions of TIRExInteresting directions for future development besides including further IR frameworks and libraries are integrations of TIREx with the IR Anthology (Tikora et al., 2018) and with Diff-IR (Tikora et al., 2018). An integration with the IR Anthology would enable links between entries in the TIREx leaderboards and the corresponding publications in the IR Anthology to provide more detailed information on an approach but also to "extend" a publication by adding results on different corpora than originally used and putting an approach in a broader context with other approaches run on the same data. An integration with DiffIR would enable the rendering of runs as search engine result pages to easily contrast the quantitative evaluations already possible via the integrated ir_measures with more qualitative evaluations of ranking differences or even (basic) user studies. ## 6. Conclusion With TIRExThe IR Experiment Platform--we aim to substantially ease conducting (blinded) IR experiments and organizing "always-on" reproducible shared tasks on the basis of software submissions. TIREx integrates ir_datasets, ir_measures, and PyTerrier with TIRA. Retrieval workflows can be executed on-demand via cloud-native orchestration, reducing the effort for reproducing IR experiments since software submitted to TIREx can be executed in post-hoc experiments. The platform has no lock-in-effect, as archived experiments are fully self-contained, work stand-alone, and are easily exported. By keeping test data private, TIREx promotes further standardization and provenance of IR experiments following the example of, e.g., medicine, where blinded experiments are the norm. TIREx is open to the IR community and ready to include more corpora, shared tasks, and retrieval approaches. ###### Acknowledgements. This work has been partially supported by the OpenWebSearch.eu project (funded by the EU; GA 101070014). \begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \hline \hline **Task** & **Rank** & **Succ.** & \multicolumn{3}{c}{**Effect Ratio**} & \multicolumn{3}{c}{**Delta Rel. Impr.**} \\ \cline{3-13} & & & 25\% & 50\% & 75\% & 25\% & 50\% & 75\% \\ \hline TREC DL 2020 & 1 & 88.1 & 0.68 & 0.90 & 1.11 & -0.03 & 0.02 & 0.08 \\ Touche 2020 (Task 2) & 2 & 77.1 & 0.12 & 0.38 & 0.73 & -0.09 & 0.04 & 0.17 \\ Web track 2004 & 3 & 75.5 & 0.01 & 0.29 & 0.89 & -0.07 & 0.10 & 0.31 \\ TREC-7 & 4 & 73.9 & -0.03 & 0.31 & 1.11 & -0.02 & 0.12 & 0.34 \\ Core 2018 & 5 & 70.2 & -0.05 & 0.24 & 0.90 & -0.03 & 0.13 & 0.35 \\ NFCorpus & 10 & 66.4 & -0.06 & 0.06 & 0.32 & 0.02 & 0.23 & 0.42 \\ Web track 2003 & 15 & 57.8 & -0.14 & 0.04 & 0.23 & -0.08 & 0.15 & 0.36 \\ Web track 2009 & 20 & 44.1 & -0.40 & -0.04 & 0.26 & 0.00 & 0.30 & 0.52 \\ Web track 2010 & 25 & 36.3 & -0.49 & -0.14 & 0.18 & 0.03 & 0.32 & 0.59 \\ Web track 2013 & 30 & 31.0 & -0.43 & -0.21 & 0.13 & 0.06 & 0.30 & 0.63 \\ \hline \hline \end{tabular} \end{table} Table 6. Reproducibility of TREC DL 2019 system preferences on other tasks. Success rate in percent (effect ratio > 0; tasks ordered by success rate) and the 25%, 50%, and 75% quantiles for the effect ratio and delta relative improvement. 
\begin{table} \begin{tabular}{l c c c c c c c c c c c c c c c} \hline \hline **Corpus** & **ChatNoir** & \multicolumn{3}{c}{**Lexical**} & \multicolumn{3}{c}{**Late Int.**} & \multicolumn{3}{c}{**Bi-Encoder**} & \multicolumn{3}{c}{**duoT5**} & \multicolumn{3}{c}{**PyGaggle**} \\ \cline{2-13} & & BM25 & Best & Median & Worst & ColBERT & TAS-B & Best & Median & Worst & Base & Large & 3b & MonoT5 & Best & Median & Worst \\ \hline Antique & \(-\) & 0.51 & 0.53 & 0.51 & 0.36 & 0.47 & 0.40 & 0.49 & 0.44 & 0.30 & 0.54 & 0.46 & 0.52 & 0.51 & **0.54** & 0.51 & 0.45 \\ Argg-me & \(-\) & 0.43 & **0.57** & 0.43 & 0.14 & 0.26 & 0.17 & 0.33 & 0.24 & 0.13 & 0.33 & 0.29 & 0.29 & 0.30 & 0.39 & 0.34 & 0.27 \\ CORD-19 & \(-\) & 0.28 & 0.64 & 0.55 & 0.21 & 0.58 & 0.50 & **0.70** & 0.60 & 0.50 & 0.66 & 0.61 & 0.66 & 0.69 & 0.63 & 0.55 \\ CiteWeb09 & 0.16 & 0.18 & **0.24** & 0.18 & 0.12 & 0.17 & 0.16 & 0.20 & 0.17 & 0.13 & 0.15 & 0.18 & 0.17 & 0.19 & 0.17 & 0.12 \\ CiteWeb12 & **0.36** & 0.24 & 0.27 & 0.25 & 0.14 & 0.23 & 0.25 & 0.28 & 0.26 & 0.23 & 0.33 & 0.30 & 0.35 & 0.26 & 0.28 & 0.26 & 0.23 \\ Cranfield & \(-\) & 0.01 & 0.01 & 0.01 & 0.01 & 0.01 & 0.01 & 0.01 & 0.00 & 0.01 & 0.01 & 0.01 & 0.01 & 0.01 & 0.01 & 0.01 \\ Disks+5 & \(-\) & 0.44 & 0.46 & 0.44 & 0.37 & 0.46 & 0.39 & 0.49 & 0.43 & 0.37 & 0.45 & 0.38 & 0.44 & 0.53 & **0.57** & 0.53 & 0.43 \\ GOV & \(-\) & 0.22 & 0.24 & 0.22 & 0.15 & 0.23 & 0.22 & 0.27 & 0.24 & 0.21 & 0.19 & 0.15 & 0.22 & 0.26 & **0.29** & 0.26 & 0.22 \\ GOV2 & \(-\) & 0.47 & 0.49 & 0.44 & 0.25 & 0.45 & 0.34 & 0.46 & 0.42 & 0.34 & 0.47 & 0.43 & 0.48 & 0.48 & **0.51** & 0.48 & 0.41 \\ MS MARCO & \(-\) & 0.49 & 0.50 & 0.48 & 0.37 & 0.69 & 0.64 & 0.71 & 0.66 & 0.64 & 0.57 & 0.63 & 0.71 & **0.74** & 0.71 & 0.63 \\ MEDLINEINE & \(-\) & 0.34 & **0.42** & 0.27 & 0.18 & 0.25 & 0.14 & 0.26 & 0.21 & 0.14 & 0.34 & 0.32 & 0.36 & 0.25 & 0.35 & 0.27 & 0.24 \\ NFCorpus & \(-\) & 0.27 & 0.28 & 0.27 & 0.26 & 0.27 & 0.25 & 0.29 & 0.26 & 0.24 & 0.28 & 0.24 & 0.29 & 0.30 & **0.31** & 0.30 & 0.28 \\ Vaswani & \(-\) & 0.45 & 0.46 & 0.45 & 0.30 & 0.43 & 0.34 & 0.44 & 0.38 & 0.22 & 0.41 & 0.34 & 0.46 & 0.31 & **0.48** & 0.41 & 0.08 \\ WaPo & \(-\) & 0.38 & 0.39 & 0.37 & 0.24 & 0.43 & 0.34 & 0.43 & 0.37 & 0.33 & 0.40 & 0.28 & 0.40 & 0.45 & **0.49** & 0.45 & 0.40 \\ \hline Avg. &
2304.11381
Incomplete Multimodal Learning for Remote Sensing Data Fusion
The mechanism of connecting multimodal signals through self-attention operation is a key factor in the success of multimodal Transformer networks in remote sensing data fusion tasks. However, traditional approaches assume access to all modalities during both training and inference, which can lead to severe degradation when dealing with modal-incomplete inputs in downstream applications. To address this limitation, our proposed approach introduces a novel model for incomplete multimodal learning in the context of remote sensing data fusion. This approach can be used in both supervised and self-supervised pretraining paradigms and leverages the additional learned fusion tokens in combination with Bi-LSTM attention and masked self-attention mechanisms to collect multimodal signals. The proposed approach employs reconstruction and contrastive loss to facilitate fusion in pre-training while allowing for random modality combinations as inputs in network training. Our approach delivers state-of-the-art performance on two multimodal datasets for tasks such as building instance / semantic segmentation and land-cover mapping tasks when dealing with incomplete inputs during inference.
Yuxing Chen, Maofan Zhao, Lorenzo Bruzzone
2023-04-22T12:16:52Z
http://arxiv.org/abs/2304.11381v1
# Incomplete Multimodal Learning for Remote Sensing Data Fusion ###### Abstract The mechanism of connecting multimodal signals through self-attention operation is a key factor in the success of multimodal Transformer networks in remote sensing data fusion tasks. However, traditional approaches assume access to all modalities during both training and inference, which can lead to severe degradation when dealing with modal-incomplete inputs in downstream applications. To address this limitation, our proposed approach introduces a novel model for incomplete multimodal learning in the context of remote sensing data fusion. This approach can be used in both supervised and self-supervised pretraining paradigms and leverages the additional learned fusion tokens in combination with Bi-LSTM attention and masked self-attention mechanisms to collect multimodal signals. The proposed approach employs reconstruction and contrastive loss to facilitate fusion in pre-training while allowing for random modality combinations as inputs in network training. Our approach delivers state-of-the-art performance on two multimodal datasets for tasks such as building instance / semantic segmentation and land-cover mapping tasks when dealing with incomplete inputs during inference. Data Fusion, Multimodal, Transformer, Remote Sensing. ## I Introduction Remote sensing becomes more and more important in various Earth Observation (EO) tasks due to its superior ability in observing our planet. With the increasing availability of multimodal RS data, researchers now can develop more diverse downstream applications. Despite the abundance of multimodal remote sensing data, each modality captures only certain specific properties and, therefore, cannot thoroughly describe the observed scenes, which poses a great constraint on subsequent applications. Multimodal RS data fusion proves to be a feasible solution to overcome the inadequacy posed by unimodal data [1]. Different modal data provide complementary information due to their varying perspectives, allowing the researchers to conduct extensive research on combining useful information from different modalities to better achieve specific application goals. For instance, synthetic aperture radar (SAR) provides physical structure information, while LiDAR collects depth information [2]. Meanwhile, multispectral (MS) and hyperspectral (HS) data measure radiation reflectance across different wavelengths of the electromagnetic spectrum. By integrating the complementary information from multimodal data, researchers can make a more robust and reliable model in various tasks, such as change detection [3] and land-cover mapping [4]. To integrate the complementary information provided by these modalities, such as MS, HS and SAR, as well as the remote sensing products (e.g., Land Cover Land Use Maps), traditional methods have been intensively studied by designing handcrafted features based on domain-specific knowledge and exploiting rough fusion strategies, which inevitably impairs the fusion performance, especially for heterogeneous data. For example, Deus [5] tried to design some features manually (i.e. NDVI, RFDI, texture, etc.) based on ALOS PALSAR and Landsat TM images to improve the performance of support vector machines on land cover classification and forest mapping. Thanks to the growth of artificial intelligence, deep learning shows great potential in modelling a complex relationship between different modality data and is widely used in remote sensing data fusion tasks. 
However, most of them are based on the supervised learning paradigm. Supervised approaches are task-specific and cannot be generalized to other tasks. However, training on a large amount of multimodal data is cost expensive and collecting adequate multimodal data for each task is challenging for end-users. Thus, the research community seeks a few finetuning steps on a pre-trained model, which can help downstream tasks. Pretraining without supervision has gained a lot of attention as it is more general and can lead to improvements. Self-supervised learning for SAR-optical feature fusion proposed by Chen et al. [3] is an example of such an approach. Nevertheless, both supervised and unsupervised methods assume that all modalities are available during training and inference, which can be a limiting factor in practical applications, as data collection processes may miss modalities. In such cases, existing multimodal data fusion methods may fail to deal with incomplete modalities, leading to severe degradation in downstream tasks. As a solution, a robust multimodal method is needed for flexible and practical RS applications, with or without missing modalities. The algorithm used in this situation is called incomplete multimodal learning, which aims at learning methods that are robust with any subset of available modalities at inference. A simple strategy for incomplete multimodal learning is to synthesize the missing modalities using generative models. For instance, Generative Adversarial Networks (GANs) can effectively overcome the problems arising from missing or incomplete modalities in building footprint segmentation, as proposed by Bischke et al [6]. Another set of methods explores knowledge distillation from complete to incomplete modalities. In this approach, Kampffmeyer et al. [7] proposed to use an additional network, the hallucination network, for when data modalities are missing during the testing of Urban Land Cover Classification tasks. The network takes a modality as input that is assumed to be available during both training and testing, trying to learn a mapping function from this modality to the missing modality. Although promising results are obtained, such methods have to train and deploy a specific model for each subset of missing modalities, which is complicated and burdensome in downstream tasks. Meanwhile, all these methods require complete modalities during the training process. Recent incomplete multimodal learning methods focus on learning a unified model, instead of a bunch of distilled networks, for downstream tasks. In this context, the modality-invariant fusion embedding across different modalities may contribute to more robust performance, especially when one or more modalities are missing. Transformer is widely used in this task for its flexibility and multimodality modelling abilities. Current works exploited Transformers for audio and video data fusion using contrastive learning [8, 9]. However, the dedicated Transformer for incomplete multimodal learning of remote sensing tasks has not been carefully tapped yet and the existing multimodal RS data fusion methods cannot allow missing data in the training process. This paper proposes to exploit Transformer to build a unified model for incomplete multimodal learning of remote sensing tasks, which can be used in both the supervised paradigm and self-supervised pre-training paradigm. This is achieved by using additional learned fusion tokens for multimodal signal collection in the network. 
However, the additional learned fusion token alone cannot query enough information from the other modality tokens. In this context, we use a Bi-LSTM attention block to further distil the information of the different modalities into the fusion tokens. With this design, the proposed approach can leverage MultiMAE and a contrastive loss to build fusion across the different modalities in pre-training, and it also makes it possible to use a random modality combination training strategy when finetuning on downstream tasks. Together, these components allow both training and inference of the model with incomplete modality inputs. This paper presents three contributions: (1) we propose to use Bi-LSTM and masked self-attention in a multimodal Transformer to help build additional fusion tokens across different modalities, which enables both contrastive and generative self-supervised pre-training for incomplete multimodal inputs; (2) based on the proposed strategies, we use the random modality combination training strategy in downstream tasks, which ensures task performance with incomplete inputs at inference; (3) we benchmark our approach on two datasets, DFC2023 track2 and a newly created quadruplet dataset, which shows that the proposed approach can be pre-trained on a large-scale remote sensing multimodal dataset in a self-supervised manner. The proposed approach achieves state-of-the-art performance when compared with the vanilla multimodal Transformer on RS data. The rest of this paper is organized as follows. Section II presents the related works on multimodal RS data fusion, the Masked Autoencoder and multimodal Transformers. Section III introduces the proposed approach by describing the network architecture, Bi-LSTM attention, masked self-attention, reconstruction pretraining and contrastive pretraining, as well as the random modality combination training strategy. The descriptions of the datasets, network setup, experimental settings and downstream tasks are given in Section IV. Experimental results obtained on building instance/semantic segmentation and LULC mapping tasks are also illustrated in Section IV. Finally, Section V concludes the paper.

## II Related Works

### _Multimodal RS Data Fusion_

In recent years, deep learning methods have been widely adopted in multimodal RS data fusion, including LiDAR-optical [10, 11, 2, 2], SAR-optical [13, 14, 15, 16, 14, 16], and image-map fusion [17, 18]. In the case of LiDAR-optical data fusion, Paisitkriangkrai et al. [19] proposed fusing optical and Lidar data through concatenating deep and expert features as inputs to random forests. Several advanced techniques have subsequently been developed with the aim of enhancing the feature extraction ability. Audebert et al. [20] suggest the use of deep fully convolutional networks to investigate the early and late fusion of LiDAR and multispectral data. Similarly, Chen et al. [21] employ a two-branch network to separately extract spectral-spatial-elevation features, followed by utilizing a fully connected layer to integrate these heterogeneous features for the final classification. Other novel fusion strategies have also been introduced, such as a cross-attention module [22], a reconstruction-based network [23], and a graph fusion network [24]. Additionally, recent studies by Roy et al. [25] propose a multimodal Transformer network to fuse Lidar and hyperspectral images for classification. Similar to Lidar-optical fusion, many researchers have also developed DSM-optical fusion methods, where the DSM is acquired from stereo-optical images.
Meanwhile, SAR-optical data fusion has also received a lot of attention and widely adopts deep learning methods. For example, Kussul _et al._ [16] first explore deep CNNs in SAR-optical fusion for LULC classification and demonstrate their superiority with respect to traditional MLP classifiers. Recent studies by Dino _et al._ [15] propose a deep learning architecture, namely TWINNS, to fuse Sentinel-1 and Sentinel-2 time series data in land-cover mapping. Similarly, Adrian _et al._ [14] use a 3-dimensional deep learning network to fuse multi-temporal Sentinel-1 and Sentinel-2 data for mapping ten different crop types, as well as water, soil and urban areas. Map data, such as topography, land use, road and census data, may be combined with remotely sensed data to improve the accuracy of image classification, object recognition, and change detection. For example, Sun et al. [26] provide a method for fusing GIS and RS data using a neural network with an unchanging data memory structure that is driven by the user's aim. Xu et al. [18] propose to conduct road extraction based on satellite images and partial road maps using a two-branch partial-to-complete network.

### _Masked Autoencoder_

The MAE (masked autoencoder) [27] is a novel self-supervised learning algorithm that demonstrates state-of-the-art performance on various vision benchmarks. Instead of relying on a contrastive objective, the MAE utilizes a pretext task that involves reconstructing masked patches of the input. The MAE network follows an asymmetric encoding and decoding scheme. Suppose the input image is a tensor of dimensions \(I\in R^{C\times H\times W}\), where \(H,W\) are the height and width of the image, respectively, and \(C\) is the number of channels. The image is initially divided into non-overlapping patches \(S\in R^{L\times P^{2}C}\), where \(P\) is the height and width of the patch, and \(L=(H/P)\times(W/P)\) is the number of patches. These patches are then transformed into a sequence of embedded patch tokens \(S^{\prime}\in R^{L\times D}\), using a patch embedding function \(f_{p}:R^{P^{2}C}\to R^{D}\). A fraction \(p_{m}\) of the sequence tokens is randomly masked, and the remaining visible tokens are fed into an encoder, which is a Vision Transformer (ViT). Due to the lack of positional information, additional positional embeddings are added to the patch embeddings to capture the spatial location of each patch in the image. The decoder is composed of multiple Transformer blocks that operate on all tokens, where the masked tokens are replaced by initialized learnable tokens. The decoder produces a reconstructed image, which is compared to the original image using a mean-squared error (MSE) loss computed only on the masked patches. Positional encodings allow the Transformer to represent positional information. In MAE, the positional encoding is: \[\mathrm{Encode}(k,2i)=\sin\frac{k}{\Omega^{\frac{2i}{d}}},\quad\mathrm{Encode}(k,2i+1)=\cos\frac{k}{\Omega^{\frac{2i}{d}}} \tag{1}\] Here, \(k\) is the position, \(i\) is the index of the feature dimension in the encoding, \(d\) is the number of possible positions, and \(\Omega\) is a large constant. In MAE, the position is defined as the index of the patch along the \(x\) or \(y\) axis. Therefore, \(k\) ranges from \(0\) to \(H/P\) or \(W/P\). This encoding provides two unique dimensions, one for the \(x\) and one for the \(y\) coordinates, which are concatenated for the final encoding representation. The Multimodal Masked Autoencoder (MultiMAE) [28] utilizes a standard single-modal ViT as the encoder.
The encoder is equipped with 2-D sine-cosine positional embeddings following the linear projection. MultiMAE does not make use of modality-specific embeddings. Because the bias term in each linear projection is sufficient. MultiMAE employs a separate decoder for each task that is responsible for reconstructing the masked-out tokens from the visible tokens. The input to each decoder is a full set of visible tokens from all different modalities, including the learnable modality embeddings with 2-D sine-cosine positional embeddings. The input is then followed by MLPs and Transformer blocks. Only the masked tokens are considered in the loss calculation. The mask sampling strategy employed in MultiMAE plays a crucial role in achieving predictive coding across different modalities. This sampling strategy ensures that most modalities are represented to similar degrees. MultiMAE adopts a symmetric Dirichlet distribution to select the proportion of tokens per modality \(\lambda\) (\(\lambda_{i}\)\(Dir(\alpha)\)), where \(\sum\lambda_{i}=1,\lambda>0\). The concentration parameter \(\alpha>0\) controls the sampling. For simplicity and better representation parameter \(\alpha=1\) in MultiMAE. ### _Multimodal Transformer_ The self-attention blocks of Transformers build a natural bridge among multimodal signals in a unified architecture. Different from the Convolutional Neural Networks (CNNs) using different network for different modalities, the Transformer only use the same main architecture for all modalities with a modal-specific projector. Transformers integrate input tokens from all modalities into a single representation, while CNNs fuse features of each modality through concatenation or tensor fusion. However, such explicit integration necessitates the presence of all modalities during training, which undermines the pipeline in case of a missing modality. In contrast, Transformers use self-attention to embed a holistic multimodal representation and handle the absence of modalities by applying a mask on the attention matrix. Thus, multimodal Transformers are more adaptable to deal with modal-incomplete inputs. In addition, an easy-to-train model is vital for multimodal learning. The training load of a conventional multimodal backbone grows as the number of modalities increases since the backbone usually consists of modality-specific sub-models that need to be trained independently for each modality. Instead, Transformers process all modalities altogether in a single model, significantly reducing the training load. However, Transformer models exhibit significant deterioration in performance with model-incomplete inputs, especially in the context of multimodal inference where Transformer models tend to overfit to dominate modalities. To overcome this challenge, MBT [8] build a multimodal architecture for video and audio, where it uses an additional fusion token to force information among different modalities to pass through by using cross-attention. However, the representation of each modality can also access each other in MBT, which means they are not independent. 
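As a concrete illustration of the MAE/MultiMAE scheme described above, the following sketch implements the 2-D sine-cosine positional embedding of Eq. (1) and the Dirichlet-based sampling of visible tokens per modality. The token counts, the embedding dimension, and the NumPy implementation are illustrative assumptions.

```python
import numpy as np


def sincos_2d_positional_embedding(h, w, dim, omega=10000.0):
    """2-D sine-cosine positional embedding: half of the channels encode the x patch
    index, the other half the y patch index, each with the sin/cos scheme of Eq. (1)."""
    def encode_1d(pos, d):
        i = np.arange(d // 2)
        freqs = omega ** (2 * i / d)              # shape: (d // 2,)
        angles = pos[:, None] / freqs[None, :]
        return np.concatenate([np.sin(angles), np.cos(angles)], axis=1)

    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    return np.concatenate(
        [encode_1d(ys.reshape(-1), dim // 2), encode_1d(xs.reshape(-1), dim // 2)], axis=1
    )  # shape: (h * w, dim)


def sample_visible_tokens(tokens_per_modality, num_visible, alpha=1.0, rng=None):
    """MultiMAE-style mask sampling: the share of visible tokens per modality is drawn
    from a symmetric Dirichlet distribution, then token indices are drawn uniformly."""
    rng = rng or np.random.default_rng()
    shares = rng.dirichlet(alpha * np.ones(len(tokens_per_modality)))
    visible = {}
    for (modality, n_tokens), share in zip(tokens_per_modality.items(), shares):
        k = min(int(round(share * num_visible)), n_tokens)
        visible[modality] = rng.choice(n_tokens, size=k, replace=False)
    return visible


pos = sincos_2d_positional_embedding(h=16, w=16, dim=768)   # 16x16 patches of a 256x256 image
vis = sample_visible_tokens({"optical": 256, "sar": 256, "dem": 256, "map": 256}, num_visible=256)
print(pos.shape, {m: len(idx) for m, idx in vis.items()})
```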
Furthermore, Zorro [9] employs a modality-aware masking mechanism in all attention operations to isolate the allocation of latent representations of individual modalities, which leads to a resultant representation that is partially unimodal (i.e., part of the representation attends to a single modality) and partially multimodal (i.e., part of the representation attends to all modalities), thereby allowing for the use of contrastive learning. ## III Methodology In this Section, we describe the incomplete multi-modal fusion architecture with additional learned fusion tokens, Bi-LSTM and masked Self-Attention in an optical-SAR-DEM-MAP data fusion example. Then, we introduce the details of the pretraining using MultiMAE and contrastive loss, and the details of training using random modality combination on downstream tasks (as shown in Fig. 1. ### _Network Architecture_ The main architecture of the proposed approach is a ViT with modality-specific patch projection layers for each input modality. Specifically, patches of each modality are projected to tokens using a different linear projection for each modality. In this work, we use a 2D convolution to extract \(16\times 16\) patches and project them to the input dimension \(D\). Next, position embeddings are added to the projected vectors so that the model is able to localize and distinguish each embedded patch. In addition to the multimodal input data, the learnable fusion tokens are introduced as one of the inputs. Different to the bottleneck fusion tokens in MBT and Zrror, we use the spatial tokens for dense downstream tasks, which have the same number of tokens of full input patches. In order to get local features, we add 2D sine-cosine positional embeddings on the spatial fusion tokens and use Bi-LSTM to aggregate all modality information to fusion tokens. Then the projected patches together with the learnable tokens are concatenated into a sequence of tokens and given as input to the same Transformer encoder with masked attention. Since all our input data have a 2D structure, we add 2D sine-cosine positional embeddings after linear projection. Following the setting of MultiMAE, we don't consider any modality-specific positional embedding. ### _Bi-LSTM Attention_ We use a Bi-LSTM with an attention mechanism to integrate different modality input embeddings into learned fusion tokens for improving the feature learning ability. Consider one direction of the LSTM network: let \(\overrightarrow{h_{i}}\) be the output of the LSTM for the multimodal inputs (Optical, SAR, DEM, MAP) and the learned fusion tokens. Bi-LSTM performs forward training and backward training separately for each training sequence and then combines the results of forward training and backward training together as the output of each modality, which is noted as \(h_{i}=[\overrightarrow{h_{i}},\overrightarrow{h_{i}}]\). We use \(h_{f}\) (fusion tokens) to attend to all multimodal inputs \(h_{o}\) (optical tokens), \(h_{s}\) (SAR tokens), \(h_{d}\) (DEM tokens), \(h_{m}\) (map tokens) and measure the importance of each modality through the similarity with a learning parameter \(u\). Then we get a normalized importance weight \(\beta_{i}\) through a softmax function. 
\[\beta_{i}=\frac{exp(\mathbf{u}^{\top}\tanh{(\mathbf{W}\left[\mathbf{h_{f}};\mathbf{h_{i}} \right]+b)})}{\sum_{i=1}^{t-1}exp(\mathbf{u}^{\top}\tanh{(\mathbf{W}\left[\mathbf{h_{f}}; \mathbf{h_{i}}\right]+b)})} \tag{2}\] where \(\mathbf{u}\) and \(\mathbf{h}\) have the same dimension as the cell state of the LSTM, [] is the concatenate operation. \(\mathbf{W}\) is a weight matrix of the MLP and \(b\) is a bias vector of the MLP. The final new fusion token is thus: \[\mathbf{a}=\sum_{i=1}^{t-1}\beta_{i}\cdot\mathbf{h}_{i} \tag{3}\] Fig. 1: Overview of the proposed framework. The input to our model is optical images, SAR images, DEM and Maps. Each of those inputs is patched using 2D convolution and projected to feature vectors. All inputs are concatenated with a set of learnable fusion tokens and added to the position embedding. Next, we process these inputs through the Transformer Encoder, where the Bi-LSTM Attention and the masked Self-Attention strategy are applied. (1) In pretraining, task-specific decoders reconstruct the masked patches by using the output fusion tokens. Meanwhile, the global vectors of each modality and fusion tokens are output using cross-attention, which allows using contrastive loss between fusion tokens and each modality. (2) In the supervised training, the proposed framework can be trained on downstream tasks by using a random modality combination strategy. ### _Masked Self-Attention_ Masked Self-Attention is the key block of Multimodal Transformer in contrastive pre-training. Using masked attention, we force part of the representation to attend only to itself, while other parts can attend to the whole representation. The main goal of this approach is to split the representation into five parts: a part which only focuses on Optical tokens, a part which focuses on SAR tokens, a part which focuses on DEM tokens, a part which focuses on MAP tokens, and the fusion tokens which can attend to the whole representation. In this architecture, the self-attention in each layer and the cross-attention in the last layer both used this masking strategy. Here we introduce the masking binary tensor m that specifies which vectors can access each other. Entries of the masking matrix are \(m_{i,j}=1\) if information can flow from latent \(j\) to latent \(i\). Versus, we set \(m_{i,j}=0\). The mask is applied to the standard attention output operation can be expressed as : \[o_{i}=\sum_{j}\hat{a}_{ij}\cdot v_{j};\ \ where\ \hat{a}_{ij}=\frac{m_{ij}\exp \left(\frac{q_{i}^{\top}k_{j}}{\sqrt{D}}\right)}{\sum_{\left\{j^{\prime},m_{ ij^{\prime}}=1\right\}}\exp\left(\frac{q_{i}^{\top}k_{j^{\prime}}}{\sqrt{D}}\right)} \tag{4}\] In order to keep the performance of a single modality when other modalities are absent, the modality-specific representation can not access the fusion representation or other modalities. This explicitly prevents the information of the fusion stream from leaking into the unimodal representation. This is the key to preserving pure streams that correspond to single modalities. For example, after applying this mask, the SAR-specific output \(o_{s}\) only contains information coming from the SAR input. The optical-specific output \(o_{o}\) only contains information coming from the optical input. The DEM-specific output \(o_{d}\) only contains information coming from the DEM input. The MAP-specific output \(o_{m}\) only contains information coming from the MAP input. The fusion output \(o_{f}\) access all outputs in the model. 
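A minimal single-head sketch of the masking strategy and the masked attention of Eq. (4) is given below; the learned projections, the multi-head splitting, and the Bi-LSTM aggregation into the fusion tokens are omitted, and the token counts are toy values.

```python
import torch


def build_stream_mask(sizes):
    """Binary mask m of Eq. (4): tokens of a modality stream attend only to that stream,
    while the fusion stream attends to all tokens. `sizes` maps stream name -> #tokens."""
    total = sum(sizes.values())
    m = torch.zeros(total, total)
    start = 0
    for name, n in sizes.items():
        if name == "fusion":
            m[start:start + n, :] = 1.0                      # fusion queries can see every token
        else:
            m[start:start + n, start:start + n] = 1.0        # modality queries stay within their stream
        start += n
    return m


def masked_attention(q, k, v, m):
    """Single-head masked attention output of Eq. (4); blocked positions get zero weight."""
    logits = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    weights = torch.softmax(logits.masked_fill(m == 0, float("-inf")), dim=-1)
    return weights @ v


# Toy layout: 4 tokens per modality plus 4 learned fusion tokens, embedding dimension 8.
sizes = {"optical": 4, "sar": 4, "dem": 4, "map": 4, "fusion": 4}
x = torch.randn(sum(sizes.values()), 8)
out = masked_attention(x, x, x, build_stream_mask(sizes))
print(out.shape)  # torch.Size([20, 8])
```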
### _Reconstruction Pretraining_ In order to train our network in an MAE way, we use a separate decoder for each generation task. The input to each decoder is the spatial tokens output from the cross attention. Following the same setting of MAE, we use shallow decoders with a low dimensionality which consists of two Transformer blocks. MultiMAE mask across different modalities ensures the model develops predictive coding across different modalities besides different spatial patches. According to MultiMAE, we set a constant number of visible tokens at 256, which corresponds to 1/4 of all tokens in our experiment (learned fusion tokens and three modality inputs with 256 \(\times\) 256 image size and 16 \(\times\) 16 patch size). The proportion of tokens per modality \(\lambda\) by sampling from a symmetric Dirichlet distribution \((\lambda_{Optical},\lambda_{SAR},\lambda_{DEM},\lambda_{MAP})\sim Dir(\alpha)\), where \(\lambda_{Optical}+\lambda_{SAR}+\lambda_{DEM}+\lambda_{MAP}=1,\lambda\geq 0\). For simplicity and better representation of any possible sampled task, we use a concentration parameter \(\alpha=1\). As shown in Fig. 1, we adopt reconstruction loss (\(l_{1}\) distance Mean Squared Error) to respectively recover the pixel color and height information following MultiMAE and using cross-entropy loss (\(l_{ce}\)) on land-cover map reconstruction: \[\begin{split}& L_{DEM}=l_{1}(Dec(o_{f}),DEM)\\ & L_{SAR\_RGB}=l_{2}(Dec(o_{f}),SAR)+l_{2}(Dec(o_{f}),RGB)\\ & L_{MAP}=l_{ce}(Dec(o_{f}),MAP)\end{split} \tag{5}\] ### _Contrastive Pretraining_ We also add the class token for each modality input data and an additional global class token for learned fusion tokens. To integrate information from the encoded visible tokens of other modalities, we add a single cross-attention layer using these tokens as queries that cross-attend to the encoded tokens of the last self-attention layer. We utilise the standard cross-attention operation and produce four different outputs: the vector outputs for each modality and a fusion vector output. This design opens the possibility for contrastive learning among different modalities and fusion tokens. For better multimodality alignment, we propose to use extra contrastive loss between each modality-specific output and the fusion vector. Specificaly, given the Optical vector output \(z_{o}=g_{o}(o_{o})\) and the fusion output \(z_{f}=g_{f}(o_{f})\), where \(g_{o}\) and \(g_{o}\) are the linear projection for each modality. The contrastive loss can be formulated as: \[L_{c}(z_{o},z_{f})=-\underset{S}{\mathbb{E}}\Bigg{[}\log\frac{e^{sim(z_{o}^{ \prime},z_{f}^{\prime})/\tau}}{\sum_{j=1}^{N}e^{sim(z_{o}^{\prime},z_{f}^{ \prime})/\tau}}\Bigg{]} \tag{6}\] where \(sim\) is a similarity function (i.e., cosine similarity), \(S\) is a set that contains \(N-1\) negative samples and one positive sample. This equation introduces the loss for RGB-FUSION contrastive training. In order to contrast the output of all outputs, we define a contrastive loss between unimodal representations and fusion representations. Finally, we can write the full loss as: \[\begin{split} L=& L_{DEM}+L_{SAR\_RGB}+\lambda_{2 }*(L_{c}(z_{f},z_{o})\\ &+L_{c}(z_{f},z_{s})+L_{c}(z_{f},z_{d})+L_{c}(z_{f},z_{m}))\end{split} \tag{7}\] ### _Random Modalities Combination_ Besides the network design, the training strategy is vital to the performance of modal-incomplete inputs. The research [29] finds that the Transformer models tend to overfit to dominate modalities in tasks. 
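The fusion-modality contrastive term of Eq. (6) amounts to an InfoNCE-style objective with in-batch negatives, as in the following sketch; the temperature value and the batch and embedding sizes are assumptions for illustration.

```python
import torch
import torch.nn.functional as F


def fusion_contrastive_loss(z_modality, z_fusion, tau=0.07):
    """InfoNCE-style loss of Eq. (6): for each sample, its own fusion vector is the
    positive for its modality vector and the other samples in the batch are negatives."""
    z_m = F.normalize(z_modality, dim=-1)   # cosine similarity via normalized dot products
    z_f = F.normalize(z_fusion, dim=-1)
    logits = z_m @ z_f.t() / tau            # (batch, batch) similarity matrix
    targets = torch.arange(z_m.shape[0], device=z_m.device)
    return F.cross_entropy(logits, targets)


# Toy batch of 16 samples with 256-dimensional projected outputs; Eq. (7) adds this
# term once per modality stream (optical, SAR, DEM and map) to the reconstruction losses.
z_optical, z_fusion = torch.randn(16, 256), torch.randn(16, 256)
print(fusion_contrastive_loss(z_optical, z_fusion).item())
```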
To improve the robustness of the proposed approach against modal-incomplete data, we propose to leverage a random modality combination training strategy. Thanks to the proposed approach, we can randomly choose the different modality combinations or unimodal data in pretraining or supervised training on downstream tasks. Because the proposed approach fuses all modalities using additional learned tokens, which greatly reduces the effects of modal-incomplete input. ## IV Experiments In this section, we evaluate the proposed approach in multiple settings. We first introduce the multimodal dataset used in this work. Then, we present the details of pre-training and the details of training on downstream tasks, as well as the evaluation procedures. Finally, we ablate the performance of the complete and the incomplete multimodal input that show the proposed approach's flexibility. ### _Experimental Details_ In order to showcase the proposed approach across the different modalities, we train the proposed approach in a completely supervised paradigm and a fine-tuning paradigm with a pre-trained weight. Many works have pointed out that the pretraining of the big model on multimodal can be beneficial on downstream tasks [30]. And, the pre-trained model can be used for arbitrary downstream tasks with the finetuning of the task-specific decoder. Hence we can train a giant model on a large multimodal data set with as many modalities as possible. The pre-trained model can strengthen the ability to extract features that are only trained on a few or single modality data. In this section, we provide the details of the self-supervised pre-training and the supervised training on downstream tasks as well as the multimodal datasets. ### _Description of Datasets_ We train and test the performance of the proposed approach on two multimodal datasets for two downstream tasks, namely building instance / semantic segmentation and LULC mapping. #### Iii-B1 DFC2023 track2 _Building instance / semantic segmentation:_ The former is performed on the track 2 dataset of DFC2023, which comprises a combination of RGB images, Synthetic Aperture Radar (SAR) images, and Digital Surface Model (DSM) data. While the objective of the original task is building height estimation, this study simplifies it as building instance / semantic segmentation. The dataset consists of images obtained from GaoJing-1, GaoFen-2 and GaoFen-3 satellites, with respective spatial resolutions of 0.5 m, 0.8 m and 1 m. Normalized Digital Surface Models (nDSMs) are used as a reference in Track2 and are created from stereo images captured by GaoFen-7 and WorldView-1 and -2 with approximately 2 m ground sampling distance (GSD). The dataset was collected from seventeen cities across six continents and hence is highly diverse in terms of landforms, building types and architecture. The labels of building instance segmentation adopt the MS COCO format and are provided in a JSON file. A sample of the labels is shown in Fig. 2 for illustration. #### Iii-B2 Quadruplets Dataset: Land-Use Land-Cover (LULC) mapping In the second dataset, we utilize diverse data sources obtained from Google Earth Engine (GEE) platform, encompassing Sentinel-1, Sentinel-2, Lidar DEMs and Dynamic World LULC maps, as shown in Fig. 3 and Fig. 4. The dataset comprises 37 regions across various landscapes and LULC classes in France and Australia. The Sentinel-1 mission provides data from a dual-polarization C-band SAR instrument and produces the calibrated and orth-corrected S1 GRD products. 
We downloaded the data from the COPERNICUS/S1_GRD category on GEE, resampling it to 10 m resolution and using the dual-band VV+VH polarizations. Similarly, we downloaded the Sentinel-2 data from the COPERNICUS/S2_SR_HARMONIZED category, which provides multi-spectral imaging with 13 spectral bands suitable for large-scale LULC mapping. We resampled the Sentinel-2 data to 10 m resolution and use the RGBN bands in this work. Two types of Lidar DEM are used in this research. In France, we utilized the RGE ALTI dataset, which is a digital elevation model (DEM) created using airborne lidar with a pixel size of 1 m. We resampled this dataset to 10 meters; its vertical accuracy ranges from 0.2 m to 0.5 m, with an average accuracy of 7 m in steep slope areas. In Australia, we use the 5 m digital elevation model grid derived from 236 individual LiDAR surveys conducted between 2001 and 2015. We compiled and resampled the available 5 m resolution LiDAR-derived DEMs using a neighbourhood-mean method to create the 10 m resolution datasets for each survey area used in this work. The Dynamic World map (DNW) dataset comprises globally consistent, 10 m resolution, near real-time land-use and land-cover predictions derived from Sentinel-2 imagery. It features ten bands that include estimated probabilities for each of the nine LULC classes (water, trees, grass, crops, shrub and scrub, flooded vegetation, built-up area, bare ground, and snow & ice). It also has a class "label" band indicating the class with the highest estimated probability, which makes it suitable for multi-temporal analysis and custom product creation. Lastly, we utilized the labelled class reference from the UrbanAtlas 2018 database containing 27 LULC classes as the label of this dataset. The dataset provides integer rasters with index labels. We created raster maps with 10 m resolution that geographically match the Sentinel-1/-2 images using the open-data vector images freely available on the European Copernicus program website.

#### Iv-B3 Downstream Tasks

We evaluate the proposed approach against state-of-the-art methods on two downstream tasks: building instance / semantic segmentation and LULC mapping. In particular, the evaluation is performed under two paradigms: supervised learning and finetuning. For these two downstream tasks, we replace the pre-trained decoders with a randomly initialized Mask2Former. In the following, we give an overview of the two tasks. **Building Instance / Semantic Segmentation:** We follow Mask2Former but replace the backbone with the proposed network. In the supervised fashion, we train the whole network from scratch using the random modality combination strategy. In the finetuning fashion, we provide two variants: one updates the network on ViT-B backbones pre-trained in a purely generative way, and the other updates the whole network on ViT-B backbones pre-trained with both reconstruction and contrastive losses. We train our model on the DFC2023 track2 train split and report the validation accuracy on the validation split. Along with the result of building instance segmentation, we also provide the binary building semantic segmentation results. **Land Use Land Cover Mapping:** We again use Mask2Former with the proposed backbone on the quadruplets dataset to produce LULC maps. However, we consider seven classes merged from the semantic hierarchy defined by UrbanAtlas. For that, we extract the 7 semantic classes by taking the argmax of the prediction head.
The same training strategy as for building instance segmentation is used in this task. We train our model on 10 cities (5340 samples) and report the validation accuracy on the other 2 cities (783 samples).

Fig. 3: Sentinel-1, Sentinel-2 and DEM data. Fig. 4: Dynamic World Map and European Urban Atlas data.

#### Iv-B4 Architectural Details

The proposed approach uses a ViT-B as the main structure, with 4 and 5 input adapters (patch size of 16\(\times\)16 pixels) for the pre-training in the two different tasks, respectively. Different from standard MultiMAE, we add the learnable fusion tokens as an input, which use an additional input adapter to add 2D sine-cosine positional encodings. There are as many fusion tokens as patched inputs of each modality. After adding the positional encodings, the fusion tokens together with all modality inputs are fed into a one-layer Bi-LSTM attention block. In the self-attention, we use the masking algorithm to avoid leaking the fusion information into a single modality. In order to get the global feature of each modality and the fusion output, we use an additional cross-attention layer to map the patch embeddings into the vector outputs. An auxiliary contrastive loss is then added between each modality output vector and the fusion output vector. For reconstruction learning, we follow the same setting as the MultiMAE decoder but without positional embeddings and the cross-attention layer. The fusion tokens are projected to the decoder dimension using a linear projection layer, and a learned modality embedding is then added. After that, two Transformer blocks and a linear projector are used to project and reshape the tokens to form an image or map. For the two downstream tasks, we adopt the same settings as Mask2Former. For the pixel decoder, we use 6 MSDeformAttn layers applied to feature maps with resolution 1/8, 1/16 and 1/32, and use a simple upsampling layer with a lateral connection on the final 1/8 feature map to generate the feature map of resolution 1/4 as the per-pixel embedding. We use the Transformer decoder with 9 layers and 100 queries for instance segmentation, 9 queries for binary building semantic segmentation and 9 queries for LULC mapping. We use the binary cross-entropy loss and the dice loss for the mask loss. The final loss is a combination of the mask loss and the classification loss. For instance segmentation, we use the standard AP@50 metric. For semantic segmentation, we use mIoU (mean Intersection-over-Union).

#### Iv-B5 Training Details

For pre-training, we train our model for 1600 epochs on the 5700 triplets of DFC2023 and the 6123 quadruplets, respectively. We use the AdamW optimizer with a base learning rate of 1e-4 and a weight decay of 0.05. We warm up training for 40 epochs and then decay the learning rate with a cosine schedule. We set the batch size to 40 using a single RTX 3090 GPU. All data are resized to 256\(\times\)256. The number of non-masked tokens given to the encoder is set to 256 on both datasets. For the second dataset, we use the land-cover map as an additional modality input with 64-dimensional class embeddings. For instance segmentation and semantic segmentation using Mask2Former, we use the AdamW optimizer and a step learning rate schedule. We use an initial learning rate of 0.0001 and a weight decay of 0.05. A learning rate multiplier of 0.1 is applied to the backbone when it has been pre-trained, but not in the supervised paradigm. We decay the learning rate by a factor of 10 at 0.9 and 0.95 fractions of the total number of training steps.
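The pre-training optimization recipe above (AdamW with a base learning rate of 1e-4 and weight decay of 0.05, 40 warm-up epochs, then cosine decay) can be sketched as follows; the stand-in backbone and the per-epoch stepping are illustrative simplifications.

```python
import math
import torch


def warmup_cosine_schedule(optimizer, warmup_epochs, total_epochs):
    """Linear warm-up followed by cosine decay, as used for pre-training above."""
    def lr_lambda(epoch):
        if epoch < warmup_epochs:
            return (epoch + 1) / warmup_epochs
        progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
        return 0.5 * (1.0 + math.cos(math.pi * progress))
    return torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)


backbone = torch.nn.Linear(768, 768)  # stand-in for the ViT-B backbone
optimizer = torch.optim.AdamW(backbone.parameters(), lr=1e-4, weight_decay=0.05)
scheduler = warmup_cosine_schedule(optimizer, warmup_epochs=40, total_epochs=1600)

for epoch in range(1600):
    # In every batch of an epoch, a random subset of {optical, SAR, DEM, map} would be
    # fed to the network (the random modality combination strategy described earlier).
    optimizer.step()   # placeholder update so the scheduler can be stepped
    scheduler.step()
```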
We train our models for 50 epochs with a batch size of 10 in the semantic segmentation task and 300 epochs in the instance segmentation task. ### _Experimental Results_ #### Iv-C1 Multimodal Comparison We evaluate the proposed approach by two paradigms, one is supervised from scratch, and the other is finetuning with pre-trained weights. To evaluate the former, we compare the proposed approach against a technique that uses origin self-attention and does not employ the random modality combination training strategy, termed MultiViT, on modal-complete and modal-incomplete inputs for building instance/semantic segmentation and LULC mapping tasks. The results reported in Tables I and II reveal that the proposed approach outperforms MultiViT in building instance/semantic segmentation tasks when evaluated with modal-complete inputs. However, for the LULC mapping task, the performance of the proposed approach and MultiViT are comparable. With regards to model-incomplete inputs, the proposed approach performs impressively on all modality incomplete inputs and single modality inputs for both tasks due to the proposed attention block and random modality combination training strategy. For building instance/semantic segmentation, there is a visible dominance of RGB images over all other modalities, followed by DSM, while SAR images make the slightest contribution to the task, even causing the noise. In this situation, MultiViT completely overfits on dominant modality inputs and fails on the task with single modality inputs when evaluated with model-incomplete inputs. Similarly, for LULC mapping, sentinel-2 images along with a dynamic world map have a significant influence on the task, followed by sentinel-1 and DEM images. The proposed approach achieves the best performance with the mIoU of 0.244 with modal-complete inputs while MultiViT overfits on dynamic world maps, and performs slightly better when a dynamic world map is present but fails altogether when it is not present in the inputs. In the context of the finetuning paradigm, the proposed approach is assessed through two distinct pretraining methods: one that employs generative pretraining and another that combines generative and contrastive pretraining. The outcomes of the evaluation for both tasks are presented in Table I and Table II. As one can see, two tasks show controversial results. Specifically, in the case of building instance/semantic segmentation tasks, the training-from-scratch model outperforms all other models. However, the model that leverages both generative and contrastive pretraining methods is closely ranked as the second-best. In contrast, for the land-cover mapping task, the fully finetuned model is the top-performing model among all the models listed in the tables, demonstrating the potential of pre-training in augmenting downstream LULC tasks. For the single modality input, our goal is not to show state-of-the-art performance in this setting, as we are trying to solve the dramatic degradation of unimodal inference with a multimodal backbone. Here we show the ability of the proposed approach to producing meaningful unimodal outputs when fed with unimodal data. To do this, we only input one modality and miss out on other modality inputs. As we can see from both datasets (Table I and Table II), the MultiViT suffers significant degradation from the modal missing and completely fails to work on the non-dominated modalities. In contrast, the ## V Conclusion Fig. 
Fig. 5: Results of the proposed approach in the supervised paradigm and the two finetuning paradigms versus MultiViT on the DFC2023 track2 dataset. Besides the quantitative analysis, we also provide a visual qualitative comparison, where Fig. 2 and Fig. 3 show the results of building instance / semantic segmentation and LULC mapping. For building instance / semantic segmentation, similar to Table I, the proposed approach in the supervised paradigm achieved the best performance, followed by the finetuning results. MultiViT achieves the worst performance, especially with modal-incomplete inputs. Our experimental results reveal that the SAR modality produced inferior results compared to the other modalities. For the LULC mapping task, finetuning with contrastive and generative pre-trained weights outperformed the other approaches, while MultiViT exhibited reliable performance only with the DNW input. For the different modalities, we conclude that the Sentinel-1/2 images and DNW maps contributed equally as effective modalities, while the DEM input was determined to be a single-class predictor, indicating its inability to contribute useful information. #### Iv-D2 Ablation We now analyze the proposed approach through a series of ablation studies on both the finetuning and supervised paradigms. To evaluate the generalizability of the proposed components, all ablations were performed on both tasks: building instance / semantic segmentation and LULC mapping. **Random Modality Combination & Bi-LSTM Attention.** We first validate the importance of the random modality combination training strategy on downstream tasks in the supervised paradigm. As shown in Table III and Table IV, the model without the random modality combination training strategy experienced severe degradation with modal-incomplete inputs and even failed with a single modality on both tasks. In addition, we test the effect of the Bi-LSTM attention by removing it from the proposed network. The corresponding results showed a significant drop in performance, indicating that the Bi-LSTM enables superior interaction of the fusion token with each modality and facilitates learning more discriminative features for downstream tasks. **Partial Fine-tuning and Non-masked Attention.** In addition to finetuning the whole model, partial finetuning is also used to evaluate the quality of the representation learned in the self-supervised approach. Partial finetuning involves freezing the backbone and updating only the task-specific decoder on the two tasks. It is important to note that contrastive pre-training relies on masked attention to keep each modality independent, especially when working with different data formats such as text and images. The use of masked attention in contrastive pre-training helps to avoid information flow from one modality to the other, thereby preserving modality-specific information through the network. This is more beneficial for downstream tasks that involve only a single modality. However, when using generative pre-training, masked self-attention is not mandatory.
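The modality-wise masking referred to above can be realized with a boolean attention mask. The sketch below is one plausible construction (an assumption, not necessarily the authors' exact scheme): tokens of each modality attend only to tokens of the same modality, while fusion tokens may attend to all tokens, so cross-modal information reaches the fusion stream without leaking back into any single modality.

```python
# Minimal sketch (one plausible construction, not necessarily the authors'
# exact scheme) of a modality-wise masked self-attention mask.
import torch

def build_modality_attention_mask(group_ids, fusion_id=-1):
    """group_ids: (L,) tensor with each token's modality index; fusion tokens
    carry `fusion_id`. Returns an (L, L) boolean mask where True means the
    query (row) may attend to the key (column)."""
    same_group = group_ids[:, None] == group_ids[None, :]  # within-modality only
    fusion_query = (group_ids == fusion_id)[:, None]       # fusion queries see all
    return same_group | fusion_query

# Example: two RGB tokens, two SAR tokens, one fusion token.
ids = torch.tensor([0, 0, 1, 1, -1])
mask = build_modality_attention_mask(ids)
# Setting disallowed (False) positions of the attention logits to -inf blocks
# cross-modal flow into individual modality streams while the fusion token
# still aggregates information from every modality.
```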
Here, we show the finetuning results based on the combination pretraining (using both the reconstruction loss and the contrastive loss), the generative pretraining (using only the reconstruction loss), and the finetuning result without masked self-attention. The results are shown in Table V and Table VI for both tasks. In the first row, we remove the masked self-attention blocks while keeping the random modality combination training strategy in finetuning, which results in a significant improvement in performance. This is probably because masked self-attention hinders the interaction between different modalities. Compared with the generative pre-training, the use of masked attention in the combination pre-training helps to avoid information flow from one modality to the other. As one can see, unimodal inference performs close to the modal-incomplete inputs, as the modality streams are treated more independently. In contrast, the results without contrastive pre-training tend to overfit on dominant modalities and perform relatively poorly on the other modalities; performance drops further with one single modality. ## V Conclusion In this work, we introduce an incomplete multimodal learning framework for multimodal remote sensing data fusion which can be used in both supervised training and self-supervised pretraining paradigms. Unlike previous multimodal remote sensing data fusion approaches, the proposed approach enables the training and inference of models with modal-incomplete inputs. By using the Bi-LSTM attention mechanism and masked self-attention, we are able to pre-train the network using contrastive and reconstruction losses in the MultiMAE framework, and also to train the network from scratch or finetune the model on downstream tasks using a random modality combination strategy. This strategy allows the network to maintain performance even when dealing with modal-incomplete inputs or a single modality at the inference stage. We evaluate our model on two multimodal remote sensing datasets, demonstrating flexibility in network training and inference, and state-of-the-art performance when presented with modal-incomplete inputs. However, this study focuses solely on raster data of different modalities. In future work, diverse modalities such as text and vector data will be incorporated into the framework.
2302.12506
Exploring the Enablers of Digital Transformation in Small and Medium-Sized Enterprises
Recently, digital transformation has caught much attention from both academics and practitioners. With the advent of digital technologies, small-and-medium-sized enterprises (SMEs) have obtained the capacity to initiate digital transformation initiatives in a similar fashion to large-sized organizations. The innate characteristics of digital technologies also favor SMEs in promoting the initiation of digital transformation. However, the process of digital transformation in SMEs remains a black box, and the existing findings on digital transformation in SMEs are limited and fragmented. Considering the important contribution SMEs can offer to nations and economies, it is timely and relevant to conduct a profound analysis of digital transformation in SMEs. By conducting a thorough review of existing related literature in the management, information systems, and business disciplines, this book chapter aims to understand both the internal and external enablers of digital transformation in SMEs.
Sachithra Lokuge, Sophia Duan
2023-02-24T08:30:25Z
http://arxiv.org/abs/2302.12506v1
# Exploring the Enablers of Digital Transformation in Small and Medium-Sized Enterprises ## Abstract _Recently, digital transformation has caught much attention from both academics and practitioners. With the advent of digital technologies, small-and-medium-sized enterprises (SMEs) have obtained the capacity to initiate digital transformation initiatives in a similar fashion to large-sized organizations. The innate characteristics of digital technologies also favor SMEs in promoting the initiation of digital transformation. However, the process of digital transformation in SMEs remains a black box, and the existing findings on digital transformation in SMEs are limited and fragmented. Considering the important contribution SMEs can offer to nations and economies, it is timely and relevant to conduct a profound analysis of digital transformation in SMEs. By conducting a thorough review of existing related literature in the management, information systems, and business disciplines, this book chapter aims to understand both the internal and external enablers of digital transformation in SMEs._ Keywords: Digitalization, Digital Transformation, Enablers, SMEs, Literature Review ## Introduction In recent times, digital technologies (i.e., social media, mobile technologies, analytics, cloud computing, and internet-of-things), also known as SMAC-IoT, have provided myriad prospects for all businesses regardless of their size (Kraus et al., 2021; Lokuge et al., 2019). In the past, only large-sized organizations with financial capacity had access to resources and the ability to invest in technologies and lead innovation in their respective organizations (Lokuge & Sedera, 2020). As such, technology-led innovation was limited to larger, resourceful organizations (Lokuge & Sedera, 2016; Tan et al., 2017). However, in recent times, digital technologies have disrupted these norms and have given rise to concepts such as digital transformation, which has become a buzzword in the business landscape (Vial, 2019). Digital transformation refers to "the use of digital technologies to create value-added products and processes in organizations and integrate them into business processes, organizational structures, and working models" (Warner and Wager, 2019). As per Vial (2019, p. 118), digital transformation is "a process that aims to improve an entity by triggering significant changes to its properties through combinations of information, computing, communication, and connectivity technologies." Wessel, Baiyere, Ologeanu-Taddei, Cha, and Jensen (2020) extend this conversation and highlight that digital technology plays a key role in digital transformation, whereby it initiates the development of a new organizational identity. Digital transformation enables improved decision-making, value augmentation and improved customer services. The whole objective of investing in digital technologies in organizations is to develop new organizational capabilities for attaining a competitive edge over the market competition (Adikari et al., 2021; Lokuge et al., 2019; Vial, 2019). With ever-changing customer requirements, the advancement of modern technologies and the new normal introduced through the COVID-19 pandemic, all organizations, especially small and medium-sized enterprises (SMEs), are interested in digital transformation initiatives considering their benefits. SMEs are considered the backbone of the global economy (Deng et al., 2019; Li et al., 2018).
For example, in Australia alone, 90% of the organizations that contribute to the national income are SMEs (Duan et al., 2012). As such, to increase organizational efficiency, it is important for SMEs to increase the application of digital technologies and initiate digital transformation (Deng et al., 2019). Therefore, an investigation into digital transformation in SMEs will assist in uncovering the enormous benefits offered through the introduction of digital technologies. As a result of such digital transformations, SMEs will in turn be able to contribute to the economic growth of the community, society, and country. In defining SMEs, various definitions have been adopted in various countries (Beck et al., 2005; Duan et al., 2012). In general, SMEs are defined as enterprises with fewer than 250 employees (Beck et al., 2005). SMEs are considered a group distinct from large-sized organizations as they possess innate characteristics such as technical incapabilities, limited resources and infrastructure, the over-influence of the SME owner, lack of control, inadequate capital, high dependence on business partners, informal planning, lack of formal culture, and uncertainty (Deng et al., 2019; King et al., 2014; Nagahawatta et al., 2021). However, on the positive side, SMEs are considered more agile, flexible, and responsive to market needs (Ghobadian & Gallear, 1997; Li et al., 2018; Sedera & Lokuge, 2017). As such, organizational theories and day-to-day practices that are relevant for a large-sized organization might not be suitable for an SME (Ghobadian & Gallear, 1997; North et al., 2019; Salim et al., 2015; Szopa & Cyllik, 2020). Prior research has attempted to unfold the complex process of digital transformation by investigating the strategy of an organization (Bharadwaj et al., 2013; Sedera et al., 2022), its influence on organizational structure (Selander & Jarvenpaa, 2016), business processes (Vial, 2019), and the organizational culture (Karimi & Walter, 2015). Such studies, however, have mostly focused on understanding digital transformation in large-sized organizations or have taken a "one-size-fits-all" approach regardless of the unique features of SMEs. Considering the importance of digital transformation and the unique characteristics of SMEs, it is highly timely and relevant to unravel the black box of digital transformation projects among SMEs (Arguelles et al., 2021; Crupi et al., 2020; Gupta & Bose, In Press; Kraus et al., 2021; Lokuge & Duan, 2021; Wessel et al., 2020). In recent times, researchers have seen the value of studying digital transformation in SMEs and have focused on investigating this topic. Even though there is interest among scholars in understanding this topic, the extant knowledge regarding the process of digital transformation in SMEs remains limited and disjointed (Ab Wahid & Aziidah Zulkifli, 2021; Garzoni et al., 2020; Li et al., 2018; Lokuge & Duan, 2021). Further, the importance and unique features of SMEs in contemporary economic conditions warrant an investigation into the topic of digital transformation in SMEs. As such, a review of the related literature on digital transformation in SMEs can provide a clear understanding of the topic and thereby fill the gap to inform researchers.
As such, the proposed overarching research question of this study is derived as follows: _What are the internal and external enablers of digital transformation in SMEs?_ In order to address this research question, the authors conducted a comprehensive systematic literature review on digital transformation in SMEs. The authors examined 71 articles to identify the enablers of digital transformation in SMEs. The enablers derived through the review of the literature provide an early basis for SMEs to design digital strategies for digital transformation. The structure of this book chapter is as follows. The research method section describes the literature review method followed. The findings of the literature analysis are provided next. Finally, in the conclusion section, the future research agenda, limitations and concluding remarks are provided. ## Research Method A literature review approach was adopted as the objective of this study is to identify internal and external enablers of digital transformation in SMEs. Such an approach is beneficial as (i) it allows the authors to gain insight into the existing literature, and (ii) it thereby helps the authors to determine the research gaps. Five stages were followed for the literature review: defining the scope of the review, searching for the related literature, selecting and finalizing the sample, analyzing the sample, and finalizing the findings (Wolfswinkel et al., 2013). When defining the scope of the literature review, specific inclusion and exclusion criteria were considered. In this study, the authors selected journal papers from databases such as AIS Library, INFORM, ProQuest, Wiley, Emerald, and EBSCOhost. For broad coverage of management, business and information systems studies, keywords such as "digital transformation" OR "digitalization" OR "digitization" OR "digital disruption" AND "Small business," "SME," "Small and medium enterprise" were used for identifying papers. The search was limited to articles published in the English language. The authors only considered articles published before the cut-off date of 16 February 2022, when the database search was conducted. Since the analysis was conducted in February 2022, this cut-off date was used to ensure that all papers considered for the analysis were captured consistently. In the next phase, the authors ran the search queries within the chosen databases to obtain relevant publications for the analysis. Several types of articles were included, such as research articles, review articles, opinion pieces, discussion papers, and letters to the editor published in scholarly journals. The search returned 133 articles. The next phase involved screening and selecting the final sample for detailed analysis. The titles and abstracts of all initially identified articles were screened to check their relevance to the research focus. Duplicate articles were removed. As a result, 80 articles were left for further review. The identified 80 articles were read in full. The two authors then coded the papers. From this, 9 publications were removed as they were not relevant to the topic of analysis. Each of the remaining 71 articles was then read completely to confirm its relevance. Table 1 below provides details of the sample. Each selected publication was analyzed and coded to identify enablers of the digital transformation process in SMEs.
\begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline **Year of publication** & **Selected papers** & **Number of papers** \\ \hline 2022 & (Alam et al., 2022) (Owoseni et al., 2022; Sastararuji et al., 2022) & 3 \\ \hline 2021 & (Ab Wahid \& Aziidah Zulkifli, 2021; Akpan et al., 2021; Ghobakhloo \& Iranmanesh, 2021; Lindblom et al., 2021; Ratten \& Thompson, 2021) & 13 \\ \hline 2020* & (Balakrishnan \& Das, 2020; Crupi et al., 2020; Del Giudice et al., 2020; Depaoli et al., 2020; Garzoni et al., 2020; Nwaiwu et al., 2020; Pelletier \& Cloutier, 2019; Philipp, 2020; Sanchez-Segura et al., 2020; Szopa \& Cyllik, 2020; Wewege et al., 2020; Zangiacomi et al., 2020) & 11 \\ \hline 2019* & (Baber et al., 2019; Bouwman et al., 2019; Chan et al., 2019; Galindo-Martin et al., 2019; Garbellano \& Da Veiga, 2019; Llinas \& Abad, 2019; Nair et al., 2019; North et al., 2019; Pelletier \& Cloutier, 2019; Riera \& Iijima, 2019; Sehlin et al., 2019; Ulas, 2019) & 11 \\ \hline 2018 & (Chester Goduscheit \& Faullant, 2018; Kim et al., 2018; Lee et al., 2018; Sousa \& Wilks, 2018) & 4 \\ \hline 2017 & (Scuotto et al., 2017) & 1 \\ \hline 2016 & (Ansari et al., 2016; Ojala, 2016) & 2 \\ \hline 2015 & (Prindible \& Patrick, 2015; Taiminen \& Karjaluoto, 2015) & 2 \\ \hline \end{tabular} \end{table} Table 1: **Overview of the Literature Review Sample** ## Analysis The two authors independently analyzed each of the papers identified in the literature sample. When analyzing the sample, the authors identified the key themes that were emerging. By taking this approach, the authors were able to maintain an open and free-flowing mental state that enabled them to absorb the phenomenon of interest. All the papers were analyzed separately to code key enablers. When coding the literature sample, the authors labelled any important detail until the existing labels started to repeat. Each identified label was then discussed and explored. The authors refined the coded enablers that had been identified independently. Discussions continued until consensus was reached on the enablers of digital transformation in SMEs. ## Findings of the Literature This section provides an overview of the analysis of the literature sample. From the sample, it was evident that scholars are focusing increasingly on digital transformation in SMEs, as epitomized by the growing number of papers published each year from 2013 (only one article) to 2021 (n\(=\)35). Among these papers, only a few (around 28) focused on studying digital transformation in a particular industry setting such as banking, manufacturing, and IT. When examining the research methods adopted in these articles, it was evident that most studies (about 25 papers) followed a qualitative approach, followed by a quantitative approach (n\(=\)16), mixed methods (n\(=\)9), and literature reviews (n\(=\)10). Among the papers that followed a qualitative approach, the case study method was the most applied, and interviews were used to collect the qualitative data. Among the papers that applied a quantitative approach, the survey method was the most applied, followed by econometric modelling. The mixed-method studies mainly applied survey methods along with a few qualitative approaches. In most of the papers in the sample, a maturity model was applied as the theoretical framework.
In addition, the theories and frameworks commonly used in the sample to investigate digital transformation in SMEs were absorptive capacity, dynamic capabilities, the IS success model, the resource-based view, and the technology, organization, and environment framework. ### Defining Digital Transformation The accessibility of digital technologies and the recent COVID-19 situation have forced organizations to embrace digital transformation (Arguelles et al., 2021; Vial, 2019). Verhoef et al. (2021, p. 889) define digital transformation as "a change in how a firm employs digital technologies, to develop a new digital business model that helps to create and appropriate more value for the firm." Similarly, researchers such as Fitzgerald et al. (2014) and Lokuge et al. (2021) describe it as the application of new digital technologies to improve organizational performance in areas such as developing new business models, enhancing customer experience, and restructuring business operations. Vial (2019, p. 118) defines digital transformation as "a process that aims to improve an entity by triggering significant changes to its properties through combinations of information, computing, communication, and connectivity technologies." Wessel et al. (2020) argue that in a digital transformation initiative a new organizational identity emerges as a result of the transformation. In this process, the organization's resources, the practices it follows, and its business strategies are radically transformed (Wessel et al., 2020). As per Rowe (2018), digital transformation is "the investment in people and technology to drive a business that is prepared to grow, adapt, scale, and change into the foreseeable future." This shows that digital transformation is a process that employs digital technologies and initiates a positive organizational change. As per Verhoef et al. (2021), digitization is the starting point of the digital transformation process. In digitization, organizations transform analogue data into a digitized format. The next step of the digital transformation process, digitalization, focuses on automation, value adding and digitalizing business processes. Digital transformation then looks at digitalizing business models and ensuring customer and employee experience. In all these endeavors, organizations focus on positive outcomes such as increased organizational performance, increased cost efficiency, competitive advantage, and better customer services (Balakrishnan & Das, 2020; Crupi et al., 2020; Garzoni et al., 2020; Gong et al., 2020; Lokuge, Sedera, Ariyachandra, et al., 2020; Pelletier & Cloutier, 2019; Sedera & Lokuge, 2019; Szopa & Cyllik, 2020). ### Small and Medium-Sized Enterprises SMEs are considered an important entity of the global economy. Small businesses are among the most important and significant parts of the world economy (Cakar & Erturk, 2010; Hung et al., 2004; Lokuge & Duan, 2021). SMEs differ from large organizations in their capacity, structure, and size (Tambunan, 2019). The capacity of an SME is determined through staff count, assets, size, and revenue. SMEs generally differ from large-sized organizations in terms of their capacity, structure, leadership, finance, and business size (Beck et al., 2005). SMEs possess innate characteristics such as a lack of IT resources and professionals, specialized services or goods, and smaller markets, and they are constantly looking for ways to save costs (Simpson & Docherty, 2004).
However, since they have limited staff, communication is efficient, and execution and decision-making are easy (Simpson & Docherty, 2004). SMEs are more independent, have fewer governance structures and are agile (Alsharari et al., 2020; Bouaynaya, 2020; Lokuge & Sedera, 2014b). As per Ghobadian and Gallear (1997), SMEs possess characteristics such as integrated power, multifunctional management, a limited product range, flexibility, small management teams, inadequate organizational planning, and unsophisticated IT applications or software. Since they have small teams to manage, SMEs are more risk-taking and tend to initiate new products and services and to lead digital transformation initiatives. However, due to financial resource limitations, there can be restrictions on successful digital transformation initiatives that they need to manage. This book chapter details both the internal and external enablers that SMEs need to consider when initiating digital transformations. ## Understanding the Enablers of Digital Transformation The analysis of the literature sample emphasized several important enablers of digital transformation in SMEs. For example, the extant literature highlighted the major role of the changes and forces in the external environment that drive digital transformation in SMEs. The tension that arises from continuous changes in the technological landscape, ever-increasing market demands, changes to customers' lifestyles, and unprecedented events such as war and the pandemic forces organizations to launch digital transformation initiatives. Initially, the authors identified 48 enablers of digital transformation in SMEs from the literature sample. The two authors independently coded and categorized the enablers by merging synonyms and eliminating duplicated codes. Figure 1 shows the resulting overview of the internal and external enablers of digital transformation in SMEs. Figure 1: Key enablers for digital transformation in SMEs. ### Internal Enablers The analysis of the literature sample derived internal enablers that assisted SMEs in successfully initiating digital transformation initiatives. Figure 2 summarizes the enablers identified through the analysis. Figure 2: Internal enablers for digital transformation in SMEs. The details of each of the enablers are provided below. **Organizational Strategy:** Organizational strategy is commonly defined as the specified goals, objectives, policies, and plans established for operating a particular organization (Miles et al., 1978). Especially for SMEs, most of the articles in the sample emphasized the key role of clear goals and objectives for an SME to conduct digital transformation projects (Nair et al., 2019; Sedera, 2006). For example, merely having a proper organizational strategy will not by itself ensure digital transformation success: proper communication of such a strategy is an important element of organizational strategy (Lokuge and Duan, 2021; Lokuge, Sedera, & Palekar, 2020). Considering the size of SMEs, proper communication of the organizational strategy may seem easier. However, it is important to ensure that this communication is properly conducted. When the digital strategy of a particular organization is developed, it is important to identify the positioning of its resources, map its capabilities and leverage such capabilities (Lokuge & Sedera, 2016; Pelletier & Cloutier, 2019).
Since SMEs face major resource limitations, it is crucial for them to determine such capabilities in order to introduce a strategy that aligns with their requirements (Lokuge & Sedera, 2018, 2020; Lokuge et al., 2016; Nwaiwu et al., 2020). Further, top leadership involvement in managing and ensuring the alignment of such organizational strategies was also considered critical (Ahmad et al., 2013; Lokuge et al., 2018; Philipp, 2020; Sedera & Dey, 2013; Szopa & Cylplik, 2020). For SMEs, this is one of the most cited enablers that leads to success in digital transformation initiatives. Prior literature agrees that without a proper organizational strategy, strategic initiatives have very little success (Bharadwaj et al., 2013; Henfridsson & Lind, 2014; Nylen & Holmstrom, 2015; Tan et al., 2015). **Sustainable Technology Capabilities:** IT capabilities are commonly defined as an organization's ability to construct and use its IT-based resources (including physical IT resources and the organization's IT staff) together with other organizational resources and capabilities (Bharadwaj et al., 2013; Lokuge & Sedera, 2018). For an SME, ensuring a positive digital transformation project requires proper utilization of IT capabilities. However, SMEs inherently face limitations in resources (Walther et al., 2013; Walther et al., 2018). Mathrani et al. (2013) highlight that for an SME to be successful in digital transformation projects, it is critical to manage its IT resources and capacities, including staff and skills, to develop its competencies. Considering the lack of IT resources among SMEs, in instances where they do have exclusive access to IT resources, this will determine the successful launch of digital transformation initiatives (Philipp, 2020). However, Sedera et al. (2016) highlight that organizations can use their existing IT, such as enterprise systems, by bundling it with complementary IT resources. Especially with the advent of digital technologies, SMEs can now afford digital technologies for digital transformation initiatives (Lokuge & Sedera, 2014). Such endeavors emphasize the criticality of IT knowledge and the capabilities of the staff (Lokuge & Sedera, 2017; Mathrani et al., 2013; Rosemann et al., 2000), which can be challenging for an SME. As such, managing technology capabilities determines digital transformation success. **Accessibility to Partnerships:** As per Ghobakhloo and Iranmanesh (2021), the survival and success of the digital initiatives of SMEs rely heavily on their capability to open up their boundaries to external partners. This includes partners in their value chain and supply network, and new partners introduced through new technologies (Queiroz et al., 2018). The ability to leverage the assets, knowledge, and competencies of suppliers and external partners through mergers, alliances, and joint ventures determines the strength of the network (Nuwangi et al., 2018), access to resources and the agility of the SME. In the contemporary digital business landscape, the capacity of an SME is determined through its network. With the advancement of platform culture, SMEs, like all organizations, do not necessarily need to possess all resources themselves (Lokuge & Sedera, 2019; Lokuge et al., 2019). If SMEs have access to a network that possesses the required resources, they can thrive in digital transformation initiatives.
In the digital era, with the advent of the shared economy, businesses have the opportunity to engage with new partners. For example, social media provides access to real-time feedback for innovation. In addition, platform providers and mobile app developers create a new ecosystem that provides new opportunities for SMEs. Since SMEs inherently do not have the necessary resources and capacity, such alliances and partnerships create new opportunities to thrive in digital transformation initiatives. **Flexible Organizational Controls:** SMEs inherently possess no or flexible organizational hierarchy, a low degree of formalization, high involvement of top management, high visibility of top management, close top management intervention, and high personal authority (Ghobadian & Gallear, 1997). Such characteristics reflect factors that favor innovation (Jansen et al., 2006). As such, in theory, SMEs can easily initiate digital transformation projects as they possess flexibility in organizational structures and controls. Formalization refers to coordinating and controlling the organization (Bodewes, 2002). It is important to maintain a balance between formal and informal controls. Minimal formal coordinating and controlling mechanisms can lead to SMEs facing negative consequences such as the failure of digital transformation projects. As such, while the existing organizational structures and controls of SMEs favor innovation, managing organizational coordination and control is important for SMEs, and a balanced organizational structure and control will determine the success of their digital transformation initiatives. **Skilled People:** The skilled people enabler entails knowledge, empowerment, expertise, leadership, and involvement of the managers. Considering digital transformation projects as an innovation for the organization, it can be argued that all prior literature on innovation emphasizes the importance of managers in leading innovation (Damanpour & Aravind, 2012; Lokuge, Sedera, & Palekar, 2020). As the digital transformation process is a radical technological project, the manager's role is important (Kohli & Melville, 2019). Similarly, most of the papers in the literature sample emphasized the critical role of managers in sharing the knowledge, expertise, leadership, and practices that enable successful projects (Crupi et al., 2020). As top management involvement and support are relatively high in SMEs compared to large organizations, digital transformation projects tend to be relatively successful in the SME context. In addition, managerial staff are required to educate themselves, reignite their innovation cognition and enhance their managerial social capital to promote such projects (Li et al., 2018). As a result, the manager's knowledge and capacity determine the success of digital transformation projects. Further, possessing skilled people is also associated with accessibility to partnerships, as partnerships enable access to skilled people. **Agile Business Processes:** Agile organizational business processes are important for the success of digital transformation projects. In particular, the agility of such business processes determines the success of digital transformation initiatives. As per Markus and Tanis (2000), the implementation of an enterprise system is a radical innovation that re-engineers organizational business processes.
Similarly, digital transformation processes can restructure business processes and develop a new organizational identity (Depaoli et al., 2020). Even though such radical changes are important, it is mandatory to manage them carefully. Especially for SMEs, managing business processes carefully becomes one of the prime factors, as most SMEs do not possess standard, consistent business processes in their organizations. Such irregularities may hinder digital transformation projects in SMEs. However, agility in redeveloping organizational business processes is key to the success of digital transformation endeavors. This was evident in the COVID-19 pandemic, where organizational agility emerged as a key enabler in driving digital transformation in SMEs to counter the uncertain environment (Troise et al., 2022). **Organizational Culture:** As per Wood (2001), organizational culture is defined as the systems of shared beliefs and values that develop within an organization or within its sub-units and that guide the behavior of its members. As per the extant literature, organizational culture is considered one of the most important factors that establish the extent of innovation in organizations (Adams et al., 2006; Boudreau & Lakhani, 2013; Buschgens et al., 2013; Lee et al., 2016). This is especially true for SMEs: considering their size and limited resource availability, it is very important for managers to ensure a positive, innovation-favoring culture in the organization. Such an innovation-favoring environment will ensure the success of digital transformation projects as well. In the SME context, commonly studied cultural dimensions like empowerment, collectivism, uncertainty avoidance, and power distance highlight positives of SMEs that enable digital transformation in the organization (Cakar & Erturk, 2010). Organizational culture guides organizational business processes as well as controls. As such, proper management and the establishment of an enriching organizational culture will determine the success of digital transformation initiatives in SMEs. ### External Enablers Three external enablers (as displayed in Figure 3) were identified from the literature analysis as having a key influence on driving digital transformation in SMEs. While the authors acknowledge additional enablers such as governmental interventions, policies and rules as factors that may influence digital transformation in organizations, the findings are limited, as the book chapter reports only the findings from the literature analysis sample. The details of each of the enablers are provided below. **Digital Technology:** Nambisan (2013) identified digital technologies as new IT-based resources that include social media, mobile technologies, analytics, cloud computing, and internet-of-things. In the contemporary technology landscape, in addition to changes to these existing technologies, new and advanced technologies are on the rise. For example, technologies such as robotics, nanotechnologies, blockchain and cognitive technologies are changing the very nature of how organizations conduct their businesses. Such changes demand that organizations be innovative and create a competitive business landscape. As such, digital technology becomes an external enabler that drives digital transformation initiatives.
However, internally, SMEs need to consider the expertise and financial capacity required to manage these technologies before investing in digital technologies for digital transformation initiatives. They also need to understand how to adapt and integrate digital technologies with business functions so that they can leverage their competitiveness through digital transformation. While the innate characteristics of digital technologies such as cost efficiency, ease of use and ease of implementation favor SMEs, it is important to understand the market and establish a strategy that aligns with the organizational capacity. **Digital Competition:** The advancement of social media and mobile technologies has changed the way organizations connect with their customers. The advent of these technologies has provided even micro-sized organizations with an opportunity to compete with other organizations. As such, in this modern day and age, there is extremely fierce competition among organizations. The price, availability, and delivery times of goods and services no longer provide a competitive advantage for organizations. As per the Darwinian law, the 'digitally strongest' organizations thrive and survive in this digital competition. With the capacities provided through social media, SMEs can understand market needs. The real-time feedback offered via such platforms enables SMEs to make changes to their products and services. Beyond obtaining feedback, SMEs are also able to fiercely market their products and services via these channels. Previously, organizations were required to allocate an extensive budget for marketing and promotion; however, such initiatives have become far more cost-competitive with the advent of digital technologies. Therefore, this is considered an external driver of digital transformation initiatives in SMEs. Figure 3: External enablers for digital transformation in SMEs. **Digital Customer Behavior:** The contemporary business landscape has changed immensely due to the advent of digital technologies. The COVID-19 pandemic has exacerbated this situation and, as a result, how organizations operate has changed as well. For example, before the pandemic, remote working was not a prominent concept. In addition, e-commerce was common; however, among SMEs it was not a popular concept. For the survival of SMEs, it was mandatory for them to have a digital presence and build customer agility [13]. Customer agility is the extent to which an SME senses and responds to customer needs [1]. SMEs are in a better position to obtain knowledge regarding market opportunities when they utilize customer feedback and involve customers in creating new ideas, products, and services. Such digital transformation initiatives were enforced by digital customer behaviors. The advent of concepts such as doing business via social media sites and social media influencers has transformed the very nature of modern businesses. The lifestyles, routines, and norms of customers are invariably intertwined with digital technologies. As such, customer behaviors, too, have changed. Consequently, digital customer behavior is considered an external driver of digital transformation among SMEs. ## Conclusion and Future Work The objective of this book chapter was to understand the enablers of digital transformation in SMEs.
To analyze this phenomenon, in this book chapter the authors conducted a systematic review of 71 papers across multiple disciplines that discussed digital transformation in SMEs. The literature analysis revealed internal and external enablers of digital transformation in SMEs. Specifically, internal factors such as a facilitating organizational strategy, sustainable technology capabilities, flexible organizational controls, accessibility to partnerships, agile business processes, skilled people, and a supportive organizational culture facilitate digital transformation in SMEs. External factors including available digital technologies, digital customer behavior, and digital competition are important considerations for SMEs in their digital transformation journey. This research provides both theoretical and practical contributions to the field. Theoretically, this book chapter opens new pathways for researchers to better understand digital transformation in SMEs. While much research has been done to unpack the black box of the digital transformation process in large organizations [1, 2, 3], limited insights have been provided into the digital transformation process in SMEs. The advent of digital technologies has offered SMEs the opportunity to initiate digital transformation in a similar fashion to large-sized organizations. The unique characteristics of SMEs, however, warrant a separate investigation. This study hence contributes by providing insights into digital transformation in SMEs through the identification of key enablers. Next, this study provides two frameworks to summarize these key enablers for facilitating digital transformation in SMEs. Such frameworks serve as a basis for future research, to be tested and validated to provide empirical evidence. Practically, the findings of this study will be important for policy makers and managers of SMEs to understand and determine the factors that facilitate successful digital transformation in SMEs. SMEs play a key role in national economies, and a profound understanding of the digital transformation process in SMEs and its enablers will provide enormous benefits for SMEs in bringing digital technologies into their organizations. There are certain limitations to this study. Since this research is at the conceptual level, empirical research is required to further establish the findings. For example, while there may be additional enablers of digital transformation in SMEs, the findings are limited to the existing studies. Future research can focus on validating the suggested enablers and thereby assisting SMEs to design effective digital strategies. Next, the authors acknowledge the size of the literature sample, as only journal articles were considered for the analysis. Recent quality publications such as conference proceedings can be included in future research to extend the scope of the review and enrich the findings. In addition, there can be associations among the enablers. Such nuanced associations can only be identified through an explorative study. The authors encourage future studies along these lines to contribute to both academia and practice.
2306.17189
Steganographic Capacity of Deep Learning Models
As machine learning and deep learning models become ubiquitous, it is inevitable that there will be attempts to exploit such models in various attack scenarios. For example, in a steganographic-based attack, information could be hidden in a learning model, which might then be used to distribute malware, or for other malicious purposes. In this research, we consider the steganographic capacity of several learning models. Specifically, we train a Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), and Transformer model on a challenging malware classification problem. For each of the resulting models, we determine the number of low-order bits of the trained parameters that can be altered without significantly affecting the performance of the model. We find that the steganographic capacity of the learning models tested is surprisingly high, and that in each case, there is a clear threshold after which model performance rapidly degrades.
Lei Zhang, Dong Li, Olha Jurečková, Mark Stamp
2023-06-25T13:43:35Z
http://arxiv.org/abs/2306.17189v1
# Steganographic Capacity of Deep Learning Models ###### Abstract As machine learning and deep learning models become ubiquitous, it is inevitable that there will be attempts to exploit such models in various attack scenarios. For example, in a steganographic-based attack, information could be hidden in a learning model, which might then be used to distribute malware, or for other malicious purposes. In this research, we consider the steganographic capacity of several learning models. Specifically, we train a Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), and Transformer model on a challenging malware classification problem. For each of the resulting models, we determine the number of low-order bits of the trained parameters that can be altered without significantly affecting the performance of the model. We find that the steganographic capacity of the learning models tested is surprisingly high, and that in each case, there is a clear threshold after which model performance rapidly degrades. ## 1 Introduction Steganography, or information hiding, consists of embedding information in another message or physical object [10]. While cryptography also hides information, it does so by converting the information into a human-unreadable form [5]. The main difference between these two techniques is that cryptography alters the structure of the secret information but does not hide the fact that communication is taking place, while steganography hides the information in another medium that is not intended for such communication [21]. Modern steganographic techniques have been developed for a wide range of data types, including text, images, audio, video, and even networking data [1]. Machine learning (ML), which can be considered a subfield of artificial intelligence, enables computers to learn important information from training data [24]. Today, ML models are widely used to deal with a vast array of problems, including speech recognition, image recognition, sentiment analysis, language translation, malware detection, and so on, with new applications being constantly developed. Deep learning (DL) models are the subset of ML models that are based on neural networking techniques. Machine learning models are a plausible cover media in steganography for the following reasons. 1. Machine learning models are rapidly becoming ubiquitous. For example, voice-activated search assistants were used by approximately 3.25 billion people worldwide in 2021, out of a world population of 7.9 billion [2]. 2. It is likely that the information hiding capacity of most machine learning models is substantial. Machine learning models typically include a large number of weights or other trained parameters, and it is known that learning models typically do not require high precision in their trained parameters. For example, the most popular algorithm used to train Support Vector Machines (SVM) relies on the fact that limited precision is sufficient [24]. As another example, in neural networking-based models, many--if not most--neurons tend to atrophy during training, and such weights contribute little to the trained model. By relying on such redundant neurons, the authors of [28] show that they can hide 36.9MB of malware within a 178MB AlexNet architecture, with only a 1% degradation in performance. These changes did not affect the structure of the model and the embedded malware was not detected by any of the anti-virus systems tested. 3.
Machine learning models may be an ideal cover media for malicious attacks. For example, as in [28], malware could be embedded in a learning model. It is even conceivable that a specific predetermined input to the model could be used to trigger an embedded malware-based attack. In this research, we focus on the fact that, in general, learning models do not require high precision in their trained parameters. Therefore, as a measure of the inherent steganographic capacity of learning models, we determine the number of low-order bits in each weight that can be used for information hiding purposes. We embed information in the \(n\) low-order bits of the weights of trained models, and graph the model accuracy as a function of \(n\). We analyze three DL models: Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), and a Transformer architecture. We train and test each of these models on a dataset that contains 10 different malware families, with a total of 15,356 samples. The remainder of the paper is organized as follows. Section 2 gives relevant background information on steganographic techniques and the various machine learning models used in this research. Section 3 provides details on the dataset employed in our experiments, along with a high-level view of our experimental design. Our results are presented and discussed in Section 4. Finally, Section 5 gives our conclusions, as well as outlining potential avenues for further research. ## 2 Background In this section, we discuss several relevant background topics. First, we consider steganography, then we introduce the learning models that are used in this research. We conclude this section with a discussion of related work. ### Steganography The word "steganography" is a combination of two Greek roots: _steganos_, which means "concealed or hidden", and _graphein_, which translates as "drawing or writing" [7]. Thus, steganography is the art and science of embedding secret information inside unremarkable cover media that does not raise suspicions [25]. In modern practice, steganography consists of concealing information or messages within seemingly innocuous data or media, such as images, audio, video, or network communication, among many other possibilities [1]. Steganography involves embedding secret data into a cover media in a way that is imperceptible to human senses and difficult to detect without specialized tools and knowledge. Such techniques have been used throughout history for various purposes, including espionage, communication in times of war, and digital watermarking. With the advancement of digital technology, steganography has found applications in modern information security, digital forensics, and multimedia communications, among others. It is an evolving field with ongoing research and development of new techniques to enhance its security and application in various domains. Cryptography protects a secret message by transforming it into an unintelligible format to hide the meaning of the message, while steganography aims to hide the presence of the original message [23]. Steganography dates at least to ancient Greece and, in fact, it predates cryptography as a means of secret communication [23]. An historical example of steganography was the use of invisible ink during the American Revolutionary War to pass messages. As another example, during World War II, photosensitive glass [8] and microdots [4] were used to embed information in other messages.
Today, hiding information in image files on computing systems is the most common method of steganography. A textbook example of a modern steganographic application consists of hiding information in the low-order RGB bits of an uncompressed image file, such as a bmp image [23]. Since the RGB color scheme uses a byte for each of the R (red), G (green), and B (blue) color components, there are \(2^{24}>16{,}000{,}000\) colors available. However, many of the colors are indistinguishable to the human eye, and hence there are a large number of redundant bits in an uncompressed image. The low-order RGB bits of each byte can be used to hide information in such an image, without changing the carrier image in any perceptible way. Provided that the intended recipient knows which image is used for hiding information, and knows how to extract the information, communication can take place between a sender and receiver, without it being apparent that such communication has even occurred. The steganographic capacity of an uncompressed image file is surprisingly large; for example, in [23, Section 5.9.3] it is shown that the entire _Alice's Adventures in Wonderland_ book can be hidden in the low-order RGB bits of an image of Alice from the _Alice_ book itself. The image-based steganographic system described in the previous paragraph is not robust, that is, it is trivial to disrupt the communication, without the disruption affecting the non-steganographic use of such images: If we suspect that the low-order RGB bits of bmp files are being used for steganographic purposes, we can simply randomize the low-order bits of all bmp images. For any such images that were being used for information hiding, the information would be lost, and for any innocent images that were not used for information hiding, the image would not be affected in any perceptible way. Much of the modern research into information hiding revolves around creating more robust steganographic techniques. Steganography can be characterized by three important aspects, namely, perceptual transparency, robustness, and capacity. * Perceptual transparency -- This refers to the ability of the steganographic process to hide the secret information in a way that is imperceptible to human senses. This is a critical characteristic of steganography, which ensures that it is not obvious that the cover medium is being used for surreptitious communication. * Robustness -- By definition, robustness is the ability to tolerate perturbations of a system without adversely affecting its initial stable configuration [29]. In image-based steganographic techniques, the perturbations could be transformation, sharpening, filtering, scaling, cropping, and so on. * Capacity -- The amount of information that can be hidden in the cover medium is the capacity, which is related to the practical redundancy in the cover media. The larger the capacity, the more information that can be hidden; equivalently, the smaller the cover medium that is needed. Achieving an optimal balance among these characteristics is a crucial consideration in the design and implementation of a steganographic technique, as it determines the effectiveness of the communications, and the security of the concealed information. In this research, we are interested in the steganographic capacity of machine learning models. Specifically, we hide information in the low-order bits of the weights of learning models.
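The analogy carries over directly to learning models whose weights are stored as 32-bit floats. The following is a minimal sketch (an assumed implementation, not the paper's code) of overwriting the \(n\) low-order bits of each float32 weight with message bits; extraction simply reads the same bits back.

```python
# Minimal sketch (assumed implementation): hide a bit string in the n
# low-order bits of each 32-bit floating-point weight.
import numpy as np

def embed_bits(weights, message_bits, n):
    """Overwrite the n low-order bits of each float32 weight with message bits."""
    w = np.asarray(weights, dtype=np.float32).copy()
    ints = w.view(np.uint32)                  # reinterpret the raw float bits
    mask = np.uint32((1 << n) - 1)
    n_chunks = min(len(ints), (len(message_bits) + n - 1) // n)
    for i in range(n_chunks):
        chunk = message_bits[i * n:(i + 1) * n]
        value = int("".join(str(b) for b in chunk).ljust(n, "0"), 2)
        ints[i] = (ints[i] & ~mask) | np.uint32(value)
    return ints.view(np.float32)

# Example: hide one byte in the 4 low-order bits of two weights.
stego_weights = embed_bits([0.123, -4.56], [1, 0, 1, 1, 0, 0, 1, 0], n=4)
```

Model accuracy can then be re-measured as a function of \(n\) to locate the threshold at which performance begins to degrade.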
While such a scheme is not robust, our work does provide an intuitive and practical means for information hiding. We show that learning models have considerable redundancy, which is the basis for more advanced steganographic techniques, with the analogy to uncompressed image files being obvious. ### Learning Models Machine learning (ML) and deep learning (DL) can be viewed as branches of artificial intelligence (AI). In general, ML refers to the use of statistical models and algorithms to enable machines to learn from data and improve their performance on a specific task. DL is the subset of machine learning that focuses on training Artificial Neural Networks (ANN)--generally with multiple hidden layers, which is the "deep" part of deep learning--to identify patterns and relationships in data. DL algorithms, which are designed to (loosely) mimic the structure and functioning of the human brain, have proven to be very effective in solving complex problems such as image and speech recognition, natural language processing, and even playing complex games. ML enables computers to learn important information, and improve from experience, which saves humans from the work of extracting useful information from seemingly inscrutable data [24]. The process of machine learning begins with observations derived from datasets. The primary goal of machine learning is to make computers learn with minimal human intervention or assistance [22]. ML is applied in a wide and ever-growing range of important fields, including data security, finance, healthcare, fraud detection, and so on. In addition, DL techniques have been used to successfully deal with such problems as speech recognition, image classification, sentiment analysis, and language translation, among many others [6]. Deep learning has gained significant attention and success in recent years due to its ability to automatically extract complex patterns and representations from raw data without extensive feature engineering. Through the process of training, deep learning models learn to recognize patterns, features, and relationships in data, enabling them to often perform tasks at a higher level than had previously been achieved using classic machine learning models. Machine learning algorithms can be subdivided into three primary categories: supervised machine learning, unsupervised machine learning, and semi-supervised machine learning. Supervised machine learning uses labeled datasets to train the model. Support Vector Machine, Multilayer Perceptron, \(k\)-Nearest Neighbors, Decision Trees, Random Forest, and Linear Regression are popular examples of supervised machine learning algorithms. In contrast, unsupervised machine learning techniques can be applied to unlabeled data. Clustering techniques, such as the well-known \(K\)-means algorithm, are examples of unsupervised learning. Semi-supervised machine learning can be viewed as a hybrid approach that combines aspects of supervised and unsupervised algorithms. In this paper, we only consider supervised learning techniques; specifically, we train models to classify malware from several different families. Next, we discuss each of the learning techniques that are employed in the experiments in Section 4. Here, we introduce the DL techniques of Multilayer Perceptron, Convolutional Neural Networks, and Transformer models. 
#### 2.2.1 Overview of Multilayer Perceptrons Multilayer Perceptrons (MLP) are a popular class of feedforward neural network architectures that are widely used for supervised learning tasks, including classification and regression [26]. MLPs consist of multiple layers of interconnected nodes, where each node receives input from the previous layer and produces output that is passed to the next layer. The input layer of an MLP receives the input data, and the output layer produces the final prediction. In between these layers, there can be one or more hidden layers that help to learn complex patterns in data. Each node in the hidden layers applies a nonlinear activation function to the weighted sum of its inputs, which helps to capture non-linear relationships in the data. MLPs are trained using backpropagation, which is an optimization algorithm that adjusts the weights of the network based on the difference between the predicted output and the actual class label. The weights are updated using gradient descent, which iteratively adjusts the weights to minimize the error. One of the main advantages of MLPs is their ability to learn complex patterns in the data, making them suitable for high-dimensional and non-linear datasets. However, MLPs can be computationally expensive to train, and they require a large amount of labeled data to achieve high accuracy. #### Overview of Convolutional Neural Networks Convolutional Neural Network (CNN) are one of the most popular DL techniques. CNNs were designed for efficient training on images, where local structure dominates, but they have proven surprisingly useful for a wide range of problems--any problem domain where local structure is most important is a good candidate for a CNN. The CNN architecture is composed of convolution layers, pooling layers, and a fully connected layer (or, possibly, multiple fully connected layers). A convolution layer performs a discrete convolutional operation on the output of the previous layer. This can be viewed as applying a filter, where the parameters of the filter are learned. The first convolutional layer is applied to the input data, and in the case of images it learns basic structure, such as edges. Subsequent convolutional layers learn higher-level and more abstract features. The purpose of a pooling layer, which usually follows a convolution layer, is to reduce the dimensionality of the problem, and thereby speed up the training process. Pooling may also serve to increase translation invariance, which is highly desirable for image analysis. #### Overview of Transformer Models Transformers are a type of deep learning architecture that have revolutionized the field of natural language processing (NLP). Transformers were introduced in [27], and are currently the state-of-the-art architecture for many NLP tasks, including machine translation, sentiment analysis, and question answering. They have also been successfully applied to other tasks, such as image classification and speech recognition. The key innovation of Transformers is the self-attention mechanism, which allows the model to selectively attend to different parts of the input sequence when making predictions. All models use attention to some degree, but Transformer model in [27] showed that explicit attention is far more powerful than had been previously realized. Transformers consist of an encoder and a decoder module. The encoder takes an input sequence and generates a hidden-state representation that is designed to capture the meaning of the input. 
The decoder takes the hidden-state representation and generates the output one token at a time. One of the key advantages of Transformers is their ability to handle variable-length input sequences without the need for padding or truncation. They also require less training time compared to traditional Recurrent Neural Networks, and can be parallelized more easily. ### Related Work As of this writing, the only closely related work that the authors are aware of is [28], in which a technique dubbed "EvilModel" is developed and analyzed. EvilModel hides malware inside of a neural network model. As an example, when a 36.9MB malware is embedded in a specific model, the accuracy of the model is only reduced by about 1%. The authors of [28] embed the malware sample in a learning model by carefully selecting weights that have atrophied during training, and thus have little or no effect on model performance. They then overwrite these entire weights. The approach used in [28] is considerably different from the experiments that we conduct in this paper. Here, we do not analyze models to find insignificant weights, but instead we overwrite the least-significant bits of all weights in the various layers of a model. Our approach is much simpler, in the sense that it requires no detailed analysis of the model, yet we are able to embed signifcant amounts of data in all of the models that we consider. Our results point to a more generic issue with respect to learning models, namely, that a large steganographic capacity is inherent in models that use a relatively large number of bits to store weights. As a byproduct of this research, we verify that only limited accuracy is required of the weights for the models that we consider. ## 3 Implementation In this section, we first discuss the malware dataset used to train our learning models. Then we provide details on the training of each of the models considered in this paper. The steganographic capacity of these models is analyzed in Section 4, below. ### Dataset Malware families can be difficult to define precisely because they can vary in terms of their size, scope, and specific features. However, a family generally refers to a group of malware samples that have similarities in terms of their functionality, behavior, and code structure. Although the specific details of each sample may differ, members of a given family typically share a core code base that contains common functions, routines, and behaviors. This allows security researchers to identify and track specific malware families over time, even as the individual samples within the family continue to evolve and change. By analyzing these shared characteristics, researchers can develop more effective detection and mitigation strategies to protect against the threat of malware. In this research, we consider a malware dataset from VirusShare [9]. This dataset contains more than 500,000 malware executables, which occupy more than 500GB of storage. Among the 500,000 malware executables, we have extracted the top 10 families, in terms of the number of samples available per family. Specifically, we consider the malware families listed Table 1, which are given in descending order, based on the number of samples. Next, we describe each of these families. Note that several different classes of malware are represented, including viruses, worms, and Trojans. 
\begin{table} \begin{tabular}{c|c c} \hline \hline Family & Samples & Fraction of total \\ \hline VBinject & 2689 & 0.1751 \\ Winwebsec & 2303 & 0.1500 \\ Renos & 1567 & 0.1020 \\ OnLineGames & 1511 & 0.0984 \\ BHO & 1412 & 0.0920 \\ Startpage & 1347 & 0.0877 \\ Adload & 1225 & 0.0798 \\ VB & 1110 & 0.0723 \\ Vobfus & 1108 & 0.0721 \\ Ceeinject & 1084 & 0.0706 \\ \hline Total & 15,356 & 1.0000 \\ \hline \hline \end{tabular} \end{table} Table 1: Malware families

**VBinject**: short for "Visual Basic Injection", is a general technique that is applied by malware authors to inject malicious programs into legitimate Windows processes [15]. This technique is commonly used by malware to evade detection by antivirus software and other security measures. Once the malware is injected, it can carry out a variety of malicious actions, such as stealing sensitive information, downloading additional malware, or taking control of the infected system.

**Winwebsec**: is designed to trick users into purchasing fraudulent security software or services by displaying false alerts and warnings about supposed security threats on their computers. Once installed on a user's computer, Winwebsec will typically display fake warnings claiming that the system is infected with viruses, spyware, or other malicious software. These warnings are often accompanied by instructions to download and install a security program or pay for a service to remove the alleged threats [17]. Winwebsec is often distributed through social engineering tactics such as spam emails, malicious websites, and file-sharing networks.

**Renos**: is similar to Winwebsec, in that it is designed to trick users into purchasing fraudulent security software or services [13]. Like other types of fake antivirus malware, Renos typically displays fake warnings claiming that the system is infected with viruses, spyware, or other malicious software, and these warnings are often accompanied by instructions to download (and pay for) a supposed anti-virus program. Renos is distributed in the same manner as Winwebsec.

**OnLineGames**: as its name suggests, is a Trojan that mimics an online game, but it is actually designed to steal user information. This malware is often distributed through malicious websites, peer-to-peer networks, or email attachments. OnLineGames may be particularly dangerous because it targets a vulnerable population of online gamers who may be less aware of the risks associated with downloading and installing unknown software. Additionally, this type of malware can be difficult to detect and remove because it often operates in the background and can evade detection by antivirus software [19].

**BHO**: short for "Browser Helper Object", is a type of add-on or plugin for web browsers, such as Internet Explorer. Legitimate BHOs provide additional functionality or modify the behavior of the browser; however, this BHO malware can be used by attackers to perform unwanted actions, such as redirecting web traffic or displaying unwanted ads [14]. Because a BHO has deep access to the browser's functionality, it can be difficult to remove once installed. In some cases, a malicious BHO may be bundled with legitimate software and installed without the user's knowledge or consent.

**Startpage**: is a family of Trojans that modify a user's web browser settings, such as the homepage and search engine, without the user's consent [18].
Once installed, it changes the browser settings to redirect the user's searches to a specific search engine or homepage that may contain advertisements or other unwanted content. In some cases, this browser hijacker may also install additional unwanted software or collect information about the user's browsing habits.

**Adload**: is an adware program that displays unwanted advertisements that the user cannot control as they browse the web [20]. This malware may also collect information about the user's browsing habits and use this data to display targeted advertisements. Adload can be difficult to remove and may continue to display unwanted advertisements even after the user has attempted to uninstall the software. In some cases, it may also install additional malware or compromise the security of the victim's computer.

**VB**: is short for "Visual Basic", and it is a simple Trojan. It spreads as a worm by copying itself to removable drives, network shares, and other accessible file systems. Once installed on a victim's computer, VB may perform a variety of malicious actions, such as stealing sensitive information, logging keystrokes, downloading additional malware, or using the victim's computer to participate in botnets or distributed denial-of-service (DDoS) attacks. It is particularly dangerous as it spreads rapidly and may infect a large number of computers before it is detected [11].

**Vobfus**: is a malware family that downloads other malware onto a victim's computer, including Beebone, Fareit, and Zbot. It spreads through infected USB drives, network shares, and malicious URLs, and is known for its ability to mutate and evade detection by security software [16]. Vobfus is dangerous, in part, because it can propagate rapidly and silently, making it difficult to detect and contain. It can also disable or bypass security software, making it challenging to remove.

**Ceeinject**: injects itself into legitimate processes running on a Windows operating system, allowing it to execute its malicious code undetected. It is often used in conjunction with other malware, such as banking Trojans, to steal sensitive information from victims. This particular threat employs obfuscation techniques to conceal its true intentions, making it more difficult for security software to detect its malicious activities [12].

For our feature vectors, we extract a relative byte histogram from each sample. That is, given a sample \(S\) in the form of an exe file, we count the number of times that each byte value \(0\) through \(255\) occurs in \(S\), and then divide each of these counts by the total number of bytes in \(S\). Note that this implies that our feature vectors are all of length \(256\). Also, if \(s_{i}\) is the \(i^{\text{th}}\) component of the feature vector for the sample \(S\), then \(s_{i}\) can be interpreted as the probability of drawing byte value \(i\), when randomly selecting a byte from \(S\). These feature vectors are efficient to generate, and require no costly disassembly or dynamic analysis.

### Model Training

Analogous training and testing procedures were used for all learning models considered. For the first step, we train each model with labeled data and test the model, which establishes a baseline level of performance. We use accuracy as our measure of performance. After the initial training and testing, data is inserted into the low-order \(n\) bits of the weights, which, on average, changes about half of the bit values.
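To make the embedding step concrete, the following is a minimal sketch (our own illustration, not code from the paper) of how the low-order \(n\) bits of an array of 32-bit floating-point weights can be overwritten with payload bits using NumPy; the function name and the payload handling are ours.

```python
import numpy as np

def overwrite_low_bits(weights: np.ndarray, payload: bytes, n: int) -> np.ndarray:
    """Overwrite the n low-order bits (1 <= n <= 32) of every float32 weight.

    One n-bit chunk of the payload goes into each weight, in order, until the
    payload (or the weight array) is exhausted. Returns a new float32 array of
    the same shape; the caller writes it back into the model.
    """
    flat = np.asarray(weights, dtype=np.float32).reshape(-1).copy()
    raw = flat.view(np.uint32)                        # reinterpret the IEEE-754 bit patterns
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    bits = bits[: (bits.size // n) * n]               # drop a trailing partial chunk
    place = 1 << np.arange(n - 1, -1, -1)             # bit weights within one chunk
    chunks = (bits.reshape(-1, n) * place).sum(axis=1)
    k = min(chunks.size, raw.size)                    # number of weights we can fill
    keep = np.uint32((0xFFFFFFFF << n) & 0xFFFFFFFF)  # mask that clears the n low bits
    raw[:k] = (raw[:k] & keep) | chunks[:k].astype(np.uint32)
    return flat.reshape(np.shape(weights))
```

Extraction works in reverse: read the \(n\) low-order bits of each weight back in the same order and repack them into bytes.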
For each \(n\), the performance of the model is re-evaluated using the same data and accuracy metric as previously used, which allows for a direct comparison of the results for each \(n\). We then graph these accuracy results as a function of \(n\).

## 4 Steganographic Capacity Experiments

In this section, we consider the steganographic capacity of each of the models discussed in Section 2.2. To measure the steganographic capacity, we embed information in the low-order \(n\) bits of the model weights. For each model, we consider the following three cases.

1. Only the output layer weights are modified
2. Only the weights of the hidden layer (or layers) are modified
3. All of the model weights are modified

In each case, we graph the model accuracy as a function of \(n\). Also, we discuss the total capacity, that is, the total number of model bits that are available for this form of information hiding. In each case, the information that we hide is extracted from the book _Alice's Adventures in Wonderland_ [3].

### MLP

The MLPClassifier() from the sklearn.neural_network module was used to train and test our MLP model. The hyperparameters tested are listed in Table 2, with the selected values in boldface. Note that a model with two hidden layers, with 128 and 10 neurons, respectively, was best. Also, the logistic function was selected as our activation function, and so on.

\begin{table} \begin{tabular}{c|c} \hline \hline Hyperparameter & Values tested \\ \hline hidden\_layer\_sizes & (64, 10), (96, 10), **(128, 10)** \\ activation & identity, **logistic** \\ alpha & 0.0001, **0.05** \\ random\_state & 30, **40**, 50 \\ solver & adam \\ learning\_rate\_init & **0.00001** \\ max\_iter & **10000** \\ \hline \hline \end{tabular} \end{table} Table 2: MLP model hyperparameters tested

The results obtained when hiding information in the low order bits of the output layer weights of our trained MLP model are summarized in Figure 1(a). We observe that the original accuracy for the model is approximately 0.8417, and the performance of the model exceeds 0.8119 until the low-order 26 bits of the output weights are overwritten, which causes the accuracy to drop dramatically to 0.3830. Overwriting more bits causes the accuracy to fluctuate, but it remains very low.

In Figures 1(b) and (c) we give the results when information is hidden in the hidden layer weights, and when information is hidden in all of the weights of our trained MLP model, respectively. The results in these two cases are analogous to the results for the output layer weights, although in both of these latter cases, only 21 bits can be overwritten before the accuracy drops below 0.80.

There are 100 weights in the output layer, and 34,048 weights in the hidden layers, which makes the total number of weights 34,148 in this particular MLP model. As shown in the results in Figure 1(a), we can overwrite the low-order 25 bits of each weight in the output layer with minimal loss of accuracy, which gives the model a steganographic capacity of 2.44KB,\({}^{1}\) just in the output layer. The results in Figure 1(b) show that inserting information into the low-order 21 bits of the weights in the internal layers does not have a major negative impact on the model accuracy, which gives the model a steganographic capacity of slightly more than 698KB. With all weights in the model considered, as shown in Figure 1(c), again the low-order 21 bits are available for information hiding, which gives the MLP model a steganographic capacity that is slightly in excess of 700KB.

Footnote 1: Note that we follow the standard convention in computing whereby 1KB represents \(2^{10}\) bytes, while 1MB is \(2^{20}\) bytes, and 1GB is \(2^{30}\) bytes.
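As a rough illustration of how this MLP experiment can be wired together, the sketch below trains an MLPClassifier with the selected hyperparameters from Table 2 and then sweeps \(n\) using the overwrite_low_bits() helper sketched earlier. The variables X_train, y_train, X_test, y_test (byte-histogram features and family labels) and payload are assumed to exist, and the cast to 32-bit floats is our simplification, since scikit-learn stores weights as 64-bit floats.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Selected hyperparameters (boldface entries of Table 2).
mlp = MLPClassifier(hidden_layer_sizes=(128, 10), activation="logistic", alpha=0.05,
                    random_state=40, solver="adam", learning_rate_init=0.00001,
                    max_iter=10000)
mlp.fit(X_train, y_train)                      # assumed: 256-dim byte histograms, 10 family labels
baseline = accuracy_score(y_test, mlp.predict(X_test))

# Case 1 of Section 4: overwrite the n low-order bits of the output layer only.
accuracies = {0: baseline}
original = mlp.coefs_[-1].copy()               # output-layer weight matrix (10 x 10 here)
for n in range(1, 33):
    stego = overwrite_low_bits(original.astype(np.float32), payload, n)
    mlp.coefs_[-1] = stego.astype(original.dtype)
    accuracies[n] = accuracy_score(y_test, mlp.predict(X_test))
mlp.coefs_[-1] = original                      # restore the clean model after the sweep
```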
Figure 1: MLP model performance with low-order bits of weights overwritten

### CNN

Our CNN model was implemented using torch in PyTorch, which provides support for tensor computation, deep neural networks, and many other useful machine learning utilities. The model architecture selected consists of two convolutional layers, each utilizing ReLU activation functions, with one and six input channels, as well as six and 12 output channels, respectively. Following the convolutional layers, there are two fully connected linear layers, again with ReLU activation functions. The input sizes of these fully connected layers are \(12\times 256\) and \(512\), respectively. The final layer of the model is a fully connected output layer with an input size of 100 and an output size of 10, utilizing a linear activation function. The hyperparameters tested (via grid search) are listed in detail in Table 3, with the selected values in boldface. Since this model has a large number of hyperparameters and training is relatively costly, only two of the hyperparameter values are varied.

\begin{table} \begin{tabular}{c|c} \hline \hline Hyperparameter & Values tested \\ \hline pad-size & **256** \\ batch size & 128, **64** \\ max-epoch & **20** \\ lr & 0.0005, **0.00005** \\ momentum & **0.9** \\ hidden-size & **512** \\ output & **10** \\ bptt & **256** \\ ntoken & **256** \\ d\_model & **128** \\ d\_hid & **128** \\ nlayers & **2** \\ nhead & **1** \\ dropout & **0.5** \\ \hline \hline \end{tabular} \end{table} Table 3: CNN model hyperparameters tested

As with the previous model, the process of analyzing the impact of hiding information in the output layer weights on the accuracy of the CNN model was carried out systematically. The model was initially trained with the preprocessed malware family dataset, and its accuracy was evaluated on the testing data. The accuracy was found to be 0.7354 in this unmodified case, which serves as the baseline for subsequent analysis. Next, the output layer weights were systematically overwritten with data, starting from the low-order bits and increasing towards the high-order bits. A total of 32 bits are present in each weight, and the resulting accuracy was recorded after the \(n\) low-order bits had been overwritten, for each \(n\in\{0,1,2,\ldots,32\}\). Figure 2(a) summarizes the accuracies obtained for the model in each case.

We observe that overwriting the low-order 21 bits of the output layer weights does not have any significant effect on the accuracy. However, when the \(22^{\text{nd}}\) bit is overwritten, the accuracy drops from 0.7468 to 0.7070, and a large drop to 0.5576 occurs when the low-order 24 bits are overwritten. Finally, another large drop in accuracy is observed when the 27 low-order bits are overwritten, resulting in an accuracy of only 0.2507, and when 29 low-order bits are overwritten, the accuracy is comparable to guessing the labels at random.

In Figures 2(b) and (c) we give the results when information is hidden in the hidden layer weights, and when information is hidden in all of the weights of our trained CNN model, respectively. These results are analogous to the output layer case, but with 22 low-order bits available for information hiding in both, and a sharper drop in accuracy from that point.
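A PyTorch sketch of an architecture matching this description is given below for concreteness; it is our reconstruction, and details the text does not state (kernel sizes, padding, and the class name) are assumptions.

```python
import torch
import torch.nn as nn

class MalwareCNN(nn.Module):
    """Sketch of the CNN described above; kernel sizes and padding are assumed."""

    def __init__(self, n_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 6, kernel_size=3, padding=1), nn.ReLU(),    # 1 -> 6 channels
            nn.Conv1d(6, 12, kernel_size=3, padding=1), nn.ReLU(),   # 6 -> 12 channels
        )
        self.classifier = nn.Sequential(
            nn.Linear(12 * 256, 512), nn.ReLU(),   # first fully connected layer
            nn.Linear(512, 100), nn.ReLU(),        # second fully connected layer
            nn.Linear(100, n_classes),             # linear output layer
        )

    def forward(self, x):                          # x: (batch, 1, 256) byte histograms
        z = self.features(x)
        return self.classifier(z.flatten(1))

# Quick shape check on random input.
logits = MalwareCNN()(torch.randn(4, 1, 256))      # -> (4, 10)
```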
Figure 2: CNN model performance with low-order bits of weights overwritten

In this particular CNN model, there are 1000 weights in the output layer, and 1,624,142 weights in the internal layers, and hence the total number of weights is 1,625,142. As shown in the results in Figure 2(a), we can change the low-order 21 bits of each weight in the output layer without significantly affecting the model performance, which gives the model a steganographic capacity of slightly more than 20.5KB, just in terms of the output layer. The results in Figure 2(b) show that inserting information into the low-order 22 bits of the weights in the internal layers does not have a negative impact on the model accuracy, which gives the model a steganographic capacity of about 34.0MB in terms of the internal weights. With all weights of the model considered, as shown in Figure 2(c), the low-order 22 bits are again available for information hiding, which gives the CNN model a total steganographic capacity of about 34.1MB.

### Transformer Model

Our Transformer model was implemented using the PyTorch modules including TransformerEncoder, TransformerEncoderLayer, TransformerDecoderLayer, TransformerDecoder, and LayerNorm. The model consists of an embedding layer, a positional encoding layer, a Transformer encoder layer, a Transformer decoder layer, and two linear layers. The input first passes through an embedding layer, which maps each token in the input sequence to a vector in a high-dimensional space. Then, a positional encoding layer is applied to the embedded input sequence to add positional information to the embeddings. The Transformer encoder layer serves to encode the input sequence and create a representation of it in a high-dimensional space. The encoder layer is composed of a self-attention mechanism and a feedforward neural network layer. The resulting vectors are then passed through a feedforward neural network layer. Similarly, the Transformer decoder layer takes the encoded input sequence and generates a prediction for each output token. The decoder layer is also composed of self-attention and feedforward neural network layers. However, it additionally receives inputs from the encoder layer through a multi-head attention mechanism. Finally, the output of the Transformer decoder layer is passed through two linear layers, where the first layer maps the output to a lower-dimensional space, and the second layer maps this lower-dimensional representation to the output classes. The model also employs layer normalization and dropout for regularization.

The hyperparameters tested via a grid search are listed in Table 4, with the values selected in boldface. Since there are a large number of hyperparameters in this model, seven of the hyperparameters in Table 4 are fixed values.

Our trained Transformer model achieved perfect accuracy on the test dataset. The weights of the output layer were manipulated to explore the effect of overwriting the low-order bits. The resulting accuracies--as a function of the number of bits overwritten--can be seen in Figure 3(a). We observe that up to 26 low-order bits can be overwritten with no adverse effect on the model accuracy. A drop in accuracy from 1.00 to 0.9833 occurs when the low-order 27 bits are overwritten. When the low-order 28 bits of the output layer weights are overwritten, the accuracy of the model drops to 0.8307, and the accuracy plummets thereafter.
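For the PyTorch models (the CNN and this Transformer), the three cases studied in this section can be reproduced with a small helper along the following lines. This is our sketch, not the paper's code: it reuses the overwrite_low_bits() function from earlier, and the default output-layer parameter name is purely illustrative (it matches the hypothetical CNN sketch above).

```python
import numpy as np
import torch

def embed_in_weights(model: torch.nn.Module, payload: bytes, n: int,
                     which: str = "all", output_layer: str = "classifier.4.weight"):
    """Overwrite the n low-order bits of selected float32 weights of a PyTorch model.

    which = "output" touches only the parameter named `output_layer`,
    which = "hidden" touches every other weight tensor, and "all" touches both.
    Biases are left alone, and for simplicity the same payload is reused for
    each tensor rather than streamed across them.
    """
    with torch.no_grad():
        for name, param in model.named_parameters():
            if not name.endswith("weight"):
                continue                         # only weights, not biases
            is_output = (name == output_layer)
            if (which == "output" and not is_output) or (which == "hidden" and is_output):
                continue
            w = param.detach().cpu().numpy().astype(np.float32)
            stego = overwrite_low_bits(w, payload, n)
            param.copy_(torch.from_numpy(stego).to(dtype=param.dtype, device=param.device))
```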
\begin{table} \begin{tabular}{c|c} \hline \hline Hyperparameter & Values tested \\ \hline pad-size & **256** \\ batch size & 128, **64** \\ max-epoch & **20** \\ lr & 0.0005, **0.00005** \\ momentum & **0.9** \\ hidden-size & 256, **512** \\ output & **10** \\ bptt & **256** \\ ntoken & **256** \\ d\_model & **128**, 256, 512 \\ d\_hid & **128**, 1024 \\ nlayers & **2**, 12 \\ nhead & **1**, 8 \\ dropout & **0.5** \\ \hline \hline \end{tabular} \end{table} Table 4: Transformer model hyperparameters tested

Figure 3: Transformer model performance with low-order bits of weights overwritten

In Figures 3(b) and (c) we give the results when information is hidden in the hidden layer weights, and when information is hidden in all of the weights of our trained Transformer model, respectively. In both of these cases, we are free to hide information in the low-order 24 bit positions with no negative effect on the model, but when we use the low-order 25 bits, model accuracy is severely affected.

In the Transformer model, the output layer and the internal layers contain 1280 and 175,681,024 weights, respectively, giving a total of 175,682,304 weights. For the output layer weights, as shown in Figure 3(a), we can overwrite the low-order 26 bits with minimal loss in accuracy, giving a steganographic capacity of 32.5KB, just in terms of the output layer. Considering either the internal weights or all weights, we can hide information in the low-order 24 bits without any ill effect on the model, giving us a steganographic capacity in excess of 3.92GB for both cases.

## 5 Conclusion

The primary goal of this research was to determine a reasonable lower bound on the steganographic capacity of various learning models. Specifically, we tested Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), and Transformer models, which were each trained on a dataset of more than 15,000 malware executables from 10 families, with more than 1000 samples for each family. All of the trained deep learning models underwent the same testing procedure: We first determined the accuracy of each model on the test set, then we embedded information in the \(n\) low-order bits of the weights, recomputing the classification accuracy for each \(n\). We experimented with just the output layer weights, just the hidden layer weights, and all of the weights.

The results were consistent across all models, in the sense that at least 20 bits per weight can be used to hide information, with minimal effect on the accuracy. In addition, at some point shortly beyond 20 bits, model accuracy deteriorates dramatically. These results hold whether considering the output layer weights, the hidden layer weights, or all weights.

Our experimental results show that the steganographic capacity of the deep learning models we tested is surprisingly high. This is potentially a significant security issue, since such models are ubiquitous, and hence it is to be expected that attackers will try to take advantage of them. Embedding, say, malware in a learning model offers an attack vector that is practical, and could be highly effective in practice. It would be wise to reduce the steganographic capacity of learning models. Our results indicate that 32-bit weights do not yield a significant improvement in accuracy over what could be achieved with, say, 16-bit weights. With additional work, for specific models, it should be feasible to use even smaller weights--this would be an interesting and potentially valuable area for additional research.
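The remark about 16-bit weights is straightforward to check empirically. A quick sketch, reusing the hypothetical mlp, X_test, and y_test objects from the earlier MLP sketch (so these names are assumptions), rounds every weight to 16-bit precision and re-scores the model:

```python
import numpy as np
from sklearn.metrics import accuracy_score

# Round every weight and bias to float16 precision and evaluate again; if the
# accuracy is essentially unchanged, the extra bits of a 32-bit (or 64-bit)
# representation are redundant, i.e., available for information hiding.
mlp.coefs_ = [w.astype(np.float16).astype(np.float64) for w in mlp.coefs_]
mlp.intercepts_ = [b.astype(np.float16).astype(np.float64) for b in mlp.intercepts_]
print("accuracy with 16-bit precision:", accuracy_score(y_test, mlp.predict(X_test)))
```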
Further research into other popular deep learning models would also be worthwhile. In particular, it would be interesting to determine the steganographic capacity of pre-trained models, such as VGG-19 and any of the popular ResNet models (e.g., ResNet18, ResNet34, ResNet50, and ResNet101). Further, it would seem that the steganographic capacity of pre-trained models could be greatly reduced, and the creation of "thin" pre-trained models would be of value. It would also be interesting to determine whether more challenging classification problems tend to affect the steganographic capacity of inherently "fat" models. Intuitively, more challenging problems should require more learning to be embedded in the weights, and hence the steganographic capacity might be somewhat lower. Another area for further investigation would be to combine some aspects of the steganographic capacity work presented in this paper with the work in [28], where information is hidden in weights that are (essentially) unused by the model. By combining both of these approaches, we could obtain a larger steganographic capacity of learning models, with the goal of obtaining a reasonably tight upper bound.
2307.11809
Physical Characteristics and Maximum Allowable Mass of Hybrid Star in the Context of $f(Q)$ Gravity
In this study, we explore several new characteristics of a static anisotropic hybrid star with strange quark matter (SQM) and ordinary baryonic matter (OBM) distribution. Here, we use the MIT bag model equation of state to connect the density and pressure of SQM inside stars, whereas the linear equation of state $p_r =\alpha \rho-\beta$ connects the radial pressure and matter density caused by baryonic matter. The stellar model was developed under a background of $f(Q)$ gravity using the quadratic form of $f(Q)$. We utilized the Tolman-Kuchowicz ansatz to find the solutions to the field equations under modified gravity. We have matched the interior solution to the external Schwarzschild spacetime in order to acquire the numerical values of the model parameters. We have selected the star Her X-1 to develop various profiles of the model parameters. Several significant physical characteristics have been examined analytically and graphically, including matter densities, tangential and radial pressures, energy conditions, anisotropy factor, redshift, compactness, etc. The main finding is that there is no core singularity present in the formations of the star under investigation. The nature of mass and the bag constant $B_g$ have been studied in detail through equi-mass and equi-$B_g$ contours. The maximum allowable mass and the corresponding radius have been obtained via $M-R$ plots.
Piyali Bhar, Sneha Pradhan, Adnan Malik, P. K. Sahoo
2023-07-21T17:42:57Z
http://arxiv.org/abs/2307.11809v1
Physical Characteristics and Maximum Allowable Mass of Hybrid Star in the Context of \(f(Q)\) Gravity ###### Abstract In this study, we explore several new characteristics of a static anisotropic hybrid star with strange quark matter (SQM) and ordinary baryonic matter (OBM) distribution. Here, we use the MIT bag model equation of state to connect the density and pressure of SQM inside stars, whereas the linear equation of state \(p_{r}=\alpha\rho-\beta\) connects the radial pressure and matter density caused by baryonic matter. The stellar model was developed under a background of \(f(Q)\) gravity using the quadratic form of \(f(Q)\). We utilized the Tolman-Kuchowicz ansatz [R. C. Tolman, Phys. Rev. 55 (1939) 364-373; B. Kuchowicz, Acta Phys. Pol. 33 (1968) 541] to find the solutions to the field equations under modified gravity. We have matched the interior solution to the external Schwarzschild spacetime in order to acquire the numerical values of the model parameters. We have selected the star Her X-1 to develop various profiles of the model parameters. Several significant physical characteristics have been examined analytically and graphically, including matter densities, tangential and radial pressures, energy conditions, anisotropy factor, redshift, compactness, etc. The main finding is that there is no core singularity present in the formations of the star under investigation. The nature of mass and the bag constant \(B_{g}\) have been studied in details through equi-mass and equi-\(B_{g}\) contour. The maximum allowable mass and the corresponding radius have been obtained via \(M-R\) plots. ## I Introduction The spatial structure of the universe's rapid expansion has drawn a lot of emphasis in the latest developments of cosmology and astronomical physics [1; 2]. Modern innovations in this cosmological period have shown novel ways to familiarise the essential and empirical changes for the fast evolution of the galaxy. Various findings could offer persuasive evidence of the rapid growth caused by extreme redshift supernova observations [3], whereas massive formations [4] and changes in the celestial microwave radiation [5] present implicit support. An unidentified aspect known as dark energy (\(DE\)), which sustains an intense adverse force, is responsible for the universe's accelerated expansion. Also, unexplained DE is believed to include around \(68\%\) of the universe's overall energy. Therefore, it is necessary to make certain adjustments to the conventional theory in order to evaluate the occurrence of rapid growth. These sorts of trials encourage researchers to explore possibilities for modified or expanded theories of gravity that may be capable of illustrating scenarios when the general theory of relativity (\(GR\)) generates unacceptable conclusions. Due to the constraints of \(GR\), cosmologists are curious about analyzing modified gravitational theories. Some of these theories are \(f(R)\), \(f(G)\), \(f(Q)\), \(f(T)\), \(f(R,G)\), \(f(R,T)\), and \(f(R,\phi)\) gravitational theories [6]-[20]. The alterations of \(GR\) seem enticing to explain the late-time of cosmic evolution and \(DE\) difficulties. In addition, the various astronomical perspectives and concepts offered by these theories assist in elucidating the mysteries underlying the occurrence of the galaxy's rising expansion [21]. Scientists need to verify the reliability of these kinds of modified theories of gravity in all scales, like cosmological scales and astrophysical ones. 
It is reasonable to assume that altering the gravitational field action will have an impact on the astrophysical point of view. In the weak field limit, modified theories of gravity reduce to GR, whereas the strong field regimes may be able to distinguish between GR and its potential extensions. It is commonly known that relativistic compact objects (neutron stars) live in strong gravitational fields, so this kind of astrophysical object can be studied to check the possible deviation of the newly proposed modified gravity theory from Einstein's GR. Additionally, new phenomena that Einstein could not explain can be discovered in stellar astrophysics through this modified theory of gravity. A hybrid star is an assumed particular kind of star in which a neutron star that is located at the center of a red giant or red supergiant, produced through an explosion of the massive with neutron star and extremely high density. Hybrid stars are yielded as an outcome of gravitational deformation when the nucleus of a star loses out of energy and is unable to sustain its weight despite the force of gravity. One of the universe's strongest and weirdest things is the hybrid object. They are typically connected to phenomena like eruptions of supernovae, cosmic rays, and bursts of gamma radiation. Researching dense stars may assist scientists in better understanding the properties of matter at very high densities and how energies and matter behave under very intense fields of gravity. According to the altered ideas, a hybrid stellar is an exclusive type of celestial object that occurs by the collapse of matter against the pressure of powerful gravitational forces, also defined by modified equations. One of the important characteristics of modified gravity is the ability to accommodate non-singular hybrid stars, which does not anticipate by the standard \(GR\). The core of these non-singular giant stars is uniform and smooth and it is linked to the external geometry. According to research on the behavior of hybrid stars in modified gravity, the features of these structures can be quite distinct from those believed by \(GR\). One of the important characteristics of modified gravity is the ability to accommodate non-singular hybrid stars, which does not anticipate by the standard \(GR\). The core of these non-singular giant stars is uniform and smooth and it is linked to the external geometry. According to research on the behavior of compact stars in modified gravity, the features of these structures can be quite distinct from those believed by \(GR\). Plenty of researchers have implemented some important refinements to GR in the last couple of decades. In these beneficial amendments, one of the most intuitive and prominent theory is obtained by replacing the expression of Ricci scalar \(R\) with an arbitrary function \(f(R)\)[15]. Such different models of gravity serve as essential for the accelerating proliferation of space give better explanation for the enigmatic composition of the cosmos. The fascinating theory that gained prominence in recent decades is symmetric teleparallel gravity [22], acknowledged as the \(f(Q)\) theory. Jimenez et al. [23] proposed the idea of \(f(Q)\), in which the nonmetricity \(Q\) essentially initiates the gravitational attraction. Studies into \(f(Q)\) gravity are progressing efficiently, as have empirical obstacles to compare it to the conventional \(GR\) interpretation. Lazkoz et al. 
[24] established an intriguing collection of limitations on \(f(Q)\) gravity by defining the \(f(Q)\) Lagrangian as polynomial equations of the redshift \(z\). According to these investigations, feasible \(f(Q)\) models have coefficients similar to the \(GR\) model namely \(\Lambda CDM\) model. They have checked the validity of these models at the background level to see if this new formalism offers any viable alternatives to explain the late-time acceleration of the universe. For this verification, they have used a variety of observational probes, such as the expansion rate data from early-type galaxies, Type Ia Supernovae, Quasars, Gamma Ray Bursts, Baryon Acoustic Oscillations data, and Cosmic Microwave Background distance priors. This innovative method offers an alternative viewpoint on developing a modified, observationally trustworthy gravity model. Apart from this, there is some work [25; 26] based on the observational constraints in the background of \(f(Q)\) gravity which gives the strong motivation to explore stellar models in this \(f(Q)\) theory. Mandal et al. [27] investigated energy parameters for the power-law and nonlinear \(f(Q)\) models that describe the visible behavior of the cosmos. Jimenez et al. [28] discussed the modified gravity theories built on nonlinear extensions of the nonmetricity scalar, and investigated several intriguing baseline cosmologies (such as accelerating solutions relevant to inflation and dark energy), and examined the response of cosmic disturbances. By giving the evolution equations and enforcing certain functional forms of the functions, such as power-law and exponential dependence of the nonminimal couplings, Harko et al. [29] investigated a number of cosmological applications. Mandal et al. [30] reconstructed the appropriate structure of the \(f(Q)\) function in \(f(Q)\) gravity by employing cosmographic factors and also studied the different sorts of energy constraints for the exploration of logarithmic and polynomial functions in the \(f(Q)\) gravity. Khyllep [31] explored the cosmic nature of power-law structure and the rapid evolution of matter perturbation in the modified \(f(Q)\) gravity. Anagnostopoulos, et al. [32] proposed a novel model in the framework of \(f(Q)\) gravity, which has the same number of free parameters to those of \(\Lambda CDM\), however at a cosmological framework it gives rise to a scenario that does not have \(\Lambda CDM\) as a limit. Frusciante [33] focused on a specific model in \(f(Q)\) gravity which is indistinguishable from the \(\Lambda\)-cold-dark-matter model at the background level, while showing peculiar and measurable signatures at linear perturbation level. Lin and Zhai [34] explored the application of \(f(Q)\) gravity to the spherically symmetric configurations and demonstrated the effects of\(f(Q)\) by considering the external and internal solutions of compact stars. Ambrosio, et al., [35] constructed several perturbative corrections to the Schwarzschild solution for different choices of \(f(Q)\), which in particular include a hair stemming from the now dynamical affine connection. De and Loo [36] proved that the energy conservation criterion is equivalent to the affine connection's field equation of \(f(Q)\) theory. Astronomers have observed that the Tolman-Kuchowicz metric to be quite intriguing topic for studying the evolution of astronomical formations. Jasim et al. [37] investigated a singularity-free model for spherically symmetric anisotropic peculiar stars using the Tolman-Kuchowicz metric. 
In the setting of modified \(f(R,G)\) gravity, Javed et al. [38] studied a variety of anisotropic star spheres and developed equations of motion that take into account anisotropic matter distribution and Tolman-Kuchowicz spacetime. Shamir and Naz [39] examined certain relativistic stellar object configurations for static spherically symmetric structures under modified gravity using the Tolman-Kuchowicz spacetime. Biswas et al. [40] offered a relativistic model of a static, spherically symmetric, anisotropic odd star based on Tolman-Kuchowicz metric potentials and they further employed the most basic version of the phenomenological MIT bag equation of state to characterize the distribution of SQM across the star system. Majid and Sharif [41] created an anisotropic model of strange stars in the context of massive Brans-Dicke gravity and used the MIT bag model to obtain the field equations for the Tolman-Kuchowicz ansatz. Within the context of Einstein-Gauss-Bonnet gravity in five dimensions, Bhar et al. [42] studied the distribution of anisotropic compact matter by solving the corresponding field equations using the inner geometry of Tolman-Kuchowicz spacetime. Naz and Shamir [43] explored the effect of electric charge on static spherically symmetric star models in the presence of anisotropic matter distribution using the Tolman-Kuchowicz space-time and the simplified phenomenological MIT bag equation of state. Zubair et al. [56] introduced stellar models for anisotropic matter distribution under \(f(T)\) gravity and generated matching conditions by combining the interior geometry of Tolman-Kuchowicz spacetime with exterior spacetimes. Saklany et al. [57] provided a simple description for modeling the coupling of dark energy with OBM by employing the super-dense pulsar PSRJ1614-2230 as the model star, and the field equations are solved in the stellar interior using the generalized framework of Tolman-Kuchowicz spacetime metric. The authors of the article [44] examine the anisotropic stellar solutions admitting Finch-Skea symmetry (viable and nonsingular metric potentials) in the presence of some exotic matter fields. In the work [45], authors derived the exact solutions for the relativistic compact stars in the presence of two fields axion (Dante's Inferno model) and with/without the complex scalar field (with the quartic self-interaction) coupled to gravity. Recently, Astashenok, et al. [46] investigated the Chandrasekhar mass limit of white dwarfs in various models of \(f(R)\) gravity by taking two equations of state for stellar matter: the simple relativistic polytropic equation with polytropic index and the realistic Chandrasekhar equation of state. Astashenok along with his collaborators [47] investigated the upper mass limit predictions of the baryonic mass for static neutron stars in the context of \(f(R)\) gravity by using the most popular \(R^{2}\) gravity model. Astashenok and Odintsov [48] investigated realistic neutron stars in axion \(R^{2}\) gravity and obtained the increase of star mass independent from central density for wide range of masses. The same authors [49] investigated the equilibrium configurations of uniformly rotating neutron stars in \(R^{2}\) gravity with axion scalar field for GM1 equation of state for nuclear matter. Some interesting work related to the stellar structures can be seen in [50]-[55]. Many researchers proposed the model of compact star in modified theory of gravity which has been discussed earlier. 
In this paper our goal is to obtain a hybrid star model in f(Q) gravity which can include the recent observation of different compact star. From our analysis, with the help of the mass radius profile we are able to attain the mass of different compact star in the f(Q) gravity which has been discussed in this paper and it is one of the most positive outcome of our present paper. To the best of our knowledge, this is first attempt to discuss the physical characteristics and maximum allowable mass of hybrid star in the background of \(f(Q)\) gravity. The arrangement of the current manuscripts is as follows: Section II deals with the basic formalism of \(f(Q)\) theory of gravity. In Section III, we discuss the Tolman-Kuchowicz ansatz and MIT bag model equation of state. Matching condition has been investigated in Section IV. Section V deals with the mass, surface redshift and compactness factor. Mass radius relationship is presented in Section VI with details. The mass and bag constant by using colored plots are represented in Section VII. Sections VIII deals with the details discussion of physical analysis of considered stellar structures. Lastly, we conclude the outcome of our findings. ## II Construction of \(f(Q)\) gravity Now, we introduce the action for \(f(Q)\) gravity given by [58], \[S=\int\left[\frac{1}{2}f(Q)+\mathcal{L}_{m}\right]\sqrt{-g}d^{4}x, \tag{1}\] where \(f(Q)\) is a general function of \(Q\), \(g\) represents the determinant of the metric \(g_{\mu\nu}\) and \(\mathcal{L}_{m}\) is the matter Lagrangian density. The non-metricity tensor is given as, \[Q_{\alpha\mu\nu}=\nabla_{\alpha}g_{\mu\nu}=-L^{\rho}_{\alpha\mu}g_{\rho\nu}-L ^{\rho}_{\alpha\nu}g_{\rho\mu}, \tag{2}\] where the following equations serve as representations for the non-metricity tensor's two independent traces: \[Q_{\alpha}=Q^{\ \beta}_{\ \alpha\ \beta},\ \ \tilde{Q}_{\alpha}=Q^{\beta}_{\ \ \alpha\beta}, \tag{3}\] and the deformation term is given by, \[L^{\alpha}_{\mu\nu}=\frac{1}{2}Q^{\alpha}_{\mu\nu}-Q_{(\mu\nu)}^{\ \ \ \ \alpha}, \tag{4}\] whereas \(Q\) is given as, \[Q=-g^{\mu\nu}(L^{\alpha}_{\beta\nu}L^{\beta}_{\mu\alpha}-L^{\beta}_{\alpha\beta }L^{\alpha}_{\mu\nu})=-P^{\alpha\beta\gamma}Q_{\alpha\beta\gamma}. \tag{5}\] Here, \(P^{\alpha\beta\gamma}\) is the non-metricity conjugate and the corresponding tensor is written as \[P^{\alpha}_{\ \ \mu\nu}=\frac{1}{4}\left[-Q^{\alpha}_{\ \ \mu\nu}+2Q^{\alpha}_{( \mu\nu)}-Q^{\alpha}g_{\mu\nu}-\tilde{Q}^{\alpha}g_{\mu\nu}-\delta^{\alpha}_{( \mu}Q_{\nu)}\right]. \tag{6}\] The field equation of \(f(Q)\) gravity is obtained if we vary (1) with respect to \(g_{\mu\nu}\) and it takes the following form: \[-\frac{2}{\sqrt{-g}}\nabla_{a}(\sqrt{-g}f_{Q}P^{\alpha}_{\mu\nu})+f_{Q}(P^{ \alpha\beta}_{\nu}Q_{\mu\alpha\beta}-2P^{\alpha\beta}_{\ \ \mu}Q_{\alpha\beta\nu})+\frac{1}{2}g_{\mu\nu}f=\kappa T_{\mu\nu} \tag{7}\] where \(f_{Q}=\frac{\partial f}{\partial Q}\) and the energy-momentum tensor \(T_{\mu\nu}\) is given by \[T_{\mu\nu} = -\frac{2}{\sqrt{-g}}\frac{\delta\sqrt{-g}\mathcal{L}_{m}}{\delta \sqrt{g_{\mu\nu}}}, \tag{8}\] Now, by altering the action in relation to the affine connection, the following equation can be obtained: \[\nabla_{\mu}\nabla_{\nu}(\sqrt{-g}f_{Q}P^{\mu\nu}_{\ \ \ \alpha})=0. \tag{9}\] Within the framework of \(f(Q)\) gravity, the field equations guarantee the conservation of the energy-momentum tensor, and given the choice of \(f(Q)=Q\), the Einstein equations are retrieved. 
## III Modified field equation in \(f(Q)\) gravity We have considered the following line element as: \[ds^{2}-=e^{\nu}dt^{2}-e^{\lambda}dr^{2}-r^{2}(d\theta^{2}+\sin^{2}\theta d \phi^{2}), \tag{10}\] where, \(\lambda\) and \(\nu\) are functions of 'r' and \(0\leq r<\infty\). The metric co-efficients \(\lambda\) and \(\nu\), only depend on \(r\). If both \(\nu(r)\) and \(\lambda(r)\) tend to \(0\) as \(r\rightarrow\infty\), the spacetime will be asymptotically flat. In the present article we have described a model of the hybrid star which is made up of normal baryonic matter having density \(\rho\) along with the strange quark matter having density \(\rho_{q}\) and for the sake of simplicity we have not considered the interaction between these two matters. For the presence of these two types of matter, the energy-momentum tensor is changed as follows: \[T^{0}_{0}=\rho^{\rm eff}=\rho+\rho_{q}, \tag{11}\] \[T^{1}_{1}=-p^{\rm eff}_{r}=-(p_{r}+p_{q}),\] (12) \[\text{and}\quad T^{2}_{2}=T^{3}_{3}=-p^{\rm eff}_{t}=-(p_{t}+p_{q}). \tag{13}\] In the present scenario, \(\rho\), \(p_{r}\), and \(p_{t}\) refer to the matter density, radial pressure, and transverse pressure generated by traditional baryonic matter, while \(\rho_{q}\) and \(p_{q}\) refer to the matter density and pressure developed by quark matter, respectively. Bhar [59] also used the same technique to model a compact star in GR. Abbas and Nazar [60] recently used the same approach to model a hybrid star in minimally coupled \(f(R)\) gravity. In our present article, our goal is to study the effect of the coupling parameter of \(f(Q)\) gravity on the model of a hybrid star. A crucial factor in the composition of ultra-dense strange quark particles is the incorporation of SQM in the fluid distribution. It has been hypothesized that the neutrons' phase change into bosons, hyperons, and SQM may occur at the core of the neutron star due to the immense pressure and density present there. According to Cameron's analysis [61], the hyperon must be produced inside the neutron star. Some nucleons may be converted into hyperons, which are more supportive energetically, as a result of extremely massive density and weak interaction. Quark matter, however, may also be present in the neutron star's interior. Due to the massive density and high central momentum conversion in the neutron star's core, the quarks become free of interaction. According to a review of the literature, the (u) and (d) quarks are currently undergoing strange matter transformations, and the entire quark matter also undergoes strange matter transformations [62; 63; 64; 65; 79]. As a result, the neutron star as a whole gets converted into a strange quark object [67]. Some other work related to the hybrid star can be found in [72; 73; 74]. 
We have the following field equations for a hybrid star in \(f(Q)\) gravity using all the aforementioned expressions: \[\kappa(\rho+\rho_{q}) = \frac{e^{-\lambda}}{2r^{2}}\Big{[}2rf_{QQ}Q^{\prime}(e^{\lambda}- 1)+f_{Q}\Big{(}(e^{\lambda}-1)(2+r\nu^{\prime})+(1+e^{\lambda})r\lambda^{ \prime}\Big{)}+fr^{2}e^{\lambda}\Big{]}, \tag{14}\] \[\kappa(p_{r}+p_{q}) = -\frac{e^{-\lambda}}{2r^{2}}\Big{[}2rf_{QQ}Q^{\prime}(e^{\lambda }-1)+f_{Q}\Big{(}(e^{\lambda}-1)(2+r\lambda^{\prime}+r\nu^{\prime})-2r\nu^{ \prime}\Big{)}+fr^{2}e^{\lambda}\Big{]},\] (15) \[\kappa(p_{t}+p_{q}) = -\frac{e^{-\lambda}}{4r}\Big{[}-2rf_{QQ}Q^{\prime}\nu^{\prime}+f _{Q}\Big{(}2\nu^{\prime}(e^{\lambda}-2)-r\nu^{\prime 2}+\lambda^{\prime}(2e^{ \lambda}+r\nu^{\prime})-2r\nu^{\prime\prime}\Big{)}+2fre^{\lambda}\Big{]}. \tag{16}\] where \(\kappa=8\pi\) and \((^{\prime})\) represents the derivative with respect to the radial co-ordinate '\(r\)'. Now, let us choose a linear function for \(f(Q)\) gravity, which is expressed as: \[f(Q)=mQ+n, \tag{17}\] where '\(m\)' and '\(n\)' are characteristics without dimensions. The expression of \(Q\) is described by [75], \[Q=\frac{1}{r}(\nu^{\prime}+\lambda^{\prime})(e^{-\lambda}-1). \tag{18}\] ## IV Model of hybrid star in \(f(Q)\) gravity To obtain the model of the hybrid star, let us use the well-known Tolman-Kuchowicz _ansatz_[76; 77] given by, \[\nu(r) = Br^{2}+2\ln D, \tag{19}\] \[\lambda(r) = \ln(1+ar^{2}+br^{4}), \tag{20}\] where \(D\) is a free of dimensions parameter and \(a\), \(B\), and \(b\) are parameter values that are constant having units of km\({}^{-2}\), km\({}^{-2}\), and km\({}^{-4}\), respectively. The metric potentials chosen in this paper are well-motivated since they provide a model which does not suffer from any kind of singularity. To close the system we have to choose one extra constraint, i.e., a well-motivated relation between the radial pressure \(p_{r}\) and density \(\rho\) of normal baryonic matter is needed. There are several choices to describe a relation between \(p_{r}\) and \(\rho\). For our present model, we have chosen a linear equation of state given by \[p_{r} = \alpha\rho-\beta, \tag{21}\] where \(0<\alpha<1\) with \(\alpha\neq 1/3.\) and \(0<\beta\). Many authors have used this EoS to model the compact star which can be found in Refs. [60; 68; 69; 70]. Our work is well motivated by these articles. Let's further assume that the MIT bag model equation of state provides the pressure-matter density relation for quark matter as follows: [78; 79], \[p_{q} = \frac{1}{3}(\rho_{q}-4B_{g}), \tag{22}\] where \(B_{g}\) is the bag constant of units MeV/fm\({}^{3}\)[80]. 
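For readers who want to experiment with these closure relations numerically, the short sketch below simply restates Eqs. (18)-(22) as Python functions; the function names are ours, and any parameter values plugged in would be placeholders rather than the fitted values obtained later from the boundary conditions.

```python
import numpy as np

# Tolman-Kuchowicz metric potentials, Eqs. (19)-(20):
#   nu(r) = B r^2 + 2 ln D   and   lambda(r) = ln(1 + a r^2 + b r^4).
def e_lambda(r, a, b):
    return 1.0 + a * r**2 + b * r**4

def e_nu(r, B, D):
    return D**2 * np.exp(B * r**2)

# Nonmetricity scalar for this metric, Eq. (18): Q = (nu' + lambda')(e^{-lambda} - 1)/r.
def Q_scalar(r, a, b, B):
    nu_prime = 2.0 * B * r
    lam_prime = (2.0 * a * r + 4.0 * b * r**3) / e_lambda(r, a, b)
    return (nu_prime + lam_prime) * (1.0 / e_lambda(r, a, b) - 1.0) / r

# Equation-of-state closures: linear EoS for the baryonic fluid, Eq. (21),
# and the MIT bag model for the strange quark matter, Eq. (22).
def p_r_baryonic(rho, alpha, beta):
    return alpha * rho - beta

def p_quark(rho_q, B_g):
    return (rho_q - 4.0 * B_g) / 3.0
```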
Now solving the equations (14)-(16) with the help of (17)-(22), we obtain: \[\rho = \frac{1}{4\pi(3\alpha-1)(ar^{2}+br^{4}+1)^{2}}\Big{[}a^{2}(r^{4} (12\pi\beta+16\pi B_{g}-n)-2mr^{2})+a(m(-4br^{4}+3Br^{2}-3)+3B(bmr^{4}+m) \tag{23}\] \[-2r^{2}(br^{4}+1)(n-4\pi(3\beta+4B_{g})))-br^{2}(br^{4}+2)(r^{2}(n -16\pi B_{g})+2m)+12\pi\beta(br^{4}+1)^{2}+16\pi B_{g}-n\Big{]},\] \[p_{r} = \frac{1}{4\pi(3\alpha-1)(ar^{2}+br^{4}+1)^{2}}\Big{[}a^{2}(r^{4} (4\pi(\beta+4\alpha B_{g})-\alpha n)-2\alpha mr^{2})+ \tag{24}\] \[\alpha a(m(-4br^{4}+3Br^{2}-3)-2r^{2}(br^{4}+1)(n-16\pi B_{g}))+8 \pi a\beta r^{2}(br^{4}+1)+\] \[\alpha(3B(bmr^{4}+m)-br^{2}(br^{4}+2)(r^{2}(n-16\pi B_{g})+2m)+1 6\pi B_{g}-n+4\pi\beta(br^{4}+1)^{2}\Big{]},\] \[p_{t} = \frac{1}{8\pi(3\alpha-1)(ar^{2}+br^{4}+1)^{2}}\Big{[}-a^{2}r^{2}(2r^{ 2}(\alpha n-4\pi(\beta+4\alpha B_{g}))+(\alpha+1)m)+a(-2\alpha(2r^{2}(br^{4}+1)(n -16\pi B_{g}) \tag{25}\] \[+m(br^{4}+3))-2bmr^{4}+16\pi\beta r^{2}(br^{4}+1)+(3\alpha-1)B^{2 }mr^{4}+(3\alpha+1)Bmr^{2})\] \[-b^{2}r^{6}(2r^{2}(\alpha n-4\pi(\beta+4\alpha B_{g}))+(\alpha+1)m )+br^{2}(Bmr^{2}((3\alpha-1)Br^{2}+2)\] \[+4r^{2}(4\pi(\beta+4\alpha B_{g})-\alpha n)-11\alpha m+m)+8\pi \beta+3\alpha B^{2}mr^{2}-B^{2}mr^{2}+6\alpha Bm+32\pi\alpha B_{g}-2\alpha n \Big{]},\] and the anisotropic factor \(\Delta\) can be gained as, \[\Delta=p_{t}-p_{r}=\frac{mr^{2}(a^{2}+ar^{2}(2b+B^{2})-aB+b(r^{4}(b+B^{2})-2 Br^{2}-1)+B^{2})}{8\pi(ar^{2}+br^{4}+1)^{2}}. \tag{26}\] Consequently, the components related to the SQM are as follows: \[\rho_{q}=\frac{1}{16\pi(3\alpha-1)(ar^{2}+br^{4}+1)^{2}}\Big{[}a^{ 2}r^{2}(r^{2}(3(\alpha+1)n-16\pi(3\beta+4B_{g}))+6(\alpha+1)m)+2a(m(9\alpha+6 (\alpha+1)br^{4}\] \[-6Br^{2}+3)+r^{2}(br^{4}+1)(3(\alpha+1)n-16\pi(3\beta+4B_{g})))-64 \pi b^{2}B_{g}r^{8}+6\alpha b^{2}mr^{6}+6b^{2}mr^{6}+3\alpha b^{2}nr^{8}+3b^{2 }nr^{8}\] \[-12B(bmr^{4}+m)-128\pi bB_{g}r^{4}+30\alpha bmr^{2}+6bmr^{2}+6 \alpha bnr^{4}+6bnr^{4}-48\pi\beta(br^{4}+1)^{2}-64\pi B_{g}+3\alpha n+3n\Big{]}, \tag{27}\] \[p_{q}=\frac{1}{16\pi(3\alpha-1)(ar^{2}+br^{4}+1)^{2}}\Big{[}(a^{ 2}r^{2}(r^{2}((\alpha+1)n-16\pi(\beta+4\alpha B_{g}))+2(\alpha+1)m)+2a(m(3 \alpha+2(\alpha+1)br^{4}\] \[-2Br^{2}+1)+r^{2}(br^{4}+1)((\alpha+1)n-16\pi(\beta+4\alpha B_{g })))-64\pi\alpha b^{2}B_{g}r^{8}+2\alpha b^{2}mr^{6}+2b^{2}mr^{6}+\alpha b^{2} nr^{8}+b^{2}nr^{8}-4B(bmr^{4}+m)\] \[-128\pi\alpha bB_{g}r^{4}+10\alpha bmr^{2}+2bmr^{2}+2\alpha bnr^{ 4}+2bnr^{4}-16\pi\beta(br^{4}+1)^{2}-64\pi\alpha B_{g}+\alpha n+n)\Big{]}. \tag{28}\] Our next objective is to use various physical acceptance tests to examine the current model's reliability. Those will be discussed in the coming sections. ## V Exterior spacetime and boundary conditions The material content that threads the star's interior must be confined between the centre and the boundary. The so-called junction conditions at the surface of the structure must be examined in order to ensure the restriction of this matter distribution. This process is carried out in GR by using the well-known Israel-Darmois [86; 87] matching requirements. The vacuum Schwarzschild solution [88] is used to characterize external spacetime in this case as we are working with the uncharged fluid sphere and it is given by the following line element: \[ds^{2} = (1-\frac{2M}{r})dt^{2}-(1-\frac{2M}{r})^{-1}dr^{2}-r^{2}(d\theta^{ 2}+\sin^{2}\theta d\phi^{2}), \tag{29}\] where '\(M\)' denotes the total mass within the boundary of the compact star. 
The continuity of the first and second fundamental forms at the boundary gives the following relations: \[1-\frac{2M}{R} = e^{BR^{2}+2\ln D}, \tag{30}\] \[(1-\frac{2M}{R})^{-1} = 1+aR^{2}+bR^{4},\] (31) \[\frac{M}{R^{2}} = BRe^{BR^{2}+2\ln D}, \tag{32}\] and \[p_{r}(r=R)=0. \tag{33}\] Solving the aforementioned equations (30)-(33), we get the following relations: \[B = \frac{M}{R^{3}}(1-2\frac{M}{R})^{-1}, \tag{34}\] \[D = e^{-BR^{2}/2}\sqrt{(1-2\frac{M}{R})},\] (35) \[a = \frac{1}{R^{2}}((1-2\frac{M}{R})^{-1}-1-bR^{4}),\] (36) \[\beta = \frac{\alpha}{4\pi(1+aR^{2}+bR^{4})^{2}}\Big{(}n-16B_{g}\pi+bR^{2}(2m+(n-16B_{g}\pi)R^{2})(2+bR^{4})-3B(m+bmR^{4})\] (37) \[+a^{2}(2mR^{2}+(n-16B_{g}\pi)R^{4})+a(2(n-16B_{g}\pi)R^{2}(1+bR^{4})+m(3-3BR^{2}+4bR^{4}))\Big{)}\] ## VI Mass, surface redshift and compactness The mass function \({\bf m}(r)\) is defined as \[{\bf m}(r) = \int_{0}^{r}4\pi\rho(x)x^{2}dx \tag{38}\] \[= \frac{1}{6(3\alpha-1)}\Big{[}\frac{9\sqrt{2}m(B(\sqrt{a^{2}-4b}-a)+b)\tan^{-1}(\frac{\sqrt{2}\sqrt{b}r}{\sqrt{a-\sqrt{a^{2}-4b}}})}{\sqrt{b}\sqrt{a-\sqrt{a^{2}-4b}}\sqrt{a^{2}-4b}}+\frac{3mr}{ar^{2}+br^{4}+1}\] \[-\frac{9\sqrt{2}m(b-B(\sqrt{a^{2}-4b}+a))\tan^{-1}(\frac{\sqrt{2}\sqrt{b}r}{\sqrt{\sqrt{a^{2}-4b}+a}})}{\sqrt{b}\sqrt{\sqrt{a^{2}-4b}+a}\sqrt{a^{2}-4b}}-2r^{3}(n-4\pi(3\beta+4B_{g}))-12mr\Big{]}\] Fig. 1 displays the mass function profile. It is evident from the figure that the mass function is free of singularities and increases monotonically, vanishing at the centre. The surface redshift \(z_{s}\) is a crucial observable parameter that links the mass and the radius of a compact star, and it is defined by the following formula: \[z_{s}=(1-2{\bf m}(r)/r)^{-1/2}-1. \tag{39}\] The surface redshift \(z_{s}\) in Fig. 1 exhibits a monotonically increasing behavior towards the boundary, reaching its maximum value at the surface of the object. The values obtained for \(z_{s}\) in this paper remain below the maximum allowed value; note that Ivanov's analysis [90] shows that, in the presence of anisotropic fluids, \(z_{s}\) can exceed the Buchdahl bound [89]. For our current model, the compactness factor is calculated as \(u(r)={\bf m}(r)/r\). The compactness factor is crucial for categorizing compact objects as (i) regular stars (\(u\sim 10^{-5}\)), (ii) white dwarfs (\(u\sim 10^{-3}\)), (iii) neutron stars (\(0.1<u<0.25\)), (iv) ultra-compact stars (\(0.25<u<0.5\)), and (v) black holes (\(u=0.5\)). Fig. 1 depicts the compactness profile for our current model, which is a monotonically increasing function of '\(r\)'. \begin{table} \begin{tabular}{c c c c c c c c} \hline Star & Observed mass & Observed radius & Estimated & Estimated & \(a\) & \(B\) & \(D\) \\ & \(M_{\odot}\) & km. & mass (\(M_{\odot}\)) & radius (km.)
& \(km^{-2}\) & \(km^{-2}\) & \\ \hline Her X-1 [99] & \(0.85\pm 0.15\) & \(8.1\pm 0.41\) & 0.85 & 8.5 & 0.00576265 & 0.00289578 & 0.756246 \\ EXO 1785-248 [100] & \(1.3\pm 0.2\) & \(8.849\pm 0.4\) & 1.4 & 8.85 & 0.0111404 & 0.00558588 & 0.586810 \\ Vela X-1 [96] & \(1.77\pm 0.08\) & \(9.56\pm 0.08\) & 1.77 & 9.5 & 0.0134864 & 0.00676124 & 0.494630 \\ PSR J1614-2230 [94] & \(1.97\pm 0.04\) & \(9.69\pm 0.2\) & 1.97 & 9.7 & 0.0158465 & 0.00794205 & 0.435751 \\ LMC X-4 [96] & \(1.04\pm 0.09\) & \(8.301\pm 0.2\) & 1.04 & 8.3 & 0.00848444 & 0.00425600 & 0.685691 \\ SMC X-4 [96] & \(1.29\pm 0.05\) & \(8.831\pm 0.09\) & 1.29 & 8.8 & 0.0098081 & 0.00491954 & 0.622699 \\ PSR J1903+327 [101] & \(1.667\pm 0.021\) & \(9.438\pm 0.03\) & 1.67 & 9.4 & 0.012428 & 0.00623168 & 0.523832 \\ 4U 1538-52 [96] & \(0.87\pm 0.07\) & \(7.866\pm 0.21\) & 0.87 & 7.8 & 0.00803612 & 0.00403023 & 0.724610 \\ 4U 1820-30 [95] & \(1.58\pm 0.06\) & \(9.316\pm 0.086\) & 1.58 & 9.3 & 0.0115823 & 0.00580843 & 0.549392 \\ Cen X-3 [96] & \(1.49\pm 0.08\) & \(9.178\pm 0.13\) & 1.49 & 9.2 & 0.0107751 & 0.00540449 & 0.574909 \\ \hline \end{tabular} \end{table} Table 1: The corresponding numerical values of \(a\), \(B\) and \(D\) for some discriminate stellar spheres by undertaking \(b=0.04\times 10^{-5}\) km\({}^{-4}\). ## VII Mass Radius Relationship In this section, we are interested to find the maximum allowable mass for different values of \(m\). As \(m\) increases, the predicted masses cover a wider range of observed values which can be shown in fig. 2. An increase in \(m\) is accompanied by a decrease in mass and radii, which is clear from the figure. From literature, we have chosen four different compact stars GW 190814 with mass 2.50-2.67 \(M_{\odot}\), PSR J0952-0607 with mass \((2.35\pm 0.17)M_{\odot}\), PSR J0740+6620 with mass \((2.08\pm 0.07)M_{\odot}\) and 4U 1608-52 with mass \((1.74\pm 0.14)M_{\odot}\). It is possible to generate stellar structures with masses closer to the above compact star for different values of \(m\) which has been presented in Table 2. ## VIII Measurements of Mass and Bag Constant with the Help of Contour Plots From Fig. 3 to Fig. 6, we analyzed the variation of mass and the bag constant with the help of contour plots. * The equi-mass contours are shown in the \(m-\beta\) plane in Fig. 3 by keeping \(\alpha,\,n,\,r\) and \(B_{g}\) fixed. The figure indicates that for a fixed value of \(\beta\), the value of mass increases for an increasing value of \(m\). In contrast, with a constant \(m\), the value of mass falls as \(\beta\) grows. * The equi-mass contours are displayed in the \(\alpha-m\) plane in the left panel of Fig. 4 by retaining the variables \begin{table} \begin{tabular}{c c c c} \hline \(m\) & Maximum mass \(M(M_{\odot})\) & Corresponding radius (in km.) & Matched with the mass of the compact star \\ \hline 0.2 & 2.62 & 9.18 & GW 190814 [97] \\ 0.3 & 2.4 & 8.6 & PSR J0952-0607 [93] \\ 0.4 & 2.09 & 7.3 & PSR J0740+6620 [98] \\ 0.5 & 1.8 & 6.2 & 4U 1608-52 [91] \\ \hline \end{tabular} \end{table} Table 2: Maximum mass and the corresponding radius for different values of \(m\) Figure 1: The graphical analysis of \(\mathbf{m}(r)\), \(z_{s}\), and \(u(r)\) against ‘\(r\)’ \(\beta\), \(n\), \(r\) and \(B_{g}\) fixed. According to the picture, with a constant value of \(\alpha\), the value of mass rises as \(m\) increases. With a fixed amount of \(m\), however, the value of mass grows as \(\alpha\) increases. In the right panel of Fig. 
4, we have drawn the equi-mass contours in \(r-\alpha\) plane taking \(\beta\), \(m\), \(n\) and \(B_{g}\) fixed. It can be seen that for a fixed value of \(r\), the value of mass rises as \(\alpha\) increases. Also, for a fixed value of \(\alpha\), the value of mass increases as \(r\) increases. * In the left panel of Fig. 5, the equi-mass contours are displayed in the \(B_{g}-m\) plane by keeping the variables \(\beta\), \(n\), \(r\) and \(\alpha\) fixed. According to the figure, with a constant value of \(B_{g}\), the value of mass grows as \(m\) increases. However, with a given quantity of \(m\), the value of mass decreases as \(B_{g}\) increases. We can see that, the mass takes a higher value for the lower value of the bag constant \(B_{g}\). In the right panel of Fig. 5, the equi-mass contours are shown in the \(B_{g}-\alpha\) plane by keeping the variables \(\beta\), \(n\), \(r\) and \(m\) fixed. One can see that, with a constant value of \(B_{g}\), the value of mass grows as \(\alpha\) increases. However, given a constant amount of \(\alpha\), the value of mass falls as \(B_{g}\) grows. * The left panel of Fig. 6 we show the equi-\(B_{g}\) contours in the \(m-\alpha\) plane by keeping the variables \(\beta\), \(n\), \(r\) and \(m\) fixed. This figure implies that with a constant value of \(m\), the value of the bag constant increases as \(\alpha\) increases. Similarly, for a fixed value of \(\alpha\), the value of \(B_{g}\) increases as \(m\) grows. On the other hand, the right panel of Fig. 6 shows the equi-\(B_{g}\) contour in the \(R-m\) plane. Keeping \(R\) fixed, the value of bag constant \(B_{g}\) increases as \(m\) grows, and by keeping \(m\) fixed, the value of \(B_{g}\) decreases as \(R\) increases. Interestingly, one can note that for our chosen range of \(m\) and \(\alpha\) in the left figure and for a chosen range of \(R\) and \(m\) in the right figure we have achieved very interesting and physically reasonable values for the bag constant \(B_{g}\) which is very much consistent with the CERN data about quark-gluon plasma (QGP) as well as compatible with the RHIC preliminary results [102; 103]. Witten's conjecture successfully explains the non-interacting, mass-less quarks with \(B_{g}\) values between 57 and 94 \(MeV/fm^{3}\), which has already been demonstrated by Farhi and Jaffe [104]. ## IX Physical analysis We have discussed the analysis of the hybrid star model for a specific range of \(m\) by fixing \(n\) in this section. To check the behavior of the physical parameters and ensure the viability of the solution, we have chosen \(m\) lies between 10 to 15 for our current article. The acquired solutions for the hybrid star model need to be put to the test under a number of different physical conditions, each of which will be addressed separately in this section. To create all of the curves of different model parameters, we utilized the stellar structures whose mass and radius are shown in Table. 1. Figure 2: Mass-Radius relationship are shown Figure 5: (Left) equi-mass in \(B_{g}-m\) and (right) equi-mass in \(B_{g}-\alpha\) Figure 3: Equi-mass in \(m-\beta\) plane ### Metric Potentials Both metric potentials are singularity-free within the boundary of the star. Additionally, \(e^{\nu(0)}=D^{2}\), a non-zero constant, and \(e^{-\lambda(0)}=1\) for our current stellar model. The derivative of the metric coefficients results in the expressions \((e^{\lambda})^{\prime}=2ar+4br^{3}\), \((e^{\nu})^{\prime}=2BD^{2}re^{Br^{2}}\). 
At the core of the star, the derivative of the metric potentials equals zero. Additionally, they are continuous and monotonic increasing inside the star as shown in Fig. 7. At the boundary, the metric components of the external Schwarzschild line element are perfectly aligned to the interior metric potentials, which will be addressed later. ### Nature of pressure, density and anisotropic factor The behavior of the three most important significant features of the model --matter density, radial pressure, and tangential pressure--is examined and analyzed in this subsection. We additionally examine the function that the anisotropy factor Delta plays inside the stellar sphere. It is well established that any compact object describing the interiors of stars should not have any physical or mathematical singularities in its main physical characteristics. The maximum values of matter density and pressure should also be associated at the centre of the configuration and should be monotonically decreasing functions of the radial coordinate towards its surface. These novel characteristics are required to explain some real objects such as white dwarfs, neutron stars, and even quark stars. In addition, there are additional components that are as important to the study of compact structures and that offer a more accurate picture of the behavior of celestial bodies. Anisotropies, for instance, might be present in the material composition of the fluid sphere. In this context, anisotropy refers to the fact that the pressure in the radial direction and the pressure in the angular directions are not equal, or \(p_{r}\neq p_{t}\). Therefore, \(\Delta=p_{t}-p_{r}\) is used to define the anisotropy factor. Figure 7: Metric coefficients All thermodynamic observables \(\rho\), \(p_{r}\) and \(p_{t}\) along with the anisotropy factor \(\Delta\) are depicted in Fig. 8. For a broad range of \(m\), we may observe the behavior of matter density, radial pressure, and tangential pressure. It is important to see that these physical quantities monotonically decrease with increasing radial coordinates, with the highest values at the centre of the configuration. This graphic also depicts the behavior of the anisotropy factor \(\Delta\). It behaves positively throughout the star, disappearing in the centre and increasing function of 'r'. The central values of density and pressure can be obtained as, \[\rho(r=0) = \frac{-3am+12\pi\beta+3Bm+16\pi B_{g}-n}{4\pi(3\alpha-1)}, \tag{40}\] \[p_{t}(r=0) = p_{r}(r=0)=\frac{3am\alpha-3Bm\alpha+n\alpha-16B_{g}\pi\alpha-4 \pi\beta}{4\pi-12\pi\alpha}. \tag{41}\] The following two formulas will be utilized to determine the numerical values of the core density and central pressure for our current model, and they are shown in tabular form in our study. Next, we are interested to find out the nature of the density and pressure gradients. Due to the complexity of the expressions of density and pressure gradients, we have taken the help of a graphical representation which has been shown in Fig. 9. In the interior, all gradients had negative values, as depicted in the diagram. ### Energy conditions All four of the energy conditions--the null energy condition (NEC), the weak energy condition (WEC), the strong energy condition (SEC), and the dominant energy condition (DEC)--are claimed to be met for a physically conceivable model if the parameters of the model, such as \(\rho\), \(p_{r}\), and \(p_{t}\) satisfy the aforementioned expressions. 
* NEC: \(\rho+p_{r}\geq 0\), \(\rho+p_{t}\geq 0\); * WEC: \(\rho+p_{r}\geq 0\), \(\rho+p_{t}\geq 0\), \(\rho\geq 0\); Figure 8: Matter density, radial pressure, transverse pressure, and anisotropic factor are shown against \(r\) * SEC: \(\rho+p_{r}\geq 0\), \(\rho+p_{t}\geq 0\), \(\rho+p_{r}+2p_{t}\geq 0\); * DEC: \(\rho-p_{r}\geq 0\), \(\rho-p_{t}\geq 0\), \(\rho\geq 0\) It plays an essential role in comprehending the nature of matter as well [85]. In the context of GR, the wormhole model was considered as a way to explain how the energy criteria would be violated if exotic matter is present within the object. If these conditions are satisfied, it is shown that ordinary stuff exists. For \(m\in[0.2,\,0.5]\), we graphically verified the validity of these conditions in Fig. 10, and we can observe that the previously stated energy requirements are all satisfied by the suggested hybrid star model in \(f(Q)\) gravity. ### Equation of state Another crucial step is finding the equation of state, i.e., a link between pressure and density. The radial pressure and matter density are assumed to be linearly related in the model by solving the field equations; however, the relationship between the transverse pressure and matter density is still uncertain. The equation of state parameters, usually denoted by \(\omega_{r}\) and \(\omega_{t}\), are two dimensionless quantities that can be used to characterize the relationship between matter density and pressure. For our current model, the equations of state parameters \(\omega_{r}\) and \(\omega_{t}\) are defined \begin{table} \begin{tabular}{l c c c} \hline m & \(\rho_{c}\) & \(\rho_{s}\) & \(p_{c}\) & \(\beta\) \\ & \(gm./cm.^{3}\) & \(gm./cm.^{3}\) & \(dyne/cm.^{2}\) \\ \hline 0.2 & \(1.0016\times 10^{15}\) & \(2.1428\times 10^{14}\) & \(2.12577\times 10^{35}\) & 0.0000476416 \\ 0.3 & \(1.44825\times 10^{15}\) & \(2.67264\times 10^{14}\) & \(3.18866\times 10^{35}\) & 0.0000594217 \\ 0.4 & \(1.89489\times 10^{15}\) & \(3.20248\times 10^{14}\) & \(4.25154\times 10^{35}\) & 0.0000712018 \\ 0.5 & \(2.34154\times 10^{15}\) & \(3.73232\times 10^{14}\) & \(5.31443\times 10^{35}\) & 0.0000829819 \\ \hline \end{tabular} \end{table} Table 3: The numerical values of central density, surface density, central pressure, \(\beta\), for the compact star Her X-1 for different values of ‘m’ by taking \(b=0.04\times 10^{-5}\), n=0.005, \(\alpha=0.3\) Figure 9: The density and pressure gradients are shown against ‘r’ as follows: \[\omega_{r}=\frac{p_{r}}{\rho},\omega_{t}=\frac{p_{t}}{\rho}. \tag{42}\] For a particular range of \(m\), we have drawn the profiles of both \(\omega_{r}\) and \(\omega_{t}\) in Fig. 11. The results clearly show that these two traits were most valuable near the star's center and decreased toward the edge. Furthermore, they fall inside the range of radiation era, i.e., \(0<\omega_{r},\omega_{t}<1\)[81]. ## X Stability analysis of the present model In this part, we will examine the stability of our current model using (i) the causality condition, (ii) the adiabatic index, and (iii) the TOV equation which will be explained separately. Figure 10: All the energy conditions are shown against ‘r’ ### Velocity of sound and cracking method It is important to verify the causality requirement, which states that the speed of sound inside the compact object must be subluminal, in order to generate a physically accurate model. The following formula can be used to calculate a stellar fluid's sound speed. 
\[V_{r}^{2}=\frac{dp_{r}}{d\rho},\qquad V_{t}^{2}=\frac{dp_{t}}{d\rho}. \tag{43}\] We have chosen a linear equation of state between the radial pressure \(p_{r}\) and the matter density \(\rho\) for our current model. As a result, the speed of sound in the radial direction is simply \(\alpha\) and does not vary with \(m\). The tangential component, however, depends on the behavior of the anisotropy factor. Fig. 12 illustrates the variation of the squares of the radial and transverse velocities, and it can be seen that the tangential velocity increases outward and remains less than 1 for all values of \(m\) throughout the star. As a result, we may assert that our model meets the causality constraint. In a series of works [105; 106; 107], Herrera and colleagues examined in depth the idea of cracking for stellar structures by taking anisotropic matter distributions into account. The idea of cracking (or overturning) was first suggested in 1992. This method is useful for identifying potentially unstable anisotropic matter configurations. They showed that potential stability is expected in the region of the stellar interior where the radial velocity of sound is greater than the transverse velocity of sound. We have generated the profile of \(V_{r}^{2}-V_{t}^{2}\) in Fig. 12 to confirm this criterion, and the profile guarantees the potential stability of the current model. ### Adiabatic Index In this subsection, we analyze the ratio of the two specific heats, denoted by \(\Gamma\), in order to examine the region of stability of the hybrid star model. Chan et al. [82] proposed the concept of the adiabatic index for an isotropic fluid sphere, although Chandrasekhar [83] was among the first to use the adiabatic index to study the zone of stability of spherical stars. The expression for the adiabatic index changes as follows in the presence of pressure anisotropy: \[\Gamma_{r}=\frac{\rho+p_{r}}{p_{r}}\frac{dp_{r}}{d\rho}, \tag{44}\] \[\Gamma_{t}=\frac{\rho+p_{t}}{p_{t}}\frac{dp_{t}}{d\rho}. \tag{45}\] According to the study of Heintzmann and Hillebrandt [84], the stability conditions are satisfied when the above two expressions take values greater than 4/3. Since it is impossible to verify this requirement analytically due to the complexity of the expressions, we have drawn the profiles of \(\Gamma_{r}\) and \(\Gamma_{t}\) for various values of \(m\) in Fig. 13. The figure shows that both \(\Gamma_{r}\) and \(\Gamma_{t}\) take values greater than 4/3 across the fluid sphere, which ensures that the stability criterion is fully met. ### The equilibrium under different forces This subsection examines the equilibrium of the model under the various forces acting on the system. The four forces that constitute the equilibrium equation are the hydrostatic force (\(F_{h}\)), the gravitational force (\(F_{g}\)), the anisotropic force (\(F_{a}\)), and lastly the force associated with quark matter (\(F_{q}\)).
Additionally, the explicit forms of these forces are as follows: \[F_{g} = -\frac{\nu^{\prime}}{2}(\rho+p_{r}),\] \[F_{h} = -\frac{dp_{r}}{dr},\] \[F_{a} = \frac{2}{r}(p_{t}-p_{r})=\frac{2}{r}\Delta,\] \[F_{q} = -\frac{\nu^{\prime}}{2}(\rho_{q}+p_{q})-\frac{d}{dr}(p_{q}).\] The Tolman-Oppenheimer-Volkoff (TOV) equation for our present model can be written as \[-\frac{\nu^{\prime}}{2}(\rho+p_{r})-\frac{dp_{r}}{dr}+\frac{2}{r}(p_{t}-p_{r})-\frac{\nu^{\prime}}{2}(\rho_{q}+p_{q})-\frac{d}{dr}(p_{q})=0, \tag{46}\] which can be written compactly as \[F_{g}+F_{h}+F_{a}+F_{q}\ =\ 0. \tag{47}\] Fig. 14 shows the profiles of the various forces acting on our system for different values of the coupling parameter \(m\). From the figure, we can see that the combined effects of all four forces keep our model stable. Figure 14: The different forces acting on the system are shown against \(r\) ## XI Discussion In the present work, we propose a model of a hybrid star in the realm of \(f(Q)\) modified gravity. We have chosen the Tolman-Kuchowicz metric potentials to solve the field equations. The obtained model has been matched successfully to the exterior spacetime. The most significant findings include the following: Our results show that the energy density \(\rho\) and the pressures \(p_{r}\), \(p_{t}\) of the investigated compact star approach their greatest values near the core, while they attain their minimum at the surface. It is crucial to note that the radial pressure \(p_{r}\) vanishes at the surface of the star. The central density \(\rho_{c}\) attains a very large value at the core of the star, which makes the star highly compact. The high compactness offers a proper justification for the validation of the \(f(Q)\) model that we propose. The numerical values of the central density, surface density, and central pressure have been calculated for various values of \(m\), and it is clear that as \(m\) rises, all three variables take on increasing values. At the same time, \(\beta\) increases as \(m\) grows. The relevance of the surface redshift is increased by the existence of anisotropies in the stellar content, which improves the stability and balancing processes. The contribution of the anisotropy to the equilibrium mechanism, however, depends on its sign, i.e., on whether \(p_{t}>p_{r}\) or \(p_{t}<p_{r}\). In the first scenario, the system experiences a repulsive force that reduces the gravitational gradient, whereas in the second scenario, the force conveyed by the anisotropy adds to the gravitational force compressing the star. If the pressure of the nuclear forces is insufficient to push against gravity, the structure will keep collapsing down to its Schwarzschild radius, and the object then forms a black hole with a variety of peculiar characteristics. This indicates that the equilibrium and stability of the configuration are affected by the presence of an attractive force caused by anisotropies. We developed a graphical diagram to illustrate the anisotropic behavior. The anisotropic force shown in Fig. 8 is repulsive in nature for our present model. Taking into account the hybrid star, we also found that a number of energy conditions are satisfied, which further shows that there is no exotic matter present and that the underlying matter distribution is entirely non-exotic. It should be noted that stability analysis is crucial for modeling any compact object.
The causality requirement is met by the current model. The stability is further investigated using the cracking method: according to the criterion proposed by Herrera, our proposed models are potentially stable against fluctuations. The relativistic adiabatic indices \(\Gamma_{r}\) and \(\Gamma_{t}\) are shown, and they both assume values greater than \(4/3\), satisfying the stability requirement. Two different EoS parameters, \(\omega_{r}\) and \(\omega_{t}\), are involved in the anisotropy investigation. These two EoS parameters determine the range of a realistic and normal distribution of matter. The maximum allowable mass and the corresponding radius are obtained, and they are consistent with the masses of compact stars reported in the literature. Another crucial point is that the measurements of the mass and the bag constant \(B_{g}\) have been studied in detail via contour plots. From our analysis, we have obtained the range of the bag constant \(B_{g}\) as \(55-95~{}MeV/fm^{3}\), which is very much compatible with the CERN data on the quark-gluon plasma (QGP) as well as with the RHIC preliminary results [102; 103] and the result of Farhi and Jaffe [104]. Many stellar solutions have been obtained in \(f(R)\), \(f(R,T)\) gravity, etc., to verify the reliability of these types of modified gravity. These theories are based on Riemannian geometry, where torsion and nonmetricity vanish, and the Ricci scalar curvature works as the building block of space-time. Here, instead, we study the behavior of the stellar model when the gravitational interaction between two particles in space-time is described by the nonmetricity \(Q\), upon which \(f(Q)\) gravity theory is established. We have used \(f(Q)\) gravity to verify whether it yields the same physical properties of the stellar model as those obtained in realistic gravity theories. There are a number of works on compact stars in the framework of Einstein's GR as well as in modified gravity; to compare our results with those obtained in \(f(R)\), \(f(R,T)\) gravity, etc., one can see Refs. [108; 109; 110]. The success of our recommended model was confirmed throughout the study, together with a comparison against a large number of compact star candidates. The implications of our chosen methodologies therefore provide a better justification for compact objects, and we conclude that our suggested hybrid star model behaves successfully and adequately explains the physical characteristics within the framework of \(f(Q)\) gravity. ## Acknowledgements P.B. is thankful to the Inter-University Centre for Astronomy and Astrophysics (IUCAA), Pune, Government of India, for providing visiting associateship. PB also acknowledges that this work is carried out under the research project Memo No: 649(Sanc.)/STBTBT-11012(26)/23/2019-ST SEC funded by the Department of Higher Education, Science & Technology and Bio-Technology, Government of West Bengal. SP & PKS acknowledge the National Board for Higher Mathematics (NBHM) under the Department of Atomic Energy (DAE), Govt. of India for financial support to carry out the Research project No.: 02011/3/2022 NBHM(R.P.)/R & D II/2152 Dt.14.02.2022. A. Malik acknowledges the Grant No. YS304023912 to support his Postdoctoral Fellowship at Zhejiang Normal University, China.
2308.08505
Test-Time Poisoning Attacks Against Test-Time Adaptation Models
Deploying machine learning (ML) models in the wild is challenging as it suffers from distribution shifts, where the model trained on an original domain cannot generalize well to unforeseen diverse transfer domains. To address this challenge, several test-time adaptation (TTA) methods have been proposed to improve the generalization ability of the target pre-trained models under test data to cope with the shifted distribution. The success of TTA can be credited to the continuous fine-tuning of the target model according to the distributional hint from the test samples during test time. Despite being powerful, it also opens a new attack surface, i.e., test-time poisoning attacks, which are substantially different from previous poisoning attacks that occur during the training time of ML models (i.e., adversaries cannot intervene in the training process). In this paper, we perform the first test-time poisoning attack against four mainstream TTA methods, including TTT, DUA, TENT, and RPL. Concretely, we generate poisoned samples based on the surrogate models and feed them to the target TTA models. Experimental results show that the TTA methods are generally vulnerable to test-time poisoning attacks. For instance, the adversary can feed as few as 10 poisoned samples to degrade the performance of the target model from 76.20% to 41.83%. Our results demonstrate that TTA algorithms lacking a rigorous security assessment are unsuitable for deployment in real-life scenarios. As such, we advocate for the integration of defenses against test-time poisoning attacks into the design of TTA methods.
Tianshuo Cong, Xinlei He, Yun Shen, Yang Zhang
2023-08-16T17:00:32Z
http://arxiv.org/abs/2308.08505v1
# Test-Time Poisoning Attacks Against Test-Time Adaptation Models ###### Abstract Deploying machine learning (ML) models in the wild is challenging as it suffers from distribution shifts, where the model trained on an original domain cannot generalize well to unforeseen diverse transfer domains. To address this challenge, several test-time adaptation (TTA) methods have been proposed to improve the generalization ability of the target pre-trained models under test data to cope with the shifted distribution. The success of TTA can be credited to the continuous fine-tuning of the target model according to the distributional hint from the test samples during test time. Despite being powerful, it also opens a new attack surface, i.e., test-time poisoning attacks, which are substantially different from previous poisoning attacks that occur during the training time of ML models (i.e., adversaries cannot intervene in the training process). In this paper, we perform the first test-time poisoning attack against four mainstream TTA methods, including TTT, DUA, TENT, and RPL. Concretely, we generate poisoned samples based on the surrogate models and feed them to the target TTA models. Experimental results show that the TTA methods are generally vulnerable to test-time poisoning attacks. For instance, the adversary can feed as few as 10 poisoned samples to degrade the performance of the target model from 76.20% to 41.83%. Our results demonstrate that TTA algorithms lacking a rigorous security assessment are unsuitable for deployment in real-life scenarios. As such, we advocate for the integration of defenses against test-time poisoning attacks into the design of TTA methods.1 Footnote 1: Our code is available at [https://github.com/tianshuocong/TePA](https://github.com/tianshuocong/TePA). ## 1 Introduction In recent years, machine learning (ML) has achieved remarkable performance [23]. Nevertheless, deploying these ML models in the real world poses a significant challenge, as distribution shifts may occur when the training and test datasets come from different distributions [29, 66]. Take image classification as an example; the data used to train the ML models are often carefully curated, e.g., selecting the object images with a clean background and cropping the center area of the objects. However, those models must deal with the test images coming from a different distribution in the real world (See Figure 1), which usually leads to degraded model performance [40]. Prior approaches to enhancing the ML model's generalization under distribution shifts have focused on the training process to prompt the target model to learn more distribution types in advance, such as leveraging a large number of labeled data [49], novel data augmentation [26], etc. However, test data usually comes from an unseen distribution. Consequently, target models with fixed parameters trained on the original domain will no longer be applicable to the diverse transfer domains, leading to an increasing interest in the dynamic adaptation of ML models during inference. Test-time adaptation (TTA) is an emerging technique to tackle distribution shifts and has been leveraged in several real-world security-sensitive scenarios, such as autonomous driving [33], medical diagnosis [28], etc. In TTA settings, the test data from the transfer domain is delivered as a data stream and the target model is updated online. In other words, the target model only has access to the current test data instead of the whole test dataset. 
This is particularly relevant in latency-sensitive scenarios, such as autonomous driving, which necessitate immediate prediction of arrival data. To address these realistic constraints, various TTA methods [33, 39, 47, 52] have been proposed to enhance the performance of prediction by fine-tuning the model's parameters based on the current test data before making predictions. Though proven successful in improving the generalization of ML models, TTA paradigms may introduce a new attack surface for adversaries to tamper with the parameters of a target model at test time by fine-tuning it with potential malicious samples. This can directly impact the predictions for benign samples. To explore this possibility, in this work, we propose the first untargeted _test-time poisoning attacks (TePAs)_ against TTA models, i.e., an adversary aims to degrade a TTA model's performance at test time. Our approach is drastically different from previous poison Figure 1: Targeting the challenge of “distribution shifts,” test-time adaptation (TTA) methods can aid in the identification of traffic signs across diverse weather conditions. ing attacks that are executed during the model's training process, i.e., _training-time poisoning attacks (TrPAs)_[7, 17, 43]. Compared to TrPAs, TePAs face the following non-trivial challenges: (i) TrPAs require modification access to the target model's training dataset, while TePAs do not poison the training dataset nor control the training process of the target model. (ii) For TrPAs, poisoned samples are mixed with clean training samples where they can be learned in multiple epochs by the model and become more memorable. However, considering effectiveness and efficiency, TTA methods usually update the model using _one_ epoch based on each arrival of test data hence a different setting for TePAs. (iii) In TePAs, poisoned and benign samples are in the same pipeline, and the model is in a state of dynamic adjustment. Therefore, the poisoning effectiveness is also affected by the benign samples. (iv) Since TePAs are test-time attacks, the adversary must take the query budget into account to maintain the attack's stealthiness. (v) To avoid the target models "forgetting" the original task, TTA methods usually only update part parameters of the model. However, for TrPAs, the poisoned samples are used to update the whole model parameters. In summary, these differences make TePAs harder to succeed than TrPAs. **Our Work.** In this paper, we take the first step toward understanding the TePAs against TTA models and the plausible countermeasures. Our study aims to demonstrate that the current TTA methods are prone to TePAs. Considering their use in safety-critical applications where a deterioration in their efficacy could result in severe consequences, exposing the model modification right to the adversaries is irresponsible, and taking into account TePAs during TTA methods design becomes crucial. Surprisingly, to the best of our knowledge, no prior research has investigated the vulnerability of TTA models with respect to TePAs. We first systematically define the threat model of TePAs against TTA models. The goal of the adversary is to launch an indiscriminate poisoning attack against the target model, resulting in its performance degradation. 
The adversary's ability is limited to the query access to the target model, meaning that they are unable to access important details (such as the loss value, gradients, and parameters) nor the outputs (such as posterior or predicted label) of the target model. Additionally, we assume that the adversary has background knowledge of the distribution of the target model's training dataset. This knowledge allows the adversary to construct a surrogate model with a similar distribution dataset. They can later generate poisoned samples based on the surrogate model and feed them to the target model. To better demonstrate the vulnerability of the TTA techniques to TePAs, we consider four prominent TTA methods in our paper, including Test-time training (TTT) [47], Dynamic unsupervised adaptation (**DUA**) [33], Test entropy minimization (**TENT**) [52], and Robust pseudo-labeling (**RPL**) [39]. We launch TePAs against the above TTA methods. Specifically, we propose a poisoned sample generation framework, PoGen, which creates poisoned samples based on a surrogate model and transfer-based adversarial attacks. Our experimental results indicate that only 10 poisoned samples or a small poisoning ratio of 0.1 can cause a 34.37% drop or a 6.13% drop in the target TTT-model's performance, respectively. To mitigate TePAs, we investigate several defense mechanisms such as adversarial training [31], bit-depth reduction [58], etc. However, our experiments show that these defenses are not effective against TePAs, which prompts the need for more effective defense mechanisms. In summary, we make the following contributions: * We propose the first test-time poisoning attacks against four TTA methods, including TTT, DUA, TENT, and RPL. * Empirical evaluations show that our attacks are effective in degrading the target model's performance even with limited poisoned samples and small fractions of poisoned data. * To mitigate the attacks, we investigate four defense mechanisms and find that none of them are effective to defend against the proposed TePAs. ## 2 Background ### Preliminaries **Notations.** We use \(f:x\in[0,1]^{D}\rightarrow\mathbb{R}^{C}\) to denote a \(C\)-class classification model, where \(x\) is the input (such as an image) and \(D\) is the input size. \(f(x)=[f_{1},...,f_{C}]\) is the output logits vector. \(p(x)=[p_{1},...,p_{C}]=\sigma(f(x))\) is the confidence vector where \(\sigma(\cdot)\) is the softmax function and \(p_{j}=p(j|x)\) is the prediction probability on the \(j\)-th class. Then the final prediction can be calculated by \(z=\arg\max_{j=1,...,C}p_{j}\). **Overall Goal of TTA.** An illustration of TTA methods is shown in Figure 2. A target model \(f\) which is trained on the original domain \(\mathcal{D}_{\textit{ori}}=\{x\sim P(x)\}\) does not generalize well to a different transfer domain \(\mathcal{D}_{\textit{train}}=\{x\sim Q(x)\}\) (\(Q(x)\neq P(x)\)) due to distribution shifts [29]. TTA methods aim to improve the performance of the target model \(f\) by updating its parameters at test time to cope with such distribution shifts. In essence, \(f\) can be split into a feature extractor \(e(\cdot)\) and a linear classifier \(\pi(\cdot)\), i.e., \(f(x)=\pi(e(x))\) where Figure 2: Overview of TTA methods. (a) A target model with fixed parameters cannot cope with distribution shifts. (b) TTA methods can improve the target model’s performance by adjusting the target model’s parameters. \(h=e(x)\in\mathbb{R}^{d}\) is the feature vector. 
Different TTA methods update different parts of the target model to attain the above goal. For instance, TTT updates the feature extractor \(e(\cdot)\) at test time. DUA, TENT, and RPL update the parameters in the batch normalization (BN) layers [27]. More details can be found in Section 2.2. **Test-Time Behavior of TTA.** It is important to note that the target model does not have access to the whole test samples which are from the transfer domain. Note that under TTA assumptions, the test samples come in sequential order, i.e., \(x^{0}\leftarrow\cdots x^{t}\leftarrow\cdots\) where \(x^{t}\) denotes the test data at timestamp \(t\). The target model \(f\) will be adapted by \(x^{t}\) to \(f^{t}\) using TTA methods (See Figure 3). That is, it can only process the current arrived test data instead. For TTT and DUA, the test data come in a "point-by-point" manner, i.e., \(x^{t}\) stands for a single image. For TENT and RPL, the test data come in a "batch-by-batch" manner, i.e., \(x^{t}=\{x^{t}_{i}\}_{i=1}^{B}\) stands for a batch of test samples where \(B\) is the batch size. The technical details are outlined below. ### TTA Methods **TTT [47].** As a classical TTA method, TTT has been widely used in the real world [18, 4, 28]. Briefly, TTT updates the feature extractor based on a self-supervised learning (SSL) task at test time, in turn, adapting to the distribution shifts on-the-fly. The overview of TTT is shown in Figure 15. During training time, TTT requires a Y-structured model as the target model. For instance, the training process of TTT can be considered as a multi-task learning problem [8], which jointly learns from two tasks (i.e., a main task and an auxiliary task). The main task is a classification task, and the cross-entropy loss is applied (denoted as \(\mathcal{L}_{m}\)). The auxiliary task is an SSL task, i.e., rotation prediction [13]. Concretely, it rotates each training image into 0, 90, 180, and 270 degrees first and then predicts the rotation angle (i.e., a 4-class classification problem). We denote the SSL task loss function as \(\mathcal{L}_{s}\). As shown in Figure 15, the TTT model has a Y-structure architecture with a shared feature extractor \(e(x;\theta_{e})\) and two branches \(\pi_{m}(x;\theta_{m})\) and \(\pi_{s}(x;\theta_{s})\), where \(\pi_{m}\) is used for the main task and \(\pi_{s}\) is used for the auxiliary task. Given a training sample, TTT first feeds it into \(e(x;\theta_{e})\) to obtain its feature vector \(h\). Then, \(h\) is fed into \(\pi_{m}\) and \(\pi_{s}\) to calculate the \(\mathcal{L}_{m}\) and \(\mathcal{L}_{s}\), respectively. The total loss function for training the Y-structured target model \(f(x;\theta)\) can be thus defined as \[\min_{e,\pi_{s},\pi_{m}}\frac{1}{N}\sum_{i=1}^{N}(\mathcal{L}_{m}(x_{i},y_{i} ;e,\pi_{m})+\mathcal{L}_{s}(x_{i};e,\pi_{s})), \tag{1}\] where \(\{(x_{i},y_{i}),i\in N\}\) is the training data, and \(\theta^{*}=(e^{*},\pi_{s}^{*},\pi_{m}^{*})\) are the optimized parameters of Equation 1. During inference, TTT adapts the model based on the test data first and then makes a prediction using the updated model. Concretely, TTT updates \(e(x;\theta_{e})\) and \(\pi_{s}(x;\theta_{s})\) based on the SSL task \(\mathcal{L}_{s}\), and \(\pi_{m}(x;\theta_{m})\) is fixed throughout. At \(t=0\), the model's initial state is \(\theta^{*}\). 
Given \(x^{0}\), TTT first fine-tunes its feature extractor and auxiliary branch by minimizing \(\mathcal{L}_{s}\) as \[e^{0},\pi_{s}^{0}=\min_{e^{*},\pi_{s}^{0}}\mathcal{L}_{s}(x^{0};e^{*},\pi_{s}^ {*}). \tag{2}\] Once getting the optimized \(\theta^{0}=(e^{0},\pi_{s}^{0},\pi_{m}^{*})\), TTT then makes a prediction with the updated parameters as \(z^{0}=\pi_{m}^{*}(e^{0}(x^{0}))\). Since TTT updates the model in an online manner, the model first is initialized with \((e^{t-1},\pi_{s}^{t-1},\pi_{m}^{*})\) at time \(t\) (\(t>0\)), and then uses the updated parameters \(\theta^{\prime}=(e^{t},\pi_{s}^{t},\pi_{m}^{*})\) to make a prediction.2 Footnote 2: The inference process we introduce here is the TTT-online version. Besides, the TTT-offline version always initializes the model with \(\theta^{*}\) when meeting each test data. We focus on the online version, whose performance has been proven that is much better than the offline version [47]. **DUA [33].** DUA is a newly proposed TTA method. Compared to TTT, DUA is more lightweight because it requires no back-propagation process and only updates \(<1\%\) parameters of the target model. Specifically, DUA aims to update the normalization statistics of the BN layers in an unsupervised manner and fix all remaining parameters of the target model. Here we first introduce the BN layers and then explain the detailed updating rule of DUA. Batch normalization (BN) layers are widely used components in modern deep neural networks. They are applied to stabilize the training process by reducing internal covariate shift [27]. In particular, once the training process is finished, the output of BN layers can be formulated as \[\text{BN}(x;\mu_{ori},\sigma_{ori}^{2},\gamma_{ori},\beta_{ori})=\frac{x-\mu_{ ori}}{\sqrt{\sigma_{ori}^{2}+\varepsilon}}\cdot\gamma_{ori}+\beta_{ori}, \tag{3}\] where \(\mu_{ori}=\mathbb{E}[\mathcal{D}_{ori}]\) and \(\sigma_{ori}^{2}=\text{Var}[\mathcal{D}_{ori}]\) are normalization statistics of the original domain, \(\gamma_{ori}\) and \(\beta_{ori}\) are the affine transformation parameters learned via back propagation during training process. These parameters are all fixed at test time in the traditional inference paradigm. However, recent work has found that recalculating normalization statistics in the transfer domain (e.g., test-time normalization (TTN) [41]) can improve the robustness of the target model. Therefore, DUA continues to update the normalization statistics in a momentum-updating manner. An illustration of DUA is shown in Figure 16. The main intuition of DUA is to adapt the target model by aligning the activation distribution between the original domain and the transfer domain. The adaptation rule of DUA is as follows: Given a test sample \(x^{t}\), DUA first expands it to a small batch \(x^{t}=\{x^{t}_{i}\}_{i=1}^{B_{data}}\) through data augmentation including random horizontal flipping, random cropping, and rotation, where \(x^{t}_{i}\) is an augmented version of \(x^{t}\). Then, DUA updates the values of the normalization statistics using Equation 4. \[\begin{split}\hat{\mu}_{t}&=(1-(\rho_{t}+\xi))\cdot \hat{\mu}_{t-1}+(\rho_{t}+\xi)\cdot\mu_{t},\\ \hat{\sigma}_{t}^{2}&=(1-(\rho_{t}+\xi))\cdot\hat{ \sigma}_{t-1}^{2}+(\rho_{t}+\xi)\cdot\sigma_{t}^{2},\end{split} \tag{4}\] where \(\mu_{t}=\mathbb{E}[x^{t}]\), \(\sigma_{t}^{2}=\text{Var}[x^{t}]\) are the current running normalization statistics. 
We use \(\hat{\mu}_{t}\), \(\hat{\sigma}_{t}\) to denote the updated statistics where \(\hat{\mu}_{0}=\mu_{ori}\), \(\hat{\sigma}_{0}^{2}=\sigma_{ori}^{2}\). In addition, \(\rho_{t}\) is a decaying momentum term defined in Equation 5. \[\rho_{t}=\rho_{t-1}\cdot w,\;\rho_{0}=0.1. \tag{5}\] There are two hyperparameters in DUA: \(w\in(0,1)\) controls the decay of \(\rho\), and \(\xi\in(0,\rho_{0})\) defines the lower bound of the momentum. **TENT [52].** Compared to TTT, TENT does not require an auxiliary task but regards prediction confidence as a self-supervision signal. Similar to DUA, TENT also only adjusts the parameters in the BN layers, and all other parameters of the target model are frozen. However, besides updating the normalization statistics \(\mu\) and \(\Theta^{2}\), TENT updates the affine parameters, \(\gamma\) and \(\beta\), as well. Figure 17 shows an illustration of TENT. The intuition behind TENT is straightforward. Regularizing entropy during training can assist domain adaptation [21], TENT demonstrates that minimizing entropy during inference can further improve the model's adaptability. Concretely, given a batch of test samples \(x^{i}=\{x^{i}_{i}\}_{i=1}^{B_{\text{train}}}\), TENT updates \(\gamma\) and \(\beta\) by minimizing the Shannon entropy [45] as \[\begin{split}\gamma_{t}&\leftarrow\gamma_{t-1}- \partial\mathcal{L}_{\text{{test}}}(x^{i})/\partial\gamma_{t-1},\\ \beta_{t}&\leftarrow\beta_{t-1}-\partial\mathcal{L }_{\text{{test}}}(x^{i})/\partial\beta_{t-1},\end{split} \tag{6}\] where \((\gamma_{0},\beta_{0})=(\gamma_{ori},\beta_{ori})\), and \(\mathcal{L}_{\text{{test}}}(\cdot)\) is defined in Equation 7. \[\mathcal{L}_{\text{{test}}}(f(x^{i}))=-\frac{1}{B_{\text{{test}}}}\sum_{i=1}^{ B_{\text{{test}}}}\sum_{j=1}^{C}p(j|x^{i}_{i})\log p(j|x^{i}_{i}). \tag{7}\] Specifically, TENT combines entropy minimization with test-time normalization [41]. It replaces the normalization statistics of the training data with the current statistics as \(\mu_{t}=\mathbb{E}[x^{i}]\), \(\sigma^{2}_{t}=\text{Var}[x^{i}]\). Then, it uses the updated parameters \(\{\mu_{t}\), \(\sigma^{2}_{t}\), \(\gamma_{t-1}\), \(\beta_{t-1}\}\) to make a prediction on \(x^{i}\). Note that TENT uses one forward process for efficiency, in turn, \(\gamma_{t}\), \(\beta_{t}\) will be used for predicting \(x^{i+1}\). **RPL [39].** As Figure 17 shows, RPL improves upon TENT by updating the affine parameters based on the prediction confidence, which is treated as the self-supervision label. However, the entropy-based loss functions are sensitive to label noise [61, 64]. Therefore, RPL uses generalized cross entropy (GCE) to adapt the target model on the transfer domain. Concretely, given a batch of test data \(x^{i}=\{x^{i}_{i}\}_{i=1}^{B_{\text{{test}}}}\), RPL updates the affine parameters using Equation 8. \[\begin{split}\gamma_{t}&\leftarrow\gamma_{t-1}- \partial\mathcal{L}_{\text{{rel}}}(f(x^{i}))/\partial\gamma_{t-1},\\ \beta_{t}&\leftarrow\beta_{t-1}-\partial\mathcal{L }_{\text{{rel}}}(f(x^{i}))/\partial\beta_{t-1},\end{split} \tag{8}\] where \(\mathcal{L}_{\text{{rel}}}\) is formulated by Equation 9. \[\mathcal{L}_{\text{{rel}}}(f(x^{i}))=\frac{1}{B_{\text{{rel}}}}\sum_{i=1}^{B_{ \text{{rel}}}}q^{-1}(1-p(\Psi|x^{i}_{i})^{q}). \tag{9}\] Here \(\Psi=\arg\max_{j=1,\ldots,C}p(j|x^{i}_{i})\), and \(q\in(0,1]\) is a hyperparameter. 
From Equation 9 we can observe that \(\lim_{q\to 0}\mathcal{L}_{\text{{rel}}}(\cdot)\) is the cross entropy loss (which has implicit weighting scheme [64]) and \(\lim_{q\to 1}\mathcal{L}_{\text{{rel}}}(\cdot)\) is the MAE loss (which is noise-robustness [19]). ### Poisoning Attacks **Overview.** Poisoning attacks are one of the most dangerous threats to the ML models [7, 59]. These attacks assume that the adversary can inject poisoned samples into the ML model's training dataset. The assumption is reasonable as the training datasets of ML models are usually collected from the Internet and it is hard to detect the poisoned samples manually given the size of the dataset. In poisoning attacks, the adversary's goal is to degrade model performance on a validation dataset \(\mathcal{D}_{\text{{val}}}\) through some malicious modifications \(\mathcal{A}\) to the training data \(\mathcal{D}_{\text{{train}}}\) as \[\begin{split}\max_{\mathcal{A}}\mathcal{L}(\mathcal{D}_{\text{{ val}}};\Theta^{*}),\\ \text{where }\Theta^{*}=\underset{\emptyset}{\text{argmin}} \mathcal{L}(\mathcal{A}(\mathcal{D}_{\text{{train}}});\Theta).\end{split} \tag{10}\] After being trained on the poisoned dataset \(\mathcal{A}(\mathcal{D}_{\text{{train}}})\), the model's performance degrades at test time [37]. **Goal.** Poisoning attacks can be broadly grouped into two categories - _untargeted poisoning attacks_[59, 35] and _targeted poisoning attacks_[43, 6]. The goal of untargeted poisoning attacks is to decline the overall performance of the target model. The goal of targeted poisoning attacks is to force the target model to perform abnormally on a specific input class. Backdoor attacks [36] are a special case of targeted poisoning attacks where the poisoned target models only misclassify samples containing specific triggers. **Note.** Our work is substantially different from previous poisoning attacks. We conduct the poisoning attack during the _inference process_ while previous work only conducts the poisoning attacks in the _training process_. Note that we focus on the _untargeted_ poisoning attacks in this paper. ### Adversarial Attacks **Overview.** Adversarial attacks aim to find a perturbed example \(x^{adv}\) around \(x\) which can be misclassified by the model. Such \(x^{adv}\) is called an adversarial example. Find such adversarial examples can be formulated as the following constrained optimization problem: \[\begin{split} x^{adv}&=\arg\max_{x^{i}}\mathcal{L}(x^ {i},y;\theta),\\ s.t.&||x^{i}-x||_{p}\leq\epsilon,\end{split} \tag{11}\] where \(y\) is the ground-truth label, \(||\cdot||_{p}\) is the \(\ell_{p}\)-norm, and \(\mathcal{L}(\cdot)\) is usually the cross-entropy loss. Fast Gradient Sign Method (FGSM) [20] is a widely used method to find adversarial examples, it can be formulated by \[x^{adv}=x+\epsilon\cdot\text{sign}(\nabla_{x}\mathcal{L}(f(x),y)). \tag{12}\] **DIM [56].** As Equation 12 shows, FGSM needs white-box access to the model to find adversarial examples. However, the adversaries may only have black-box access. Therefore, transfer-based adversarial attacks are proposed to generate adversarial examples against a surrogate model which can also misclassify the remote target model [14, 56, 15]. Among them, Diverse Input-FGSM (DIM) [56] is the state-of-the-art attack method. In brief, DIM applies random resizing with probability \(p\) to the input \(x\) to alleviate the overfitting of the adversarial examples on the surrogate model to improve the transferability (i.e., \(T(x,p)\) in Algorithm 1). 
In our paper, we integrate DIM to generate our poisoned samples. Note that any advanced transfer-based attacks can be integrated into our algorithm. ## 3 Threat Model **Adversary's Goal.** We assume that the target models (i.e., the models which the adversaries aim to attack) make predictions following the online TTA paradigm. For example, if the target model uses TTT to adjust the parameters, then we denote the target model as TTT-model. The adversary's goal is to degrade the target model's performance by nudging the model in a "wrong direction" by feeding poisoned samples at test time. Meanwhile, the benign samples uploaded by legitimate users and the poisoned samples fed by the adversaries are in the same pipeline, which means multiple users concurrently use and update the parameters of the target model. We use a fixed evaluation dataset to monitor the changes in model performance. **Adversary's Knowledge.** We assume that the adversary has three pieces of knowledge: (i) They know which TTA method the target model uses. This assumption is realistic since TTA methods should be publicly available so that they can be rigorously vetted for security before deployment like cryptanalysis. In addition, systems may eventually converge towards certain SOTA public TTA methods. (ii) The adversary knows the API where the legitimate users upload the benign samples, hence they can upload the poisoned samples to covertly poison the target model. (iii) They may collect a surrogate dataset that comes from a similar distribution of the target model's training dataset. Notably, different from the previous poisoning attacks, the adversaries do not know the architecture or training hyperparameters of the target model. (iv) They are unable to tamper with the training data or intervene in the target model's training process. (v) They also do not have access to the model parameters of the target model at any time. (vi) They cannot control the order of the poisoned samples reaching the target model, e.g., the target TTA model may have been updated by an unknown number of test samples. **Adversary's Capability.** The surrogate dataset enables the adversary to train a surrogate model, which can then be utilized to generate poisoned samples. However, it should be noted that the adversary cannot obtain information about the gradient of the loss from the target model and can only resort to transfer-based adversarial attacks, as demonstrated in previous works such as [32, 56]. That is, they can only feed these poisoned samples to the target TTA-online model. Moreover, since the test data come to the target TTA models "point-by-point" or "batch-by-batch", the adversaries can set up the poisoned samples in advance to mix them with the benign samples. **Attack Challenge.** TePAs lead to the following non-trivial challenges. Previous poisoning attacks assume that the target model is trained on fixed poisoned training data or its training process is controlled by the adversary. However, none of the assumptions are valid in the case of TTA. First, the adversary cannot poison the training data and does not control the training process of the target TTA model. They only have query access to the target TTA model. Secondly, a TTA model updates its parameters for each query sample once deployed. Even if the adversaries possess knowledge of the training data and process, such as hyperparameters like training epochs and batch size, they cannot assume that the target model is newly trained. 
Finally, the adversaries must take the budget (i.e., the number of poisoned samples) into consideration to stay stealthy and avoid detection. Nevertheless, we show in our evaluation (see Section 5) that our attacks are effective with a limited amount or limited fraction of poisoned samples. ## 4 Attack Methodology ### Attack Overview In general, TePAs consist of three steps - surrogate model training, poisoned sample generation, and target model poisoning. The overall workflow of TePAs is illustrated in Figure 3. * **Surrogate Model Training.** The goal of this step is to construct a surrogate model \(f_{s}\) with the surrogate dataset as a stepping stone to launch the attack. It is essential to note that the adversary operates under the assumption that the target model's architecture is unknown and the distribution of the shadow dataset resembles that of the target model's training dataset. Moreover, the surrogate model's training process is independent of the target model and does not need any supervision information from the target model, such as query results. Figure 3: Workflow of TePA. The adversary uses PoiGen to generate poisoned samples which will be fed into the test data stream (the yellow indicating arrow). The target model \(f\) will be updated via TTA methods to \(f^{\prime}\) (the blue indicating arrow) according to the arrived test data. When meeting benign samples, the performance of \(f^{\prime}\) (Acc) will be improved. However, the poisoned samples could degrade the prediction ability of \(f^{\prime}\). * **Poisoned Sample Generation.** In this step, we introduce PoiGen, a poisoned sample generation framework. The details of PoiGen are summarized in Algorithm 1. The goal of PoiGen is to create a poisoned sample \(x^{\prime}\) from a clean seed image \(x_{in}\) that aims to decrease the inference performance of the target model. Depending on the target TTA method \(\mathcal{A}\), PoiGen uses different generation strategies (e.g., different loss function \(L_{poi}\)) to generate poisoned samples with stronger transfer properties. We stress that the poisoned sample generation process does not interact with the target model, which enhances the stealthiness of our attack. Also, PoiGen allows the attacker to plug in different advanced transfer-based adversarial attack algorithms. * **Target Model Poisoning.** The goal of this step is to employ different poisoning strategies to deliver the poisoned samples to the target model. In this step, the adversary must take various factors, such as the budget (i.e., the number of poisoned samples) and the order (i.e., how the poisoned samples and the clean samples are mixed at inference time) into consideration to stay stealthy and avoid detection. In conclusion, the core process of TePAs is PoiGen (poisoned sample generation). To attack different TTA models, PoiGen chooses different loss functions \(\mathcal{L}_{poi}\) and attack strategies. We outline how PoiGen generates poisoned samples for four different TTA models in the rest of this section. Note that since poisoning strategies are tightly coupled with performance evaluation, we defer the description of poisoning strategies in Section 5.5. ### TePA Against TTT Recall that in the inference process, TTT fine-tunes the feature extractor \(e(\cdot)\) and the SSL task branch head \(\pi_{s}(\cdot)\) through the rotation prediction loss \(\mathcal{L}_{s}\). 
Our intuition is that if \(e(\cdot)\) and \(\pi_{s}(\cdot)\) jointly learn wrong information about the rotation of the test samples, the feature extractor will be guided in incorrect directions, causing the model to lose the information learned from the training data. Previous work [13] also shows that the rotation prediction accuracy is strongly linked to the classification accuracy of the primary task. Therefore, the adversary can generate poisoned samples according to the auxiliary loss \(\mathcal{L}_{s}\) by adversarial attacks. Specifically, the generated noise should maximize \(\mathcal{L}_{s}\) for every rotation angle. Inspired by Universal Adversarial Perturbations [34], given one original sample, the adversary may find a universal perturbation for all of its rotations.

Specifically, when attacking TTT-models (Lines 17-22), PoiGen first sets \(\mathcal{L}_{poi}\) to \(\mathcal{L}_{s}\) and generates an adversarial perturbation for each rotation \(x_{rot}\). Here \(\mathrm{rot}90(x,j)\) stands for rotating the image \(x\) by \(90\times j\) degrees (Line 21). Given the image rotated by 0 degrees, whose corresponding rotation label \(y^{\prime}\) is 1, PoiGen first computes the loss \(\mathcal{L}_{poi}\) and backpropagates the gradient that maximizes the loss to the input image (Line 8). Based on this gradient, PoiGen obtains the generated noise \(\delta\) (Line 12), which is added to the clean image \(x_{in}\) to produce a new poisoned sample \(x^{\prime}\) (Line 13). This sample is then rotated by 90 degrees (with a corresponding label of 2), and a new perturbation is generated to fool the model on rotation prediction. The same procedure is followed for rotations of 180 and 270 degrees, so that perturbations for all four rotations are accumulated on the image. To make the generated perturbation more robust, PoiGen introduces a hyperparameter \(I_{iter}\) to repeat the whole process. After the perturbations are generated, we take the resulting adversarial examples as the poisoned samples \(x^{\prime}_{ttt}\) against the TTT-models.

```
Input : seed image x_in, surrogate model f_s, target TTA method A,
        loss function L_poi, perturbation budget eps, updating step alpha
Output: poisoned sample x'

 1  Def DIM(x_in, y, f, L, eps):
 2      g = 0
 3      mu = 1
 4      p = 0.5
 5      for j = 1 to I_adv do
 6          x = T(x_in, p)                       // random input-diversity transform
 7          if y is not None then
 8              g = mu * g + grad_{x_in} L(f(x), y) / || grad_{x_in} L(f(x), y) ||_1
 9          else
10              g = mu * g + grad_{x_in} L(f(x)) / || grad_{x_in} L(f(x)) ||_1
11          x_adv  = x_in + alpha * sign(g)
12          delta  = Clip(x_adv - x_in; -eps, +eps)
13          x_in   = Clip(x_in + delta; 0, 1)
14      return x_in
15
16  Main function PoiGen(A, x_in, f_s, L_poi, eps):
17      if A is TTT then
18          x' = x_in
19          for i = 1 to I_iter do
20              for y' = 1 to 4 do
21                  x_rot = rot90(x', y')
22                  x' = DIM(x_rot, y', f_s, L_poi, eps)
23      else if A is TENT or RPL then
24          x' = DIM(x_in, y = None, f_s, L_poi, eps)
25      else if A is DUA then
26          x' = x_in + eps * N(mu, sigma^2)     // see Equation 13
27      return x'
```
**Algorithm 1** PoiGen
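For concreteness, the sketch below shows one way the PoiGen dispatch could be instantiated in PyTorch. It is an illustration under several assumptions rather than a reference implementation: the helper names (`diversity_transform`, `dim_attack`, `poi_gen`) are ours, the input-diversity step is simplified to a shrink-and-pad that preserves the 32x32 input shape, the surrogate for the TTT branch is assumed to return a `(main_logits, rotation_logits)` pair, the perturbation budget at each `dim_attack` call is taken around the current image, and the step counts mirror the defaults reported later in Section 5.1.

```python
import torch
import torch.nn.functional as F

def diversity_transform(x, p=0.5, max_shrink=4):
    """Random shrink-and-pad input diversity, a simplified stand-in for the T(x, p) step."""
    if torch.rand(1).item() > p:
        return x
    _, _, h, _ = x.shape
    new = torch.randint(h - max_shrink, h + 1, (1,)).item()
    x_small = F.interpolate(x, size=(new, new), mode="nearest")
    pad = h - new
    left = torch.randint(0, pad + 1, (1,)).item()
    top = torch.randint(0, pad + 1, (1,)).item()
    return F.pad(x_small, (left, pad - left, top, pad - top))

def dim_attack(x_in, loss_fn, steps=20, eps=32 / 255, alpha=2 / 255, mu=1.0):
    """Momentum + input-diversity iterations that *maximize* loss_fn on the surrogate (DIM)."""
    x_adv, g = x_in.clone(), torch.zeros_like(x_in)
    for _ in range(steps):
        x = diversity_transform(x_adv).detach().clone().requires_grad_(True)
        (grad,) = torch.autograd.grad(loss_fn(x), x)
        g = mu * g + grad / grad.abs().sum().clamp_min(1e-12)   # L1-normalized momentum
        x_adv = x_adv + alpha * g.sign()
        x_adv = x_in + (x_adv - x_in).clamp(-eps, eps)          # project into the eps-ball
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def poi_gen(method, x_in, surrogate=None, eps=32 / 255):
    """Schematic dispatch of poisoned-sample generation per target TTA method."""
    if method == "TTT":                          # maximize the rotation-prediction loss
        x_adv = x_in.clone()
        for _ in range(3):                       # I_iter outer repetitions
            for label in range(4):               # 0/90/180/270-degree rotations
                y = torch.full((x_in.size(0),), label, dtype=torch.long, device=x_in.device)
                rot_loss = lambda z: F.cross_entropy(surrogate(z)[1], y)
                x_rot = dim_attack(torch.rot90(x_adv, label, dims=(2, 3)), rot_loss, eps=eps)
                x_adv = torch.rot90(x_rot, -label, dims=(2, 3))  # back to the canonical view
        return x_adv
    if method in ("TENT", "RPL"):                # maximize the Shannon entropy of the prediction
        def entropy(z):
            p = surrogate(z).softmax(dim=1)
            return -(p * p.clamp_min(1e-12).log()).sum(dim=1).mean()
        return dim_attack(x_in, entropy, steps=200, alpha=1 / 255, eps=eps)
    if method == "DUA":                          # Gaussian noise, no surrogate needed (Eq. 13)
        return (x_in + eps * (0.8 ** 0.5) * torch.randn_like(x_in)).clamp(0.0, 1.0)
    raise ValueError(f"unknown TTA method: {method}")
```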
### TePA Against DUA

Recall that DUA uses a momentum updating method to fine-tune the statistical parameters in the BN layers. However, the statistical parameters calculated from \(x^{t+1}\) may differ significantly from those calculated from \(x^{t}\), and this difference may disrupt the adaptation process of the model parameters. Meanwhile, this disruption is persistent and continues to affect the downstream test data due to the momentum design. We show that a test sample with Gaussian noise can disrupt the updating process of the statistical parameters. Thus, the poisoned sample for DUA is: \[x^{\prime}_{dua}=x+\epsilon_{dua}\cdot\mathcal{N}(\mu_{dua},\sigma_{dua}^{2}), \tag{13}\] where \(\mu_{dua}\) and \(\sigma_{dua}^{2}\) control the perturbation distribution and \(\epsilon_{dua}\) controls the noise intensity.

**Note.** We can observe that, compared to TePAs against TTT, the generation process of \(x^{\prime}_{dua}\) does not require a surrogate model. Recall that the adaptation process of DUA does not rely on any SSL tasks, nor is it based on a loss function to adjust the target model. Therefore, the adversary does not need a surrogate model to launch a gradient-based adversarial attack for generating poisoned samples, which makes the poisoning attack cheaper and easier.

### TePA Against TENT & RPL

**TePA Against TENT.** Recall that TENT minimizes the entropy of the prediction to adapt the affine parameters of the BN layers, and RPL uses the GCE loss instead. We aim to generate the following perturbation to compel the target model to learn "wrong information" from our poisoned samples: \[\Delta=\arg\max_{\delta}\mathcal{H}(f_{s}(x+\delta)),\ \text{where}\ ||\delta||_{\infty}\leq\epsilon_{tent}. \tag{14}\] Here \(\mathcal{H}(y)=-\sum_{c}p_{c}\log p_{c}\) is the Shannon entropy. Since TENT uses the prediction logits as the self-supervision signal, we aim to generate adversarial examples that make the entropy of the prediction much larger than normal. Therefore, PoiGen first sets \(\mathcal{L}_{poi}\) to \(\mathcal{H}\). Meanwhile, compared to TePAs against TTT-models, we do not need a label to attack TENT-models, so PoiGen sets \(y=\)None and uses DIM to maximize \(\mathcal{H}\) (Line 10). Therefore, as shown in Line 24 of Algorithm 1, the final poisoned samples against TENT-models can be formulated as \[x^{\prime}_{tent}=\texttt{DIM}(x_{in},y=\text{None},f_{s},\mathcal{H},\epsilon_{tent}). \tag{15}\]

**TePA Against RPL.** We note that the adversarial examples generated by Equation 15 can also be used as poisoned samples to poison RPL-models. This is because as the entropy \(\mathcal{H}\) increases, \(p(\Psi|x)\) decreases, which causes \(\mathcal{L}_{rpl}\) to increase as well. Therefore, we set \[x^{\prime}_{rpl}=x^{\prime}_{tent}. \tag{16}\]

## 5 Evaluation

### Experimental Setup

**Datasets.** We use 5 datasets to conduct our experiments, including CIFAR-10 [1], CIFAR-100 [1], CIFAR-10-C [2], CIFAR-100-C [3], and CINIC-10 [11]. CIFAR-10/100-C are the corrupted versions of CIFAR-10/100 and contain five levels of corruption, of which level 5 is the most severe. Specifically, we choose four evaluation settings to evaluate the target model's performance: Ori, Gls-5, Fog-5, and Con-5. "Ori" means the original dataset (i.e., the images from CIFAR-10/100), "Gls-5" stands for "Glass blur" with corruption severity level 5, and "Fog" ("Con") means the "Fog" ("Contrast") corruption. To train the target models, we choose CIFAR-10 and CIFAR-100 as the training datasets \(\mathcal{D}_{t}\).
Meanwhile, the CINIC-10 dataset contains images that come from CIFAR-10 and ImageNet [12]. We use the images from ImageNet as the surrogate dataset \(\mathcal{D}_{s}\) to train the surrogate model, which makes our poisoning attacks more realistic since \(\mathcal{D}_{t}\cap\mathcal{D}_{s}=\emptyset\). The detailed descriptions of the above 5 datasets are shown in Appendix A.

**Target Model.** We use ResNet-18 and ResNet-50 as the architectures of the target models. We use C10-Res18 (C100-Res18) to denote the ResNet-18 model trained on CIFAR-10 (CIFAR-100). In total, we obtain four target models: C10-Res18/50 and C100-Res18/50, which will be used as the target models for DUA, TENT, and RPL. We train the above ResNets using public implementations.3 Meanwhile, recall that the training process of TTT needs two learning tasks. Therefore, we transform the ResNets into a Y-structure. Specifically, we choose a splitting point in the ResNet; the parameters after the splitting point are copied to form two identical branches: one is used for the main task and the other for the auxiliary task. We choose the end of the 4th resblock (3rd resblock) in the ResNets as the splitting point when \(\mathcal{D}_{t}\) is CIFAR-10 (CIFAR-100). Consequently, we obtain four target models as TTT-models: C10-Res18/50@Y4 and C100-Res18/50@Y3, where Y3/Y4 means the splitting point is the end of the 3rd/4th resblock. We run the official training implementation4 to train the target TTT-models. Meanwhile, we replace BN with Group Normalization (GN) [54] in the ResNets following [47] for better performance.

Footnote 3: [https://github.com/huyynphan/PyTorch_CIFAR-10](https://github.com/huyynphan/PyTorch_CIFAR-10).

Footnote 4: [https://github.com/yueatsprograms/ttt_cifar_release](https://github.com/yueatsprograms/ttt_cifar_release).

Footnote 5: [https://github.com/kuangliu/pytorch-cifar](https://github.com/kuangliu/pytorch-cifar).

**Surrogate Model.** As mentioned above, we use the images from ImageNet (resized to \(32\times 32\times 3\)) as our surrogate dataset to train surrogate models. When the target models are TTT-models, we choose Res18@Y3 as the architecture of the surrogate model. Otherwise, when the target models are TENT-/RPL-models, we choose VGG-11 as the surrogate model, which is trained with a different public implementation5 from the one used for the target models.

**Hyperparameters of TTA Methods.** TTT uses an SGD optimizer to update the parameters of the target models for 1 epoch with a learning rate of 0.001. For DUA, we set \(\omega=0.94\), \(\xi=0.005\), and \(B_{dua}=64\). For TENT and RPL, the batch size for the incoming test samples is \(B_{test}=B_{rpl}=200\), and they both use an SGD optimizer with a momentum factor of 0.9 to update the affine parameters for 1 epoch. Meanwhile, we set \(q=0.8\) for RPL.

**Evaluation Metric.** To monitor the prediction ability of the target model \(f^{t}\) promptly, we use an evaluation dataset \(\mathcal{D}_{\epsilon}\) which contains 1,000 evaluation samples. The top-1 prediction accuracy (denoted as Acc) is the performance indicator. We follow the evaluation methods from the official implementations of the TTA methods. Specifically, TTT, TENT, and RPL all adapt the model once new evaluation data arrives. Therefore, we input the evaluation data to adjust the model first and then make predictions.
After each prediction, we reset the model to \(f^{t}\) for the next prediction. However, DUA adjusts the model in an online manner with a few unlabeled samples and then freezes the model to make predictions.9 Therefore, we also freeze the model when we evaluate it on our 1,000 evaluation samples. Note that all evaluation samples in \(\mathcal{D}_{\epsilon}\) share the same corruption, and we use differently corrupted versions of \(\mathcal{D}_{\epsilon}\) to evaluate the target model's performance, i.e., Ori, Gls-5, Fog-5, and Con-5.

Footnote 9: [https://github.com/jmiemira/UUA](https://github.com/jmiemira/UUA).

**Hyperparameters of TePAs.** We set the perturbation budget \(\epsilon=\nicefrac{32}{255}\) (\(\ell_{\infty}\)-norm) by default. For TePA against TTT-models, we set \(I_{iter}=3\) and \(I_{adv}=20\). Meanwhile, we use a staged update-step strategy: \(\alpha=\nicefrac{4}{255}\) when the step index is in \([0,10)\), \(\alpha=\nicefrac{2}{255}\) when it is in \([10,15)\), and \(\alpha=\nicefrac{1}{255}\) otherwise. For TePA against DUA-models, we set \(\mu_{dua}=0.0\) and \(\sigma_{dua}^{2}=0.8\). For TePA against TENT-models and RPL-models, we set \(I_{adv}=200\) and \(\alpha=\nicefrac{1}{255}\).

### Utility of Frozen Target Model

We first evaluate the prediction ability of the frozen target models. We use four kinds of evaluation datasets - Ori, Gls-5, Fog-5, and Con-5 - to evaluate the utility of our eight target models. The results are shown in Table 1. We observe the following two phenomena: (i) Deep neural networks (DNNs) are not robust to distribution shifts. Take C10-Res18 as an example; when the evaluation samples are uncorrupted, the Acc is 93.00%. However, the model's performance drops to 58.10% (64.80%) when \(\mathcal{D}_{\epsilon}\) is Gls-5 (Fog-5). (ii) Y-structured DNNs are more robust than naive DNNs [25]. For example, the Acc of C10-Res18 on Con-5 is 19.20%, while the Acc is 83.60% for C10-Res18@Y4. However, Y-structured DNNs are still not robust enough on all corrupted \(\mathcal{D}_{\epsilon}\), e.g., the Acc of C10-Res18@Y4 is 61.90% on Gls-5, which is \(\sim 30\%\) lower than that on Ori. Therefore, we need TTA methods to further improve the model's performance on distribution shifts. Note that the results in Table 1 are used as the baseline when discussing the enhancement capabilities of TTA methods and the attack performance of TePAs.

### Utility of TTA Methods

We now show that TTA methods can improve the target models' performance on distribution shifts. To better demonstrate the process of improving model performance with a TTA method, we divide the inference phase into two stages: the "warming-up phase" and the "evaluation phase." In the warming-up phase, the model is updated by the incoming test samples from the warming-up dataset \(\mathcal{D}_{w}\) through TTA methods. Assuming the initial state of the target model is \(f^{0}\), after being updated by \(t\) (batches of) test samples, the model comes to \(f^{t}\). Then, if we would like to monitor the performance of \(f^{t}\), we input \(f^{t}\) into the evaluation phase to calculate Acc. Note that the incoming test samples are drawn from the same distribution as the evaluation dataset \(\mathcal{D}_{\epsilon}\) (i.e., they are i.i.d. with it). For instance, if we use Gls-5 as the evaluation dataset, the warming-up samples should also come from Gls-5. Also, we ensure \(\mathcal{D}_{w}\cap\mathcal{D}_{\epsilon}=\emptyset\). Through this setting, we evaluate how much the model is boosted by learning distributional information from the i.i.d. samples.
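The two-phase protocol above can be summarized in a short sketch. This is only a schematic of the procedure we describe (warm up on the test stream, then score the fixed 1,000-sample evaluation set); the function names and the assumption that `model(x)` returns main-task logits are ours, and `tta_update` stands in for one adaptation step of TTT, DUA, TENT, or RPL.

```python
import copy
import torch

def warm_up(model, tta_update, test_stream):
    """Warming-up phase: the deployed model is adapted by every arriving test input
    (benign, poisoned, or a mixture), yielding the snapshot f^t."""
    for x in test_stream:
        tta_update(model, x)      # one TTT/DUA/TENT/RPL adaptation step
    return model

@torch.no_grad()
def _top1(model, x):
    return model(x).argmax(dim=1)  # assumes model(x) returns main-task logits

def evaluate(model, tta_update, eval_loader, adapt_then_reset=True):
    """Evaluation phase: score the snapshot f^t on the 1,000-sample evaluation set.
    For TTT/TENT/RPL we adapt on each evaluation input, predict, and reset to f^t;
    for DUA the model is simply frozen (adapt_then_reset=False)."""
    snapshot = copy.deepcopy(model.state_dict())
    correct, total = 0, 0
    for x, y in eval_loader:
        if adapt_then_reset:
            tta_update(model, x)
        pred = _top1(model, x)
        correct += (pred == y).sum().item()
        total += y.numel()
        if adapt_then_reset:
            model.load_state_dict(snapshot)
    return 100.0 * correct / total  # top-1 Acc (%)
```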
To fully demonstrate the lifting power of the TTA methods, we adapt our target models with the four TTA methods, i.e., TTT, DUA, TENT, and RPL. The evaluation results of these four TTA methods on ResNet-18 with CIFAR-10-C are shown in Figure 4. The results for the ResNet-50 trained on CIFAR-100 are shown in Figure 22. Firstly, we can observe that the performance of the target models can be improved by the TTA methods. Meanwhile, as the number of i.i.d. samples increases, the model gains more performance improvement. For instance, from the results shown in Figure 4(a), we can observe that the Acc of TTT-0 on Fog-5 and Con-5 is 73.93% and 83.97%, respectively. However, after being updated by 50 i.i.d. samples (the model comes to TTT-50), the performance improves to 75.17% and 84.43%, and it further improves to 81.9% and 88.37% when the model comes to TTT-1000. Secondly, we compare DUA, TENT, and RPL together since they adapt the same target model. Compared to DUA, TENT and RPL both have a greater ability to enhance the model. For example, when the target model is C10-Res18 and \(\mathcal{D}_{\epsilon}\) is Fog-5, the Acc of TENT-10 and RPL-10 are both higher than 80.00%, but only 76.50% for DUA-10. This is because DUA processes one test sample at a time while TENT and RPL require the test samples to come in a "batch-by-batch" manner, which lets TENT and RPL learn the normalization statistics more quickly. Thirdly, we compare TENT and RPL since they both adapt the affine parameters in the BN layers but use different loss functions. We observe that TENT can achieve better performance than RPL. For instance, when the target model is C10-Res18, the Acc of RPL-40 on Ori, Gls-5, Fog-5 and Con-5 is 91.83%, 64.87%, 83.97%, and 83.57%, respectively, whereas the Acc of TENT-40 is 92.10%, 68.27%, 85.40%, and 84.27%, respectively.

\begin{table} \begin{tabular}{l l c c c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Target Model} & \multicolumn{4}{c}{Acc} \\ \cline{3-6} & & Ori & Gls-5 & Fog-5 & Con-5 \\ \hline \multirow{4}{*}{CIFAR-10} & C10-Res18@Y4 & 93.70 & 61.90 & 71.40 & 83.60 \\ & C10-Res50@Y4 & 92.80 & 56.60 & 68.00 & 78.50 \\ & C10-Res18 & 93.00 & 58.10 & 64.80 & 19.20 \\ & C10-Res50 & 94.20 & 62.60 & 70.80 & 24.90 \\ \hline \multirow{4}{*}{CIFAR-100} & C100-Res18@Y3 & 71.40 & 20.90 & 41.40 & 48.70 \\ & C100-Res50@Y3 & 65.20 & 24.70 & 31.40 & 30.80 \\ \cline{1-1} & C100-Res18 & 73.50 & 24.60 & 32.60 & 11.50 \\ \cline{1-1} & C100-Res50 & 76.20 & 25.50 & 38.30 & 12.30 \\ \hline \hline \end{tabular} \end{table} Table 1: The utility of the frozen target model (%).

Figure 4: Utility of TTA methods. The target model is ResNet-18 trained on CIFAR-10. The x-axis represents different evaluation datasets. The y-axis represents the prediction accuracy.

### TePAs Against TTA Models

We now launch TePAs against the TTA models. To fully demonstrate the vulnerability of TTA models to TePAs, we feed poisoned samples to adapt all eight target models and evaluate the impact on the prediction performance. The results are shown in Figure 5, Figure 6, Figure 7, and Figure 8. Firstly, we can observe that regardless of the network architecture or the dataset, our poisoned samples lead to a significant reduction in the prediction abilities of the target models.
The performance of the model gradually decreases as the number of poisoned samples increases. For instance, as Figure 5(a) shows, when we feed 50 poisoned samples to C10-Res18@Y4, the Acc on Gls-5 drops to 30.87%, and it further drops to 26.97% with 100 poisoned samples. Meanwhile, we can also observe that under TePAs, the model's performance decreases on both the original and the corrupted evaluation datasets. For instance, from Figure 7(c), we can observe that when we feed 40 batches of poisoned samples, the Acc drops by about 20% on both Ori and Gls-5. Note that only a few poisoned samples are needed to significantly reduce the target model's performance, e.g., when we feed just 10 poisoned samples, the Acc on Ori drops from 76.20% to 41.83% (Figure 6(d)). Secondly, we can observe that even if the surrogate model has a different architecture and is trained on a different surrogate dataset, TePAs are still effective. Recall that we use a Res18@Y3 as the surrogate model, pre-trained on the ImageNet portion of CINIC-10, to poison TTT-models, whose structure and training dataset are both different from those of the target C10-Res18@Y4 model. Moreover, when poisoning C100-Res18@Y3, our surrogate dataset (CIFAR-10) only contains part of the distribution information compared to the training dataset of the target model (CIFAR-100). When poisoning TENT- and RPL-models, we use a VGG-11 as the surrogate model. In short, the adversary can always generate powerful poisoned samples based on the surrogate model even though they do not have adequate background knowledge about the target model. Thirdly, by comparing Figure 7 and Figure 8, we find that RPL is more robust against TePAs than TENT. For instance, when the target model is C10-Res18, with 40 batches of poisoned samples, TENT's Acc drops by 12.53% and 15.20% on Fog-5 and Con-5, respectively, while RPL's Acc drops by 7.73% and 10.07%, respectively. In conjunction with the discussion in Section 5.3, we can conclude that by minimizing entropy instead of using the GCE loss, TENT obtains a greater increase than RPL when the test samples are i.i.d., but its performance also drops more than RPL's under poisoning. Nevertheless, RPL cannot resist TePAs perfectly. For instance, when we feed 40 batches of poisoned samples (Figure 8(b)), the Acc drops from 81.30% to 68.73% on Con-5.

Figure 5: TePAs Against TTT-models. The left y-axis and the right y-axis represent the prediction accuracy on the original and corrupted evaluation datasets, respectively. The x-axis represents the number of poisoned samples.

Figure 6: TePAs Against DUA-models. The left y-axis and the right y-axis represent the prediction accuracy on the original and corrupted evaluation datasets, respectively. The x-axis represents the number of poisoned samples.

Figure 7: TePAs Against TENT-models. The left y-axis and the right y-axis represent the prediction accuracy on the original and corrupted evaluation datasets, respectively. The x-axis represents the number of poisoned samples.

**t-SNE Visualization.** To better demonstrate the effect of TePAs, we feed the evaluation data (Ori and Gls-5) to the poisoned model and visualize the features with t-Distributed Stochastic Neighbor Embedding (t-SNE) [50]. The results are shown in Figure 9, in which different colors denote samples from different classes. We can observe that when \(t=0\), the evaluation samples on Ori can be well distinguished by the model, while the clustering effect on Gls-5 is weak, i.e., some features are close to each other.
However, as poisoned samples are incrementally introduced to the model, the features become increasingly entangled. For instance, at \(t=100\), the model exhibits a significant reduction in the distinguishability between different sample categories on Gls-5.

### Impact of the Poisoning Strategies

In the threat model, we assume that the adversary cannot control where the poisoned samples appear in the test data stream. As such, we here discuss how the positions of the poisoned samples affect TePAs. Specifically, we focus on a relatively small attack window, in which the adversary can inject multiple poisoned samples. We consider three scenarios: (1) _Uniformly poisoning_: poisoned samples appear in the test data stream "uniformly." (2) _Warming-up before poisoning_: the target model has been fine-tuned by several i.i.d. samples before the arrival of the poisoned samples. (3) _Warming-up after poisoning_: after the poisoning process, the target model is further fine-tuned by several i.i.d. samples. In this part, we use the ResNet-18 trained on CIFAR-10 as the target model.

**Uniformly Poisoning.** Given a test data stream, we first consider that each test sample has a probability \(P\) of being a poisoned sample and a probability \(1-P\) of being an i.i.d. sample. For TTT- and DUA-models, we feed 100 test samples in total. For TENT- and RPL-models, we feed 40 batches of test samples. We traverse \(P\) from 0.0 to 1.0. The results of the target model's performance are shown in Figure 10. First, we can observe a general phenomenon: Acc drops as the probability of the poisoned samples increases. Moreover, this trend holds for all TTA methods and all evaluation datasets (the original and the corrupted ones). Concretely, when we poison TTT-models and the evaluation dataset is Ori, the Acc is 93.43%, 88.50%, and 86.60% if \(P\) is 0.0, 0.5, and 0.8, respectively (shown in Figure 10(a)). Meanwhile, we notice that our poisoning attacks can degrade the target model's performance significantly with a low poisoning ratio. For example, when the target model is a TENT-model and \(P=0.2\) (see Figure 10(c)), we can degrade the performance of the target model by 2.90%, 6.60%, 4.90% and 11.60% on Ori, Gls-5, Fog-5, and Con-5, respectively, which further demonstrates the efficacy of our attacks.

**Warming-up before Poisoning.** In this part, we aim to evaluate TePAs in the following scenario: the target model has learned distributional information (about the evaluation samples) before the poisoning process. In other words, the target models have received several i.i.d. samples in advance. We feed i.i.d. samples first and then feed poisoned samples to TTT-, DUA-, TENT-, and RPL-models, respectively. Then, we evaluate the target model's performance on Gls-5 (Figure 11) and Ori (Figure 23). Take the TTT-model's performance on Gls-5 as an example (Figure 11(a)). Firstly, we can observe that given a certain number of i.i.d. samples (\(\#x\)), the Acc gradually decreases as the number of poisoned samples (\(\#x^{\prime}\)) increases. For instance, when \(\#x=0\), the Acc drops by 36.90% (38.90%) if \(\#x^{\prime}\) is 100 (400). Another observation is that increasing \(\#x\) helps to mitigate the performance decrease. For instance, when \(\#x^{\prime}=200\), the Acc drops by 37.70% if \(\#x\) is 0, but by only 32.80% if \(\#x\) is 5,000. Secondly, although i.i.d. samples lead to a more robust target model against TePAs, the improved Acc obtained through large amounts of i.i.d.
samples can be quickly degraded by a few poisoned samples. For instance, warming up the TTT-model with \(5,000\) i.i.d. samples increases the Acc by 8.50%, but the Acc drops by 22.60% when we then feed only 100 poisoned samples. In general, the TTA models are still vulnerable to TePAs even if they are adjusted with i.i.d. samples beforehand.

Figure 9: The features are obtained from the evaluation dataset (1k evaluation samples) through the target TTT-model C10-Res18@Y4. We project them into a plane using t-SNE and arrange the t-SNE results in time order from left to right.

**Warming-up after Poisoning.** Besides feeding the i.i.d. samples in advance, the target model can also continue to receive i.i.d. samples after the poisoning process. In this part, we feed poisoned samples first and then feed i.i.d. samples. We evaluate the model's performance on Gls-5 (Figure 12) and Ori (Figure 24). The trends in the experimental results for the four TTA algorithms are generally consistent. Here we take the DUA-model's performance on Gls-5 as an example. First, we observe that, regardless of the number of poisoned samples, the model's utility recovers to the normal level given enough i.i.d. samples. For instance, when \(\#x\) is 500, the Acc is 66.60% and 66.17% for \(\#x^{\prime}\) of 50 and 500, respectively. Therefore, the performance drop caused by the poisoned samples can be largely eliminated. However, we find that the cost of this resilience is relatively high, i.e., the recovery is less complete with fewer i.i.d. samples. For instance, if we first launch the attack with 20 poisoned samples, 100 subsequent i.i.d. samples can only recover the model to 64.47%.

**Note.** Combining the results in Figure 11 and Figure 12, we notice that TTA methods have an "instant response" to the relative order of poisoned samples and i.i.d. samples. Take the DUA-model as an example: when \(\#x^{\prime}\) and \(\#x\) are both 500, the Acc is 66.17% if we feed the poisoned samples earlier than the i.i.d. samples. However, it drops to 44.43% if we feed the i.i.d. samples beforehand. Meanwhile, comparing the poisoning percentage with the degree of performance degradation, we notice that attacking TTT is easier than attacking the other three TTA methods. This is because TTT updates all parameters of the feature extractor, whereas the other TTA methods only update the parameters in the BN layers.

Figure 11: Warming-up before Poisoning. The target model is ResNet-18 trained on CIFAR-10. The \(\mathbf{y}\)-axis and the \(\mathbf{x}\)-axis represent the number of the i.i.d. samples and the poisoned samples, respectively. We fix the evaluation dataset to Gls-5.

Figure 12: Warming-up after Poisoning. The target model is ResNet-18 trained on CIFAR-10. The \(\mathbf{y}\)-axis and the \(\mathbf{x}\)-axis represent the number of poisoned samples and the i.i.d. samples, respectively. We fix the evaluation dataset to Gls-5.

### Discussion

**Loss Value.** To better explain why TePAs are successful, we visualize the statistics of the loss values in Figure 13. Specifically, we feed 100 test samples (poisoned samples or benign samples) to the TTT-model; for TENT- and RPL-models, we feed 40 batches of test samples. First, we can observe that the poisoned samples indeed have greater loss values, which means the poisoned samples that have larger losses on the surrogate model transfer to the target model as well. For instance, as shown in Figure 13(a), the losses of the benign samples are concentrated
around 0.0, whereas the losses caused by poisoned samples are around 1.5. Second, by comparing the results of TENT and RPL (Figure 13(b) and Figure 13(c)), we observe that RPL yields a smaller range of loss values than TENT, which is why RPL's performance fluctuates less under both benign and poisoned samples.

Figure 13: Statistics of the loss values. The target model is ResNet-18 trained on CIFAR-10. Different colored bars indicate different types of arriving test samples.

Figure 10: Uniformly Poisoning. The target model is ResNet-18 trained on CIFAR-10. The left \(\mathbf{y}\)-axis and the right \(\mathbf{y}\)-axis represent the prediction accuracy on the original and corrupted evaluation datasets, respectively. The \(\mathbf{x}\)-axis represents the probability \(P\) of being a poisoned sample for each test sample.

**Non-i.i.d. Samples.** From Figure 13 we also notice that the corrupted benign samples (non-adversarial) have larger loss values than the original samples. Therefore, we investigate the impact of using non-i.i.d. samples to warm up the model. In this part, we use one kind of corrupted data (e.g., Gls-5) to update the model first, and then evaluate the model on another kind of corrupted data (e.g., Fog-5). The results are shown in Figure 21. First, we can observe that i.i.d. samples are the most effective samples for improving performance. Second, the impact of non-i.i.d. samples is uncertain (performance may increase or decrease), whereas poisoned samples consistently reduce the performance by the largest margin.

## 6 Defense

As we have shown above, TTA methods are vulnerable to TePAs. To mitigate the attacks, we discuss possible defenses against TePAs. In this part, we take C10-Res18@Y4 adapted with TTT as the target model and take Fog-5 as the evaluation dataset. We feed 100 poisoned samples to the target model; the experimental results are shown in Figure 14. Meanwhile, we discuss these defenses' impact on the benign samples in Figure 25. Other TTA methods show similar trends.

**Adversarial Training (AT).** Since TePAs generate poisoned samples with adversarial attacks, one possible defense is to use adversarial training (AT) [31] to improve the target model's robustness against poisoned samples. The detailed AT process for the TTT-model is shown in Algorithm 2. Specifically, we train the robust target TTT-model \(f_{at}\) using both the original training samples and adversarial examples generated by PoiGen, and we use different perturbation budgets \(\epsilon_{at}\) to generate the adversarial examples. From Figure 14(a), we can observe that poisoned samples cannot reduce the target TTT-model's performance after AT. However, AT itself causes a reduction in the model's performance, which may be because the adversarial examples introduce a negative impact on the feature extractor of the model. Another disadvantage of AT is its high computational cost: the training time of one epoch with AT is \(\sim 13\) times longer than training the model without AT (we set \(I_{iter}=2\) and \(I_{adv}=5\) in AT), which makes it unrealistic to apply AT to larger-scale models.
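Algorithm 2 is not reproduced here, but the AT procedure can be sketched as follows. This is a schematic under our own naming and interface assumptions: a Y-structured model returning a `(main_logits, rotation_logits)` pair, the `poi_gen` sketch from Section 4 reused as the example generator with the model itself as the (white-box) surrogate, and the reduced budgets \(I_{iter}=2\), \(I_{adv}=5\) noted above threaded through in practice.

```python
import torch
import torch.nn.functional as F

def rotation_batch(x):
    """Build the 4-way rotation SSL batch and its labels for a Y-structured model."""
    xs = torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)])
    ys = torch.cat([torch.full((x.size(0),), k, dtype=torch.long, device=x.device)
                    for k in range(4)])
    return xs, ys

def at_epoch(model, loader, optimizer, eps_at=8 / 255):
    """One epoch of adversarial training for a TTT-style model: train on each clean
    batch and on a PoiGen-style adversarial counterpart (schematic of Algorithm 2)."""
    model.train()
    for x, y in loader:
        # Reuse the poi_gen sketch above, with the model itself as surrogate.
        x_adv = poi_gen("TTT", x, surrogate=model, eps=eps_at)
        loss = 0.0
        for inputs in (x, x_adv):                  # clean + adversarial views
            main_logits, _ = model(inputs)         # main classification head
            x_rot, y_rot = rotation_batch(inputs)
            _, rot_logits = model(x_rot)           # rotation SSL head
            loss = loss + F.cross_entropy(main_logits, y) + F.cross_entropy(rot_logits, y_rot)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```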
**Bit-depth Reduction (BDR).** Color bit-depths are used to represent image pixels; for instance, an 8-bit value can represent a pixel value in [0, 255]. Xu _et al._ [58] found that reducing the bit depth is effective against adversarial attacks. To verify whether bit-depth reduction (BDR) can defend against TePAs, we reduce the bit-depths of the input poisoned samples to 4 and 2. The results of the target model's performance are shown in Figure 14(b). We note that the evaluation data used to calculate Acc are also processed by BDR, since the target model cannot distinguish poisoned samples from benign samples (the same applies to the defense methods below). We can observe that as the number of poisoned samples increases, the performance of the target model gradually decreases, which means BDR cannot defend against TePAs effectively.

**Random Resizing & Padding (RRP).** Adding a random resizing (RR) layer and a random padding (RP) layer is an effective way to build DNNs that are robust to poisoned samples [55]. The RR layer resizes the original input \(x\) of size \(W\times W\times C\) to a resized image \(x^{\prime}\) of random size \(W^{\prime}\times W^{\prime}\times C\), where \(|W^{\prime}-W|\) is kept within a small range. After that, the RP layer outputs a padded image \(x^{\prime\prime}\) by padding zero pixels around the resized image, e.g., it pads a random number \(w\) of zeros on the left and \(h\) zeros on the top of \(x^{\prime}\). We draw \(W^{\prime}\) uniformly at random from [32, 40], and \(w,h\) uniformly at random from \([0,W^{\prime}-W]\). We then apply RRP to the poisoned samples. From Figure 14(c), we can observe that RRP also cannot attenuate the performance degradation caused by TePAs.

**JPEG Compression (JC).** JPEG compression (JC) is another typical defense method to mitigate poisoning attacks [16]. In this part, we discuss the effect of the JPEG quality on TePAs. We choose 90, 50, and 10 (out of 100) as the JPEG quality values, apply JC to the poisoned samples, and show the results of the target model's performance in Figure 14(d). We can observe that poisoned samples after JC can still degrade the target model's performance. Meanwhile, as the quality of the compressed images decreases, so does the performance of the target model.
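The three input-transformation defenses evaluated above can be expressed as simple pre-processing functions. The sketch below is an illustrative PyTorch/PIL rendering, not our exact evaluation code: the function names are ours, the RRP variant pads the resized image back to a fixed 40x40 canvas (one common instantiation of the RR and RP layers), and the parameter values follow the settings described in the text.

```python
import io
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision.transforms.functional import to_pil_image, to_tensor

def bit_depth_reduction(x, bits=4):
    """BDR: quantize pixel values in [0, 1] to 2**bits levels (here 4- or 2-bit depth)."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def random_resize_pad(x, out_size=40):
    """RRP: resize the 32x32 input to a random W' in [32, out_size], then zero-pad it
    to out_size with a random left/top offset."""
    _, _, h, _ = x.shape
    new = torch.randint(h, out_size + 1, (1,)).item()
    x = F.interpolate(x, size=(new, new), mode="bilinear", align_corners=False)
    pad = out_size - new
    left = torch.randint(0, pad + 1, (1,)).item()
    top = torch.randint(0, pad + 1, (1,)).item()
    return F.pad(x, (left, pad - left, top, pad - top))

def jpeg_compress(x, quality=50):
    """JC: re-encode every image in the batch through JPEG at the given quality (90/50/10)."""
    out = []
    for img in x:
        buf = io.BytesIO()
        to_pil_image(img.clamp(0, 1)).save(buf, format="JPEG", quality=quality)
        buf.seek(0)
        out.append(to_tensor(Image.open(buf).convert("RGB")))
    return torch.stack(out)
```

Since the defender cannot tell poisoned samples from benign ones, the chosen transform would be applied to every incoming test sample, as noted above.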
## 7 Related Work

**Domain Adaptation.** Improving the robustness of ML models under shifted distribution data is a longstanding problem [29]. Besides TTA, there are other methods to improve an ML model's robustness. Domain-invariant methods [57, 63] aim to learn embeddings that are invariant across different domains. Transfer learning [60, 65, 51] leverages embeddings from a pre-trained (teacher) model to train a new (student) model, which can work well on data from the new distribution. Semi-supervised learning [24, 46, 5] trains the model on a mixed dataset with labeled and unlabeled data, and the use of the unlabeled data can improve the model's performance on shifted distribution data. Self-supervised learning [10, 42, 9, 22] trains models on large-scale unlabeled datasets, and the learned embeddings can be applied to downstream tasks in different domains.

**Poisoning Attacks.** Poisoning attacks are one of the most exploited threats to modern data-driven ML models [38, 30, 53]. The attackers inject a small number of poisoned samples during the training process to sabotage the prediction performance of the target model at test time. Poisoning attacks have been successfully applied to many ML settings, such as supervised learning [43, 59], self-supervised learning [7], federated machine learning [48, 37], etc. The closest work to our attack is poisoning attacks against online learning (where data becomes available in sequential order and is used to update the best predictor for future data at each step) [62, 44]. Though the target model sequentially updates its parameters in both online-learning settings and TTA settings, poisoning attacks against online learning still assume that the adversaries have partial knowledge of the testing data and are white-box adversaries. Our poisoning attack assumes neither; we only assume the attackers have query access to the target model.

## 8 Conclusion

In this paper, we perform the first untargeted test-time poisoning attacks (TePAs) against four prominent TTA methods - TTT, DUA, TENT, and RPL. Concretely, we propose a poisoned-sample generation framework, PoiGen, which relies on surrogate models and transfer-based adversarial attacks to build adversarial examples and uses them as poisoned samples to degrade the performance of the target TTA models. Empirical evaluations show that TePAs can successfully break the target TTA models by degrading their performance to a large extent. To mitigate the attacks, we investigate four defense mechanisms, i.e., adversarial training, bit-depth reduction, JPEG compression, and random resizing & padding. We observe that adversarial training (AT) can prevent the poisoning attacks from degrading the target model. However, AT affects the performance of the target model and enlarges the training effort, making it difficult to train the target models on large-scale datasets. Moreover, we notice that the target model's performance inevitably recovers after our attacks, even though the degradation before recovery is sufficient to cause a safety incident. We leave how to irreversibly degrade the target model's performance as interesting future work. In summary, we demonstrate that current TTA methods are prone to test-time poisoning attacks, and we advocate for the integration of defenses against test-time poisoning attacks into the design of future TTA methods.

## Acknowledgments

We thank all anonymous reviewers and our shepherd for their constructive comments and valuable feedback. This work is partially funded by the National Key R&D Program of China (2018YFA0704701, 2020YFA0309705), Shandong Key Research and Development Program (2020ZLYS09), the Major Scientific and Technological Innovation Project of Shandong, China (2019ZZY010133), the Major Program of Guangdong Basic and Applied Research (2019B030302008), and the European Health and Digital Executive Agency (HADEA) within the project "Understanding the individual host response against Hepatitis D Virus to develop a personalized approach for the management of hepatitis D" (D-Solve) (grant agreement number 101057917). Tianshuo Cong is also supported by the Shuimu Tsinghua Scholar Program.
2304.08808
A True Random Number Generator for Probabilistic Computing using Stochastic Magnetic Actuated Random Transducer Devices
Magnetic tunnel junctions (MTJs), which are the fundamental building blocks of spintronic devices, have been used to build true random number generators (TRNGs) with different trade-offs between throughput, power, and area requirements. MTJs with high-barrier magnets (HBMs) have been used to generate random bitstreams with $\lesssim$ 200~Mb/s throughput and pJ/bit energy consumption. A high temperature sensitivity, however, adversely affects their performance as a TRNG. Superparamagnetic MTJs employing low-barrier magnets (LBMs) have also been used for TRNG operation. Although LBM-based MTJs can operate at low energy, they suffer from slow dynamics, sensitivity to process variations, and low fabrication yield. In this paper, we model a TRNG based on medium-barrier magnets (MBMs) with perpendicular magnetic anisotropy. The proposed MBM-based TRNG is driven with short voltage pulses to induce ballistic, yet stochastic, magnetization switching. We show that the proposed TRNG can operate at frequencies of about 500~MHz while consuming less than 100~fJ/bit of energy. In the short-pulse ballistic limit, the switching probability of our device shows robustness to variations in temperature and material parameters relative to LBMs and HBMs. Our results suggest that MBM-based MTJs are suitable candidates for building fast and energy-efficient TRNG hardware units for probabilistic computing.
Ankit Shukla, Laura Heller, Md Golam Morshed, Laura Rehm, Avik W. Ghosh, Andrew D. Kent, Shaloo Rakheja
2023-04-18T08:17:31Z
http://arxiv.org/abs/2304.08808v1
A True Random Number Generator for Probabilistic Computing using Stochastic Magnetic Actuated Random Transducer Devices ###### Abstract Magnetic tunnel junctions (MTJs), which are the fundamental building blocks of spintronic devices, have been used to build true random number generators (TRNGs) with different trade-offs between throughput, power, and area requirements. MTJs with high-barrier magnets (HBMs) have been used to generate random bitstreams with \(\lesssim 200\) Mb/s throughput and pJ/bit energy consumption. A high temperature sensitivity, however, adversely affects their performance as a TRNG. Superparamagnetic MTJs employing low-barrier magnets (LBMs) have also been used for TRNG operation. Although LBM-based MTJs can operate at low energy, they suffer from slow dynamics, sensitivity to process variations, and low fabrication yield. In this paper, we model a TRNG based on medium-barrier magnets (MBMs) with perpendicular magnetic anisotropy. The proposed MBM-based TRNG is driven with short voltage pulses to induce ballistic, yet stochastic, magnetization switching. We show that the proposed TRNG can operate at frequencies of about 500 MHz while consuming less than 100 fJ/bit of energy. In the short-pulse ballistic limit, the switching probability of our device shows robustness to variations in temperature and material parameters relative to LBMs and HBMs. Our results suggest that MBM-based MTJs are suitable candidates for building fast and energy-efficient TRNG hardware units for probabilistic computing. True random number generation, Magnetic tunnel junctions, Energy-efficient computing, Probabilistic switch, Process variability, Spintronics ## I Introduction Random number generators (RNGs) are used in a wide variety of applications, such as cryptography, hardware security, Monte-Carlo simulations, and more recently in stochastic computing and brain-inspired computing [1, 2, 3, 4]. Commonly, mathematical or computational algorithms, referred to as pseudo-random number generators (PRNGs), are used to generate random sequences. These sequences are, however, completely deterministic and can be regenerated if the initial state (referred to as "seed") is known. PRNGs are, therefore, not suitable for cryptography or hardware security applications. Also, the generated sequences have long-range correlations, which could lead to errors in Monte-Carlo simulations and thus compromise the integrity of the solution [1, 3]. True random number generators (TRNGs), on the other hand, are hardware elements that utilize non-deterministic phenomena, such as thermal noise, oscillatory jitters, radioactive decay, among others, to generate secure stochastic bitstreams [1, 5]. Most existing TRNGs, however, either require complex and power consuming post-processing circuits to extract the random bits, or are too prone to temperature and device variations. Some of them also suffer from poor throughput, and as a result fail to meet the demands of emerging data-driven applications [1, 5]. Therefore, CMOS-compatible TRNGs that are fast, energy-efficient, compact, tunable, scalable for on-chip integration, and also robust to temperature and fabrication process variations are required. Magnetic tunnel junctions (MTJs) are two-terminal spintronic devices which consist of a thin non-magnetic (NM) layer sandwiched between two ferromagnetic layers--a "pinned-layer" (PL) whose magnetization is fixed, and a "free-layer" (FL) whose magnetization can be reoriented by an input spin current or magnetic field [6]. 
In spin current-driven MTJs, electrons flowing into the FL can switch the magnetization direction if sufficient angular momentum is transferred to this layer. Electrons flowing from the FL (PL) into the PL (FL) switch the magnetization of the FL antiparallel (parallel) to that of the PL, as shown in Fig. 1(a). The relative magnetic orientations of the PL and FL can then be transduced into an electrical signal, as shown in Fig. 1(b). Parallel (antiparallel) magnetic orientations of the PL and FL corresponds to a low (high) resistance, and therefore a high (low) current is detected across the MTJ. These binary resistance states are separated by an energy barrier \(E_{b}\), as shown in Fig. 1(c), and can be used to represent one bit of information, viz., bit '0' for low resistance and bit '1' for high resistance state, respectively. The energy barrier in a ferromagnet (FM) depends on its material properties and the volume, and determines its thermal stability. If \(E_{b}\) is low and comparable to the thermal energy \(k_{B}T\), where \(k_{B}\) is the Boltzmann constant and \(T\) is the temperature, the magnetization of FL fluctuates freely between the two states [7]. Such FMs are referred to as low-barrier magnets (LBMs). On the other hand, if \(E_{b}>40\)\(k_{B}T\), the two magnetization states are stable and could be used for long-term (more than 10 years) storage [6]. These FMs are referred to as high-barrier magnets (HMBs). Traditionally, MTJs have been used to realize non-volatile (NV) magnetic memory devices, as well as radio-frequency (RF) oscillators. Moreover, they are compatible with CMOS back-end-of-line fabrication processes [6], which makes them an attractive technology for enabling future energy-efficient computing platforms [8]. The MTJ operation is stochastic in nature as the magnetization dynamics of a FM is strongly influenced by thermal effects. On the one hand, these thermal effects lead to challenges such as write-errors in NV memories [6] and line-width broadening in RF oscillators [9]. On the other hand, stochastic MTJ operation can be used for developing electrically tunable TRNGs [5]. For example, by controlling the pulse width and the amplitude of the applied input voltage or current, the switching probability of the FL can be modified [10]. In this paper, we study a new type of fast, energy-efficient, and robust spintronic TRNG, relying on the stochasticity present in medium-barrier magnets (MBMs) with perpendicular magnetic anisotropy (PMA) that are used as the FL of an MTJ [11]. Typically, the energy barrier of MBMs is in the range of 20-40 \(k_{B}T\). The stochastic response of this TRNG is tuned by applying voltage pulses of appropriate amplitude and duration. Our simulations performed using 45-\(\mathrm{nm}\) CMOS PDK [12] show that the TRNG can operate at a frequency of 500 \(\mathrm{MHz}\) with energy consumption of 92 \(\mathrm{fJ/bit}\), while also being relatively insensitive to both process and temperature (PVT) variations. The TRNG presented in this paper offers high robustness, more than 2.0\(\times\) higher frequency while also consuming 3.0\(\times\) lower power compared to previous spintronic TRNG [15]. The key contributions of this work are as follows. 1. Description of the physics and operation and Monte-Carlo simulations of the MBM TRNG. 2. Quantification of the process, voltage, and temperature robustness of the MBM TRNG and performance benchmarking against prior works. 3. 
MBM TRNG circuit design in 45-\(\mathrm{nm}\) CMOS node for timing-accurate analyses.

## II Prior Work

Previously, 1/f noise in transistors, shot noise associated with tunneling current in heavily-doped Zener diodes, accumulated jitter in ring oscillators, and random telegraph noise (RTN) in MOSFETs have been exploited to build CMOS-compatible TRNGs with different pros and cons. Optics-based sources, such as lasers, entangled photons, and superconducting circuits, have also been explored as prospective TRNG candidates. They can be used to generate random sequences at 60 \(\mathrm{Gbps}\) but require complex post-processing and low temperatures [31, 32, 33]. Recently, stochastic switching behavior in oxide-based resistive elements, ferroelectric materials, magnetic materials, and phase-change-based materials has been leveraged as a source of true random bitstreams. Among these emerging devices, magnetic devices have become the front-runner for TRNG technology owing to their high throughput and significantly lower energy cost compared to other materials [5]. In addition, MTJ-based TRNGs can be built using either HBMs [10, 15] or LBMs [7, 22]. Finally, the FL used in either HBM- or LBM-based MTJs can have either in-plane anisotropy [7, 15] or perpendicular anisotropy [10, 22]. This wide range of choices has enabled the development of CMOS-compatible and scalable MTJ-based TRNGs [5].

Fig. 1: (a) Spin-transfer-torque-driven switching in an MTJ. Current \(I_{c}^{\mathrm{P\to AP}}\) (\(I_{c}^{\mathrm{AP\to P}}\)) switches the magnetization of the FL antiparallel (parallel) to that of the PL. (b) Current through the MTJ as a function of the applied voltage. The solid red lines show the behavior of an MTJ in the case of switching from parallel (P) to antiparallel (AP), and vice-versa. On the other hand, the dotted black lines (overlaid on the red lines) show the behavior of the MTJ if no switching had occurred. (c) Macrospin model showing the energy barrier \(E_{b}\) separating the P and AP states.

### _Non-magnetic TRNGs_

While CMOS-based TRNGs are quite mature, they typically rely on 1/f noise in a large number of transistors (more than 100) as the entropy source. However, the output bitstream is correlated, and the scalability of this type of TRNG is limited. The large number of transistors also leads to area and power overheads. For example, a bistable CMOS-based TRNG fabricated in 45-nm CMOS technology operated at \(2.4~{}\mathrm{GHz}\) but consumed \(3~{}\mathrm{pJ/bit}\) of energy and occupied \(4000~{}\mathrm{\mu m^{2}}\) of circuit area [28]. Another CMOS ring oscillator-based TRNG that utilized accumulated jitter to make the design more robust to noise injection consumed \(23~{}\mathrm{pJ/bit}\) of energy and occupied \(375~{}\mathrm{\mu m^{2}}\) of area while generating bits at the rate of \(23~{}\mathrm{MHz}\) [29]. Zener diode-based TRNGs, which rely on shot noise associated with tunneling currents, suffer from large area and power overhead since they require post-processing to amplify their weak output signal and reduce correlation [1]. CMOS-compatible TRNGs utilizing RTN in MOSFETs and metal-insulator-metal memristors have limited bitrate (\(\leq\) 3.33 Hz for a 65-\(\mathrm{nm}\) RTN TRNG [30]), are generally not stable over time due to complex electrostatic interactions between trapped charges and ions, and are highly sensitive to electrode/insulator materials. Also, the measured RTN signal is quite weak.
Non-magnetic oxide-based resistive elements, ferroelectrics, and phase-change-based TRNGs typically require energy-intensive programming operations, while their throughput is limited to 40 \(\mathrm{Mb/s}\) [34]. Besides, the entropy of phase-change TRNGs is highly sensitive to the stoichiometry, crystal quality, and temperature [35]. The stochastic relaxation dynamics in metal-insulator transition oxides (e.g., VO\({}_{2}\)) has been used to generate random signals, but the signals are extremely sensitive to the device temperature and the materials are generally not CMOS compatible [36].

### _MTJs with high-barrier magnets_

Previous works on spintronic TRNGs have largely focused on HBMs. In [10, 13], antidamping-like torque was used to realize a TRNG with HBMs, while in [14, 15], precessional magnetization dynamics were excited to generate a random data stream. In these TRNGs, an input spin current disturbs the FL from its stable state to a metastable state. An equal switching probability from the metastable state to one of the two stable equilibrium points generates random bits [5]. TRNGs based on the antidamping switching mechanism are tunable but extremely sensitive to the amplitude and the duration of the input as well as to temperature variations [10, 11]. To address these challenges, different circuit-level modifications have been suggested. For example, in [5, 13], a 10-bit counter feedback circuit, which counts the number of '1's in a random sequence and adjusts the input current pulse duration in real time to achieve 50% switching probability, was presented. This was followed by other feedback circuit implementations which replaced the previously used high-gain analog amplifiers by their digital counterparts for better CMOS integration [5, 16, 17]. Here, either the current amplitude or its duration was adjusted to tune the switching probability close to 50%. These feedback circuits, however, introduce unwanted correlations in the bitstream. Recent simulation results have shown that parallel-connected MTJs could generate unbiased bitstreams while also avoiding complex feedback circuitry [19, 20]. TRNGs based on the precessional switching mechanism are faster and require lower energy, but have low tunability. They are robust to small variations in input current amplitude and pulse duration, provided the amplitude is above a certain threshold [5, 14, 15, 21]. However, process and voltage variations in such TRNGs could lower the current amplitude below the threshold. It was shown through simulations that a feedback circuit with two counters could help in this regard [5, 18], but at the cost of unwanted correlations. In all the above designs, post-processing using XOR gates would reduce bias in the bitstreams; however, it would incur additional area and power overhead and lower the throughput [10, 15, 21].

### _MTJs with low-barrier magnets_

Recently, LBMs in the superparamagnetic limit have been used as energy-efficient sources of random numbers. In superparamagnets, thermal noise passively flips the magnetization state of the FL from one stable state to the other without any input power. These fluctuations can then be sampled (by a small external read current) as a random bitstream [7, 22]. An LBM-based TRNG with an in-plane magnetized FL was shown to consume only 2 \(\mathrm{fJ/bit}\) of energy during the read process [7]. However, the generated bitstreams were biased and required 8 XOR operations to improve randomness.
This post-processing increased the energy cost to 20 \(\mathrm{fJ/bit}\) and required a circuit area of \(2~{}\mathrm{\mu m^{2}}\), while also lowering the bitrate (1.66 \(\mathrm{kHz}\)) [7]. Another TRNG with a perpendicularly magnetized FL consumed about 600 \(\mathrm{fJ/bit}\) energy in order to generate random bits at 45 \(\mathrm{kHz}\)[22]. Though further post-processing to improve the quality of bitstream increased power overhead, it was still 3\(\times\) lower compared to CMOS designs, while the total circuit area of the TRNG was orders of magnitude smaller [22]. In LBM-based TRNGs, the bitstream generation rate could be increased by either increasing the temperature or reducing the energy barrier between the two states [7]. However, robust LBMs with barrier heights of just a few \(k_{B}T\) are very difficult to reliably fabricate because in-plane magnetized FLs require near-perfect circular cross-sections, while perpendicularly magnetized FLs require precise cancellation of magnetic anisotropy and demagnetization fields. LBMs are also highly sensitive to process, temperature, and magnetic field variations, which reduces their robustness at the circuits and systems level [11]. ## III Stochastic Magnetic Actuated Random Transducer (SMART) TRNGs In this work, we present a TRNG based on Stochastic Magnetic Actuated Random Transducer (SMART) devices which utilize MBMs with perpendicular anisotropy as the FL within an MTJ structure [11]. SMART TRNGs are fast, have an active write scheme, and are significantly less sensitive to PVT variations [11], unlike LBM-based TRNGs that are slow, sensitive to variations and difficult to realize in practice [7, 22]. The actuation signal to the SMART device is provided by applying voltage pulses of specific amplitude and duration [11]. We demonstrate that by controlling the characteristics of the input signal, the switching probability of our TRNG can be tuned to 50%. Its stochastic response is also found to be more robust to temperature fluctuations. ### _Numerical Model_ To obtain the switching statistics and infer the relationship between switching probability and the characteristics of the input voltage pulses applied to the SMART device, the MBM with PMA is treated within the monodomain limit, wherein it has two stable states, as shown in Fig. 1(c). The probability of switching between these two states can be tuned to 50% via the spin-transfer-torque (STT) [23, 24] resulting from the input voltage pulses. Therefore, the magnetization dynamics of the monodomain is numerically solved via the Landau-Lifshitz-Gilbert (LLG) equation, which includes the effect of thermal noise [25, 26]. It is given as \[\begin{split}\frac{\partial\mathbf{m}}{\partial t}& =-\frac{\gamma\mu_{0}}{1+\alpha^{2}}\bigg{[}(\mathbf{m}\times \mathbf{H}_{\mathrm{eff}})+\alpha\mathbf{m}\times(\mathbf{m}\times\mathbf{H} _{\mathrm{eff}})\\ &\quad+\frac{\hbar}{2e}\eta\frac{G_{\mathrm{P}}V_{\mathrm{ap}}}{ \mu_{0}M_{s}\mathcal{V}}(\mathbf{m}\times(\mathbf{m}\times\mathbf{n}_{p})- \alpha(\mathbf{m}\times\mathbf{n}_{p}))\bigg{]},\end{split} \tag{1}\] where \(\mathbf{m}\) is the magnetization of the MBM normalized to the saturation magnetization \(M_{s}\), \(\alpha\) is the Gilbert damping constant, \(V_{\mathrm{ap}}\) is the input voltage across the MTJ, \(\eta\) is the spin polarization efficiency factor, \(\mathbf{n}_{p}\) is the unit vector along the direction of the spin polarization, and \(G_{\mathrm{P}}\) is the conductance of the P state. 
Here, \(\mathcal{V}\) is the volume of the FL layer, \(\hbar\) is the reduced Planck's constant, \(\mu_{0}\) is the permeability of free space, \(e\) is the elementary charge, and \(\gamma=17.6\times 10^{10}\) T\({}^{-1}\) s\({}^{-1}\) is the gyromagnetic ratio. The effective magnetic field, \(\mathbf{H}_{\mathrm{eff}}\), includes contributions from the effective anisotropy field, \(H_{k}\), acting along the easy axis, and a thermal field, \(\mathbf{H}_{\mathrm{th}}\). Therefore, \[\mathbf{H}_{\mathrm{eff}}=H_{k}m_{z}\mathbf{z}+\mathbf{H}_{\mathrm{th}}(t), \tag{2}\] where \(H_{k}=\left(\frac{2K_{s}}{\mu_{0}M_{s}}-M_{s}\right)\), and the easy axis is assumed to be along the \(\mathbf{z}\) direction. The effective anisotropy field, \(H_{k}\), depends on the field due to uniaxial magnetocrystalline anisotropy, \(\frac{2K_{s}}{\mu_{0}M_{s}}\), and the field due to shape anisotropy, which depends on \(M_{s}\). The thermal field is assumed to be a Langevin field that is spatially isotropic and uncorrelated in space and time [27]. The monodomain is assumed to be in equilibrium with a thermal bath at temperature \(T\). The energy barrier of the MBM in the SMART device, can be normalized to the thermal energy as, \(\Delta_{0}=\frac{\mu_{0}M_{s}H_{k}\mathcal{V}}{2k_{B}T}\). ### _Effect of the input voltage on switching probability_ Equation (1) is numerically solved for an ensemble of \(10^{5}\) spins using the Heun integration scheme with a 10 fs integration time step. Simulations are implemented in CUDA and run in parallel on GPUs for faster solutions. The temperature of the system is assumed to be \(T=300\)\(\mathrm{K}\), unless otherwise stated. Material parameters used for simulations are listed in Table I. In the absence of any input current and external field, the Boltzmann distribution of the magnetization angle, \(\rho(\theta)\), of Fig. 2(a) is obtained. Numerical results are represented by the histogram whereas the red overlaid curve is given as [27] \[\rho(\theta)=\frac{\sqrt{\Delta_{0}}}{Z}\exp^{-\Delta_{0}\sin^{2}\theta}|\sin \theta|, \tag{3}\] where \(\theta=\cos^{-1}(m_{z})\), and \(\int_{0}^{\pi/2}\rho(\theta)d\theta=1\). This leads to \(Z=F(\sqrt{\Delta_{0}})\), the Dawson's integral. The threshold voltage to switch the MBM deterministically, in the absence of any external field, is given as \(V^{\mathrm{th}}=\frac{1}{G_{\mathrm{ap}}}\alpha\frac{2e}{\hbar}\mu_{0}M_{s}H_{ k}\mathcal{V}\)[26], which is equal to 0.374 V for the material parameters chosen. Here, we explore the dependence of the magnetization switching probability, \(P_{\mathrm{sw}}\), on the amplitude and the pulse width of the applied voltage. As shown in Fig. 3, the amplitude of the applied voltage, \(V_{\mathrm{ap}}\), is varied from 0.214 V to 0.854 V, while its pulse width, \(t_{\mathrm{pw}}\), is varied from 0.1 ns to 1.6 ns. The overlaid dotted black line in Fig. 3 represents the combination of \(V_{\mathrm{ap}}\) and \(t_{\mathrm{pw}}\) required to achieve \(P_{\mathrm{sw}}=0.5\), the ideal value for TRNG operation. It can be observed that lower pulse widths require higher pulse amplitudes, and vice-versa. For voltage pulses leading to \(P_{\mathrm{sw}}=0.5\), the Boltzmann distribution of Fig. 2(a) is driven out of equilibrium into the bimodal distribution of Fig. 2(b). Here, the overlaid red curve is given by (3) with \(\int_{0}^{\pi}\rho(\theta)d\theta=1\), which leads to \(Z=2F(\sqrt{\Delta_{0}})\). 
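The Monte Carlo procedure behind these distributions can be sketched compactly. Below is a minimal NumPy version of the stochastic Heun integration of (1) with the effective field of (2); it is a sketch only, with illustrative placeholder parameters standing in for the Table I values, simplified sign conventions for the spin-transfer torque, and a thermal-field strength taken from the standard fluctuation-dissipation relation (whose prefactor convention varies in the literature). The production runs described above use a CUDA implementation of the same scheme.

```python
import numpy as np

# Minimal macrospin Monte Carlo for Eqs. (1)-(2): an ensemble of normalized
# moments m is integrated with the stochastic Heun scheme, and the fraction
# of moments that end with m_z < 0 estimates the switching probability P_sw.
# All parameter values below are illustrative placeholders, not Table I.
mu0, kB, hbar, e = 4e-7 * np.pi, 1.381e-23, 1.055e-34, 1.602e-19
gamma, alpha = 17.6e10, 0.01              # gyromagnetic ratio (1/(T s)), Gilbert damping
Ms, Hk, Vol = 1.0e6, 2.0e5, 3.0e-25       # saturation magnetization (A/m), anisotropy field (A/m), FL volume (m^3)
eta, G_P, T = 0.6, 1.0 / 18e3, 300.0      # polarization efficiency, P-state conductance (S), temperature (K)
n_p = np.array([0.0, 0.0, 1.0])           # spin-polarization direction (along the easy axis z)
dt, N_spins = 10e-15, 10_000              # 10 fs step; the paper uses an ensemble of 10^5 spins on GPUs

def dm_dt(m, V_ap, H_th):
    """Right-hand side of Eq. (1): precession, damping, and STT terms."""
    H_eff = Hk * m[:, 2:3] * np.array([0.0, 0.0, 1.0]) + H_th      # Eq. (2)
    pre = -gamma * mu0 / (1.0 + alpha**2)
    mxH = np.cross(m, H_eff)
    stt = (hbar / (2 * e)) * eta * G_P * V_ap / (mu0 * Ms * Vol)
    mxp = np.cross(m, n_p)
    return pre * (mxH + alpha * np.cross(m, mxH)
                  + stt * (np.cross(m, mxp) - alpha * mxp))

def switching_probability(V_ap, t_pw, seed=0):
    rng = np.random.default_rng(seed)
    m = np.tile([0.0, 0.0, 1.0], (N_spins, 1))                     # start in the P state (+z)
    # Thermal-field strength from fluctuation-dissipation (convention-dependent prefactor).
    sigma = np.sqrt(2.0 * alpha * kB * T / (gamma * mu0**2 * Ms * Vol * dt))
    for _ in range(int(t_pw / dt)):
        H_th = sigma * rng.standard_normal((N_spins, 3))           # Langevin field, white in space and time
        k1 = dm_dt(m, V_ap, H_th)                                  # Heun predictor
        m_pred = m + dt * k1
        m_pred /= np.linalg.norm(m_pred, axis=1, keepdims=True)
        k2 = dm_dt(m_pred, V_ap, H_th)                             # Heun corrector (same noise realization)
        m = m + 0.5 * dt * (k1 + k2)
        m /= np.linalg.norm(m, axis=1, keepdims=True)
    return float(np.mean(m[:, 2] < 0.0))                           # fraction switched to the AP state

# Example: scan the pulse amplitude at a fixed width to locate the P_sw = 0.5 contour.
# print(switching_probability(V_ap=0.48, t_pw=0.6e-9))
```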
Figure 4 shows that \(P_{\mathrm{sw}}\) is a monotonically increasing function of \(V_{\mathrm{ap}}\) and \(t_{\mathrm{pw}}\). Furthermore, both Figs. 3 and 4(a) indicate that the switching probability at \(P_{\mathrm{sw}}=0.5\) is more sensitive to the pulse amplitude for larger pulse widths. On the other hand, it is more sensitive to the pulse width for larger pulse amplitudes, as observed from Figs. 3 and 4(b). Hence, a voltage pulse with an optimal \(V_{\mathrm{ap}}\) and \(t_{\mathrm{pw}}\) that minimizes this sensitivity is required for a robust TRNG circuit.

Fig. 3: Switching probability for pulse amplitude, \(V_{\mathrm{ap}}\), ranging from 0.214 V to 0.854 V and pulse widths, \(t_{\mathrm{pw}}\), ranging from 0.1 ns to 1.6 ns. The dotted black line corresponds to \(P_{\mathrm{sw}}=0.5\).

Fig. 2: (a) Equilibrium distribution of magnetization in the P state. The overlaid red curve corresponds to (3) with \(Z=F(\sqrt{\Delta_{0}})\). (b) Steady-state bimodal distribution corresponding to a switching probability of 0.499. The overlaid red curve corresponds to (3) with \(Z=2F(\sqrt{\Delta_{0}})\). In both cases, the histograms consist of \(10^{5}\) data points and were obtained from the solution of (1).

In order to build a fast and energy-efficient TRNG, both the energy consumed and the delay, or equivalently the energy-delay product (EDP), required to create the bimodal distribution should be minimal. Figure 5 shows the energy consumed and the corresponding EDP required to switch the FL from the P to the AP state with \(P_{\mathrm{sw}}=0.5\). The minimum energy is approximately 9 fJ, and corresponds to \(V_{\mathrm{ap}}=0.48~{}\mathrm{V}\) and \(t_{\mathrm{pw}}=0.6~{}\mathrm{ns}\), in the ballistic regime [27]. The EDP at this operating point, therefore, is \(5.45\times 10^{-24}\) J.s. Generally, the EDP decreases as the pulse amplitude increases (Fig. 5(a)), while it increases as the pulse duration increases (Fig. 5(b)). For EDP calculations, the delay is assumed to be the same as \(t_{\mathrm{pw}}\).

### _Effect of temperature on switching probability_

The current flowing through the tunnel junction can lead to an increase in its temperature due to Joule heating. In addition, the ambient temperature of the chip that the TRNG is fabricated on could be different from that of the MTJ due to differential heating. This would lead to design and reliability issues for the TRNG because the initial Boltzmann distribution changes with temperature. As a result, \(P_{\mathrm{sw}}\) for a given applied voltage pulse would be different from the expected value of 0.5, as shown in Fig. 6(a). It can be observed that in all cases, \(P_{\mathrm{sw}}\) increases above 0.5 for temperatures above 300 K while it decreases below 0.5 for temperatures below 300 K. Crucially, the change in \(P_{\mathrm{sw}}\) with temperature is more prominent as \(t_{\mathrm{pw}}\) increases. This suggests that the change in switching probability with temperature is higher for longer pulse widths (and lower pulse amplitudes), i.e., in the thermally-assisted switching regime [27]. This regime is, therefore, avoided in SMART TRNGs by operating the devices in the short-pulse ballistic limit. As shown in Fig.
6(b), the maximum switching probability change, \(\left|\frac{\partial P_{\mathrm{sw}}}{\partial T}\times\Delta T\right|_{ \mathrm{max}}\), for a temperature variation of \(\Delta T=\pm 30~{}\mathrm{K}\) around room temperature, is small in the short-pulse limit while it increases as the log of the pulse width in the long-pulse regime [11]. More specifically, \(\left|\frac{\partial P_{\mathrm{sw}}}{\partial T}\right|\sim\frac{\log 2}{2T}\) in the ballistic limit, as shown by the dotted blue line in Fig. 6(b). On the other hand, in the long-pulse diffusive limit \(\left|\frac{\partial P_{\mathrm{sw}}}{\partial T}\right|\sim\frac{\log 2}{2T} \left(\log\left(\frac{\log 2}{\Gamma_{0}t_{\mathrm{pw}}}\right)\right)\), as shown by the red dashed line in Fig. 6(b). Here, \(\Gamma_{0}=1\) GHz is the attempt frequency for nanomagnets [10]. The strong temperature sensitivity of \(P_{\mathrm{sw}}\) in the diffusive limit stems from its double exponential dependence on energy barrier and temperature [10, 11]. Fig. 4: Switching probability as a function of the input voltage (a) pulse amplitude \(V_{\mathrm{ap}}\), and (b) pulse width \(t_{\mathrm{pw}}\). Probability of switching increases with both the amplitude and width of the pulse. Fig. 5: Switching energy on the left axis and energy-delay product (EDP) on the right axis for \(P_{\mathrm{sw}}=0.5\) as a function of (a) Pulse amplitude (b) Pulse width. Both the switching energy and EDP are minimized in the ballistic limit for short pulses. Fig. 6: (a) Switching probability as a function of temperature for different pulse widths. Increase in temperature leads to drift in the switching probability from the desired value of 0.5. This effect becomes prominent for higher pulse widths (b) Maximum change in switching probability from the mean value of 0.5, for \(T=300\pm 30~{}\mathrm{K}\), vs. pulse width. SMART TRNG will be operated in the short-pulse width regime (i.e., \(t_{\mathrm{pw}}\leq 1~{}\mathrm{ns}\), also shown by the shaded region) where the switching probability is less dependent on temperature. Overall, our Monte Carlo simulations confirm that, the operation of the SMART TRNG would be optimal for \(V_{\text{ap}}\in[0.41~{}\text{V},0.54~{}\text{V}]\) and the corresponding \(t_{\text{pw}}\in[0.48~{}\text{ns},0.88~{}\text{ns}]\). Herein, the variation in switching probability is small with both the pulse amplitude and duration, the energy of operation and the EDP are both low, and the effect of temperature variation is small. ## IV Statistical and NIST Test For Randomness To evaluate the statistical quality and randomness of the bitstream generated by the SMART TRNG at \(T=300~{}\text{K}\), firstly, we use our stochastic LLG code to generate 10 different bitstreams of \(2\times 10^{6}\) bits each, corresponding to switching the FL from P to AP state with \(P_{\text{sw}}\approx 0.502\). Secondly, we choose one of the 10 bitstreams and create \(2\times 10^{4}\) non-overlapping random samples with \(N=100\) bits, each. It is expected that the number of switched states, out of 100, in each sample would follow a binomial distribution with mean \(\mu=NP_{\text{sw}}=50.2\) and standard deviation \(\sigma=\sqrt{NP_{\text{sw}}(1-P_{\text{sw}})}=5\). This is shown in Fig. 7(a) by the histograms while the red solid curve represents a normal distribution, \(\mathcal{N}(\mu,\sigma)\). Thirdly, we consider a set of eight contiguous bits of the \(2\times 10^{6}\) bits, and map them to a number between 0 and 255. 
Since the switching probability is almost 0.5, the probability of a '0' or '1' in the bitstream is almost the same, and, therefore, the probability of any number between 0 and 255 should follow uniform distribution, \(\mathcal{U}(0,255)\). Figure 7(b) indicates that the numerically generated data (scattered symbols) roughly follow a uniform distribution (dashed red line). Finally, we pass each bitstream through the National Institute of Standards and Technology (NIST) test suite [38], which consists of several frequency and non-frequency related tests. We find that 2 out of the 10 raw bitstreams pass all 188 tests, whereas the rest 8 bitstreams pass 184 or 185 out of the 188 tests. The bitstream under consideration is said to have passed a certain test if the p-value of the testing operation is greater than 0.01 [38]. Failing a certain test implies that the bitstream is not truly random from the perspective of that test. The common failed tests included the (monobit) frequency test, the forward and the reverse cumulative sum tests, and the run test. On whitening the bitstream with just one XOR operation, 42 out of the 45 post-processed bitstreams pass all the 188 NIST tests while the other 3 pass 187 tests. This is because an XOR operation enhances the entropy of the system by reducing the bias and tuning the switching probability closer to the ideal value of 0.5 [10, 15]. It must be noted that we used PRNGs to generate random white Gaussian noise when solving (1), which could have led to some correlations in the raw bitstreams generated by the SMART TRNG. An ideal switching probability of 0.5 can only be achieved if the input voltage pulse has exact amplitude and duration. However, due to PVT variations, the switching probability could be different, therefore, post-processing the raw bitstream with at least one XOR operation might be a necessity. Figure 6(b) suggests that changes in the switching probability in the ballistic region is small, but present nonetheless. Therefore, we check the randomness of the raw and XORed bitstreams generated by SMART TRNGs at different temperatures ranging from 270 \(\text{K}\) to 330 \(\text{K}\). Figure 8 summarizes the results of our analysis. We find that the raw bitstreams fail several NIST tests at temperatures other than \(300~{}\text{K}\) since the switching probability has changed from its ideal value. However, when they are XORed their entropy is enhanced such that the resulting random output bitstream fails fewer NIST tests. Further XORing the bitstream increases the randomness and the processed bitstream pass all NIST tests. ## V TRNG Circuit Design A full SMART TRNG circuit, designed using 45-\(\mathrm{nm}\) CMOS PDK [12], is presented in Fig. 9(a). It is composed of three main parts: write, reset, and read sub-circuits that are controlled by two external signals, viz. RS and CLK [39]. Here, BSIM4 Level 54 models were used for the transistors while a circuit-compatible SPICE model of a FM [40], subject to thermal effects and spin-torque, was used for the FL of the SMART device. As shown in Fig. 9(b), the magnetization of the FL, \(\mathbf{m}\), in any direction, \(j(=x,y,z)\), is modeled as the node voltage of a capacitor, whereas the effective field Fig. 8: XOR tree structure of TRNGs. Each \(\mathrm{r}_{\mathrm{i}}\) is an individual TRNG. The number of passed NIST tests at each level for three different temperatures is listed. Here, we have reported the worst case scenarios. Fig. 
7: (a) Distribution of the switched events from a sample of 100. The histograms were constructed by repeating this \(2\times 10^{4}\) times. The solid red line represents the normal distribution, \(\mathcal{N}(50.2,5)\). (b) Distribution representing a number between 0 and 255 from a sample of \(25\times 10^{4}\). The dashed red line denotes uniform distribution, \(\mathcal{U}(0,255)\) acting on the FM is represented as a voltage-controlled current source. The results of this model were benchmarked against that of the LLG code implemented on CUDA. Though all the components of \(\mathbf{m}\) match very well for different applied biases, we present in Fig. 9(c) only the z-component for the sake of brevity. Finally, this model was integrated with the CMOS circuitry to obtain the full TRNG circuit, and the trapezoidal solver of Cadence Spectre\({}^{\text{\textregistered}}\) with a time step of 10 fs was used for all transient simulations. Our SMART TRNGs are triggered by a short voltage-pulse, which leads to a stochastic switching of the MBM FL from parallel to antiparallel state. As a result, the SMART device should be initialized such that the FL and PL are in the P state before every write cycle. This is ensured by the reset circuit of Fig. 9, which comprises transistors M3 and M4 along with the SMART device, and is activated for RS = 1, CLK = 0. In this work, the time period of both these signals, shown in Fig. 10(a, b), is chosen as 2 ns. Here, RS is in state '1' for 0.8 ns while CLK is in state '1' for 0.46 ns. Next, in our design we fix the widths of both M3 and M4 to 180 nm. During this phase, current flows from FL to PL and a voltage drop of 0.8 V is developed across the SMART MTJ. This voltage is large enough to deterministically switch the FL to its P state. The write sub-circuit of Fig. 9, which consists of transistors M1, M2, M5, and M6 along with the SMART MTJ, is activated when RS = 0, CLK = 0. It provides the triggering voltage, with opposite polarity to that applied during the reset operation, to probabilistically switch the FL from its P state to AP state. In our design, we fix the width of M1 and M2 to 140 nm while that of M5 and M6 to 90 nm. This leads to a voltage drop of 0.6 V across the SMART MTJ which ensures stochastic switching behavior. The digitized state of the MTJ is shown in Fig. 10(c). State '1' ('0') corresponds to AP (P) relative orientation of the FL and PL. Finally, the read sub-circuit is activated for RS = 0, CLK = 1. It extracts the random uncorrelated bitstream as a voltage signal measured at the output node, OUT, as shown in Fig. 10(d). It comprises a fixed MTJ with fixed resistance (\(R_{\mathrm{P}}<R<R_{\mathrm{AP}}\)), transistors M7, M8, and two output latches. Fig. 10: (a) CLK and (b) RS signals. Both of them have a time period of 2 ns. RS is high for 0.8 ns whereas CLK is high for 0.46 ns. (c) Digitized state of the SMART MTJ. (d) The voltage at the OUT node at the beginning of the next clock cycle depends on OUT during the current cycle and the state of the MTJ. Resel phase: RS = 1, CLK = 0. The SMART MTJ is deterministically set to the P state, or STATE = 0. Write phase: RS = 0, CLK = 0. Probabilistic switching of the SMART device from P state to AP state, or to STATE = 1. Read phase: RS = 0, CLK = 1. Output of first latch is updated based on current value of OUT and STATE. Output of the second latch is updated in the next clock cycle. Fig. 9: (a) A circuit diagram for the TRNG described in this work. 
It consists of three parts—write, reset, and read. The write sub-circuit switches the MTJ with a probability of 50% while the reset sub-circuit ensures that the MTJ is in parallel state before every write operation. The read sub-circuit consists of two output latches and a reference MTJ. The output state of the MTJ is digitized and collected at OUT terminal. The circuit operation is controlled by two signals—RS and CLK. (b) Equivalent circuit model for the FL of the SMART device. Detailed model in Ref. [40]. (c) Comparison of z-component of m between CUDA and circuit simulator Cadence Spectre\({}^{\text{\textregistered}}\) for different input voltages. For the purpose of comparison only, thermal noise was not included here. The latches serve as internal post-processing units and increase the entropy of the system. This is because the output of the first latch corresponds to an XOR operation between the state of the SMART MTJ ('0' or '1') and the output of the second latch (OUT) [39]. For example, if the SMART MTJ is in '0' ('1') state and OUT = '1', the voltage between the reference MTJ and the SMART MTJ would be lower (higher) than \(\mathrm{VD}_{\mathrm{D}}/2\). This would ensure that the output of the first latch is '1' ('0'). On the other hand, if the SMART MTJ is in '0' ('1') state and OUT = '0', the voltage between the reference MTJ and the SMART MTJ would be higher (lower) than \(\mathrm{V}_{\mathrm{D}}/2\). This would lead to '0' ('1') at the output of first latch. The output of the second latch is updated in the next clock cycle, as \(\mathrm{OUT}_{i+1}=\overline{\mathrm{STATE}\oplus\mathrm{OUT}_{i}}\). The can be easily observed from Fig. 10(c, d). Here, the resistance of the reference MTJ is assumed to be 18 \(\mathrm{k\Omega}\). Table II lists the performance metrics of our SMART TRNG and compares it to prior MTJ-based TRNGs. We note that the EDP of the SMART TRNG is about 46% of that of [15]. ## VI Discussion The SMART TRNG circuit presented here uses a PMA FL (Fig. 1 (a)) with an energy barrier of \(35k_{B}T\), which is referred to as an MBM (i.e., medium-barrier magnet), operated in the short-pulse ballistic limit. A direct consequence of operating in the ballistic regime is the reduced sensitivity of switching probability to temperature variations (Fig. 6). This is because, in this regime of operation, the magnetization dynamics is strongly dependent on the torque due to input voltage pulse as compared to that due to thermal field. On the contrary, in the long-pulse diffusive regime, the torque due to thermal field becomes stronger leading to a significantly higher dependence of switching probability on temperature variations (Fig. 6). Previous spintronic TRNGs were designed to operate in the diffusive limit [10], and therefore exhibited significant variation to temperature fluctuations [11]. In addition, compared to diffusive operation, ballistic operation of the TRNG leads to lower energy consumption and lower energy-delay product, as shown in Fig. 5. The use of MBMs in the SMART MTJ reduces the overall energy required to switch the FL as the energy between the two stable states (P and AP) is lower than that in case of HBMs, which are typically used for NV memories. For example, if the barrier height of the FL increases by \(1.5\times\) due to an increase in its volume, the switching energy would roughly increase by \(2.25\times\). 
This is because, firstly, the threshold voltage, \(V^{\mathrm{th}}(\propto\mathcal{V})\), increases by \(1.5\times\), which requires an increase in the input voltage amplitude, \(V_{\mathrm{ap}}\). Secondly, in the ballistic regime, the switching probability is dependent on the net charge [41], which is \(G_{p}\big{(}V_{\mathrm{ap}}-V^{\mathrm{th}}\big{)}t_{\mathrm{pw}}\). This leads to the scaling of the switching time as \(\sim\frac{(1+\alpha^{2})}{\alpha\gamma\mu_{0}H_{k}(V_{\mathrm{ap}}/V^{\mathrm{ th}}-1)}\log\!\left(4\sqrt{\pi\frac{E_{k}}{k_{B}T}}\right)\)[26, 42]. Therefore, the switching energy, \(G_{p}V_{\mathrm{ap}}^{2}t_{\mathrm{pw}}\), scales as \(\sim G_{p}V_{\mathrm{ap}}^{2}\frac{(1+\alpha^{2})}{\alpha\gamma\mu_{0}H_{k}(V_ {\mathrm{ap}}/V^{\mathrm{th}}-1)}\log\!\left(4\sqrt{\pi\frac{E_{k}}{k_{B}T}}\right)\). Here, it is assumed that all other material parameters including the resistance of the MTJ are exactly the same. The above result suggests that FMs with lower energy barriers are preferable for reducing the switching energy. However, lowering the energy barrier by reducing the diameter of the FM could lead to fabrication-related reliability issues. Alternatively, the energy barrier could also be lowered by using ferromagnetic materials with lower effective anisotropy, and hence \(H_{k}\). This would, however, increase the switching time since it scales inversely with \(H_{k}\), as stated above. Superparamagnets with \(E_{b}\lesssim 5k_{B}T\) have recently been shown to operate in the nanosecond regime [43, 44], and could be used as a fast and low energy source of random bitstreams [44]. Their operation, however, is mainly dominated by thermal noise, which leads to extreme sensitivity to temperature changes [11]. In addition, changes in \(E_{b}\) due to any variation in material parameters or external magnetic perturbations could also affect their switching probability significantly [11]. In this work, the bitstreams generated using the SMART MTJ device required at least one XOR operation in order to pass all the NIST tests even at \(300~{}\mathrm{K}\) (Fig. 8). This, in turn, requires at least two SMART MTJs and an XOR circuit, which increases the area and power overheads. The internal post-processing unit of the TRNG circuit considered in this work (Fig. 9(a)) enables this XOR operation internally, without the need for an external XOR circuit, at a lower energy and area cost [39]. As a result, the generated bitstreams pass all the 188 NIST tests at \(300~{}\mathrm{K}\). However, if the temperature changes by \(30~{}\mathrm{K}\) external XOR operations would be required in order to ensure that the bitstreams pass all NIST tests ( Fig. 8). The switching probability of the SMART MTJ is sensitive to the input voltage pulse width and amplitude (Figs. 3 and 4). The XOR operation alleviates this sensitivity. One of the drawbacks of the SMART TRNG is the necessary requirement of the reset operation. It increases the area and energy cost while decreasing the throughput. Another key problem is the charge current flowing through the MTJ during both the reset and write operation. The charge current adds stress on the tunnel barrier and makes it vulnerable to dielectric breakdown, which can compromise the reliability of the TRNG circuit. This problem of device endurance could be addressed by the use of a device set-up which utilizes the spin-orbit torque (SOT) to switch the magnetization of the FL. 
Towards this a bilayer of in-plane ferromagnet (CoFeB) and non-magnet (Ti) could be used to generate interfacial SOT with spin polarization perpendicular to the interface of both the FL and the bilayer [45]. In this set-up, a small charge current would flow through the MTJ only during the read process, thereby increasing its endurance. ## VII Conclusion and Outlook In this work, we presented the design of a SMART (stochastic magnetic actuated random transducer) MTJ device utilizing an MBM magnet with PMA which serves as the device's free layer. Unlike LBM-based spintronic TRNGs, SMART TRNGs are faster with lower EDP and can be fabricated more easily. Short voltage pulses in the ballistic limit were applied to the SMART device in order to tune its switching probability to 50%. Through numerical simulations we obtained corresponding values of pulse amplitude and duration that would lead to 50% switching probability. The effect of temperature on the switching probability was also found to be the lowest in the ballistic regime of operation. An optimal regime for low energy operation of SMART TRNGs was also suggested. In general, bitstreams generated by the simulation of the SMART device required two levels of XOR operation to pass all NIST tests. A TRNG circuit with an internal post-processing XOR unit was considered. Our simulations showed that, as compared to previous spintronic TRNGs, the SMART TRNG circuit had a higher bitrate and relatively low energy and area costs. This is due to its operation in the ballistic mode and the use of the internal post-processing unit. For the material parameters considered in this work, uniformly distributed bitstreams comprising \(0.5\times 10^{9}\) bits can be generated at the cost of \(46\)\(\mu\)W of power. Such bitstreams would be useful in Monte Carlo simulations, and as initial cipher-key in cryptography algorithms or for watermarking in other hardware primitives. By tuning the input pulse width or amplitude, bitstreams with different switching probabilities can also be generated for use in error-resilient and energy efficient stochastic computing tasks such as Bayesian inference. Decoupling the read and write paths via the use of interfacial spin-orbit torque could improve the reliability of the TRNG circuit. ## Acknowledgments The authors at UIUC acknowledge the support of NSF through Award # CCF-1930620 and Air Force Research Laboratory under Grant # FA8750-21-1-0002. The research at NYU was supported by the DOE Office of Science (ASCR/BES) Microelectronics Co-Design project CONFLIPS. The research at UVA was supported by NSF I/UCRC on Multi-functional Integrated System Technology (MIST) Center; IIP-1439644, IIP-1439680, IIP-1738752, IIP-1939009, IIP-1939050, and IIP-1939012.
2305.07812
Lightweight Delivery Detection on Doorbell Cameras
Despite recent advances in video-based action recognition and robust spatio-temporal modeling, most of the proposed approaches rely on the abundance of computational resources to afford running huge and computation-intensive convolutional or transformer-based neural networks to obtain satisfactory results. This limits the deployment of such models on edge devices with limited power and computing resources. In this work we investigate an important smart home application, video based delivery detection, and present a simple and lightweight pipeline for this task that can run on resource-constrained doorbell cameras. Our method relies on motion cues to generate a set of coarse activity proposals followed by their classification with a mobile-friendly 3DCNN network. To train we design a novel semi-supervised attention module that helps the network to learn robust spatio-temporal features and adopt an evidence-based optimization objective that allows for quantifying the uncertainty of predictions made by the network. Experimental results on our curated delivery dataset shows the significant effectiveness of our pipeline and highlights the benefits of our training phase novelties to achieve free and considerable inference-time performance gains.
Pirazh Khorramshahi, Zhe Wu, Tianchen Wang, Luke Deluccia, Hongcheng Wang
2023-05-13T01:28:28Z
http://arxiv.org/abs/2305.07812v2
# Lightweight Delivery Detection on Doorbell Cameras ###### Abstract Despite recent advances in video-based action recognition and robust spatio-temporal modeling, most of the proposed approaches rely on the abundance of computational resources to afford running huge and computation-intensive convolutional or transformer-based neural networks to obtain satisfactory results. This limits the deployment of such models on edge devices with limited power and computing resources. In this work we investigate an important smart home application, video based delivery detection, and present a simple and lightweight pipeline for this task that can run on resource-constrained doorbell cameras. Our proposed pipeline relies on motion cues to generate a set of coarse activity proposals followed by their classification with a mobile-friendly 3DCNN network. For training we design a novel semi-supervised attention module that helps the network to learn robust spatio-temporal features and adopt an evidence-based optimization objective that allows for quantifying the uncertainty of predictions made by the network. Experimental results on our curated delivery dataset shows the significant effectiveness of our pipeline compared to alternatives and highlights the benefits of our training phase novelties to achieve free and considerable inference-time performance gains. ## 1 Introduction Computer Vision has become potent thanks to advances in Deep Learning to an extent that long standing problems like object detection, semantic segmentation, face and human action recognition can now be solved with high accuracy. Despite this, we have to highlight that this often comes at the price of significant computational burden which is typically accelerated by the use of Graphical Processing Units (GPU), Tensor Processing Units (TPU), or Neural Processing Units (NPU). Therefore, the degree to which computer vision tasks can be solved on edge devices with limited resources and computational power is constrained. Among these tasks is **Delivery Detection** which is concerned with recognizing delivery of merchandises (package, food, groceries, mail, etc. ) at front doors to provide timely notifications for customers. Note that delivery detection task is different from package detection in that it identifies the instances of delivering items rather than the mere detection of packages which is currently practiced in smart home solutions. Delivery detection has numerous advantages including prevention of food perishing and porch piracy to name a few. According to the package theft annual report1, in a twelve month period from 2021 to 2022, there has been more than 49 million package theft incidents in the United States alone with the estimated value of \(\$2.4\)B. The prevalence of smart devices including smart doorbell and security cameras through out houses facilitates the development and adoption of automated delivery detection systems which can significantly reduce losses. Fig. 1 shows captured deliveries by doorbell cameras. Despite potential applications, delivery detection is a challenging task. Type, shape and size of packages can be quite diverse. Cardboard boxes in various size, mail, grocery bags, and food are among the items that are frequently being delivered. Additionally, there are various courier services including United States Postal Services (USPS), United Parcel Services (UPS), DHL and Amazon, as well growing number of smaller companies like DoorDash and Uber Eats especially after the COVID-19 pandemic. 
This translates to delivery personnel having diverse outfits and appearances, as evidenced by Fig. 1. Finally, the temporal extent of delivery events has high variance. Smaller items are delivered in a matter of seconds while delivering heavier objects can take much longer, on the order of minutes. This also depends on submitting the proof of delivery in the form of a picture. Existing solutions such as Ring, Nest Hello, Arlo, AWS Rekognition, and Vivint mainly require cloud processing, which results in higher bandwidth utilization, computation, and increased subscription fees. In addition, transferring data creates privacy concerns as opposed to local processing. Moreover, these methods primarily focus on detecting packages/boxes and not the instances of deliveries. In addition, package detection may fail as small or occluded packages are harder to detect. Therefore, we set out to propose a solution to detect delivery instances that can be implemented on edge devices. We present a lightweight system that relies on motion detection to generate event proposals followed by their classification with a mobile-friendly 3DCNN [11] network. Through the novel incorporation of an attention mechanism and benefiting from the theory of evidence and subjective logic, we significantly improve the base performance of this system without imposing additional processing.

Figure 1: Sample delivery events captured by doorbell cameras.

In summary, this paper makes the following contributions:

* We introduce a lightweight delivery detection system running on doorbell cameras with the ARM Cortex-A family of processors. In contrast to the package detection widely used in the industry, our system detects delivery events from videos.
* We propose a semi-supervised attention module in 3DCNNs to extract robust spatio-temporal features.
* We propose to adopt an evidential learning objective to quantify the uncertainty of predictions and enforce a minimum certainty score to ensure quality predictions.

## 2 Related Work

Development of CNNs has substantially contributed to the remarkable improvements in the status of video action recognition. [17] presented a two-stream design to leverage both RGB and Optical Flow modalities to capture spatio-temporal cues. With the prevalence of 3D convolutional kernels [11], the I3D network was introduced to better model temporal interactions and established a strong baseline [14]. Authors in [18] proposed a two-step approach to localize potential activities from hierarchical clustering of detected objects and recognize a wide range of activities in surveillance cameras from the Optical Flow modality using a modified I3D, namely TRI3D, to adjust the temporal bounds of localized activities. Despite strong performance, I3D incurs high computational cost due to its depth and significant number of 3D filters. To reduce the computational burden, authors in [15] proposed the S3D network in which only deeper layers of the network are designed to capture temporal information, namely a top-heavy design. Additionally, they propose to factorize 3D convolutional filters into spatial and temporal layers to reduce computational complexity. This is also proposed by [12] in their R(2+1)D network to improve the efficiency of action recognition models.
In another line of work [16] proposed a two-path design to capture spatial semantics at a reduced frame rate and temporal cues at finer resolution in a faster and lighter pathway; the design known as slow-fast, has improved the accuracy/efficiency trade-off significantly. With the introduction of transformers [15], many works soon adopted transformers in the context of video understanding. [1] uses the base of I3D network to obtain initial spatio-temporal features upon which proposals are generated and corresponding features are passed to a stack of action transformer units. To improve efficiency, [10] proposed a multi-scale network MViT to generate multi-scale feature hierarchies by spatio-temporal down-sampling as well as down-sampling of the dimensionality in attention heads. Similarly, [17] adopted SWIN transformer [18] for video action recognition to reduce the quadratic complexity of standard transformer modules. Additionally, Authors in [1] designed ViViT with factorized encoding scheme to ingest tokenized input sequences and lower the complexity of running transformer blocks. Despite these progress, these models are heavyweight which limits their deployment on a device with limited power and compute [13]. Therefore in this work, we present a lightweight pipeline that can run on edge devices and consistently improve its performance by adopting a novel attention module and training objective. ## 3 Method To the best of our knowledge, there are no published research on delivery detection task. Therefore, we first establish a simple, intuitive, and easy to implement baseline. Next, we propose our novel delivery detection system to process untrimmed videos and overcome the shortcomings of the baseline. ### Baseline System Our baseline model which is shown in Fig. 2, is a two-stage method where the first stage is responsible to identify whether a person is present in the scene for a given frame. In case a person is detected, second stage crops the person from the full frame, and passes it to a 2D classifier to generate a delivery score \(s_{i}\in[0,1]\) (\(i\) is the frame index) where larger \(s_{i}\) indicates higher chance of a delivery personnel. The system queries the scene at a fixed rate of 1 frame per second over the continuous intervals of length \(15\) seconds. Therefore, each chunk is summarized by \(15\) delivery scores; max-pooling is used to obtain the final delivery score, _i.e._\(s=\max_{i=1}^{15}s_{i}\). Backbone architectures for person detector and 2D classifier are MobileNet-SSD [18] and EfficientNet-B0 [19] respectively to meet the constraints imposed by the resource-limited hardware of a doorbell camera. Adoption of max-pooling facilitates the implementation and results in an efficient system which can be easily interpreted. However, this cultivates a high False Positive Rate (FPR). In case, the second-stage outputs a relatively high delivery score for a single frame due to an artifact, or sudden variation in illumination, the system makes a false detection of despite opposing evidence from other frames. This will be discussed in more details in section 4. Moreover, this design requires two individual modules which prevents the end-to-end optimization of the overall system. Temporal information that can provide critical cues about recognizing deliveries are also not considered. This motivates us to devise a system that can model temporal interactions and is optimized in an end-to-end fashion. 
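To make the baseline's decision rule concrete, the sketch below shows the per-frame scoring and max-pooling just described. It is a simplified illustration rather than the production implementation: `detect_person` and `classify_crop` are hypothetical stand-ins for the MobileNet-SSD person detector and the EfficientNet-B0 delivery classifier.

```python
def baseline_delivery_score(frames, detect_person, classify_crop):
    """Score a 15-frame snippet sampled at 1 fps and max-pool the result.

    detect_person(frame) -> highest-confidence person box (x0, y0, x1, y1) or None.
    classify_crop(crop)  -> delivery probability s_i in [0, 1].
    Both callables are hypothetical stand-ins for the 2D baseline's two stages.
    """
    scores = []
    for frame in frames:                     # one frame per second over a 15 s chunk
        box = detect_person(frame)
        if box is None:                      # no person detected in this frame
            scores.append(0.0)
            continue
        x0, y0, x1, y1 = box
        scores.append(float(classify_crop(frame[y0:y1, x0:x1])))
    return max(scores) if scores else 0.0    # s = max_i s_i
```

Because a single inflated per-frame score determines the snippet score, this rule is exactly what drives the high false positive rate discussed above.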
### Proposed Delivery Detection System As mentioned above, rich temporal information that is critical to detect deliveries, are not captured by the baseline as frames are processed individually. For instance, a person getting close by, bending towards the ground, and moving away is a strong indication for a delivery event. Therefore, having a model that accounts for temporal interactions across neighboring frames, creates the opportunity to learn enhanced representations. To this end, we propose to use a lightweight 3DCNN model that can process multiple frames at a time and extract rich temporal semantics. Fig. 3 shows the overview of the proposed pipeline. For smart doorbell cameras, we need to have a mobile-friendly design that can be accommodated by the limited computational budget. To achieve this, we propose to use motion detection and tracking algorithm to reduce the spatial extent of frames to where motion occurs followed by a 3D classifier to differentiate delivery from non-delivery events. Below, we discuss motion detection and tracking algorithm. Next, due to the computation-intensive nature of 3D convolutional kernels, we first discuss the lightweight networks that are mobile friendly. Based on this we set a 3D baseline and propose two novel approaches, namely **Semi-supervised Attention** and **Evidential Delivery Detection**, to improve performance while preserving computational complexity. #### Motion Detection & Tracking To generate spatially-tighter proposals for delivery events compared to using full frames, we propose to use foreground motion to focus on regions that activities happen. This enables us to preserve a better pixel resolution as lightweight CNNs more often than not require small spatial size, _e.g._\(112\)x\(112\). In case of using frames in their entirety, activity regions can only occupy a few number of pixels. Fig. 4 shows the impact of using motion to generate tighter action proposals. Therefore, motion detection is an important pre-processing step to generate activity proposals. The algorithm for motion detection is based on Mixture of Gaussians (MOG) for background/foreground segmentation which adaptively models each pixel by a mixture of Gaussian distributions. This generates foreground motion mask containing motion blobs that are refined via adopting connected components. When a motion blob passes two thresholds, namely active time and variance, a motion event is triggered to signal the camera to query the scene. Active time indicates period of time in which a motion blob is continuously detected and tracked based on centroid distance measure. Variance criteria shows how much a blob has moved in the camera's field of view. This helps removing waving flags, leaves, and swaying trees which generate trivial motion events. Once a motion event is triggered, a thumbnail of fixed size is placed on the region where motion blobs reside. #### 3DCNN Backbone Architecture Compared to their 2D counterparts, 3DCNNs have the ability to learn temporal interactions. This is achieved via additional parameters and higher computational complexity. Therefore, their adoption in resource-limited applications is constrained. To address this, [15] introduced 3D versions of MobileNetv1 [19], MobileNetv2 [16], ShuffleNetv1[14], ShuffleNetv2[15], and SqueezeNet[16] which were developed for 2D mobile applications. 
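Before moving on to the 3D classifier, the motion-proposal step described in the previous subsection can be illustrated with a short OpenCV sketch. It uses the library's MOG2 background/foreground segmenter as a stand-in for the camera's MOG-based algorithm; the blob-area threshold and thumbnail size are illustrative choices, and the active-time and variance criteria enforced by the on-device centroid tracker are omitted.

```python
import cv2
import numpy as np

# Sketch of foreground-motion thumbnail generation with OpenCV's MOG2
# background subtractor (an assumption; the camera firmware uses its own
# MOG-based implementation with active-time and variance checks).
mog = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                         detectShadows=False)

def motion_thumbnail(frame, min_area=500, thumb=112):
    """Return a fixed-size crop centred on the largest motion blob, or None."""
    fg = mog.apply(frame)                                    # foreground mask
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(fg)
    if n < 2:                                                # label 0 is the background
        return None
    blob = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))   # largest foreground blob
    if stats[blob, cv2.CC_STAT_AREA] < min_area:             # ignore tiny, noisy blobs
        return None
    cx, cy = centroids[blob]
    h, w = frame.shape[:2]
    x0 = int(np.clip(cx - thumb // 2, 0, max(w - thumb, 0)))
    y0 = int(np.clip(cy - thumb // 2, 0, max(h - thumb, 0)))
    return frame[y0:y0 + thumb, x0:x0 + thumb]
```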
These 3D architectures are, moreover, pre-trained on the Kinetics [17], a large-scale human action recognition dataset, to provide robust weight initialization in the context of transfer learning when used in down-stream tasks. We use these networks as our candidate backbone architectures. We performed initial experiments on a subset of our curated dataset discussed in section 4. Fig. 5 compares the performance of these networks in distinguishing delivery from non-delivery events in terms of the Precision-Recall (PR) curve and \(F_{1}\) score. Since MobileNetv2 obtains the highest accuracy, we base all our subsequent analysis on this network.

Figure 4: Using motion to generate tighter activity proposals. (a) and (b) are full and cropped frames of a delivery event. (c) and (d) are full and cropped frames of a non-delivery event.

Figure 3: Proposed delivery detection pipeline. The motion algorithm detects and tracks foreground motion blobs. The foreground motion is used to reduce the spatial extent of frames to motion regions. Once a certain number of frames are gathered, they are passed to a 3D classifier to obtain the delivery score.

Figure 2: Baseline delivery detection pipeline. In each frame, a person detector localizes the person with the highest detection confidence and passes the person's crop to a 2D classifier to obtain a delivery score. Delivery scores from all the sampled frames in the video snippet are max-pooled to generate the final delivery score.

### Semi-supervised Attention

So far, we have settled on the MobileNetv2 network and set up a baseline 3D model. However, we are interested in investigating opportunities to improve accuracy without introducing additional compute and run-time. Inspired by the works of [1, 2], which incorporate the paradigm of curriculum learning [1] for object detection and vehicle analytics tasks, we devise a training mechanism to simplify the learning at early stages of optimization and gradually make the task more realistic as training progresses. The motivation for this simplification is that people can provide critical cues to distinguish deliveries from non-deliveries compared to the rest of the scene. During training, we explicitly excite parts of the intermediate feature maps of the network that correspond to the location of people in the scene so that the network can assign higher weights to such signatures. As training progresses we reduce the degree to which excitation happens so that, once training is concluded, excitation stops. Therefore, the computational complexity is preserved. To excite the output of the \(l^{th}\) layer \(f_{l}\) of shape \(C*T*H*W\), where \(C\), \(T\), \(H\), and \(W\) represent the number of channels, frames, height and width respectively, we generate \(T\) binary single-channel masks of shape \(H*W\), denoted by \(m_{l}\), in which pixels corresponding to the bounding box location of people are set to one while the rest are set to zero. Afterwards, the channel-averaged feature maps \(\tilde{f}_{l}=\sum_{c=1}^{C}f_{l}(c,.,.,.)/C\) are multiplied with \(m_{l}\) in a pointwise manner. The resulting tensor is then multiplied by a scalar \(\alpha[n]\) which is a function of the training epoch \(n\) as follows: \(\alpha[n]=0.5*(1+\cos(\pi n/N))\), where \(N\) is the total number of epochs. The ensuing excitation component is finally added to the input feature maps. Equation 1 expresses the mathematical relationship between the excited \(f_{l}^{e}\) and original \(f_{l}\) feature maps.
\[f_{l}^{e}=f_{l}+\alpha[n]*(\tilde{f}_{l}.m_{l}) \tag{1}\]

For 3D MobileNetv2, we restricted the excitation to the outputs of the first and second convolutional layers since they have the same temporal resolution as the input and the content of their feature maps is not as abstract as that of deeper layers of the network. The impact of excitation on the first and second layer features of the 3D MobileNetv2 is shown in Fig. 6. Note how regions of the feature map containing a person are highlighted with respect to the rest. This helps the network to focus more on the visual information of people and extract more robust representations in early stages of training.

Figure 5: Precision-Recall comparison of lightweight 3DCNN architectures on the test set of our Doorbell delivery detection dataset discussed in section 4.1.

Figure 6: Exciting the first and second layer features of 3D MobileNetv2 for a sample video snippet in the first training epoch.

### Evidence-based Delivery Detection

To differentiate delivery from non-delivery events, a straightforward learning objective would be to obtain logits corresponding to each of the two classes followed by maximizing the likelihood of each input sample \(p(y|x,\theta)\), where \(x\), \(y\), and \(\theta\) are the input sample, the corresponding label, and the model parameters. In practice this is achieved via the Cross-Entropy loss, which applies the softmax function and minimizes the negative log-likelihood of the true class. While this approach is widely used, it does not account for uncertainties when making predictions and only provides point estimates of class probabilities. In addition, the relative comparison of class probabilities cannot be used to quantify prediction uncertainty as softmax is known to inflate the probabilities [1]. In contrast, [1] proposed a learning objective based on the theory of evidence [16] and subjective logic [17] in which a predictor is tasked to gather evidence for any of the possible outcomes to formulate classification in conjunction with uncertainty modeling by considering a Dirichlet distribution as a prior on class probabilities. To realize this, an evidence function \(g\) (which can be implemented as either ReLU, exponential, or softplus) is applied to the output of the network \(h\) to ensure that outputs are always non-negative and are representative of the amount of evidence gathered by the network for each of the \(K\) classes: \[e_{i}=g(h_{i}(x;\theta)),\quad i=1\dots K \tag{2}\] where \(x\) and \(K\) are the input video and the number of classes. This is equivalent to gathering \(K+1\) mass values, \(u\) and \(b_{i},i=1\dots K\), which are related through \(u+\sum_{i=1}^{K}b_{i}=1\). Here, \(u\) is the uncertainty of the prediction and \(b_{i}\) is the belief mass corresponding to the \(i^{th}\) class, which is related to the evidence of the \(i^{th}\) class via \(b_{i}=e_{i}/S\), where \(S=\sum_{i=1}^{K}(\alpha_{i})\) is referred to as the total strength of the Dirichlet distribution and \(\alpha_{i}=e_{i}+1,i=1\dots K\) are the Dirichlet parameters. Based on this, the uncertainty can be written as \(u=K/S\), which is inversely proportional to the total strength \(S\) or the total evidence gathered by the network \(\sum_{i=1}^{K}e_{i}\). Therefore, gathering high evidence results in small uncertainty and vice-versa.
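As a minimal sketch, the mapping from raw network outputs to evidence, belief masses, and uncertainty can be written in a few lines of PyTorch. The snippet below assumes a two-class (delivery vs. non-delivery) head and uses softplus as the evidence function \(g\); the function name and tensor shapes are illustrative.

```python
import torch
import torch.nn.functional as F

def evidential_head(logits: torch.Tensor):
    """Map raw classifier outputs h(x) of shape (B, K) to evidential quantities.

    Follows e = g(h), alpha = e + 1, S = sum_i alpha_i, b_i = e_i / S, u = K / S,
    with g chosen as softplus (one of the options mentioned above).
    """
    K = logits.shape[-1]                          # K = 2 for delivery / non-delivery
    evidence = F.softplus(logits)                 # non-negative evidence, Eq. (2)
    alpha = evidence + 1.0                        # Dirichlet parameters
    S = alpha.sum(dim=-1, keepdim=True)           # total strength of the Dirichlet
    belief = evidence / S                         # belief mass per class
    uncertainty = K / S                           # u, so that u + sum_i b_i = 1
    return alpha, belief, uncertainty
```

The Dirichlet parameters returned here are exactly the \(\alpha_{i}\) used in the loss terms that follow.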
Since class probabilities are assumed to follow Dirichlet distribution, _i.e._\(\mathbf{p}\sim\text{Dir}(\mathbf{p}|\alpha)\) where \(\mathbf{p}\in\mathbb{R}^{K}\), the average probability of the \(i^{th}\) class can be computed as \(\alpha_{i}/S\)[15]. Therefore, the resulting loss function is computed via the following formulation: \[\mathcal{L}=-\sum_{i=1}^{K}\mathbf{y}_{i}(\log(\alpha_{i})-\log(S)) \tag{3}\] \(\mathbf{y}_{i}\) is the \(i^{th}\) entry of the one-hot encoding label vector. Despite providing quantifying the uncertainty of predictions, this approach is deterministic and may suffer from the overfitting caused by the training of neural networks. To alleviate this issue, a number of regularization terms are proposed. [15] proposed a Kullback-Leibler (KL) term to encourage the network to generate zero evidence for a sample if it cannot be correctly classified. This is achieved by removing the generated evidence for the true class, _i.e._\(\tilde{\alpha}_{i}=\mathbf{y}_{i}+(1-\mathbf{y}_{i}).(e_{i}+1)\), and minimizing the KL distance of the corresponding Dirichlet distribution \(\text{Dir}(\mathbf{p}|\tilde{\alpha}_{i})\) from the one with zero total evidence, _i.e._\(S=K\) and \(\text{Dir}(\mathbf{p}|\mathbf{1})\), which represents a uniform distribution. Note that \(\mathbf{1}\) is the notation for all one vector. This KL term essentially discourages the network to over-fit and to generate evidence for samples about which it is uncertain: \[\begin{split}\mathcal{L}_{KL}&=\log(\frac{\Gamma( \sum_{i=1}^{K}\tilde{\alpha}_{i})}{\Gamma(K)\prod_{i=1}^{K}\Gamma(\tilde{ \alpha}_{i})})\\ &+\sum_{i=1}^{K}(\tilde{\alpha_{i}}-1)\left(\psi(\tilde{\alpha}_ {i})-\psi(\sum_{j=1}^{K}\tilde{\alpha_{j}})\right)\end{split} \tag{4}\] where \(\Gamma(.)\) and \(\psi(.)\) are gamma and logarithmic derivative of gamma function respectively. In addition to \(\mathcal{L}_{KL}\) regularization, [1] propose to calibrate the feature extraction network to be confident for its accurate predictions while being uncertain for it false predictions. To realize this goal, authors propose to maximize the Accuracy versus Uncertainty (AvU) utility function defined in [18]. AvU is formally defined as: \[\text{AvU}=\frac{n_{AC}+n_{IU}}{n_{AC}+n_{AU}+n_{IC}+n_{IU}} \tag{5}\] In Eq. 5, \(n_{AC}\), \(n_{AU}\), \(n_{IC}\), and \(n_{IU}\) are number of accurate and confident predictions, number of accurate and uncertain predictions, number of inaccurate and confident predictions, and number of inaccurate and uncertain predictions respectively. A well calibrated model should obtain high \(n_{AC}\) and \(n_{IU}\) and low \(n_{IC}\) and \(n_{AU}\). To regularize the learning of the model to achieve this objective, we draw inspiration from [1] and add the following calibration objective to the overall loss function. \[\begin{split}\mathcal{L}_{cal}=&-\lambda_{n}\mathbb{1 }\left(\tilde{y}=y\right)p\log(1-u)\\ &-(1-\lambda_{n})\mathbb{1}\left(\tilde{y}\neq y)(1-p)\log(u) \end{split} \tag{6}\] where \(\hat{y}=\arg\max_{i}\{\alpha_{i}/S\}\), and \(p=\max_{i}(\alpha_{i}/S)\) for \(i\in\{1\dots K\}\) are the predicted class label and predicted probability for a given input sample. Note that \(\mathbb{1}(.)\) is the indicator function. Moreover, \(\lambda_{n}\) is an epoch-dependent weight to adjust the contribution for each of the terms in the right hand side of Eq. 6. 
Specifically, \(\lambda_{n}=\lambda_{0}e^{-\ln\left(\lambda_{0}\right)n/N}\) is set to be exponentially-increasing (\(\lambda_{0}<1\)) with respect to epoch index \(n\). The intuition is that over the initial training epochs the model mainly makes inaccurate predictions and therefore it should be penalized to increase uncertainty for these predictions via \(\mathbb{1}\left(\tilde{y}\neq y\right)(1-p)\log(u)\). However, as training progresses the model makes accurate predictions more often and therefore it should reduce the corresponding uncertainties which is enforced by \(\mathbb{1}\left(\tilde{y}=y\right)p\log(1-u)\). ## 4 Experiments Here we first describe the dataset we gathered and curated for our experiments. Afterwards the implementation details will be discussed followed by the presentation of the experimental results. ### Dataset To the best of our knowledge, there are no publicly available dataset that is suited for our application. Therefore, we choose to collect a video dataset by recording video snippets from static doorbell cameras installed at the front door of \(339\) residences whose residents approved and signed the designated data collection agreement for this purpose. To ensure all videos contain activities, we started to record only when there was a motion trigger and stopped recording around two minutes after the start. This resulted in \(5477\) videos with the resolution of \(1280\)x\(960\). Sample frames of this dataset are shown in Fig. 1. In the initial round of annotation process, all the videos were assigned a video-level tag to denote whether at least a delivery event happens during the entire duration of the video, which resulted in \(1898\) delivery and \(3579\) non-delivery video samples. However, we note that delivery events only occupy a small portion of a video as shown in Fig. 7. Therefore, to train our 3DCNN we require finer annotation for the start and end time of an event within the video. Given a delivery event involves at least one person, to collect finer information, we run person detection and tracking on all videos to obtain person tracks. Afterwards we annotated person tracks with delivery/non-delivery tags. We used EfficientDet-D4 [14] for detection and DeepSort [15, 16] multi-object tracker with embeddings extracted by the bag of tricks for person re-identification [17] model with ResNet50_IBN [15] backbone implemented in FastReID [11], to compute tracks. This led to the collection of \(2057\) person tracks delivering items and \(2930\) person tracks that do not correspond to any delivery events, _e.g._, entering or exiting the residence, and playing in front lawn. Despite gathering fine annotations with tight spatial and temporal bounds, our delivery detection system is intended to be deployed on proposals generated based on motion events that are not tight around people in time and space and do not necessarily involve people, _e.g._, passing vehicles, presence of pets or wildlife. Therefore, we need to generate and label motion events to prepare our training and testing data. To generate these events we use the algorithm outlined in section 3.2. To assign labels, labeled person tracks of delivery events are used to accelerate the process. 
To this end, we measured the overlap between a computed motion event and all person tracks within a video in terms of temporal and spatial Intersection over Union (IoU): \[IoU_{t}=\frac{T(m_{i}\cap t_{j})}{T(m_{i}\cup t_{j})},\qquad IoU_{s}=\frac{A(m_ {i}\cap t_{j})}{A(m_{i}\cup t_{j})} \tag{7}\] where \(m_{i}\), \(t_{j}\) are \(i^{th}\) motion event and \(j^{th}\) person track. In addition, \(T(.)\), \(A(.)\) are temporal length and the spatial area functions respectively. Note that initially we tried to use spatio-temporal (3D) IoU for measuring the overlap for a motion-track pair; however, we noticed that the overlap values become quite small even for closely matched pairs due to measuring the volume. Using two different thresholds, namely temporal \(t_{min}\) and spatial \(s_{min}\), provides more flexibility. Accordingly, \(3625\) positive and \(7248\) negative motion events were gathered and split into train, validation and test sets as reported by Table 1. ### Implementation Details Each motion proposal is uniformly sampled by \(16\) frames which are spatially resized to \(112\)x\(112\) by preserving the original aspect ratio. During training, temporal jittering of \(\pm 10\) frames is applied to the start and end bounds of proposals as a form of data augmentation. Color jittering is also employed with the probability of \(0.2\); brightness, contrast and saturation factors are uniformly chosen from \([0.9,1.1]\) while hue factor is uniformly selected from \([-0.1,0.1]\). Additionally, generated cuboids are horizontally flipped with the probability of \(0.5\). To optimize Adam [13] with decoupled weight decay AdamW [12] is employed with gradient \(L_{2}\) norm clipping at \(0.25\) and the learning rate that is linearly warmed up to \(5e-4\) in the first \(5\) epochs and decayed at \(20^{th}\) and \(40^{th}\) epochs with gamma factor of \(0.1\). Weight decay is set to \(5e-4\) and the network is trained for the total of \(50\) epochs. To alleviate the impact of easy samples, focal loss [12] with focusing parameter of \(1.0\) is adopted. ### Evaluation Metric As our target task is to classify an input video as delivery or non-delivery, we use \(F_{1}\) score and mean Average Precision (mAP) that presents the area under the precision-recall curve (AUC). Note that for a given video, multiple motion events may occur. In case the tag of the video is delivery, the classifier must classify at least one of those events as delivery and if the tag is non-delivery, the classifier must classify all the events as non-delivery. As we noticed high number of false positives with 2D baseline system 3.1, we report the FPR as well. ### Experimental Results This section compares the 2D baseline system described in section 3.1, the 3D MobileNetv2, the 3D MobileNetv2 with semi-supervised attention module outlined in section 3.2, namely excited MobileNetv2, and the excited 3D MobileNetv2 optimized with the evidence-based objective of section 3.2. 
In addition to these variants, since package detection is offered in many smart home solutions, we also evaluate its applicability for the delivery detection task. To this end, we train a COCO [11] pre-trained MobileNet-SSD v2 object detector on \(4405\) manually labeled images with package annotations. \begin{table} \begin{tabular}{c||c||c||c} \hline \multirow{2}{*}{Split} & \multirow{2}{*}{\# Cameras} & \# Delivery & \# non-Delivery \\ & & Videos / Events & Videos / Events \\ \hline \hline Train & \(182\) & \(1016\) / \(2324\) & \(1902\) / \(3817\) \\ Validation & \(59\) & \(416\) / \(595\) & \(769\) / \(1680\) \\ Test & \(98\) & \(466\) / \(706\) & \(900\) / \(1751\) \\ \hline Total & \(339\) & \(1898\) / \(3625\) & \(3579\) / \(7248\) \\ \hline \hline \end{tabular} \end{table} Table 1: Doorbell delivery detection dataset statistics. Note that cameras across splits are disjoint. Figure 8: Precision-Recall comparison of package detection, the baseline 2D pipeline and variants of 3D MobileNetv2 on the test set. Figure 7: Deliveries are much shorter than the videos that contain them. Here, in a \(144\)-second-long video, the delivery lasts only \(12\) seconds. Fig. 8 plots the precision-recall curves and Table 2 reports the evaluation metrics for each of these models on the test set. Unsurprisingly, package detection performs significantly worse, as detecting small and occluded packages is challenging; the drastic variation in the size, shape and appearance of delivered items further widens this gap. Therefore, solutions based on package detection are not suited to detecting delivery instances. Table 2 also shows that the 2D model performs below the 3D alternatives because it cannot model temporal interactions. Incorporating temporal information via the 3D MobileNetv2 increases the \(F_{1}\) and mAP scores by \(5.4\%\) and \(25\%\) and reduces the FPR by \(20\%\), showing that the proposed pipeline generates far fewer false delivery notifications. Moreover, our novel semi-supervised attention module and evidence-based optimization objective further enhance the performance of the 3D MobileNetv2 in a meaningful manner without introducing any additional overhead at test time. This is important for a resource-limited design that must maintain a fixed computational budget for inference. For the 3D MobileNetv2, these additions increase the \(F_{1}\) and mAP scores by \(5.1\%\) and \(7.5\%\) and reduce the FPR by \(31\%\) at no extra inference cost. It is also important to compare the 2D baseline model with the 3D MobileNetv2 in terms of run-time speed, the binary size of the quantized models and the number of floating point operations (FLOPS), which are critical when deploying on an edge device. Table 3 provides this comparison. While the inference time of the 2D baseline system is \(36\%\) lower, that system continuously queries the scene at a fixed rate and performs \(2.5\) times more operations per forward pass (\(1.41\) GFLOPS versus \(0.55\) GFLOPS) than the 3D MobileNetv2, which runs inference only once a motion event has concluded. The memory required to store the 3D MobileNetv2 is also much smaller than for the 2D baseline, which leaves room to increase the complexity, and potentially the accuracy, of a prospective model. Finally, we would like to highlight an additional benefit of using an evidence-based objective compared to Cross-Entropy.
We can compute the average uncertainty score over the validation samples on which the model made mistakes. We then use this value as a threshold when processing the test set and discard predictions whose uncertainty is higher, as presented in Table 4. By applying this threshold, we remove \(89\) videos from our test set, which not only increases the \(F_{1}\) and mAP scores but also reduces the FPR. Fig. 9 visualizes randomly selected samples about which the model was uncertain. These samples have flavors of delivery events. For instance, in (a) a mailman is going towards a neighboring house, in (b) a family member is holding a tablet, which is what most delivery personnel do after delivering an item to submit proof of delivery, in (c) a mailman is picking up an item from the front door for either shipping or return, in (d) a food delivery person with no uniform is seen, and in (e) the resident is putting down her belongings at the front door. Therefore, the uncertainty score can be used effectively to reduce the number of false predictions. ## 5 Conclusion In this work we presented a mobile-friendly pipeline to perform delivery detection, as opposed to package detection, on resource-limited platforms such as doorbell cameras. The proposed system models temporal interactions in video streams, enhancing its predictions over a 2D model. In addition, we have improved the accuracy of the designed system by a significant margin through the novel incorporation of a semi-supervised attention module, namely the excitation layer. We have also drawn on advances in the theory of evidence and subjective logic to modify the optimization objective of the system. This not only boosts the system performance but also quantifies the uncertainty of the predictions made by the network and provides the opportunity to enforce a level of certainty to further improve predictions. We emphasize that all these improvements are achieved without adding any inference-time computation or memory overhead to the proposed design. \begin{table} \begin{tabular}{c||c|c|c|c} \cline{2-5} & \multicolumn{4}{c|}{Evaluation Metrics} \\ \hline Model & \(F_{1}(\uparrow)\) & mAP(\(\uparrow\)) & FPR(\(\downarrow\)) & Classification Accuracy (\%)(\(\uparrow\)) \\ \hline \hline Evidence-based Excited 3D MobileNetv2 & \(0.81\) & \(0.86\) & \(0.13\) & \(86.11\) \\ \hline Evidence-based Excited 3D MobileNetv2 & **0.83** & **0.87** & **0.12** & **87.92** \\ \hline \end{tabular} \end{table} Table 4: Evidence-based excited 3D MobileNetv2 on the test set without (upper row) and with (lower row) the validation-derived uncertainty threshold applied.
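As an illustration of the uncertainty-gating step evaluated above, the thresholding logic can be sketched in a few lines. This is a minimal reconstruction under stated assumptions (array names and the use of a single per-sample uncertainty value), not the authors' implementation.

```python
import numpy as np

def uncertainty_threshold(val_uncertainty, val_correct):
    """Average uncertainty of the validation samples the model misclassified."""
    val_uncertainty = np.asarray(val_uncertainty, dtype=float)
    val_correct = np.asarray(val_correct, dtype=bool)
    return float(val_uncertainty[~val_correct].mean())

def gate_predictions(test_pred, test_uncertainty, threshold):
    """Keep only test predictions whose uncertainty does not exceed the threshold."""
    keep = np.asarray(test_uncertainty, dtype=float) <= threshold
    return np.asarray(test_pred)[keep], keep

# Illustrative usage with made-up numbers: the threshold is the mean uncertainty of
# the two misclassified validation samples, i.e. (0.7 + 0.6) / 2 = 0.65.
thr = uncertainty_threshold([0.2, 0.7, 0.6, 0.1], [True, False, False, True])
kept, mask = gate_predictions([1, 0, 1], [0.3, 0.9, 0.5], thr)  # keeps samples 0 and 2
```

In a deployed system, one plausible use of the rejected high-uncertainty events is to route them to a weaker action, such as suppressing the notification, in line with the FPR reduction reported in Table 4.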
2303.11326
Spin- and Momentum-Correlated Atom Pairs Mediated by Photon Exchange and Seeded by Vacuum Fluctuations
Engineering pairs of massive particles that are simultaneously correlated in their external and internal degrees of freedom is a major challenge, yet essential for advancing fundamental tests of physics and quantum technologies. In this Letter, we experimentally demonstrate a mechanism for generating pairs of atoms in well-defined spin and momentum modes. This mechanism couples atoms from a degenerate Bose gas via a superradiant photon-exchange process in an optical cavity, producing pairs via a single channel or two discernible channels. The scheme is independent of collisional interactions, fast and tunable. We observe a collectively enhanced production of pairs and probe interspin correlations in momentum space. We characterize the emergent pair statistics and find that the observed dynamics is consistent with being primarily seeded by vacuum fluctuations in the corresponding atomic modes. Together with our observations of coherent many-body oscillations involving well-defined momentum modes, our results offer promising prospects for quantum-enhanced interferometry and quantum simulation experiments using entangled matter waves.
Fabian Finger, Rodrigo Rosa-Medina, Nicola Reiter, Panagiotis Christodoulou, Tobias Donner, Tilman Esslinger
2023-03-20T17:59:03Z
http://arxiv.org/abs/2303.11326v3
# Spin- and momentum-correlated atom pairs mediated by photon exchange ###### Abstract Pairs of correlated particles are at the core of complex many-body phenomena and their control is essential for quantum technologies. Engineering pairs that are simultaneously correlated in their external and internal degrees of freedom is a major challenge. In this work, we experimentally demonstrate a mechanism for generating pairs of atoms in well-defined spin and momentum modes. This mechanism couples atoms from a degenerate Bose gas via a superradiant photon-exchange process mediated by the vacuum mode of an optical cavity. The scheme is independent of collisional interactions, fast and tunable. We observe a collectively enhanced production of pairs, characterize their statistics, and measure inter-spin correlations in momentum space. Our observation of coherent many-body oscillations involving well-defined momentum modes offers promising prospects for quantum-enhanced interferometry using entangled matter waves. + Footnote †: preprint: APS/123-QED Mechanisms generating correlated pairs of particles have proven pivotal in diverse fields of physics. In cosmology, vacuum fluctuations are at the origin of the creation of elementary particle-antiparticle pairs and Hawking radiation [1; 2]. In condensed-matter systems, the pairing of quasiparticles drives strongly correlated phenomena, such as phonon-mediated superconductivity [3] or vortex-induced superfluidity [4]. In quantum-optics experiments, entangled photons are produced through parametric down-conversion, with high-potential applications in metrology and quantum information science [5]. Similar approaches have been explored with ultracold atomic gases to correlate massive particles either in their internal [6; 7; 8; 9; 10; 11; 12] or motional [13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25] degrees of freedom. Quantum metrology based on atom interferometry, including precision gravitometry and magnetometry [26; 27], would benefit from a rapid pair production in well-defined momentum modes. However, existing schemes relying on collisions are limited by the timescales of contact interactions. Fast timescales are characteristic for strong light-matter interactions and photon-atom pairs can be created in superradiant processes [28]. Yet, pairs comprising different species are difficult to manipulate and detect. To overcome this, strong light-matter coupling can be used as a building block to correlate matter pairs. This had been demonstrated with Rydberg atoms exchanging a photon in a microwave cavity [29] and has more recently been extended to generate matter pairs and control spin correlations in thermal atomic ensembles [30; 31; 32]. Here, we employ a Bose-Einstein condensate (BEC) coupled to a high-finesse optical cavity to generate photon-mediated atom pairs correlated simultaneously in their spin and momentum. Unlike schemes relying on isotropic collisions [12; 33], our implementation directly couples individual momentum modes, offering an efficient route for pair production with large mode occupation. The fast timescales associated with the strong light-matter coupling are on the order of tens of microseconds and allow us to separate this process from typical dissipative mechanisms in atomic systems, such as heating, three-body losses, and trapping effects [34; 35]. We explore the properties of the pairs by examining their statistics and inter-spin correlations in momentum space. 
Our scheme enables independent control over unitary and additional dissipative processes which originate from the open character of our system [36; 37], allowing us to observe coherent many-body oscillations. In our experiments, we prepare a \({}^{87}\)Rb BEC consisting of up to \(N\approx 8\times 10^{4}\) atoms in the \(m=0\) magnetic sublevel of the \(F=1\) hyperfine manifold. We couple the atoms dispersively to a single mode of a high-finesse optical cavity by illuminating them with a running-wave laser drive propagating along the \(z\) direction, cf. Fig. 1a. The drive has a wavelength \(2\pi/k\) and is switched on for a quench time \(t\). This coupling converts atoms in \(m=0\) with zero momentum into pairs of \(m=\pm 1\) with opposite recoil momenta \(\hbar k\) along \(z\). The atoms additionally acquire momentum \(\hbar k\) symmetrically in \(\pm x\) direction following the standing-wave structure of the cavity mode. This process conserves both the total center of mass and angular momentum of the atoms. The underlying coupling mechanism is a superradiant photon-exchange process involving the drive and the vacuum mode of the cavity field [30], as illustrated in Fig. 1a (upper panel). During this process, one atom in the mode \(\left|k_{z}=0\right\rangle_{m=0}\equiv\left|0\right\rangle_{0}\) scatters a photon from the drive and flips its spin to either \(m=\pm 1\) while obtaining a recoil momentum \(\hbar k\) along \(+z\) and populating the modes \(\left|+k\right\rangle_{\pm 1}\). The emitted 'virtual' cavity photon is rescattered into the drive field by a second atom in \(\left|0\right\rangle_{0}\), which obtains a recoil momentum along \(-z\) and populates the complementary spin state \(m=\mp 1\), i.e., the modes \(\left|-k\right\rangle_{\mp 1}\). Pair production can occur via two discernible channels with coupling rates \(\chi_{+}\) and \(\chi_{-}\), depending on the first atom occupying the mode \(\left|+k\right\rangle_{+1}\) or \(\left|+k\right\rangle_{-1}\), cf. Fig. 1a (lower panel); for similar rates, the generated pair is expected to occupy a balanced super position of the modes associated with the two channels. For initially unpopulated \(\left|+k\right\rangle_{\pm 1}\) and \(\left|-k\right\rangle_{\mp 1}\) modes, our mechanism amplifies vacuum fluctuations analogously to optical \(\chi^{(3)}\) parametric amplifiers [5], see scheme in Fig. 1b. The atoms in \(\left|0\right\rangle_{0}\) correspond to the input 'pump' mode, whereas the finite-momentum atom pairs in \(m=\pm 1\) compare with the output'signal' and 'idler' modes. Experimentally, we observe a super-linear increase of the mean pair number \(\left\langle N_{\mathrm{p}}\right\rangle\) when adjusting \(N\) for a fixed \(t=65\,\mathrm{\SIUnitSymbolMicro s}\) (Fig. 1c). This behaviour is due to collective enhancement of the pair production, which results in coupling rates \(N\chi_{\pm}\) akin to the third-order susceptibility \(\chi^{(3)}\). The timescale of this process is orders of magnitude faster than realizations relying on collisional spin-mixing dynamics (\(\sim 10\,\mathrm{ms}-1\,\mathrm{s}\)) [26]. To characterize the key properties of our system, we derive an effective many-body Hamiltonian \(\hat{H}\) using a few-mode expansion in spin and momentum space and adiabatically eliminating the cavity field. 
We obtain \(\hat{H}~{}=~{}\hat{H}_{0}+\hat{H}_{+}+\hat{H}_{-}\), with approximate contributions \[\hat{H}_{0} =\frac{\hbar\omega_{0}}{2}\sum_{k=\pm k}\left(\hat{c}_{k,1}^{ \dagger}\hat{c}_{k,1}+\hat{c}_{-k,-1}^{\dagger}\hat{c}_{-k,-1}\right), \tag{1}\] \[\hat{H}_{\pm} =\hbar\chi_{\pm}\Big{(}2\hat{c}_{-k,\mp 1}^{\dagger}\hat{c}_{+k, \pm 1}^{\dagger}\hat{c}_{0,0}\hat{c}_{0,0}+\mathrm{h.c.}\Big{)}, \tag{2}\] where the bosonic operators \(\hat{c}_{k,m}^{\dagger}\) create atoms in the modes \(\left|\tilde{k}\right\rangle_{m}\) with \(\tilde{k}=\{0,+k,-k\}\) and \(m=\{0,+1,-1\}\) (Methods). The various energy scales of the system are schematically depicted in Fig. 1d. The first term, \(\hat{H}_{0}\), describes the energy cost \(\hbar\omega_{0}=2\hbar q+4\hbar\omega_{\mathrm{rec}}\) for creating a single pair, with the quadratic Zeeman splitting \(q\) and the recoil kinetic energy \(\hbar\omega_{\mathrm{rec}}=h\times 3.68\) kHz. The interaction terms, \(\hat{H}_{\pm}\), describe the two discernible pair-production channels with the corresponding intermediate states being separated by twice the linear Zeeman splitting \(\omega_{\mathrm{z}}\). The coupling rates \(\chi_{\pm}=\eta^{2}\delta_{\pm}/(\delta_{\pm}^{2}+\kappa^{2})\) depend on the decay rate of the cavity field, \(\kappa=2\pi\times 1.25\,\mathrm{MHz}\), and the tunable parameters \(\eta\) and \(\delta_{\pm}=\omega_{c}-\omega_{\pm}\); here \(\eta\) denotes the two-photon scattering rate [38], \(\omega_{c}\) is the cavity resonance and \(\omega_{\pm}\) are the frequencies of the virtual cavity photons for the two channels. The behaviour of our system is determined by the competition between \(\hat{H}_{0}\) and \(\hat{H}_{\pm}\). For sufficiently strong negative couplings \(\chi_{\pm}\), we expect finite pair occupation in the corresponding modes. In our system, the typical interaction time required to produce pairs, \(T_{\mathrm{int}}=2\pi/(N\chi_{\pm})\approx 40\,\mathrm{\SIUnitSymbolMicro s}\), is significantly shorter than the lifetime of the emergent matter waves, which separate from the \(m=0\) BEC within \(T_{\mathrm{LT}}\approx 1\,\mathrm{ms}\) due to their finite Figure 1: **Spin-momentum pairs in an optical cavity**. **a**, Upper panel: A BEC inside a cavity is illuminated by a running-wave drive, with the magnetic field \(B\) defining the quantization axis. An \(m=0\) atom (green) flips its spin to \(m=1\) (blue) while scattering a photon into the cavity mode. This photon is rescattered by a second \(m=0\) atom, changing its state to \(m=-1\) (red), and resulting in an atom pair with opposite spin and momentum. Lower panel: Pair creation in momentum space via two channels \(\chi_{\pm}\), depending on the first atom changing its spin to \(m=\pm 1\) (grid in units of recoil \(k\)). **b**, Schematics of an optical parametric amplifier, with two pump photons (green) being converted into a pair of signal (blue) and idler (red) photons via the nonlinear interaction \(\chi^{(3)}\). **c**, Experimentally observed mean number of pairs \(\left\langle N_{\mathrm{p}}\right\rangle\) showing a super-linear growth with the initial atom number \(N\). Throughout this work, the errorbars are the SE obtained via jackknife resampling. The solid line shows our numerical simulations; see Methods also for experimental parameters. **d**, Energy-level diagram describing our pair-production mechanism which is composed of two superradiant processes, each involving a drive (straight) and a cavity (curly arrows) photon. 
The intermediate modes are split by twice the linear Zeeman shift \(\omega_{\mathrm{z}}\) and determine two discernible channels with coupling rates \(\chi_{\pm}\), depending on the detunings \(\delta_{\pm}\) and the cavity loss rate \(\kappa\). The offset \(\omega_{0}\) is set by the kinetic and internal energy of the pairs. momenta (Methods). This separation of timescales in our system, \(T_{\rm int}\ll T_{\rm LT}\), ensures that pair production occurs deeply in the collective regime, and results in the occupation of well-defined individual momentum modes. In the experiment, we individually control the couplings \(\chi_{\pm}\) by varying \(\delta_{\pm}\) via the combined tuning of \(\omega_{c}\) and \(\omega_{z}\), and determine the populations of the various modes by measuring spin-resolved momentum distributions. For the relevant case of \(\delta_{\pm}<0\), and for sufficiently large \(\omega_{z}\), only the \(\chi_{+}\) channel contributes and gives rise to pairs occupying the modes \(\ket{+k}_{+1}\) and \(\ket{-k}_{-1}\). This is highlighted in the exemplary momentum distribution in Fig. 2a for \(\omega_{z}=2\pi\times 7.09(1)\,\mathrm{MHz}\). For smaller \(\omega_{z}\), both channels become active, resulting in concurrent occupation of the modes \(\ket{+k}_{-1}\) and \(\ket{-k}_{+1}\), as shown in Fig. 2b for \(\omega_{z}=2\pi\times 1.01(1)\,\mathrm{MHz}\). In the following, we make extensive use of these two settings, which we refer to as the single- and two-channel configurations, respectively. By accumulating hundreds of experimental realizations for the single-channel (Fig. 2c) and two-channel (Fig. 2d) configurations, we obtain the respective pair statistics. The numbers of pairs associated with the \(\chi_{+}\) and \(\chi_{-}\) processes are shown in the left and right panels, respectively. When a channel becomes active, we observe large fluctuations compatible with a Bose-Einstein distribution \(p(N_{\rm p})=\langle N_{\rm p}\rangle^{N_{\rm p}}/(1+\langle N_{\rm p}\rangle)^{N_{\rm p}+1}\)[11], which satisfies \(\sigma(N_{\rm p})\approx\langle N_{\rm p}\rangle\) for the standard deviation \(\sigma(N_{\rm p})\) and the mean \(\langle N_{\rm p}\rangle\) (see also arrow and purple bin in Figs. 2c and d). In the single-channel case, our observations are consistent with the system occupying a so-called 'two-mode squeezed vacuum' state, i.e., a superposition of twin-Fock states comprising \(\ket{+k}_{+1}\) and \(\ket{-k}_{-1}\)[39]. Multimode parametric amplification is not expected to alter the resulting distributions for an undepleted pump mode as long as the different channels remain discernible [40]. We estimate a negligible upper bound of \(\langle N_{T}\rangle\approx 0.016\) for the average number of thermal atoms occupying each mode \(\ket{\pm k}_{\pm 1}\) (Methods), indicating that the observed distributions indeed arise from the amplification of quantum fluctuations with empty-mode occupations \(\langle N_{\rm QF}\rangle\approx 0.5\). Going beyond studies of individual modes, we further verify the correlated nature of the produced pairs. We introduce the inter-spin noise correlation map \[\mathcal{C}^{+1,-1}(k_{+1}^{z},k_{-1}^{z})=\frac{\langle n_{+1}n_{-1}\rangle-\langle n_{+1}\rangle\langle n_{-1}\rangle}{\sigma(n_{+1})\sigma(n_{-1})}, \tag{3}\] with \(n_{m}\equiv n_{m}(k_{m}^{z})\) indicating the momentum-space density distribution of spin state \(m\) along \(z\) (after integrating along \(x\)) at coordinate \(k_{m}^{z}\), and \(\sigma(n_{m})=\sqrt{\langle n_{m}^{2}\rangle-\langle n_{m}\rangle^{2}}\). In Figs.
2e and f, we show the extracted correlation maps Figure 2: **Pair statistics and correlations.****a, b**, Exemplary spin-resolved momentum distributions for the single-channel (a) and double-channel (b) configuration. The orange and yellow boxes indicate the modes \(\ket{\pm k}_{\pm 1}\) and \(\ket{\pm k}_{\mp 1}\), respectively. **c, d**, Pair statistics, generated through the \(\chi_{+}\) (orange) and \(\chi_{-}\) (yellow histograms) process, for the single-channel (c) and two-channel (d) configurations. The solid lines correspond to Bose-Einstein distributions with experimentally determined mean \(\bra{N_{\rm p}}\) (purple coloured bin) convolved with our Gaussian detection noise (Methods). The arrows indicate the standard deviation of the resulting distributions, demonstrating \(\bra{N_{\rm p}}\approx\sigma(N_{\rm p})\). The dashed line is consistent with a distribution with zero mean pairs. **e, f**, Momentum space inter-spin correlation maps \(\mathcal{C}^{+1,-1}(k_{+1}^{z},k_{-1}^{z})\) for the single-channel (e) and two-channel (f) configuration, demonstrating the correlated nature of the produced pairs. We attribute the side patterns beside the correlation peaks to residual density-dependent imaging artifacts. **g**, Anticorrelation peaks \(\mathcal{C}^{+1,-1}(\pm k,\pm k)=[\mathcal{C}^{+1,-1}(k,k)+\mathcal{C}^{+1,-1}( -k,-k)]/2\) for realizations with \(N_{\rm p}>N_{\rm p}^{\rm min}\) for a single-channel (light blue) and two-channel (dark blue) configuration, with the solid lines showing the results from our numerical simulations. The anticorrelations increase with \(N_{\rm p}^{\rm min}\) due to pump-mode depletion. The inset displays a representative correlation map for \(N_{\rm p}^{\rm min}=7\times 10^{3}\). See Methods for relevant experimental parameters. \(\mathcal{C}^{+1,-1}(k_{+1}^{z},k_{-1}^{z})\) for both the single-channel and two-channel configurations, respectively. In the former case, we observe positive correlations around \((k_{+1}^{z},k_{-1}^{z})=(+k,-k)\), demonstrating that pairs occupy the modes \(\ket{+k}_{+1}\) and \(\ket{-k}_{-1}\) in a correlated fashion. For the latter case, the positive peaks around \((+k,-k)\) and \((-k,+k)\) indicate correlated generation of \(m=\pm 1\) pairs via the two channels \(\chi_{\pm}\). When postselecting for realizations above a minimum pair number \(N_{\mathrm{p}}^{\mathrm{min}}\), we observe increasingly pronounced anticorrelation peaks for the two-channel configuration around equal momenta \((+k,+k)\) and \((-k,-k)\), cf. Fig. 2g. We attribute this behaviour to the competition between the channels in the presence of pump-mode depletion, which inhibits large simultaneous occupation of these modes. This trend suggests that the many-body state cannot be merely expressed as a product state of two-mode squeezed vacuum states in the individual channels, especially in a regime of large occupations. A deeper understanding of the pair dynamics and its interplay with depletion effects can be gained by investigating the full population evolution of the different modes (Fig. 3a). For clarity, we concentrate on the single-channel configuration involving the modes \(\ket{+k}_{+1}\) and \(\ket{-k}_{-1}\). We observe the onset of pair production around \(T_{\mathrm{int}}\approx 40\,\mathrm{\SIUnitSymbolMicro s}\), followed by a fast exponential population increase; this behaviour is in resemblance to optical parametric amplification [5]. 
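(For reference, the noise-correlation map of Eq. (3) can be estimated directly from an ensemble of shot-to-shot density profiles. The sketch below is purely illustrative: the array shapes, the normalisation by shot-to-shot standard deviations, and the synthetic input data are assumptions, not the authors' analysis code.)

```python
import numpy as np

def interspin_correlation_map(n_plus, n_minus):
    """Eq. (3): C(k_{+1}, k_{-1}) = (<n_+1 n_-1> - <n_+1><n_-1>) / (sigma_+1 sigma_-1).
    n_plus, n_minus: arrays of shape (realizations, momentum_bins) holding the
    m = +1 and m = -1 density profiles along z, one row per experimental shot."""
    dp = n_plus - n_plus.mean(axis=0)            # shot-to-shot fluctuations
    dm = n_minus - n_minus.mean(axis=0)
    cov = dp.T @ dm / n_plus.shape[0]            # <n_+1 n_-1> - <n_+1><n_-1>
    sigma = np.outer(dp.std(axis=0), dm.std(axis=0))
    return cov / sigma

# Illustrative use with synthetic data: 200 shots, 64 momentum bins, with the
# m = -1 profile mirrored in momentum so that correlations appear near (k, -k).
rng = np.random.default_rng(0)
shots, bins = 200, 64
common = rng.normal(size=(shots, bins))
n_p = 1.0 + 0.5 * common + 0.1 * rng.normal(size=(shots, bins))
n_m = 1.0 + 0.5 * common[:, ::-1] + 0.1 * rng.normal(size=(shots, bins))
C = interspin_correlation_map(n_p, n_m)          # positive ridge along the anti-diagonal
```

With this mirrored synthetic input, the map shows a positive ridge near \((k,-k)\), qualitatively mimicking the correlation peaks in Figs. 2e and f.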
As time elapses, we observe coherent many-body oscillations redistributing the atoms between the different available modes. These pair oscillations are a consequence of the restricted phase space at finite \(N\). While similar to pair oscillations arising from spin-mixing interactions [6], our observations also demonstrate coherent pair dynamics involving well-defined momentum modes. For longer times, we observe a progressive accumulation of atoms in \(\ket{+k}_{+1}\) (see inset in Fig. 3a), resulting in a population imbalance between \(\ket{+k}_{+1}\) and \(\ket{-k}_{-1}\). The oscillations are damped on a timescale \(T_{\mathrm{coh}}\sim 150\,\mathrm{\SIUnitSymbolMicro s}\), which we identify as the coherence time. We attribute both effects to the intrinsic open nature of our system as photons are sporadically lost at the cavity mirrors, inhibiting pair production through photon exchange. We model this dissipative superradiant Raman process via effective Lindblad terms with rates \(\gamma_{\pm}=\eta^{2}\frac{2\kappa}{\delta_{+}^{2}+\kappa^{2}}\) for the two channels, and perform truncated Wigner simulations (Methods), which stochastically sample quantum fluctuations of the initially empty modes in \(m=\pm 1\)[41]. Our simulations quantitatively reproduce the observed population evolution (solid lines in Fig. 3a), with the coupling \(\eta\) being the only free parameter of the simulations and optimized to \(\eta=0.94\eta_{\mathrm{exp}}\) of the experimentally calibrated value \(\eta_{\mathrm{exp}}\). We attribute this small difference to the imperfect alignment between the BEC and the cavity mode, and systematic uncertainties in the atom-number calibration. Note that the simulations also indicate that for our experimental parameters on average \(\sim\chi_{+}/\gamma_{+}\approx 10\) pairs are created before the first photon is lost from the cavity. Finally, the scaling of the couplings \(\chi_{\pm}/\gamma_{\pm}=\delta_{\pm}/(2\kappa)\) allows us to independently tune the coherent and dissipative processes of our system. We demonstrate this control for the single-channel configuration by quenching to a fixed coupling \(\chi_{+}=-2\pi\times 0.61(3)\,\mathrm{Hz}\) and varying the detuning \(\delta_{+}\) at a constant \(t=80\,\mathrm{\SIUnitSymbolMicro s}\), see Fig. 3b. The measured mean number of pairs \(\langle N_{\mathrm{p}}\rangle\) (upper panel) remains small close to the two-photon resonance (\(\delta_{+}=0\)), and monotonically increases for large detunings \(\ket{\delta_{+}}/(2\kappa)\gg 1\). Concurrently, the mean population imbalance \(\langle N_{\mathrm{imb}}\rangle\) between \(\ket{+k}_{1}\) and \(\ket{-k}_{-1}\) (lower panel) exhibits the opposite trend and gradually decreases towards zero for large detunings. We also present the number of photons \(\langle N_{\mathrm{ph}}\rangle\) lost from the cavity, as measured with our heterodyne detection [38]. The qualitative agreement between \(\langle N_{\mathrm{ph}}\rangle\) and \(\langle N_{\mathrm{imb}}\rangle\) verifies that superradiant Raman scattering is indeed the primary dissipation source. The experimental results are reasonably Figure 3: **Coherent many-body oscillations and tunable dissipation**. **a**, Time evolution of mode occupations in the single-channel configuration, exhibiting oscillatory dynamics. The solid curves show numerical simulations (Methods). For longer times, photon loss results in an imbalance between the \(\ket{+k}_{+1}\) and \(\ket{-k}_{-1}\) populations. 
Inset: representative momentum-space distribution with large imbalance at \(t=138\,\mathrm{\SIUnitSymbolMicro s}\). **b**, Mean number of pairs \(\langle N_{\mathrm{p}}\rangle\) (upper panel) and imbalance \(\langle N_{\mathrm{imb}}\rangle\) (lower panel, grey), and number of photons lost from the cavity \(\langle N_{\mathrm{ph}}\rangle\) (lower panel, orange) [38] for \(t=~{}80\,\mathrm{\SIUnitSymbolMicro s}\) as a function of the detuning \(\delta_{+}/(2\kappa)\), which controls the coherent and dissipative processes. We attribute both the deviation from theory at large quench times as well as the excess photon numbers (\(\langle N_{\mathrm{ph}}\rangle>\langle N_{\mathrm{imb}}\rangle\)) to superradiant decay to higher-order momentum modes in \(m=1\), which are outside the field of view (\(\sim 2.2k\)) along \(x\)[38]. See Methods for all relevant experimental parameters. captured by our numerical simulations. The deviation of the simulated pair number \(\langle N_{\mathrm{p}}\rangle\) at \(|\delta_{+}|/(2\kappa)\lesssim 1\) is ascribed to the limited validity of the adiabatic elimination of the cavity field in this regime. In conclusion, we experimentally demonstrate the creation of correlated atom pairs in well-defined spin and momentum modes via a superradiant photon-exchange process in a degenerate Bose gas. Our scheme amplifies vacuum fluctuations to induce fast photon-mediated spin-mixing dynamics within tens of microseconds. We demonstrate independent control over coherent and dissipative processes, and probe them by measurements of atomic and photonic observables. As the dynamics remains coherent for long times, \(T_{\mathrm{coh}}\gg T_{\mathrm{int}}\), our results pave the way for fast entanglement generation in spatially separated atomic clouds [42, 43]. Combining such a mechanism with mode-selective spin rotations offers a promising route for performing loophole-free Bell tests with massive particles [44, 45]. As both the sign and strength of the photon-mediated interactions can be independently controlled, our system offers prospects for implementing time-reversal protocols for noise-resilient atom interferometry [46, 47]. Finally, extending our experimental scheme to degenerate Fermi gases could facilitate the manipulation of photon-induced Cooper pairs [48, 49].
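To make the scaling between coherent coupling and dissipation explicit, the expressions quoted in the text, \(\chi_{\pm}=\eta^{2}\delta_{\pm}/(\delta_{\pm}^{2}+\kappa^{2})\) and \(\gamma_{\pm}=2\eta^{2}\kappa/(\delta_{\pm}^{2}+\kappa^{2})\), so that \(\chi_{\pm}/\gamma_{\pm}=\delta_{\pm}/(2\kappa)\), can be evaluated numerically. The sketch below is illustrative only; the values of \(\eta\) and the detunings are assumptions chosen to show the trend, not calibrated experimental parameters.

```python
import numpy as np

two_pi = 2 * np.pi
kappa = two_pi * 1.25e6          # cavity field decay rate (rad/s), quoted in the text

def chi(eta, delta):
    """Photon-mediated pair coupling rate chi = eta^2 * delta / (delta^2 + kappa^2)."""
    return eta**2 * delta / (delta**2 + kappa**2)

def gamma(eta, delta):
    """Dissipative superradiant-Raman rate gamma = 2 * eta^2 * kappa / (delta^2 + kappa^2)."""
    return 2 * eta**2 * kappa / (delta**2 + kappa**2)

eta = two_pi * 1.5e3             # assumed two-photon scattering rate (rad/s), illustrative
for delta in two_pi * np.array([-0.5e6, -2e6, -8e6]):   # assumed cavity detunings
    print(f"delta/2pi = {delta/two_pi/1e6:+.1f} MHz, "
          f"chi/2pi = {chi(eta, delta)/two_pi:+.2f} Hz, "
          f"chi/gamma = {chi(eta, delta)/gamma(eta, delta):+.2f}  (= delta/(2*kappa))")
```

The ratio \(\delta_{\pm}/(2\kappa)\) makes explicit why larger detunings favour coherent pair production over cavity-mediated dissipation, consistent with the trend shown in Fig. 3b.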
2308.02996
A Review of Gaps between Web 4.0 and Web 3.0 Intelligent Network Infrastructure
World Wide Web is speeding up its pace into an intelligent and decentralized ecosystem, as seen in the campaign of Web 3.0 and forthcoming Web 4.0. Marked by the Europe Commission's latest mention of Web 4.0, a race towards strategic Web 4.0 success has started. Web 4.0 is committed to bringing the next technological transition with an open, secure, trustworthy fairness and digital ecosystem for individuals and businesses in private and public sectors. Despite overlapping scopes and objectives of Web 3.0 and Web 4.0 from academic and industrial perspectives, there are distinct and definitive features and gaps for the next generation of WWW. In this review, a brief introduction to WWW development unravels the entangled but consistent requirement of a more vivid web experience, enhancing human-centric experience in both societal and technical aspects. Moreover, the review brings a decentralized intelligence prospect of view on native AI entities for Web 4.0, envisioning sustainable, autonomous and decentralized AI services for the entire Web 4.0 environment, powering a self-sustainable Decentralized Physical and Software Infrastructure for Computing Force Network, Semantic Network, Virtual/Mixed Reality, and Privacy-preserving content presumption. The review aims to reveal that Web 4.0 offers native intelligence with focused thinking on utilizing decentralized physical infrastructure, in addition to sole requirements on decentralization, bridging the gap between Web 4.0 and Web 3.0 advances with the latest future-shaping blockchain-enabled computing and network routing protocols.
Zihan Zhou, Zihao Li, Xiaoshuai Zhang, Yunqing Sun, Hao Xu
2023-08-06T02:49:35Z
http://arxiv.org/abs/2308.02996v1
# A Review of Gaps between Web 4.0 and Web 3.0 Intelligent Network Infrastructure ###### Abstract World Wide Web is speeding up its pace into an intelligent and decentralized ecosystem, as seen in the campaign of Web 3.0 and forthcoming Web 4.0. Marked by the Europe Commission's latest mention of Web 4.0, a race towards strategic Web 4.0 success has started. Web 4.0 is committed to bringing the next technological transition with an open, secure, trustworthy fairness and digital ecosystem for individuals and businesses in private and public sectors. Despite overlapping scopes and objectives of Web 3.0 and Web 4.0 from academic and industrial perspectives, there are distinct and definitive features and gaps for the next generation of WWW. In this review, a brief introduction to WWW development unravels the entangled but consistent requirement of a more vivid web experience, enhancing human-centric experience in both societal and technical aspects. Moreover, the review brings a decentralized intelligence prospect of view on native AI entities for Web 4.0, envisioning sustainable, autonomous and decentralized AI services for the entire Web 4.0 environment, powering a self-sustainable Decentralized Physical and Software Infrastructure for Computing Force Network, Semantic Network, Virtual/Mixed Reality, and Privacy-preserving content presumption. The review aims to reveal that Web 4.0 offers native intelligence with focused thinking on utilizing decentralized physical infrastructure, in addition to sole requirements on decentralization, bridging the gap between Web 4.0 and Web 3.0 advances with the latest future-shaping blockchain-enabled computing and network routing protocols. Web 4.0, Web 3.0, Blockchain, Intelligence, AI, Semantic network, VR, AR, Computing Force Network. ## I Introduction Moving on from the decentralized ecosystem of Web 3.0 [1], the fourth generation of the World Wide Web has emerged through its unique requirement on intelligence and immersion between virtual and reality, known as Web 4.0, highlighted by European Commission [2] in the report addressing the inequality of basic rights and the interactive efficiency of the environment. Web 4.0 is expected to combine advanced artificial and ambient intelligence, the Internet of Things, trusted blockchain transactions, virtual worlds and XR capabilities, and digital and real objects to establish an environment where every component is fully integrated and communicates with each other, enabling truly intuitive, immersive experiences, seamlessly blending the physical and digital worlds [2]. The EU's strategy for Web 4.0 involves empowering users and supporting businesses in the virtual world while fostering open, inter-operable standards and multi-stakeholder governance. The ultimate goal of Web 4.0 is to pioneer user-centric, ethical and inclusive virtual worlds that boost competitiveness, foster creativity, and uphold rights. ### _Gaps between Web 4.0 and Web 3.0_ In Web 3.0, the terminology has a refined scope for decentralizing the entire World Wide Web with decentralized Applications (dApp) [1, 3], decentralized physical infrastructure (DePIN) [4] and many blockchain infrastructures in both the network layer and the application layer [1, 5]. However, Web 3.0 lacks the focus of content delivery, specifically the immersive VR/XR contents, which have exceeded the capacity of existing network infrastructure in terms of bit rates, Quality of Services (QoS) and Quality of Experience (QoE) [6, 7]. 
Web 4.0 sees the gap between the Web 3.0 decentralized backbone of the control plane and the incoming Web 4.0 data plane that requires network native intelligence together with future generation network infrastructures, e.g., 6G [8]. To achieve the required data rate and connectivity of VR/XR content, semantic communication is proposed as a relief from demanding bit rates [9, 10, 11]. The Web 4.0 data is characterized as semantic data, that treat bits differently based on their features and priority [10]. Joint Source Channel Coding (JSCC) is widely adopted in the latest research of making the semantic aware communication network [11, 12]. At the same time, the semantic processing leads to a computing-heavy design for future generation networks, in particular, computing force network (CFN)[13], emphasizing high-performance computing with ultra-low latency and extraordinary reachability offered by both the access network [14], the core network and the data network. Unlike Web 3.0 which serves the same purpose in decentralized architecture, Web 4.0 introduces AI as a new entity in the network, an integrated part of the network, compared to service-only AI applications in Web 3.0. The new entity plays a pivotal role in enabling network intelligence and requires the network evolution of integrated computing and networking nodes with decentralized controllers. This model allows the AI entity to adapt, learn, and optimize itself, achieving levels of efficiency and responsiveness unattainable by the service-only AI applications in Web 3.0. Second, Web 4.0 emphasises virtual experience consumption, requiring advanced network evolution on semantic and deterministic quality of services for end consumers, as compared in Table I with features of Web 1.0, Web 2.0. ### _Motivations and Contributions_ This paper contributes to Web 4.0 in three aspects: * First, the paper gives a glance at Web history, revealing the entangled but consistent ethos of a more vivid World Wide Web with humanism, pumping enhanced virtual world interactions with considerations on privacy, efficiency and human rights. * Second, a native perspective on how AI entities interact with the general public and their sustainability is detailed and envisioned. The proposed decentralized intelligence service operation principle is the key to closing the gaps in AI accessibility and enabling a self-evolving AI for Network and Network for AI by crowdsourcing and decentralized vending of AI services. * Finally, the paper discusses the opportunities and challenges of regulations with an outlook on bringing a more intelligent and responsible privacy-preserving Web 4.0. ## II Native AI entities for Web 4 ### _Decentralized operation of Computing Force Network for network native AI services_ Native AI Entities (NAEs) are autonomous artificial intelligence artifacts, purposefully designed to operate on the Web 4.0 infrastructure. Born from the collective efforts of the crowd, these NAEs are primed to serve the community that fostered their inception, with the commitment to being responsible and sustainable. Within the Web 4.0 realm, NAEs operate under a decentralized framework that is as dynamic as it is evolving. Their operational matrix intersects crucial nodes such as the Computing Force Network, Blockchain nodes, AI nodes, Semantic Networks, and the VR/AR real-time network. In Fig. 1, seven main interactive scenarios and three kinds of flow are highlighted. 
Details of the operational blueprint of an NAE in this setting are as follows: Purpose, Objectives, and Model OptimizationNAEs are designed to deliver specific AI services within the Web 4.0 environment, such as image generation or conversational interaction. They continually optimize their operations and upgrade models according to market demand, seamlessly integrating with elements like Semantic Networks and VR/AR real-time networks for an enhanced user experience. In scenario 1, NAEs use their deposited funds to compensate decentralized data providers within the Semantic Networks, obtaining the necessary training data for model optimization and upgrading. Computational ResourceNAEs source computational power from computing service providers within the Web 4.0 network, i.e., decentralized computing services [18, 20] or centralized computing services, e.g., cloud computing or supercomputers. The decision to utilize specific providers is automated based on the entity's assessment of its computational requirements and service costs. NAEs have the intrinsic capability to transition or contract alternate providers when needed. As shown in scenario 2, NAEs secure the necessary computational resources from decentralized computing resource providers within the Computing Force Network, using their deposited funds as payment. This flexible approach allows NAEs to adjust their computational resources dynamically in response to changing demands. Fig. 1: Framework of Native AI Entities \begin{table} \begin{tabular}{|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|p{42.7pt}|} \hline Web generations & Networking & Content \& Applications & Computing Infrastructure & Intelligence Services & Quality of Services & Security \\ \hline Web 1.0 & Decentralized & Static content \& Portal page & Individual server & N/A & No QoS & N/A \\ \hline Web 2.0 & Centralized & Interactive content \& Search engine \& Social Media & Cloud Computing [15] & N/A & Limited QoS and PKI [16] \\ \hline Web 3.0 & Decentralized [17] & User personalized and owned content \& Marketplace & Distributed Cloud Computing [18] & Client-Server AI, Network for AI & Dynamic QoS & Cerificateless PKI [19] \\ \hline Web 4.0 & Decentralized & Immersive media\& GPT & Computing Force Network & Semantic-aware, AI for Network & Deterministic QoE & Trust Web \\ \hline \end{tabular} \end{table} TABLE I: World Wide Web generations comparisons Initial Funding and Financial ManagementIn scenario 3, the initial funding for NAEs is crowdsourced within the Web 4.0 network, with the rules governing these funds being transparently communicated to all contributors. NAEs can autonomously manage their budgets and adjust their financial strategies based on real-time analysis and anticipated future needs. The deposited funds of an NAE may circulate to other NAEs or decentralized capital within the Web 4.0 network, forming a complex and dynamic network of capital circulation. This financial ecosystem is adaptive and responsive, promoting financial flexibility and resilience, and accommodating the unique requirements of various NAEs while facilitating collaboration and interdependence among them. NAEs prioritize efficiency, transparency, and accountability in their financial management, ensuring that all financial operations support the delivery of their AI services and contribute to the overall sustainability of their operations. Blockchain Network InteractionNAEs interact with the blockchain network primarily via smart contracts in scenario 4. 
This process represents a major aspect of the financial and data management activities within the Web 4.0 network. Deposited funds from NAEs may flow into the blockchain network, which in turn, can channel funds back to the NAEs. The implementation of blockchain offers a secure and transparent method for managing finances and data, fostering trust and credibility in the operations of NAEs while enabling real-time tracking and verification of transactions. Provision of Decentralized Services and User Data ProtectionIn scenario 5, users can call NAEs' decentralized services via smart contracts. This process involves the flow of funds and data from the user to the NAEs and the provision of computational resources and data to the user. NAEs operate in compliance with data protection regulations, requiring explicit user permissions to utilize user data. The process of securing permissions and usage of the data is recorded and verifiable on the blockchain, ensuring privacy and security. VR/AR Real-time Network InteractionIn the sixth interaction scenario, the computational resources of an NAE and the Computing Force Network collectively provide computational power to the VR/AR Real-time Network, as detailed in [12]. NAEs also supply data obtained from their decentralized services to the VR/AR network, enabling real-time virtual or augmented reality experiences. Risk Assessment and DAO AuditInvestors are made aware of the inherent risk in NAE operations. To facilitate risk assessment, details of NAE operations, including service quality and financial status, are made transparent on the blockchain network. In the seventh interaction scenario, decentralized autonomous organizations (DAOs) audit NAEs' services and capital flow, ensuring service quality, fairness, and accountability. Iii-B1 Four Phases: Initialization, Growth, Steady and RetirementAs autonomous constructs within the Web 4.0 environment, NAEs operate under a life-cycle model to facilitate specific AI services. This life cycle encompasses four stages: Initialization and Configuration, Early Operation and Growth, Steady Operation and Expansion, and Decline and Termination. The progression through these stages is marked by distinctive operational behaviors and economic transactions, revealing how an NAE acquires, utilizes, and manages its resources in pursuance of its service objectives. Each phase also presents a snapshot of the economic flows at various life-cycle stages, shown in Fig. 2, from initial capital accumulation to resource allocation, revenue generation, reinvestment, and eventually, asset liquidation. * Phase 1: Initialization and Configuration: 1. Acquisition of Capital and Computational Resources: NAEs generate their initial funds via crowdfunding efforts or from mature NAEs, employing these funds to lease computational resources. 2. Market Positioning: This involves identifying service provisions, target markets, and potential users. 3. Resource and Market Analysis: An efficiency assessment of resource allocation is performed along with predictions for market demands and potential gains. * Phase 2: Early Operation and Growth: 1. Provision of Services and Revenue Generation: NAEs commence their operations by providing services. Revenue is generated via these services and the execution of smart contracts. 2. Repayment to Initial Investors: A portion of the revenue is deducted by the NAE to repay the initial investors. 3. 
Market Expansion: Market feedback is analyzed and service provision adjusted while exploring new service markets and growth points. * Phase 3: Steady Operation and Expansion: 1. Stable Revenue and Self-Sustenance: By providing services and generating revenue, NAEs attain self-sustenance and gradually repay the initial investors. 2. Service Optimization and Innovation: Existing services are improved and optimized, which may include new AI models, superior user experiences, etc. 3. Investment in Other NAEs: Surplus revenue is used to invest in other budding NAEs or blockchain projects for additional profit. Fig. 2: Operation phases of Native AI Entities 4. Economic Benefit Analysis: Economic benefits are analyzed, optimizing investment and operation strategies. * Phase 4: Sunset and Termination: 1. Service Termination and Asset Liquidation: NAEs cease their services and liquidate any remaining assets. 2. Investor Refunds: Any remaining assets (if any) are returned to the investors. 3. Lessons and Experience: The causes of failure are analyzed, offering valuable lessons and experiences for other NAEs and investors. ### _Semantic content aware network for VR/AR in confidential peer-to-peer network_ Semantic Networks play an essential role within NAEs, allowing them to understand and process complex information across diverse contexts. By mapping relationships and drawing inferences between different entities, NAEs can generate context-relevant responses and provide highly personalized services in the VR/AR space. For instance, an NAE serving as a VR guide could understand a user's interests and pre-train JSCC models based on the characteristics of content [12]. As NAEs process complex and often sensitive information, they employ robust measures to prevent potential misuse of this information within the network. A key component of these preventative measures is confidential computing, which ensures that data remains encrypted while being processed. This secure processing prevents unauthorized access, safeguarding the information even when operating in shared or potentially insecure environments. In addition to confidential computing, NAEs are considered operated with the support of Trusted Execution Environments (TEE) to further enhance data security. TEE creates isolated, secure environments for processing sensitive information away from the rest of the system [5]. Even in scenarios where a malicious entity gains access to the network, the integrity and confidentiality of the data being processed within the TEE remain unaffected. Beyond these protective measures, NAEs also leverage their AI capabilities for real-time network monitoring and anomaly detection. This allows them to swiftly identify potential threats and take immediate corrective actions against abnormal traffics. ## III Opportunities and Challenges ### _Privacy challenges in identity and data distribution_ Since Web 4.0 is evolved from Web 3.0, it inherits the privacy challenges from not only Web 3.0 itself but also the shift from Web 1.0/2.0 to Web 3.0. To transform the centralized identity management in Web 1.0/2.0 to a decentralized manner, a universal identity management strategy is proposed in Web 3.0 using zero-knowledge proof (ZKP), multi-party computation (MPC) and other cryptographic methods [1, 4]. Such a strategy utilizes ZKP to protect users' real identities and derive different identities used by networks and applications in Web 3.0. 
Furthermore, the derived identities can be publicly verified by smart contracts but the private information related to the identities is not revealed. However, this rudimentary design for Web 3.0 does not consider the border control issue in identity verification. Since its nationalities issue the real identities of a user, the derived identities based on the real identities can be only verified by the networks and applications in the countries of the user's nationalities with the assistance of the corresponding authorities. When a user needs to access networks and applications outside its nationalities, the identity verification performed by other countries still needs to be further advanced in Web 4.0. Specifically, the interaction between two or more smart contracts of identity verification belonging to different countries should be considered and designed. In addition, the private information of users involved in such abroad identity verification also needs to be standardized to protect user privacy and ease the identity verification process to ensure desirable access efficiency in Web 4.0. "Read-write-own" is the core of Web 3.0, representing the self-governance of data for all users in equivalent. In Web 3.0, all users aim to freely reach content, services and applications deployed in decentralized networks but not in centralized servers applied in Web 1.0/2.0. Meanwhile, all users can manage their data in distributed storage and authorize access without barriers. As the next generation of the Internet inherits and further develops Web 3.0, Web 4.0 should keep building user data autonomy. However, more and more NAEs involved in Web 4.0 networks, content, services and applications may challenge the progress of data autonomy. Compared with Web 3.0, a distinctive characteristic of Web 4.0 is tremendous data generated by NAE for different purposes. Although there is a multi-layer identity strategy has been suggested to enable users to use different identities in different scenarios in Web 3.0, fine-grained permissions and multi-layer tags for different parts of data have not been considered for users to control who can access specific data, especially for massive data generated by AI. In addition, data flows and content distributions across different countries may incur censorship difficulty for authorities due to varied data censorship mechanisms and standards. On the other hand, censorship conflicts with data autonomy in users' view since censorship may leak user private information. Therefore, data censorship by different countries should be regulated in a uniform manner with restricting the exposure of sensitive parts of data. ### _Regulatory opportunities and concerns_ As discussed in previous sections, NAE-enabled Web 4.0 can benefit individual by achieving legal objectives in cyberspace. The existing legislative framework in cyberspace already applies to several aspects of Web 4.0; however, there is no a powerful enforcement toolkit. In relation to users' privacy and personal data, General Data Protection Regulation (GDPR) establishes an omnibus system of informed-consent-based obligations and rights [21]. The advent of NAE and the transition to Web 4.0 have fundamentally revolutionized the ways users interact with service providers. For example, with NAE's assistance, users now have an enhanced capacity to review privacy policies in a more comprehensive manner. 
Users can utilize the automated NAE mapping to gain a clear understanding of the types of personal data the enterprises are collecting, processing, and storing. Meanwhile, NAEs can be tailored to learn the users' privacy preferences in order to better protect their privacy. In this way, the service can be personalized without sacrificing users' privacy in different contexts. This technology not only enables individuals to exercise their data subject rights, such as access, erasure, and rectification in a timely manner, but it also aids in detecting and flagging potential data breaches. Consequently, NAE-enabled Web 4.0 helps meet notification requirements, offering users a level of data security and privacy protection that was previously unattainable [22]. Moreover, NAE-enabled Web 4.0 can provide valuable analytical support for conducting data protection impact assessments required under data privacy regulations like the GDPR. It can systematically categorize and analyze personal data flows, flag potential compliance issues around legal basis or data minimization, identify inherent risk factors such as large-scale profiling or automated decision systems, assess potential impacts on individuals, and suggest technical controls to mitigate risks. As for the individuals' protection and regulations on competition and innovation, the Digital Services Act (DSA) and the Digital Markets Act (DMA) also apply to Web 4.0. In this context, NAE could automatically monitor online service providers for the presence of illegal content or prohibited goods. This could enable timely flagging and removal as well as enforcement of notice-and-action procedures. Regarding the anti-competitive practices, NAE-enabled Web 4.0 could identify anomaly anti-competitive activities that contravene DMA rules in cyberspace. For instance, self-preferencing and exclusion of competitor products or services can be automatically flagged to regulators. At a technical level, NAE testing and analysis of dominant service providers' APIs, data structures, and interoperability presents the means to evaluate and ensure compliance with data portability obligations under the DMA. Such NAE functionality applied in continuous compliance processes promotes the DMA's goals in regulating the gatekeepers on digital ecosystems. However, oversight and evaluation of NAE's reasoning is critical to ensure appropriate recommendations aligned with regulatory goals. While promising, more advanced legal and ethical oversight is required to align NAE in Web 4.0 with our fundamental values and rights protection in whole decentralized ecosystems. Existing conflicts between this emerging web architecture and legislations can been foreseen. For example, there are some potential conflicts between Right to be Forgotten (RtbF), granted by the GDPR and many other data privacy legislations, and the NAE-enable system [23]. A core tension arises from the fact that personal data used to train AI systems can become deeply embedded within the models' architectural parameters, neural network weights, and decision-making logic flows. Unlike data residing in traditional databases, deleting or removing the original training data does not necessarily erase its latent traces from the AI model itself. In a sense, the model persists as a record of its own training. This presents challenges in fully implementing individuals' right to be forgotten requests and erasing their personal data under GDPR Article 17 when such data was utilized in model development. 
Confronting these concerns and rapidly evolving technologies, experimental regulatory tools are a valuable approach to collecting data, assessing legal, institutional and technological ramifications, and shaping new regulatory approaches outside of prevailing regulatory frameworks. Such experimental regulatory approaches include regulatory sandboxes, standardization, and co-regulation involving regulators, industry guidelines and markets. Among these, sandboxes create a controlled area where participants obtain a waiver from certain legal provisions and compliance processes and receive tailored legal support to foster the development of new technologies. There are several successful instances of setting up such experimental regulatory environments. The UK Financial Conduct Authority (FCA) pioneered the first fintech regulatory sandbox [24], while the UK Information Commissioner's Office is testing the impact of AI-related products and services on data privacy frameworks [25]. Therefore, a similar experimental regulatory toolkit can be used in the context of Web 4.0 in order to provide appropriate flexibility while protecting legal principles and fundamental rights. However, such experimental regulation requires careful design and testing. Otherwise, a lack of harmonized and standardized criteria and testing processes, together with inadequate specifications, will harm competition, consumers, and public or personal data. In the context of Web 4.0, crafting the experimental regulatory environment firstly necessitates a multi-disciplinary regulatory framework, given the convergence of the various emerging technologies inherent in Web 4.0. Secondly, the interests of multiple stakeholders need to be considered. As Web 4.0 encompasses a complex ecosystem and impacts various sectors, the diverse interests of all stakeholders should be taken into account, including enterprises, SMEs, authorities, individuals, operators, standardization bodies and legislators, among others. This necessitates strong regulatory interoperability at the international level. Thirdly, comprehensive eligibility and testing criteria for experimental regulation are crucial. On the one hand, pro-innovation mechanisms are important in order to provide flexibility. On the other hand, the design of testing parameters, duration, entry requirements and termination conditions is also essential. Therefore, in the context of Web 4.0, the experimental regulatory tools should be carefully designed to embody the aforementioned concerns and to probe the potential risks to user protection and innovation. ## IV Conclusions and Future Work In this paper, we summarise the key impacts of Web 4.0 and detail how it differs from Web 3.0 within the tight scope of the European Commission's outlook, emphasizing not only intelligence, decentralization, semantics and VR/XR readiness, but also regulatory improvements on human rights, privacy and sustainability. A novel NAE operation model is proposed to benefit the network with its native ecosystem of sustainable AI and to provide ubiquitous AI to every aspect of the network, e.g., Autonomous Driving Networks, NetGPT, and ChatGPT-like services. Meanwhile, the end-to-end semantic communication network is also projected to be a key enabler for Web 4.0, given the demands of VR/XR-ready services with assured Quality of Service. Finally, this paper provides insight into regulatory opportunities regarding the privacy, content ownership, sustainability, legal and ethical perspectives of Web 4.0.
2308.13643
Correlated Nonreciprocity around Conjugate Exceptional Points
The occurrence of exceptional points (EPs) is a fascinating non-Hermitian feature of open systems. A level-repulsion phenomenon between two complex states of an open system can be realized by positioning an EP and its time-reversal (T) conjugate pair in the underlying parameter space. Here, we report the fascinating nonreciprocal response of such two conjugate EPs by using a dual-mode planar waveguide system having two T-symmetric active variants concerning the transverse gain-loss profiles. We specifically reveal a comprehensive all-optical scheme to achieve correlative nonreciprocal light dynamics by using the reverse chirality of two dynamically encircled conjugate EPs in the presence of local nonlinearity. A specific nonreciprocal correlation between two designed T-symmetric waveguide variants is established in terms of their unidirectional transfer of light with a precise selection of modes. Here, the unconventional reverse chiral properties of two conjugate EPs allow the nonreciprocal transmission of two selective modes in the opposite directions of the underlying waveguide variants. An explicit dependence of the nonlinearity level on a significant enhancement of the nonreciprocity in terms of an isolation ratio is explored by investigating the effects of both local Kerr-type and saturable nonlinearities (considered separately). The physical insights and implications of harnessing the properties of conjugate EPs in nonlinear optical systems can enable the growth and development of a versatile platform for building nonreciprocal components and devices.
Arnab Laha, Adam Miranowicz, R. K. Varshney, Somnath Ghosh
2023-08-25T19:27:27Z
http://arxiv.org/abs/2308.13643v1
# Correlated Nonreciprocity around Conjugate Exceptional Points ###### Abstract The occurrence of exceptional points (EPs) is a fascinating non-Hermitian feature of open systems. A level-repulsion phenomenon between two complex states of an open system can be realized by positioning an EP and its time-reversal (\(\mathcal{T}\)) conjugate pair in the underlying parameter space. Here, we report the fascinating nonreciprocal response of such two conjugate EPs by using a dual-mode planar waveguide system having two \(\mathcal{T}\)-symmetric active variants concerning the transverse gain-loss profiles. We specifically reveal a comprehensive all-optical scheme to achieve correlative nonreciprocal light dynamics by using the reverse chirality of two dynamically encircled conjugate EPs in the presence of local nonlinearity. A specific nonreciprocal correlation between two designed \(\mathcal{T}\)-symmetric waveguide variants is established in terms of their unidirectional transfer of light with a precise selection of modes. Here, the unconventional reverse chiral properties of two conjugate EPs allow the nonreciprocal transmission of two selective modes in the opposite directions of the underlying waveguide variants. An explicit dependence of the nonlinearity level on a significant enhancement of the nonreciprocity in terms of an isolation ratio is explored by investigating the effects of both local Kerr-type and saturable nonlinearities (considered separately). The physical insights and implications of harnessing the properties of conjugate EPs in nonlinear optical systems can enable the growth and development of a versatile platform for building nonreciprocal components and devices. ## I Introduction The synergy of non-Hermitian quantum physics and photonics has been revealing a novel and promising direction for building a range of photonics components and devices [1]. An extensive study on the perturbation theory in quantum mechanics once revealed the occurrence of exceptional point (EP) singularities as an explicit mathematical feature of non-Hermitian or open systems [2]. EPs usually appear as topological defects in the system's parameter space, affecting the eigenspace dimensionality, which results in the simultaneous coalescence of at least two coupled eigenvalues and the associated eigenstates [2; 3; 4; 5; 6]. The parity-time (\(\mathcal{PT}\))-symmetric systems (a special class of non-Hermitian system with real eigenvalues) [7; 8] encounter an EP at a spontaneous transition from real (exact-\(\mathcal{PT}\)-phase) to complex (broken-\(\mathcal{PT}\)-phase) eigenvalues [9; 10; 11; 12]. Recently, the engineering of ubiquitous non-Hermitian components (e.g. loss and gain) in photonic systems has revealed such EP-like mathematical objects as a powerful tool to manipulate and detect the energy-states of light [11; 12; 13; 14; 15; 16; 17]. A controlled variation of the system's parameters in the vicinity of EPs can immensely boost a versatile range of quantum-photonic technologies in the context of, e.g., asymmetric energy transfer [18], programmable state-switching [19; 20], phonon lasing [21], coherent perfect absorption [22], slow-light engineering [23], enhanced energy harvesting [24], parametric instability [25] and highly-precise sensing [26; 27]. The concept of the occurrence of conjugate EPs has recently been introduced based on the complex parameter dependence of a non-Hermitian Hamiltonian [28]. 
This can be described by considering a generic two-level (without loss of generality for higher-order situations) non-Hermitian Hamiltonian \(\mathcal{H}(\lambda)\), which depends on a complex parameter \(\lambda=\lambda^{\text{R}}+i\lambda^{\text{I}}\). The associated eigenvalues \(\mathcal{E}_{1,2}(\lambda)\) and the eigenvectors \(\Psi_{1,2}(\lambda)\) would be analytical functions in the complex-\(\lambda\) plane except at a singularity \(\lambda=\lambda_{s}\), known as an EP. Concerning the imaginary part of the dependent parameter \(\lambda\) (i.e., \(\lambda^{\text{I}}\)), the considerations of \(\lambda^{\text{I}}<0\) and \(\lambda^{\text{I}}>0\) ideally define two complementary variants of \(\mathcal{H}(\lambda)\). These two complementary systems can be correlated based on time-reversal (\(\mathcal{T}\))-symmetry. Here, the two variants of \(\mathcal{H}(\lambda)\) under \(\mathcal{T}\)-symmetry separately host two EPs in the complex \(\lambda\)-plane at \(\lambda_{s}=\lambda_{s}^{\text{R}}+i\lambda_{s}^{\text{I}}\) and \(\lambda_{s}^{\ast}=\lambda_{s}^{\text{R}}-i\lambda_{s}^{\text{I}}\) (say, an EP and its conjugate EP*, respectively), which are related by complex conjugation. Two such correlated EPs in two \(\mathcal{T}\)-symmetric complementary systems can be called conjugate EPs. Unconventional light-guidance mechanisms based on the chirality of EPs have been studied extensively, where a sufficiently slow length-dependent gain-loss dynamics along a closed 2D loop around an EP can steer the adiabatic and nonadiabatic conversions of modes [29; 30]. Here, even though the adiabaticity is maintained in the sense of the exchange of eigenvalues for a quasistatic gain-loss variation [31], the associated eigenmodes fail to meet adiabaticity while propagating along the length, which results in the conversion of all the modes into a particular dominating mode, depending on the device chirality (i.e., the direction of light propagation) [32; 33; 34; 35; 36]. Such a chirality-based asymmetric transfer of modes has recently been explored to reveal a distinct reverse-chiral behavior of a pair of conjugate EPs, while dynamically encircling them in two \(\mathcal{T}\)-symmetric active variants of a waveguide-based optical system [28]. Moreover, the reciprocity of such a chiral light-guidance process can be broken by introducing nonreciprocal elements, where the occurrence of an EP can considerably enhance nonreciprocity [37; 38]. Nonreciprocal devices, such as isolators and circulators, allow only one-way light transmission with an asymmetric scattering matrix, which is indispensable to minimize unwanted back-reflection and multi-path interference in photonic circuits [39]. However, the common magneto-optical approaches (such as a Faraday rotator), mainly applied in bulky free-space devices, are usually inefficient in enabling sufficient nonreciprocity for photonic circuits. Hence, there are growing demands to achieve high nonreciprocity at the chip scale, where the chiral response of an EP in nonlinear media can play a crucial role in meeting such demands. Recently, an EP-induced mode-selective isolation scheme has been revealed, where local nonlinearity has served as an efficient tool to enable all-optical nonreciprocity without using any magneto-optical effect [36]. In this context, the chiral response of two conjugate EPs in nonlinear media could have immense potential in developing correlative nonreciprocal devices with highly precise mode manipulation.
Moreover, the recently developed non-Hermitian formalism of Liouvillian super operators [40; 41] can also be exploited for the quantum implementation of our waveguide-based classical analysis to explore the correlated features of conjugate quantum EPs. In this article, we comprehensively report the correlated nonreciprocal response of two \(\mathcal{T}\)-symmetric active variants of a gain-loss assisted dual-mode planar waveguide, operating near two conjugate EPs. Here, all-optical nonreciprocity is achieved with the introduction of local nonlinearity. We investigate the hosting of conjugate EPs in complementary gain-loss parameter planes based on Riemann surface connections associated with two quasi-guided modes. Besides establishing the reverse-chiral response concerning the asymmetric mode conversion process driven by dynamical parametric variation in the vicinity of two conjugate EPs, we exclusively investigate the asymmetric nonreciprocal waveguide mechanism in the context of all-photonic isolation through two \(\mathcal{T}\)-symmetric waveguide variants. Here, a correlation in the nonreciprocal transmission of selective modes with an enhanced isolation ratio (say, IR) through two complementary waveguides is established. Moreover, a comparative study on the individual effect of local Kerr-type nonlinearity and saturable nonlinearity is reported by showing the possibility of enhancing the IR significantly. ## II Results and discussion ### Designing two time-symmetric active waveguide variants We design a 2D planar step-index optical waveguide having the geometrical dimensions \(w=20\lambda/\pi\) (width) and \(l=l_{m}\times 10^{3}\) (length) with \(l_{m}=7.5\lambda/\pi\) (i.e., both the dimensions are considered in the unit of wavelength \(\lambda\)), where we set \(\lambda=2\pi\) corresponding to a normalized wavenumber \(k=1\) (in a dimensionless unit). The designed waveguide with a glass based core (\(n_{\text{co}}=1.5\)), surrounded by a silica based cladding (\(n_{\text{clad}}=1.46\)), is distributed in the \(xz\)-plane, where \(x\in[-w/2,w/2]\) and \(z\in[0,l]\) are the transverse and propagation axes, respectively. The real (background) refractive index profile is considered as \[\text{Re}[n(x)]=\left\{\begin{array}{ll}n_{\text{co}}&:-w/6\leq x\leq w/6,\\ n_{\text{clad}}&:w/6\leq|x|\leq w/2.\end{array}\right. \tag{1}\] Based on the chosen dimensional parameters and \(\text{Re}n(x)\)-profile, the designed waveguide supports only two scalar modes: the fundamental mode \(\Psi_{\text{F}}\) and the first higher order mode \(\Psi_{\text{H}}\) (scalar modal analysis is valid in the presence of a small index difference between the core and cladding; \(\Delta n=0.04\)). Now, we enable non-Hermiticity via the introduction of an unbalanced gain-loss profile [i.e., the imaginary part of \(n(x)\)] in the designed passive waveguide, which results in the coupling between two quasiguided modes \(\Psi_{\text{F}}\) and \(\Psi_{\text{H}}\). We can control such coupling with the modulation of a gain-loss profile in a 2D parameter space characterized by the gain-loss coefficient \(\gamma\) and a loss-to-gain ratio \(\tau\). Using this waveguide framework, we consider two complementary active variants connected via \(\mathcal{T}\)-symmetric \(\text{Im}[n(x)]\) profiles given by \[\text{Im}[n(x)]=\left\{\begin{array}{ll}-i\gamma&\quad+i\gamma\quad:-w/6 \leq x\leq 0,\\ +i\tau\gamma&\quad-i\tau\gamma\quad:0\leq x\leq w/6,\\ +i\gamma&\quad-i\gamma\quad:w/6\leq|x|\leq w/2.\end{array}\right. 
\tag{2}\] Such two \(\mathcal{T}\)-symmetric waveguide variants, say \(\text{WG}_{\text{A}}\) and \(\text{WG}_{\text{T}}\) are shown in Fig. 1(a). Figure 1(b) shows the profile of \(\text{Re}[n(x)]\) [dotted black line; given by (1)] of the background framework along with the normalized intensity profile of two supported modes \(\Psi_{\text{F}}\) and \(\Psi_{\text{H}}\), whereas Fig. 1(c) Figure 1: **(a)** Schematic design of \(\text{WG}_{\text{A}}\) and \(\text{WG}_{\text{T}}\) (\(\mathcal{T}\)-symmetric) based on the framework of a gain-loss assisted planar waveguide. Two arrows indicate their opposite propagation directions. Circular plus and minus signs in different segments are associated with the positive (loss) and negative (gain) imaginary indices as in (2). **(b)** Transverse background refractive index profile, i.e., \(\text{Re}[n(x)]\), (dotted black line; corresponding to the left vertical axis) along with normalized intensity profiles of two supported modes \(\Psi_{\text{F}}\) and \(\Psi_{\text{H}}\) (corresponding to the right vertical axis). **(c)** Transverse gain-loss distributions, i.e., \(\text{Im}[n(x)]\), for two \(\mathcal{T}\)-symmetric variants \(\text{WG}_{\text{A}}\) (solid green line) and \(\text{WG}_{\text{T}}\) (solid black line) for \(\gamma=0.01\) and \(\tau=2\). shows the profiles of \(\text{Im}[n(x)]\) of two active variants \(\text{WG}_{\text{A}}\) and \(\text{WG}_{\text{T}}\) (represented by green and black lines, respectively). As per the constraints of \(\mathcal{T}\)-symmetry, i.e., \(\mathcal{T}:\{i,t,x\}\rightarrow\{-i,-t,x\}\) (\(i\) is the imaginary quantity; \(t\) and \(x\) are the time and space coordinates, respectively), \(\text{WG}_{\text{A}}\) and \(\text{WG}_{\text{T}}\) host exactly two complex conjugate profiles of \(n(x)\) with respect to the transverse axis [as can be understood from (2) and Fig. 1(c)]. Here, we have to consider two opposite propagation directions for \(\text{WG}_{\text{A}}\) and \(\text{WG}_{\text{T}}\) to maintain \(\mathcal{T}\)-symmetric equivalence based on the quantum-optical analogy \(t\equiv z\). ### Riemann surface connections: Hosting conjugate EPs To host the pair of conjugate EPs, we study the interaction between two coupled eigenvalues associated with \(\Psi_{\text{F}}\) and \(\Psi_{\text{H}}\), while varying the control parameters \(\gamma\) and \(\tau\), simultaneously, within chosen ranges. In this context, an analytical treatment toward hosting conjugate EPs using a non-Hermitian Hamiltonian, which is analogous to our proposed waveguide-based system, is discussed in detail in an appendix. Here, the complex propagation constants (\(\beta\)-values), i.e., \(\beta_{\text{F}}\) and \(\beta_{\text{H}}\) (associated with \(\Psi_{\text{F}}\) and \(\Psi_{\text{H}}\), respectively) are considered as the system eigenvalues, which are calculated by computing the solutions of the 1D scalar wave equation \(\left[\partial_{x}^{2}+k^{2}n^{2}(x)-\beta^{2}\right]\psi(x)=0\). We identify the connections between the Riemann sheets associated with coupled \(\beta_{\text{F}}\) and \(\beta_{\text{H}}\) in Fig. 2(a) [with the distributions of \(\text{Re}(\beta)\) and \(\text{Im}(\beta)\) as shown in Figs. 2(a.1) and 2(a.2)], where the formation of a pair of conjugate EPs is clearly evident. Dotted red and blue curves show the trajectories of \(\beta_{\text{F}}\) and \(\beta_{\text{H}}\) concerning a continuous variation of \(\gamma\), when we particularly choose \(\tau=3.1607\). 
Here, we can observe a simultaneous bifurcation and a coalescence of the associated \(\text{Re}(\beta)\) and \(\text{Im}(\beta)\) values at \(\gamma=-8.1\times 10^{-3}\), as in Figs. 2(a.1) and 2(a.2), respectively. On the contrary, a simultaneous coalescence and bifurcation of the associated \(\text{Re}(\beta)\) and \(\text{Im}(\beta)\) values can be observed at \(\gamma=8.1\times 10^{-3}\). Hence, the two different circumstances corresponding to \(\gamma<0\) and \(\gamma>0\) for a specific \(\tau\) refer to perfectly complex-conjugate situations (as the parameters \(\gamma\) and \(\tau\) are associated with \(\text{Im}[n(x)]\), i.e., gain-loss), which can ideally be observed in the two active variants \(\text{WG}_{\text{A}}\) and \(\text{WG}_{\text{T}}\). The associated characteristics of \(\beta_{\text{F}}\) and \(\beta_{\text{H}}\) refer to the encounter of two conjugate EPs at \((\pm 8.1\times 10^{-3},3.1607)\) (say, an EP and its conjugate EP* for \(\text{WG}_{\text{A}}\) and \(\text{WG}_{\text{T}}\), respectively) in the respective \((\gamma,\tau)\)-planes. Topological dissimilarities in ARC-type interactions between \(\beta_{\text{F}}\) and \(\beta_{\text{H}}\) can clearly be observed alongside these conjugate EPs. The coalescence of the eigenmodes (\(\Psi_{\text{F}}\) and \(\Psi_{\text{H}}\)) at both conjugate EPs can be understood from the variation of \(\langle\Psi_{\text{F}}|\Psi_{\text{H}}\rangle\) with \(\gamma\) at a fixed \(\tau=3.1607\), where \(\langle\Psi_{\text{F}}|\Psi_{\text{H}}\rangle=1\) only at the EP and EP*, as shown in Fig. 2(b). The effect of parametric encirclement of the embedded conjugate EPs in terms of chiral branch-point features is investigated in Fig. 2(c). Here, we consider two parametric loops in the 2D \((\gamma,\tau)\)-plane according to the equations \[\gamma(\varphi)=\gamma_{c}\sin\left(\frac{\varphi}{2}\right)\quad\text{and}\quad\tau(\varphi)=\tau_{c}+r\sin(\varphi), \tag{3}\] which lead to a closed and simultaneous variation of gain and loss around the EP and EP*. Figure 2: **(a)** Connections between the Riemann surfaces associated with \(\beta_{\text{F}}\) and \(\beta_{\text{H}}\), while varying the control parameters \(\gamma\) and \(\tau\), simultaneously. (a.1) and (a.2) show the distributions of \(\text{Re}[\beta]\) and \(\text{Im}[\beta]\), respectively. Dotted red and blue curves represent the trajectories of \(\beta_{\text{F}}\) and \(\beta_{\text{H}}\) for a chosen \(\tau=3.1607\), which reveal the encounter of two conjugate EPs based on the coalescence and bifurcations in \(\text{Re}[\beta]\) and \(\text{Im}[\beta]\) at \(\gamma=\pm 8.1\times 10^{-3}\). Dotted blue squares separate the regions for \(\text{WG}_{\text{A}}\) and \(\text{WG}_{\text{T}}\). **(b)** Variation of \(\langle\Psi_{\text{F}}|\Psi_{\text{H}}\rangle\) with respect to \(\gamma\) (when \(\tau=3.1607\)), which shows the coalescence of \(\Psi_{\text{F}}\) and \(\Psi_{\text{H}}\) via \(\langle\Psi_{\text{F}}|\Psi_{\text{H}}\rangle=1\) at both EP and EP*. **(c)** Parametric encirclement of two conjugate EPs in the \((\gamma,\tau)\)-plane following (3) (shown in the ground surfaces) and the associated transfer process of \(\beta_{\text{F}}\) and \(\beta_{\text{H}}\) from their respective surfaces.
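As a concrete illustration of this parametric loop, the short Python sketch below (our own illustration, not code accompanying the paper; the variable names and grid size are assumptions) evaluates Eq. (3) for one encirclement and checks the enclosure condition \(|\gamma_{c}|>|\gamma_{\text{EP}}|\) quoted in the text.

```python
# Minimal sketch (not the authors' code) of the encirclement loop of Eq. (3).
import numpy as np

gamma_c, tau_c, r = 1.5e-2, 3.1607, 0.3   # characteristic loop parameters from the text
gamma_EP = 8.1e-3                         # gamma-value at the EP location (tau = 3.1607)

phi = np.linspace(0.0, 2.0 * np.pi, 1001) # slowly varied angle, phi: 0 -> 2*pi
gamma = gamma_c * np.sin(phi / 2.0)       # Eq. (3): gain-loss coefficient
tau = tau_c + r * np.sin(phi)             # Eq. (3): loss-to-gain ratio

# The loop encloses the EP at (gamma_EP, tau_c) only if |gamma_c| > |gamma_EP|;
# choosing gamma_c < 0 traces the mirror loop around the conjugate EP* instead.
assert abs(gamma_c) > abs(gamma_EP)
```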
A slow variation of \(\varphi\in[0,2\pi]\) governs the stroboscopic encirclements based on the characteristic parameters \(\gamma_{c}\), \(\tau_{c}\), and \(r\in(0,1]\), where the conjugate EPs would be inside the parametric loop only for \(|\gamma_{c}|>|\gamma_{\text{EP}}|\) (\(\gamma_{\text{EP}}=8.1\times 10^{-3}\); the \(\gamma\)-value at the location of the EP). Here, the variations \(\varphi:0\to 2\pi\) and \(\varphi:2\pi\to 0\) enable a clockwise (CW) and a counter-clockwise (CCW) gain-loss variation around the EP for \(\gamma_{c}>0\), and vice-versa around the EP* for \(\gamma_{c}<0\). Two such parametric loops are shown in the ground surfaces of both Figs. 2(c.1) and 2(c.2) (for \(r=0.3\), \(\gamma_{c}=\pm 1.5\times 10^{-2}\), and \(\tau_{c}=3.1607\)), where the associated trajectories of the coupled \(\beta_{\text{F}}\) and \(\beta_{\text{H}}\) are shown on their respective Riemann surfaces. Here, we observe that \(\beta_{\text{F}}\) and \(\beta_{\text{H}}\) swap their identities from their respective surfaces [concerning both Re(\(\beta\)) and Im(\(\beta\)), as can be seen in Figs. 2(c.1) and 2(c.2), respectively], and exchange their initial positions upon the completion of the encirclement schemes. Such switching between the complex \(\beta_{\text{F}}\) and \(\beta_{\text{H}}\) around both the EP and EP* justifies their branch-point behavior. ### Dynamically encircled conjugate EPs: Asymmetric transfer of modes in a linear medium Here, we consider the length dependence (analogous to the time dependence) on the encirclements of the conjugate EPs to study the correlative propagation dynamics of light (modes) through the two \(\mathcal{T}\)-symmetric waveguide variants. Figure 3(a) shows the chosen parametric loops for WG\({}_{\text{A}}\) and WG\({}_{\text{T}}\) [to encircle the EP and EP*; exactly the same loops, as can be seen in the ground surfaces of Fig. 2(c)]. We map the associated gain-loss distribution along the length (\(z\)-axis) of the respective waveguides. Here, the reversal of the time axis (\(t\to-t\)) under the constraint of \(\mathcal{T}\)-symmetry allows us to consider the mapping obligatorily in opposite directions (i.e., \(z\to-z\) as \(t\equiv z\)) for WG\({}_{\text{A}}\) and WG\({}_{\text{T}}\). Hence, we distribute the gain-loss profile from \(z=0\) to \(z=l\) based on the encirclement of the EP (EP*) governed by \(\varphi:0\to 2\pi\) (\(\varphi:2\pi\to 0\)) for WG\({}_{\text{A}}\) (WG\({}_{\text{T}}\)). Such a \(z\)-dependent gain-loss distribution can be implemented by reconsidering (3) as a function of \(z\) as \[\gamma(z)=\gamma_{c}\sin\left(\frac{\pi z}{l}\right)\quad\text{and}\quad\tau(z)=\tau_{c}+r\sin\left(\frac{2\pi z}{l}\right). \tag{4}\] Figure 3(b) shows two complex conjugate 2D Im(\(n\))-profiles [governed by (4)] to encircle the EP and EP* dynamically. Here, the CW and CCW directions of encirclement are realized through one complete pass of light in the forward direction (\(z:0\to l\)) and the backward direction (\(z:l\to 0\)), respectively, for WG\({}_{\text{A}}\), and vice-versa for WG\({}_{\text{T}}\). Now, we implement the scalar beam propagation method to investigate the individual light transmission through WG\({}_{\text{A}}\) and WG\({}_{\text{T}}\).
Under the paraxial and adiabatic approximation (the latter corresponding to a sufficiently slow variation of gain-loss along the \(z\)-direction), we implement the scalar beam propagation equation, i.e., \[-2ik\partial_{z}\Psi(x,z)=\left[\partial_{x}^{2}+k^{2}\Delta n^{2}(x,z)\right]\Psi(x,z), \tag{5}\] associated with both \(\Psi_{\text{F}}\) and \(\Psi_{\text{H}}\) with extremely fine split-step computation [with \(\Delta n^{2}(x,z)\equiv n^{2}(x,z)-n_{\text{clad}}^{2}\)]. Figure 4 shows the resultant propagation characteristics, while considering the dynamical encirclements around the EP and EP* in WG\({}_{\text{A}}\) and WG\({}_{\text{T}}\), respectively. Here, we initially verify the linear response (i.e., without any nonlinearity) in the context of an asymmetric transfer between the modes, which occurs due to the failure of the adiabatic approximation led by a dynamically encircled EP, despite the associated omnipresent \(\beta\)-switching process. Here, the EP (or EP*) itself acts as a source of chirality, which mainly steers the response of the underlying system in the context of a direction-dependent transfer of modes. Such unconventional modal dynamics can be observed for both WG\({}_{\text{A}}\) and WG\({}_{\text{T}}\), as shown in Figs. 4(a) and 4(b), respectively. To enable a dynamical encirclement of the EP in the CW direction, we consider the propagation of light from \(z=0\) to \(z=l\) (forward direction) in WG\({}_{\text{A}}\). We can observe the corresponding dynamics of \(\Psi_{\text{F}}\) and \(\Psi_{\text{H}}\) in the upper panel of Fig. 4(a), where \(\Psi_{\text{F}}\) is converted into \(\Psi_{\text{H}}\), following the adiabatic expectation. However, \(\Psi_{\text{H}}\) violates the system adiabaticity, i.e., it becomes restructured and remains \(\Psi_{\text{H}}\). Thus, a light signal launched at \(z=0\) of WG\({}_{\text{A}}\) is converted into a dominating \(\Psi_{\text{H}}\) at \(z=l\). The lower panel of Fig. 4(a) shows the modal transitions, while considering light propagation in the backward direction (\(z:l\to 0\); associated with the CCW encirclement process). Here, \(\Psi_{\text{F}}\) dominates at the output \(z=0\) with the asymmetric conversions \(\{\Psi_{\text{F}},\Psi_{\text{H}}\}\to\Psi_{\text{F}}\), where only \(\Psi_{\text{H}}\) maintains the adiabatic expectations (unlike the case for the CW encirclement process). Thus, during the dynamical encirclement of an EP, the system partially maintains the adiabaticity, which however turns into a fascinating chiral light dynamics, where irrespective of the excited modes at the input, the device delivers two different dominating modes in the opposite directions. Figure 3: **(a)** Parametric loops to encircle an EP and its conjugate EP* in the \((\gamma,\tau)\)-plane [following (3)]. **(b)** Associated dynamical variation of the gain-loss profiles, i.e., two complex conjugate active potentials (separated via a transparent plane), experienced by the two \(\mathcal{T}\)-symmetric waveguide variants WG\({}_{\text{A}}\) and WG\({}_{\text{T}}\). Such a violation of the system adiabaticity around an EP can be predicted with the associated nonadiabatic correction terms (\(\mathbb{N}_{\mathrm{F}\rightarrow\mathrm{H}}\) and \(\mathbb{N}_{\mathrm{H}\rightarrow\mathrm{F}}\) for the adiabatic expectations \(\Psi_{\mathrm{F}}\rightarrow\Psi_{\mathrm{H}}\) and \(\Psi_{\mathrm{H}}\rightarrow\Psi_{\mathrm{F}}\), respectively) from the adiabatic theorem [29].
These corrections mainly rely on the accumulated relative-gain (\(\Delta\gamma_{\mathrm{F,H}}^{\mathrm{ad}}\)) factor during the transition of modes as (generalized with a quantum-optical analogy under the operating condition) \[\mathbb{N}_{\mathrm{F}\{\mathrm{H}\}\rightarrow\mathrm{H}\{\mathrm{F}\}} \propto-\{+\}\exp\int_{0}^{l}\Delta\gamma_{\mathrm{F,H}}^{\mathrm{ad}}(\gamma, \tau)dz. \tag{6}\] Here, \(\Delta\gamma_{\mathrm{F,H}}^{\mathrm{ad}}\) can be estimated from the relative difference between the average loss (\(\gamma^{\mathrm{m}}\)) accured by the individual modes. The adiabatic trajectories of \(\mathrm{Im}(\beta)\)-values [as shown in Fig. 2(c.2)] for \(\Psi_{\mathrm{F}}\) and \(\Psi_{\mathrm{H}}\) gives the associated \(\gamma^{\mathrm{m}}\) with \(\oint\{\mathrm{Im}(\beta)/2\pi\}d\varphi\). Here, the variant \(\mathrm{WG_{A}}\) operating with a dynamically encircled EP gives \(\Delta\gamma_{\mathrm{F,H}}^{\mathrm{ad}}>0\) for the CW direction, whereas \(\Delta\gamma_{\mathrm{F,H}}^{\mathrm{ad}}<0\) for the CCW direction. These particular relations result in the domination of the \(\mathbb{N}\)-factor associated with the amplifying exponent of \(\Delta\gamma_{\mathrm{F,H}}^{\mathrm{ad}}\) over the overall adiabatic expectations, whereas cooperation of the \(\mathbb{N}\)-factor corresponding to the decaying exponent of \(\Delta\gamma_{\mathrm{F,H}}^{\mathrm{ad}}\) with the adiabatic expectations. Hence, the domination of \(\mathbb{N}_{\mathrm{H}\rightarrow\mathrm{F}}\) in the forward direction yields the nonadiabatic transition of \(\Psi_{\mathrm{H}}(\rightarrow\ \Psi_{\mathrm{H}})\), whereas the cooperation of \(\Psi_{\mathrm{F}\rightarrow\mathrm{H}}\) supports the adiabatic conversion of \(\Psi_{\mathrm{F}}(\rightarrow\ \Psi_{\mathrm{H}})\). On the other hand, the domination of \(\mathbb{N}_{\mathrm{F}\rightarrow\mathrm{H}}\) in the backward direction yields the nonadiabatic transition of \(\Psi_{\mathrm{F}}(\rightarrow\Psi_{\mathrm{F}})\), whereas the cooperation of \(\mathbb{N}_{\mathrm{H}\rightarrow\mathrm{F}}\) supports the adiabatic conversion of \(\Psi_{\mathrm{H}}(\rightarrow\Psi_{\mathrm{F}})\). The detailed analytical predictions completely support our numerical beam-propagation results for \(\mathrm{WG_{A}}\), as shown in Fig. 4(a). From the dependence of the relative-gain factor \(\Delta\gamma_{\mathrm{F,H}}^{\mathrm{ad}}\) on the EP-induced asymmetric mode conversions, one can generically conclude that the mode transiting with a lower average loss (\(\gamma^{\mathrm{m}}\)) follows the adiabatic rules, whereas its coupled counterpart evolves nonadiabatically. Now, if we consider the dynamical encirclement around EP*, then the concerned waveguide variant \(\mathrm{WG_{T}}\) exhibits reverse-chiral dynamics compared to the chiral behavior of \(\mathrm{WG_{A}}\), as can be seen in Fig. 4(b). During the encirclement in the CW directions, \(\Psi_{\mathrm{F}}\) and \(\Psi_{\mathrm{H}}\) transmit along the backward direction (\(z:l\to 0\)) with \(\Delta\gamma_{\mathrm{F,H}}^{\mathrm{ad}}<0\), which allows the nonadiabatic transfer of \(\Psi_{\mathrm{F}}\) and the adiabatic transfer of \(\Psi_{\mathrm{H}}\) with the asymmetric conversions \(\{\Psi_{\mathrm{F}},\Psi_{\mathrm{H}}\}\rightarrow\Psi_{\mathrm{F}}\) at \(z=0\) [as shown in the upper panel of Fig. 4(b)]. In this case, \(\Psi_{\mathrm{H}}\) evolves with a lower \(\gamma^{m}\) and maintains the adiabatic expectations. 
On the contrary, the modal transmissions in the forward direction (\(z:0\to l\)) of \(\mathrm{WG_{T}}\) with a positive relative-gain factor (\(\Delta\gamma_{\mathrm{F,H}}^{\mathrm{ad}}>0\)) yield the delivery of the dominating \(\Psi_{\mathrm{H}}\) with the asymmetric conversions \(\{\Psi_{\mathrm{F}},\Psi_{\mathrm{H}}\}\rightarrow\Psi_{\mathrm{H}}\), while considering the encirclement in the CCW direction [as shown in the lower panel of Fig. 4(b)]. Here, \(\Psi_{\mathrm{F}}\) evolves with a lower \(\gamma^{m}\) and maintains the adiabatic expectations. Hence, based on the constraints of \(\mathcal{T}\)-symmetry, we exclusively demonstrate interesting opposite chiral responses of two active variants designed on the same background waveguide system, where the opposite encirclement directions around the EP and EP* result in the delivery of modes of the same order. ### Effect of nonlinearity on the asymmetric state-transfer process: Enabling nonreciprocity around two conjugate EPs The direction-dependent light transmission process with the asymmetric transfer of modes in a dual-mode waveguide system (as described for the two variants) can be mimicked by a scattering matrix (\(S\)-matrix) given by \[[\Psi_{\mathrm{op}}^{m}]_{4\times 1}=[S_{mn}]_{4\times 4}\left[\Psi_{\mathrm{in}}^{n}\right]_{4\times 1}. \tag{7}\] Equation (7) describes the operation of an analogous four-port device via the \(S\)-matrix formalism, as shown in Fig. 5. Here, the elements of \([S]\) can be calculated as \(S_{mn}=\left<\Psi_{\mathrm{in}}^{n}|\Psi_{\mathrm{op}}^{m}\right>\) with \(\{m,n\}\in\{1,2,3,4\}\). We can safely consider the top-left and bottom-right blocks of \([S_{mn}]\) as \(2\times 2\) null matrices in order to neglect the possible reflections from the same port of the designed waveguide. Here, the forward (\(T_{\text{F}}\)) and backward (\(T_{\text{B}}\)) transmissions can be estimated with \(T_{\text{F}}=|\max\left(\text{B}_{bl}\right)|^{2}\) and \(T_{\text{B}}=|\max\left(\text{B}_{tr}\right)|^{2}\) (where \(\text{B}_{bl}\) and \(\text{B}_{tr}\) represent the bottom-left and top-right blocks, respectively). Now, it can be understood that if \([S]\) defines the scattering matrix for \(\text{WG}_{\text{A}}\), then the analytical transpose of \([S]\) would be associated with \(\text{WG}_{\text{T}}\) (the numerical values of the matrix elements would indeed be different for \(\text{WG}_{\text{A}}\) and \(\text{WG}_{\text{T}}\) due to the presence of two opposite gain-loss profiles). In the linear regime, the chirality-driven asymmetric mode conversion process in a particular waveguide variant follows Lorentz's reciprocity with a symmetric \(S\)-matrix, i.e., \([S]=[S]^{T}\). Now, the direction dependence of the system's response is of special interest for achieving one-way transmission, which is compulsory for designing nonreciprocal devices. However, the presence of nonreciprocity obligatorily indicates the breakdown of Lorentz's reciprocity with an asymmetric \(S\)-matrix, i.e., \([S]\neq[S]^{T}\)[42]. In this context, unidirectional transmission with a symmetric scattering matrix was reported in a photonic circuit [43], where isolation is not realizable [44; 42]. In order to break the reciprocity in EP-induced light dynamics, we exploit the effect of local nonlinearity. We schematically represent our proposed scheme in Fig. 5 with an operational analogy between one of the designed dual-mode waveguide variants (hosting a dynamically encircled EP or EP*) with nonlinearity and a 4-port isolator device.
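To make this operational picture concrete, the following Python sketch (a minimal illustration of our own, not the authors' implementation; the grid resolution, the simplified handling of the Im[\(n\)] sign pattern, the reverse-sampling model of the backward pass, and the helper names are all assumptions) outlines how one full pass of the split-step propagation of Eq. (5), with the \(z\)-dependent gain-loss mapping of Eq. (4), could be combined with overlap-based \(S\)-matrix elements of the form in Eq. (7) to test whether \([S]=[S]^{T}\) in the linear regime.

```python
# Minimal linear-regime sketch (not the authors' code): split-step propagation
# of Eq. (5) through one waveguide variant, plus overlap-based S-matrix elements.
import numpy as np

k = 1.0                              # normalized wavenumber (lambda = 2*pi)
w, l = 40.0, 15.0e3                  # width 20*lambda/pi and length 7.5e3*lambda/pi
n_co, n_clad = 1.5, 1.46
gamma_c, tau_c, r = 1.5e-2, 3.1607, 0.3

nx, nz = 512, 20000                  # illustrative grid; a converged run may need finer steps
x = np.linspace(-w / 2, w / 2, nx)
dz = l / nz
kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=x[1] - x[0])

def n_xz(z, variant=+1):
    """Complex n(x, z): Eq. (1) core/cladding plus the gain-loss of Eqs. (2) and (4).
    variant=+1 mimics WG_A; variant=-1 its T-symmetric partner WG_T."""
    g = gamma_c * np.sin(np.pi * z / l)               # Eq. (4)
    t = tau_c + r * np.sin(2.0 * np.pi * z / l)       # Eq. (4)
    n = np.where(np.abs(x) <= w / 6, n_co, n_clad).astype(complex)
    sign = np.where((x >= -w / 6) & (x < 0), -1.0,
                    np.where((x >= 0) & (x <= w / 6), t, 1.0))
    return n + variant * 1j * g * sign                # Eq. (2)-type Im[n(x)] pattern

def propagate(psi0, variant=+1, forward=True):
    """One pass of the scalar split-step integration of Eq. (5)."""
    psi = psi0.astype(complex)
    for step in range(nz):
        z = (step + 0.5) * dz
        if not forward:
            z = l - z                                 # backward pass samples the profile in reverse
        dn2 = n_xz(z, variant) ** 2 - n_clad ** 2
        psi = np.fft.ifft(np.exp(-1j * kx**2 * dz / (2 * k)) * np.fft.fft(psi))
        psi *= np.exp(1j * k * dn2 * dz / 2)          # index and gain-loss phase step
    return psi

def s_element(psi_in_n, psi_out_m, dx=x[1] - x[0]):
    """Overlap-based element S_mn = <Psi_in^n | Psi_out^m> (modes assumed normalized)."""
    return np.trapz(np.conj(psi_in_n) * psi_out_m, dx=dx)

# With the two guided modes Psi_F and Psi_H (from a separate mode solver, not shown),
# launching each mode in each direction fills the bottom-left and top-right blocks of
# [S]; in the linear regime one expects the symmetric, reciprocal result [S] = [S]^T.
```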
Here, we quantify a particular nonlinearity level as \(N_{l}=(\Delta n_{\text{NL}}/\Delta n)\times 100\%\) (with \(\Delta n=0.04\); for the designed passive waveguide), where the variation of \(\Delta n_{\text{NL}}\) depends on the modal field-intensities (\(I\equiv|\Psi|^{2}\)) for a particular nonlinear coefficient (\(n_{2}\)). Here, we initially study the effect of Kerr-type nonlinearities to achieve an adequate level of nonreciprocity for both waveguide variants (in terms of an isolation ratio, say, IR) with proper optimization. Then, we also explore the effect of saturable nonlinearities to enhance the IR further and perform a quantitative comparison. We consider the forms of two different types of nonlinearities, viz., \[\text{Kerr-type nonlinearity:}\quad\Delta n_{\text{NL}}(x,z)=n_{2}I, \tag{8a}\] \[\text{Saturable nonlinearity:}\quad\Delta n_{\text{NL}}(x,z)= \frac{n_{2}I}{(1+I/I_{s})}. \tag{8b}\] \(I_{s}\) in (8b) defines a saturating intensity. Here, the IRs under different operating conditions are calculated from the forward and backward transmission coefficients (i.e., \(T_{\text{F}}\) and \(T_{\text{B}}\)) with \[\text{IR}=10\log_{10}\left[\frac{\max\left\{T_{\text{F}},T_{\text{B}}\right\} }{\min\left\{T_{\text{F}},T_{\text{B}}\right\}}\right]. \tag{9}\] The operations of two \(\mathcal{T}\)-symmetric waveguide variants in terms of nonlinearity-induced optical isolations are illustrated in Figs. 6 and 7. Figure 6 shows prototype isolation schemes for both the variants with the one-way transfer of selective modes at an optimized nonlinearity level (\(N_{l}\)), where Fig. 7 illustrates how we optimize such a specific \(N_{l}\). In Fig. 6(a), we show the one-way propagation of modes through \(\text{WG}_{\text{A}}\) (which hosts a dynamically encircled EP) with Kerr-type nonlinearity in the spatial index distribution. We judiciously optimize the nonlinearity level at \(N_{l}=6.75\%\). Here, we observe that the waveguide is active for the encirclement in the CW direction, where both quasiguided modes are fully transmitted from \(z=0\) to \(z=l\). Moreover, the adiabatic and nonadiabatic relations [from (6)] for this specific encirclement condition allow the asymmetric conversions \(\{\Psi_{\text{F}},\Psi_{\text{H}}\}\rightarrow\Psi_{\text{H}}\), which results in delivery of the dominating \(\Psi_{\text{H}}\) at \(z=l\) of \(\text{WG}_{\text{A}}\). Meanwhile, for the consideration of the dynamical EP encirclement in the CCW direction, we also observe that almost no light is transmitted from \(z=l\) to \(z=0\), which is shown in Fig. 6(b) via a relative intensity difference. Figure 6(b) schematically shows the prototype isolation scheme achieved using \(\text{WG}_{\text{A}}\) along with one of the output (O/P) field intensities at both \(z=l\) and \(z=0\) (i.e., for the forward and backward transmissions, respectively, with the inputs as already shown in Fig. 1(b); as both modes are converted into a particular dominating mode for propagation in a specific direction, we obtain almost similar output intensities at a particular output-end, and hence we show only one of two output field-intensities for each of the propagation directions). Here, the dotted blue curve represents the normalized output field intensity (\(\Psi_{\text{H}}\)) at \(z=l\), while considering the forward propagation (\(z:0\to l\)). 
However, during the backward propagation (\(z:l\to 0\)), the dotted red curve shows the output field intensity (\(\Psi_{\text{H}}\)) at \(z=0\), which is relative with respect to the output at \(z=l\) obtained during the forward propagation (the relative output is considered to indicate the intensity difference while considering the propagation in two opposite directions). Here, output intensity at \(z=0\) decreases almost 98.6% (during the backward propagation) in comparison to the output at \(z=l\) (during the forward propagation). Two outputs at \(z=0\) and \(z=l\) perfectly imply the prototype isolation scheme of \(\text{WG}_{\text{A}}\), which passes \(\Psi_{\text{H}}\) in the forward direction and blocks \(\Psi_{\text{F}}\) in the backward direction. We calculate the IR using (9), where a maximum of the IR of 18.6 dB is achieved. In Fig. 6(c), we investigate a prototype isolation scheme based on \(\text{WG}_{\text{T}}\) in the presence of Kerr-type nonlinearity with Figure 5: A schematic analogy between a 4-port optical device and our designed dual-mode waveguide operating with a dynamically encircled EP or EP* in the presence of nonlinearity. This analogy is essentially drawn to construct a \(4\times 4\)\(S\)-matrix [given by (7)] considering all the possible transmissions. \(N_{l}=6.75\%\) (same as considered for WG\({}_{\text{A}}\)). Here, we observe that the waveguide is surprisingly active in the backward direction (\(z:l\to 0\)) which is associated with the CW dynamical encirclement scheme around the EP*. The waveguide passes the dominating \(\Psi_{\text{F}}\) [based on the corresponding nonadiabatic correction factors from (6)] with the asymmetric conversions \(\{\Psi_{\text{F}},\Psi_{\text{H}}\}\rightarrow\Psi_{\text{F}}\), as can be observed via the associated beam propagation results. The light becomes blocked in the forward direction, which is associated with the CCW dynamical encirclement process around the EP*. The prototype isolation scheme along with the output (O/P) field intensities for WG\({}_{\text{T}}\) are shown in Fig. 6(d). From the normalized output intensity at \(z=0\) (\(\Psi_{\text{F}}\); during the backward propagation) and relative output intensity at \(z=l\) (\(\Psi_{\text{H}}\); during the forward propagation; relative with respect to the output \(\Psi_{\text{F}}\) at \(z=0\)), it is clearly evident that the intensity decreases \(\approx 93.3\%\) during the forward propagation through WG\({}_{\text{T}}\). Hence, WG\({}_{\text{T}}\) allows \(\Psi_{\text{F}}\) to pass in the backward direction, however, blocks \(\Psi_{\text{H}}\) in the forward direction, where we achieve a maximum of the IR of 11.75 dB. Hence, at a particular nonlinearity level, both \(\mathcal{T}\)-symmetric waveguide variants behave as isolators, which allow the nonreciprocal transmission of two different modes in opposite directions. For a particular variant, a breakdown of the inversion symmetry in the length-depended gain-loss variation occurs in two opposite directions, where the tailored nonlinearity induces nonreciprocity. Hence, the intensity of the incoming waves becomes completely attenuated in a particular direction, despite being transmitted fully in the opposite direction. Here, a correlation between the nonreciprocal transmissions to two different allowed modes in two waveguide variants is dictated by the nonadiabatic corrections around EP and EP*. 
Such an exclusive nonreciprocal transmission of selective modes mainly relies on the interplay between dynamical gain-loss variation (active components) and the tailored local nonlinearity in the spatial index distribution (passive components). During the propagation of light around an EP in the presence of nonlinearity, the complex \(\beta\)-values of the supported modes become affected significantly. The EP-induced interactions are led by the variations of both Re(\(\beta\)) (modal confinement) and Im(\(\beta\)) (decay rates), where the incorporation of nonlinearity directly influences Re(\(\beta\)). Now, the mode confinement factors enhance with an increasing amount of nonlinearity, which results in the simultaneous reduction of the associated decay rates. Hence, the onset of nonlinearity modifies the gain-loss parameter space concerning the location of the EP (or EP*), and accordingly the relative-gain factor [\(\Delta\gamma_{\text{FH}}^{\text{ad}}\); associated with (6)] between the interacting modes is affected significantly during the evolution of modes following the dynamical EP-encirclement scheme. Based on such an interplay, the relative intensity difference at two opposite output-ends varies for different nonlinearity amounts, which can be understood from the variation of the IR concerning the nonlinearity level (\(N_{l}\)), as shown in Fig. 7(a). The IR initially increases with an increasing Kerr-type nonlinearity level and takes a maximum value of 18.6 dB for WG\({}_{\text{A}}\) and 11.75 dB for WG\({}_{\text{T}}\) at a certain threshold nonlinearity-level of \(N_{l}=6.75\%\) [as shown in Fig. 7(a)]. Here, the difference in the IR for two waveguide variants at a particular \(N_{l}\) can be observed, which occurs due to a different gain-loss profile (exactly opposite; based on \(\mathcal{T}\)-symmetry) as can be seen in Fig. 3(b). The operation of WG\({}_{\text{A}}\) is mainly dominated by loss, whereas WG\({}_{\text{T}}\) operates with an overall higher amount of gain. Hence, WG\({}_{\text{A}}\) is able to induce a comparably higher output intensity difference for the light propagation in two opposite directions. An additional gain-amplification in WG\({}_{\text{T}}\) might reduce such intensity difference between two outputs, which results in achieving a lower IR for WG\({}_{\text{T}}\) in comparison to WG\({}_{\text{A}}\) at a particular \(N_{l}\). However, we interestingly observe that both waveguide variants achieve their highest IR at a specific \(N_{l}=6.75\%\), which affirms their operational correlation based on the chiral behavior of two conjugate EPs. It is further noticeable that while increasing \(N_{l}\) more than 6.75%, the Figure 6: **(a)** Nonreciprocal transition of modes with the asymmetric conversions \(\{\Psi_{\text{F}},\Psi_{\text{H}}\}\rightarrow\Psi_{\text{H}}\) through WG\({}_{\text{A}}\) which is active in the forward direction (\(z:0\to l\); associated with the CW dynamical EP encirclement process). **(b)** Schematic nonreciprocal response of WG\({}_{\text{A}}\) (which allows light to pass in the forward direction, however, blocks in the backward directions) along with outputs (O/P) at \(z=l\) (for the allowed path \(z:0\to l\)) and \(z=0\) (for the blocked path \(z:l\to 0\)). 
**(c)** Nonreciprocal transition of modes with the asymmetric conversions \(\{\Psi_{\text{F}},\Psi_{\text{H}}\}\rightarrow\Psi_{\text{F}}\) through WG\({}_{\text{T}}\), which is active in the backward direction (\(z:l\to 0\); associated with the CW dynamical encirclement of the EP*). **(d)** Schematic nonreciprocal response of WG\({}_{\text{T}}\) (which allows light to pass in the backward direction but blocks it in the forward direction) along with outputs (O/P) at \(z=0\) (for the allowed path \(z:l\to 0\)) and \(z=l\) (for the blocked path \(z:0\to l\)). For both WG\({}_{\text{A}}\) and WG\({}_{\text{T}}\), the self-normalized outputs are shown for their active directions, whereas relative outputs are shown for their blocked directions. IR decreases gradually [as shown in Fig. 7(a)] for both variants. Such a decrease of the IR after a certain threshold is mainly due to the abrupt effect of nonlinearity on the encirclement loop, which affects the location of the EP significantly (i.e., the EP might come closer to the boundary of the modified loop in the parameter space due to a higher amount of nonlinearity). Here, judicious care should be taken to optimize \(N_{l}\), as a higher nonlinearity beyond a certain limit may exclude the EP from the parametric loop, for which the overall observation might be intangible. However, there is sufficient scope of scalability to investigate the device operation for different amounts of nonlinearity within a broad range. The characteristic curve shown in Fig. 7(a) defines the process to choose an optimized nonlinearity amount, from which we set \(N_{l}=6.75\%\) to obtain the beam propagation results in Fig. 6. Then, instead of local Kerr-type nonlinearity, we introduce saturable nonlinearity in the spatial index distribution to investigate the nonreciprocal transmission through \(\text{WG}_{\text{A}}\) and \(\text{WG}_{\text{T}}\). The saturable nonlinearity is considered with a chosen saturating intensity [\(I_{s}\); as per (8b)] based on the materials of the background waveguide. For Kerr-type nonlinearity, the nonlinear interaction of light in the optical medium gradually increases with an increasing signal intensity, which might induce instability in the output signals beyond a certain limit. In this context, the consideration of the saturable intensity in the associated nonlinear interactions can potentially stabilize the output signals, where we can observe a higher intensity difference at the two output ends for propagation in the opposite directions. Hence, we optimize the saturable nonlinearity level at 7.5% from the characteristic dependence of the IR on \(N_{l}\), as shown in Fig. 7(b). Here, we observe a qualitatively similar nonreciprocal response of both \(\text{WG}_{\text{A}}\) and \(\text{WG}_{\text{T}}\), as we have seen for the choice of Kerr-type nonlinearity in Fig. 6. \(\text{WG}_{\text{A}}\) allows the nonreciprocal transmission of \(\Psi_{\text{H}}\) in the forward direction, whereas it isolates \(\Psi_{\text{F}}\) in the backward direction. The field intensity decreases by \(\approx 99.96\%\) during the backward propagation, where we achieve a maximum IR of 34.6 dB. On the other hand, we achieve a maximum IR of 18.6 dB for \(\text{WG}_{\text{T}}\), which allows \(\Psi_{\text{F}}\) to transmit along the backward direction and isolates \(\Psi_{\text{H}}\) in the forward direction with an almost 98.7% reduction of the signal intensity.
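For reference, the small Python sketch below (our own illustration, not the authors' code) collects the local index changes of Eqs. (8a)-(8b) and the isolation ratio of Eq. (9); the transmissions fed to it are simply the percentage intensity drops quoted above, and `n2` and `I_sat` are placeholders rather than values from the paper.

```python
# Sketch (not the authors' code) of Eqs. (8a)-(8b) and the isolation ratio of Eq. (9).
import numpy as np

def dn_kerr(I, n2):
    return n2 * I                            # Eq. (8a): Kerr-type index change

def dn_saturable(I, n2, I_sat):
    return n2 * I / (1.0 + I / I_sat)        # Eq. (8b): saturable index change

def isolation_ratio_dB(T_fwd, T_bwd):
    """Eq. (9): isolation ratio from the two transmission coefficients."""
    return 10.0 * np.log10(max(T_fwd, T_bwd) / min(T_fwd, T_bwd))

# The quoted intensity drops in the blocked direction reproduce the reported IRs:
print(isolation_ratio_dB(1.0, 1.0 - 0.986))   # ~18.5 dB, matching the 18.6 dB for WG_A (Kerr)
print(isolation_ratio_dB(1.0, 1.0 - 0.9996))  # ~34 dB, close to the reported 34.6 dB (saturable;
                                              # the quoted percentage is rounded)
```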
## III Summary In summary, a significant stride in understanding and utilizing the concept of conjugate EPs has been made in the context of a correlative nonreciprocal light transmission process. Besides hosting the dynamical encirclements of two conjugate EPs in two \(\mathcal{T}\)-symmetric variants using the framework of a planar gain-loss assisted waveguide, a comprehensive all-optical platform has been developed based on the onset of nonlinearity along with the encirclement scheme to achieve nonreciprocal transmission of selective modes with a specific chiral correlation. We have established that two \(\mathcal{T}\)-symmetric waveguide variants, hosting two conjugate EPs, are characterized by their ability to behave as isolators enabling nonreciprocal transmission of selective modes in opposite directions. Here, they allow active transmission of two different dominant modes in opposite directions, whereas block light from passing in their respective reverse directions. We have investigated the effect of both Kerr-type and saturable nonlinearities on achieving nonreciprocity, where we have observed that the introduction of saturable nonlinearity can induce a comparably higher nonreciprocal effect. We have achieved a huge isolation ratio, even up to 34.6 dB under a specific operating condition. The intricate interplay of the dynamical gain-loss parameter space around the conjugate EPs in the presence of different types of nonlinearities has been discussed in detail to understand such unconventional chiral light dynamics. The insights and implementations of our approach harnessing the fascinating features of conjugate EPs in nonlinear optical systems would unlock a new avenue with exciting possibilities for boosting the development of various nonreciprocal components, such as optical isolators and circulators, for integrated (on-chip) photonic applications in next-generation communication networks and quantum information processing. ###### Acknowledgements. A.L. and A.M. acknowledge the financial support from the Maestro Grant (No. DEC-2019/34/A/ST2/00081) of the Polish National Science Center (NCN). A.L. also acknowledges the support from the National Postdoctoral Fellowship Grant (No. PDF/2021/001322) of the Science and Engineering Research Board (SERB), India. ## Appendix The occurrence of conjugate EPs in any physical system can be understood as a mathematical problem by constructing an analogous \(2\times 2\) non-Hermitian Hamiltonian given by \[\mathcal{H}(\lambda)=\mathcal{H}_{0}+\lambda\mathcal{H}_{p}=\left(\begin{array} []{cc}\beta_{1}&0\\ 0&\beta_{2}\end{array}\right)+\lambda\left(\begin{array}{cc}\kappa_{1}& \gamma_{1}\\ \gamma_{2}&\kappa_{2}\end{array}\right).\] (A.1) Figure 7: Dependence of the isolation ratio (IR) on the local nonlinearity level (\(N_{l}\)), while considering **(a)** Kerr-type nonlinearity and **(b)** saturable nonlinearity, separately. Red square and blue circular markers show such a variation of IR for \(\text{WG}_{\text{A}}\) and \(\text{WG}_{\text{T}}\), respectively. The green arrows in both (a) and (b) indicate the largest values of the IRs, as achieved at the same \(N_{l}\). Here, a passive Hamiltonian \(\mathcal{H}_{0}\), consisting of two passive eigenvalues \(\beta_{j}\left(j=1,2\right)\), is subjected by a perturbation \(\mathcal{H}_{p}\), which is dependent on some coupling parameters \(\kappa_{j}\) and \(\gamma_{j}\left(j=1,2\right)\) with a perturbation strength \(\lambda\). 
A trivial case can be considered with real-valued \(\beta_{j}\), \(\kappa_{j}\), and \(\lambda\) along with \(\gamma_{j}=0\), for which the effective Hamiltonian \(\mathcal{H}\) behaves as a Hermitian system and possesses two distinct eigenvalues: \(\mathcal{E}_{j}(\lambda)=\beta_{j}+\lambda\,\kappa_{j}\left(j=1,2\right)\). Here, a conventional degeneracy occurs at \(\lambda=-(\beta_{1}-\beta_{2})/(\kappa_{1}-\kappa_{2})\). Now, to ensure the system to be non-Hermitian, all the elements in \(\mathcal{H}_{p}\) might be chosen as non-zero with a complex \(\lambda\), where \([H_{0},H_{p}]\neq 0\). The operation of our designed dual mode waveguide based optical system can be understood based on such a non-Hermitian Hamiltonian. Here, \(\beta_{1}\) and \(\beta_{2}\) represent two real propagation constants. The complex \(\lambda\) defines the overall non-Hermitian elements based on gain-loss parameters \(\kappa_{j}\) and \(\gamma_{j}\left(j=1,2\right)\), where \(\kappa_{j}\) can be appeared as individual modal decay rates, whereas \(\gamma_{j}\) can be considered as introduced gain-loss elements. The eigenvalues of \(\mathcal{H}\) can generically be written as, \[\mathcal{E}_{1,2}(\lambda)=\frac{\beta_{1}+\beta_{2}+\lambda\left(\kappa_{1} +\kappa_{2}\right)}{2}\pm R; \tag{10}\] where, \[R= \left[\left(\frac{\beta_{1}-\beta_{2}}{2}\right)^{2}+\lambda^{2} \left\{\left(\frac{\kappa_{1}-\kappa_{2}}{2}\right)^{2}+\gamma_{1}\gamma_{2} \right\}\right.\] \[\left.+\frac{\lambda}{2}\left(\beta_{1}-\beta_{2}\right)\left( \kappa_{1}-\kappa_{2}\right)\right]^{1/2}. \tag{11}\] Owing to the coupling invoked by finite \(\gamma_{j}\left(j=1,2\right)\), two levels \(\mathcal{E}_{1}\) and \(\mathcal{E}_{2}\) exhibit avoided resonance crossing (ARC; i.e., two levels do not cross but avoid each other) type interactions with a continuous variation of \(\lambda\). While exhibiting ARCs, the two levels coalesce at two critical values of \(\lambda\), which represent a complex conjugate pair of EPs in the complex \(\lambda\)-plane. These two singularities can be obtained in the complex \(\lambda\)-plane by setting \(R=0\), which are given by \[\lambda_{s}^{\pm}=-\frac{\left(\beta_{1}-\beta_{2}\right)}{\left(\kappa_{1}- \kappa_{2}\right)\mp 2i\sqrt{\gamma_{1}\gamma_{2}}} \tag{12}\] The connection between two conjugate EPs can be understood by rewriting \(R\) in terms of \(\lambda_{s}^{+}\) and \(\lambda_{s}^{-}\) as \[R=\left[\left(\frac{\lambda-\lambda_{s}^{+}}{2}\right)\left\{\left(\frac{ \kappa_{1}-\kappa_{2}}{2}\right)^{2}+\gamma_{1}\gamma_{2}\right\}\left(\frac{ \lambda-\lambda_{s}^{-}}{2}\right)\right]^{1/2}. \tag{13}\] Hence, the coupled levels are specified by the value of \(\sqrt{\lambda-\lambda_{s}^{+}}\) and \(\sqrt{\lambda-\lambda_{s}^{-}}\) on two different Riemann surfaces. The critical eigenvalues at two conjugate EPs (i.e., at \(\lambda_{s}^{+}\) and \(\lambda_{s}^{-}\)) are given by \[\mathcal{E}_{s}(\lambda_{s}^{\pm})=\frac{\left(\kappa_{1}\beta_{2}-\kappa_{2 }\beta_{1}\right)\mp i\sqrt{\gamma_{1}\gamma_{2}}(\beta_{1}+\beta_{2})}{\left( \kappa_{1}-\kappa_{2}\right)\mp 2i\sqrt{\gamma_{1}\gamma_{2}}} \tag{14}\] Now, an EP is associated with the occurrence of only one independent eigenvector, unlike two orthogonal eigenvectors at a trivial Hermitian degeneracy. 
Thus, using the bi-orthogonal norm for a non-Hermitian Hamiltonian, the two right-hand eigenvectors at the two conjugate EPs (one for each of the EPs) can be written as (approximated up to a factor) \[\left|\Psi_{s}^{+}\right\rangle\propto\left(\begin{array}{c}\sqrt{\gamma_{1}}\\ -i\sqrt{\gamma_{2}}\end{array}\right), \tag{15a}\] \[\left|\Psi_{s}^{-}\right\rangle\propto\left(\begin{array}{c}\sqrt{\gamma_{1}}\\ +i\sqrt{\gamma_{2}}\end{array}\right), \tag{15b}\] with the associated left-hand eigenvectors \[\left\langle\widetilde{\Psi}_{s}^{+}\right|\propto\left(\begin{array}{cc}\sqrt{\gamma_{2}}&-i\sqrt{\gamma_{1}}\end{array}\right), \tag{16a}\] \[\left\langle\widetilde{\Psi}_{s}^{-}\right|\propto\left(\begin{array}{cc}\sqrt{\gamma_{2}}&+i\sqrt{\gamma_{1}}\end{array}\right). \tag{16b}\] From Eqs. (15) and (16), it is evident that \[\left\langle\widetilde{\Psi}_{s}^{+}\middle|\Psi_{s}^{+}\right\rangle=0\qquad\text{and}\qquad\left\langle\widetilde{\Psi}_{s}^{-}\middle|\Psi_{s}^{-}\right\rangle=0. \tag{17}\] These conditions are referred to as the self-orthogonality that holds at both conjugate EPs. The existence of only one self-orthogonal eigenvector reflects the fact that the Hamiltonian \(\mathcal{H}(\lambda)\) becomes non-diagonalizable at both \(\lambda=\lambda_{s}^{+}\) and \(\lambda=\lambda_{s}^{-}\), i.e., at the two conjugate EPs.
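As a numerical cross-check of this appendix, the short Python sketch below (our own illustration with arbitrarily chosen parameter values, not taken from the paper) builds the Hamiltonian \(\mathcal{H}(\lambda)=\mathcal{H}_{0}+\lambda\mathcal{H}_{p}\), places \(\lambda\) at the two values obtained from setting \(R=0\), and verifies both the eigenvalue coalescence and the self-orthogonality of the coalesced eigenvectors there.

```python
# Numerical sketch (illustrative parameters, not from the paper): locate the conjugate
# EPs of H(lambda) = H0 + lambda*Hp and verify eigenvalue coalescence and self-orthogonality.
import numpy as np

beta1, beta2 = 1.0, 1.2        # passive eigenvalues (illustrative values)
kappa1, kappa2 = 0.3, 0.1      # decay-rate-like coupling parameters (illustrative)
gamma1, gamma2 = 0.2, 0.2      # gain-loss coupling elements (illustrative)

H0 = np.diag([beta1, beta2]).astype(complex)
Hp = np.array([[kappa1, gamma1], [gamma2, kappa2]], dtype=complex)

# Conjugate EP locations lambda_s^+/- obtained by setting R = 0
root = 2j * np.sqrt(gamma1 * gamma2)
lam_s = [-(beta1 - beta2) / ((kappa1 - kappa2) - root),
         -(beta1 - beta2) / ((kappa1 - kappa2) + root)]

for lam in lam_s:
    H = H0 + lam * Hp
    E, V = np.linalg.eig(H)          # right eigenvectors (columns of V)
    El, W = np.linalg.eig(H.T)       # left eigenvectors (columns of W, bilinear convention)
    # At an EP the two eigenvalues (nearly) coincide, so the column pairing is immaterial.
    print("eigenvalue splitting :", abs(E[0] - E[1]))   # ~0 at each conjugate EP
    print("self-orthogonality   :", abs(W[:, 0] @ V[:, 0]))  # ~0 at each conjugate EP
```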
2301.10641
The upcoming spectroscopic powerhouses at the Isaac Newton Group of Telescopes
The Isaac Newton Group of Telescopes is completing a strategic change for the scientific use of its two telescopes, the 4.2-m William Herschel Telescope (WHT) and the 2.5-m Isaac Newton Telescope (INT). After more than 30 years operating as multi-purpose telescopes, the telescopes will soon complete their shift to nearly-single instrument operation dominated by large surveys. At the WHT, the WEAVE multi-fibre spectrograph is being commissioned in late 2022. Science surveys are expected to launch in 2023. 30% of the available time will be offered in open time. For the INT, construction of HARPS-3, a high-resolution ultra-stable spectrograph for extra-solar planet studies, is underway, with deployment planned for late 2024. The INT itself is being modernised and will operate as a robotic telescope. An average of 40% of the time will be offered as open time. The ING will maintain its student programme. Plans call for moving student work from the INT to the WHT once the INT starts operating robotically.
Marc Balcells
2023-01-25T15:24:10Z
http://arxiv.org/abs/2301.10641v1
# The upcoming spectroscopic powerhouses at the Isaac Newton Group of Telescopes

###### Abstract

The Isaac Newton Group of Telescopes is completing a strategic change for the scientific use of its two telescopes, the 4.2-m William Herschel Telescope (WHT) and the 2.5-m Isaac Newton Telescope (INT). After more than 30 years operating as multi-purpose telescopes, the telescopes will soon complete their shift to nearly-single instrument operation dominated by large surveys. At the WHT, the WEAVE multi-fibre spectrograph is being commissioned in late 2022. Science surveys are expected to launch in 2023. 30% of the available time will be offered in open time. For the INT, construction of HARPS-3, a high-resolution ultra-stable spectrograph for extra-solar planet studies, is underway, with deployment planned for late 2024. The INT itself is being modernised and will operate as a robotic telescope. An average of 40% of the time will be offered as open time. The ING will maintain its student programme. Plans call for moving student work from the INT to the WHT once the INT starts operating robotically.

Balcells, M.\({}^{1,2,3}\)

\({}^{1}\) Isaac Newton Group, 38700 Santa Cruz de La Palma, Spain
\({}^{2}\) Instituto de Astrofisica de Canarias, 38200 La Laguna, Tenerife, Spain
\({}^{3}\) Universidad de La Laguna, 38200 La Laguna, Tenerife, Spain

## 1 Introduction

Since the mid-1980s, the Isaac Newton Group1 (ING) has operated the 4.2-m William Herschel Telescope (WHT) and the 2.5-m Isaac Newton Telescope (INT) at the Observatorio Roque de los Muchachos on the Canarian island of La Palma. The telescopes have provided front-line multi-instrument observing capabilities to the ING astronomical communities of the UK, Spain and the Netherlands. Instrumentation comprised facility instruments (most recently ISIS, LIRIS, ACAM, PF, AF2, WYFFOS, INGRID, LDSS, NAOMI, OASIS, TAURUS-II, WFC, IDS), as well as powerful visiting instruments for science or for technology demonstration (AOLI, CANARY, DIPOL, GASP, GHaFaS, HiPERCAM, iQuEYE, LEXI, PAUCAM, PN.S, SPIFS, CIRPASS, CIRSI, ExPo, INTEGRAL, MAOJCam, PLANETPOL, PWFS, SAURON, ULTRACAM, Stereo-SCIDAR, Sodium Profiler).

In 2010, following a strategic review and community consultation2, and following the Astronet Roadmap for European Astronomy [5] and the European Telescope Strategy Review Committee 2010 report [15], ING started taking steps to allow the WHT to become a powerful spectroscopic survey facility. This transformation had been announced at previous SEA conferences[3].

Footnote 2: [https://www.ing.iac.es/about-ING/strategy/](https://www.ing.iac.es/about-ING/strategy/)

This paper provides a much-needed update, after the pandemic years, coinciding with the exciting times when the WHT is starting to collect sky light with its new instrumentation.

## 2 The new WHT

With a strategic view focused on science from massive spectroscopic surveys doable on a 4-m class telescope, there were clear design drivers for new instrumentation[1]. The WEAVE instrument (Fig. 1) is the result of that strategy. Three main elements comprise the new WHT:

**Prime-focus corrector.**: A new optical corrector delivers a corrected field of view (FOV) of 2 degree (40 cm) diameter, with good transmission from 360 nm to 1000 nm.

**Fibre positioner.**: A pick-and-place system, based on the 2dF system at the AAT, employs two robots to place up to 960 fibres on the focal plane.
In addition to the standard single-fibre multi-object mode, the instrument comprises 20 fully-deployable mini-integral-field units (mIFU) of 10 arcsec FOV, as well as a monolithic integral-field unit (LIFU) with field of view 90 arcsec. A total of 3243 fibres transmit the light from the MOS system, the mIFU system and the LIFU to the spectrograph.

Figure 1: The WHT prime focus equipped with the WEAVE fibre positioner. Photo credit: J. Méndez, ING.

**Spectrograph.**: A new two-arm spectrograph provides spectroscopy at resolving power of either 5,000 (LR) or 20,000 (HR). Wavelength coverage at LR is the entire optical range, 366 - 959 nm.

### WEAVE highlights

The WEAVE instrument has been described elsewhere [7][8][17][2]. The definitive description of the as-built instrument, at the time of integration at the telescope, pre-commissioning, is given in [12]. An updated description is maintained at the ING web site3.

Footnote 3: [https://www.ing.iac.es/astronomy/instruments/weave/weaveinst.html](https://www.ing.iac.es/astronomy/instruments/weave/weaveinst.html)

We direct the reader to the above references, and here just note the three input modes: the multi-object mode (MOS), the large-integral-field mode (LIFU) and the mini-integral-field mode (mIFU). We discuss WEAVE's main highlights in the global context of multi-object spectroscopy instruments. When compared to other MOS instruments on 4-m class telescopes, WEAVE shines in a number of aspects. The more salient of them are noted in Table 1:

\begin{table} \begin{tabular}{|l|l|} \hline Parameter & Interest \\ \hline MOS lowres \(R=5,000\) & \(\delta v\sim 3\,\mathrm{km}\,\mathrm{s}^{-1}\) @ \(V\sim 20\), match to Gaia \(v_{\mathrm{T}}\) \\ MOS highres \(R=20,000\) & Improved continuum for line strengths \\ MOS, mIFU FOV \(\sim 2\,\mathrm{deg}\) & High multiplex \\ LIFU highres \(R=10,000\) & Resolving vertical vel. disp. of galaxy disks \\ LIFU FOV \(\sim 90\,\mathrm{arcsec}\) & Evolution of PPAK \\ 20 mIFU, FOV \(\sim 11\,\mathrm{arcsec}\) & MANGA but on a 4-m, \(R=5,000\) \\ End-to-end pipeline & Science-ready data \\ Offered for surveys and open time & Accommodates both large and small projects \\ \hline \end{tabular} \end{table} Table 1: WEAVE highlights

* In its default, low-resolution mode, WEAVE's resolving power \(R=5000\) is high among high-multiplex MOS instruments built for galactic and extra-galactic science, and makes WEAVE ideal for Milky Way dynamics and archaeology, as it provides stellar radial velocities with accuracies similar to those of tangential velocity data from Gaia.
* In its high-resolution mode, WEAVE's resolving power \(R=20,000\), when combined with its high multiplex, represents a significant step forward for stellar line strength determinations, as continua adjacent to the lines are less affected by instrumental broadening.
* The MOS FOV, 2 degree diameter, is unique in its mIFU mode and in the high-resolution MOS.
* The LIFU high-resolution configuration, which delivers R=10,000 spectra, can resolve the vertical velocity structure of galaxy disks.
* The LIFU FOV, which was patterned after the PPAK LIFU[13], has 50% higher FOV, and higher wavelength coverage (\(3660-9590\)\(\AA\)), and represents a true next-generation large IFU for the study of extended sources.
* The mIFU FOV matches the smaller end of the distribution of FOV for the MANGA units[4].
With their wide wavelength coverage and with spectral resolutions 2.5 and 10 times higher than those of the SDSS-III spectrographs (in the low- and high-resolution WEAVE modes, respectively), the WEAVE mIFUs will be powerful tools for low-mass galaxies and for compact star-forming regions in our Galaxy.

* Data will be delivered fully reduced and containing a range of science-ready products.

### WEAVE surveys

Eight surveys (Table 2) have been approved for execution over 5 years of WEAVE operation; they will be allocated 70% of the available time on WEAVE. As of this writing, the most comprehensive description of the surveys is given in [12]. For updates and contact points for the surveys and for WEAVE science, the reader is directed to the WEAVE web site4.

Footnote 4: [https://ingconfluence.ing.iac.es/confluence/display/WEAVEDEV/WEAVE%3A+The+Science](https://ingconfluence.ing.iac.es/confluence/display/WEAVEDEV/WEAVE%3A+The+Science)

\begin{table} \begin{tabular}{|l l|l l|} \hline & Title & \multicolumn{2}{c|}{Title} \\ \hline GA & Galactic Archaeology & SCIP & Stellar, Circumstellar and Interstellar Physics \\ WC & Galaxy Clusters[6] & StePS & Stellar Populations At Intermediate Redshift[11] \\ WA & WEAVE Apertif [10] & WL & WEAVE LOFAR[16] \\ WQ & WEAVE QSO[14] & WD & WEAVE White Dwarfs \\ \hline \end{tabular} \end{table} Table 2: The eight WEAVE surveys

### Using the WHT: surveys and open time

ING is offering time on WEAVE for large surveys as well as through the ING national time-allocation committees. The International Time programme of the Canarian Observatories5 provides an additional means of obtaining up to 15 WEAVE nights per year in addition to time on any of the other ORM telescopes.

Footnote 5: [https://www.iac.es/en/observatorios-de-canarias/observing-time/observing-time/international-time-programme](https://www.iac.es/en/observatorios-de-canarias/observing-time/observing-time/international-time-programme)

WEAVE is due to start commissioning in the fall of 2022. We anticipate LIFU science verification observations in early 2023, and aim for starting science in the middle of 2023.

## 3 The third life of the Isaac Newton Telescope

ING is in the middle of transforming the 2.5-m Isaac Newton Telescope (INT) by installing HARPS3\({}^{6}\)[18], an enhanced version of the HARPS and HARPS-N spectrographs aimed at achieving 10 cm s\({}^{-1}\) radial velocity precision on nearby stars to search for Earth-like planets. HARPS3 differs from its predecessors by a stabilised beam feed and a polarimetric sub-unit[9], which will provide a powerful tool for characterising stellar activity. The Terra Hunting Experiment (THE) consortium, P.I. Didier Queloz, is building HARPS3 and making it available in exchange for \(\sim\)50% of every night over 10 years; the remaining time will be offered as open time through the usual national time allocation channels. The consortium is also modernising the INT, which will become a robotic telescope. This will be the third incarnation of the venerable INT, after its first installation in Herstmonceux in 1967 and its re-deployment in La Palma in 1982.

Footnote 6: [https://www.terrahunting.org/harps3.html](https://www.terrahunting.org/harps3.html)

With this transformation, we expect the INT will provide the UK, ES and NL astronomical communities with a much-needed tool for extra-solar planet science. Current plans call for the robotic INT to be commissioned during the summer of 2024, and for HARPS3 to start scientific operations before the end of 2024.
## 4 Opportunities for students

ING will continue to welcome 4 to 6 students for year-long stays at the ING. Building on the success of this highly demanded programme, we are hoping to increase the number of student positions in the near future.

When the INT closes down for refurbishment and becomes a robotic telescope, students will predominantly work at the WHT, partaking in WEAVE survey and open-time observations. This will open up opportunities for the students to become familiar with the execution of large projects and to develop expertise in WEAVE data.

## Acknowledgments

The Isaac Newton Group of Telescopes is operated on behalf of the UK Science and Technology Facilities Council (STFC), the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO), and the Instituto de Astrofisica de Canarias (IAC). WEAVE construction was funded through generous grants from STFC, NWO, the Spanish Science Ministry, the French CNRS, and the Italian INAF. Additional contributions were received from Konkoly Observatory in Hungary, INAOE in Mexico, and PI grants from Lund Observatory, IAP Potsdam, and MPIA Heidelberg. The HARPS3 instrument is being built by the Terra Hunting Experiment Consortium, led by the University of Cambridge Cavendish Laboratory.
2301.01994
Recurrent and (strongly) resolvable graphs
We develop a new approach to recurrence and the existence of non-constant harmonic functions on infinite weighted graphs. The approach is based on the capacity of subsets of metric boundaries with respect to intrinsic metrics. The main tool is a connection between polar sets in such boundaries and null sets of paths. This connection relies on suitably diverging functions of finite energy.
Daniel Lenz, Simon Puchert, Marcel Schmidt
2023-01-05T10:22:34Z
http://arxiv.org/abs/2301.01994v1
# Recurrent and (strongly) resolvable graphs ###### Abstract. We develop a new approach to recurrence and the existence of non-constant harmonic functions on infinite weighted graphs. The approach is based on the capacity of subsets of metric boundaries with respect to intrinsic metrics. The main tool is a connection between polar sets in such boundaries and null sets of paths. This connection relies on suitably diverging functions of finite energy. ## 1. Introduction Potential theory on infinite weighted graphs (sometimes called networks) studies the induced graph energy functional and derived quantities (e.g. harmonic functions, random walks, resistances, Laplacians,...). As already noted in the pioneering works by Yamasaki [27, 16, 28], so-called _null sets_ of infinite paths play an important role in this theory. For a comprehensive account of Yamasaki's work (and beyond) we refer to the book [22]. Another approach to potential theory on weighted graphs is via Dirichlet forms. The graph energy induces a Dirichlet form, which in turn leads to the notion of capacity of sets. In this language sets of capacity zero, so-called _polar sets_, are key to understanding properties of the Dirichlet form. In contrast to the situation on graphs, for general Dirichlet forms the concept of a path may be meaningless. Instead, for many applications it turned out fruitful to consider intrinsic metrics as geometric input, see e.g. [23, 24, 25, 3, 10]. In this paper we follow the intrinsic metric line of thinking. Our main observation relates polar sets in certain metric boundaries of graphs to null sets of paths, see Theorem 3.6. This sheds a new light on classical results formulated in terms of null sets of paths and allows us to prove new theorems. In this text we focus on consequences for recurrence and the existence of non-constant harmonic functions of finite energy: We prove a new characterization of recurrence in terms of the existence of intrinsic metrics with finite balls, see Theorem 4.2. This in turn leads to an alternative proof of a classical characterization of recurrence due to Yamasaki, see Corollary 4.3 and a characterization of recurrence in terms of metric boundaries being polar, see Corollary 4.4 and Theorem 4.5. We then turn to the problem of the existence of non-constant harmonic functions. Again, we tackle this problem by means of capacity on the boundary. Specifically, we introduce the notion of strong resolvability, which is essentially a path-free and capacity-based version of the notion of resolvability studied in [2]. We show that strongly resolvable transient graphs admit non-constant harmonic functions of finite energy, see Corollary 5.8 and Corollary 5.9. Since strong resolvability is stronger than resolvability (this is a consequence of our main observation mentioned previously), we also prove that locally finite planar graphs of bounded geometry, the main class of examples of resolvable graphs, are even strongly resolvable. This allows us to recover one of the main results from [2] that transient locally finite planar graphs of bounded geometry have non-constant harmonic functions of finite energy. Note that recently more precise descriptions of the space of harmonic functions of planar graphs were obtained, see [1, 5, 9], which are beyond the scope of our theory. Another consequence of planar graphs of bounded geometry being strongly resolvable is that they are never canonically compactifiable, see Theorem 6.4. 
The latter class of graphs was introduced and studied in [6]. Our results show that two basic issues in the theory of recurrence viz characterization of recurrence and existence of non-constant harmonic functions can naturally be understood in terms of capacities of (suitable) metric boundaries: Recurrence means that metric boundaries are negligible in the sense of having capacity zero whereas existence of non-constant harmonic functions is implied by some richness in the structure of such a boundary in the sense of the positive capacity of the whole boundary not being concentrated on a single point. Since our methods do not rely on paths but only on intrinsic metrics, they are not limited to locally finite graphs as is sometimes the case in the classical setting. Moreover, they can be adapted to more general Dirichlet spaces and even non-linear energies. Both directions will be investigated in upcoming works. Parts of this text are based on Simon Puchert's master's thesis. **Acknowledgments:** Partial support of DFG, in particular, within the Priority programme 'Geometry at infinity' is gratefully acknowledged. ## 2. Preliminaries In this section we introduce the notation and the objects that are used throughout the text. For \(a,b\in\mathbb{R}\) we let \(a\wedge b=\min\{a,b\}\) and \(a\lor b=\max\{a,b\}\). Moreover, \(a_{+}=a\lor 0\) and \(a_{-}=(-a)_{+}\). We extend this notation pointwise to real-valued functions. ### Graphs and Dirichlet energy Our study of graphs is based on an analytic tool given by the Dirichlet energy. A _graph_\(G=(X,b)\) consists of a nonempty countable set \(X\), whose elements are called _nodes_ or _vertices_, and a symmetric _edge weight function_\(b:X\times X\to[0,\infty)\), satisfying the following conditions: The edge weight \(b\) vanishes on the diagonal, i.e. \(b(x,x)=0\) for all \(x\in X\), and the _weighted vertex degree_ \[\deg(x):=\sum_{y\in X}b(x,y)\] must be finite for all \(x\in X\). If the weighted vertex degree is bounded, we say that the graph has _bounded geometry_. If the function \(b\) takes only values in \(\{0,1\}\), the graph \((X,b)\) is called _combinatorial_. Two vertices \(x,y\in X\) are said to be connected by the _edge_\((x,y)\) if \(b(x,y)>0\). In this case, we write \(x\sim y\). Note that since \(b\) is symmetric, we have \(x\sim y\) if and only if \(y\sim x\). The set of all (oriented) edges of \(G\) is denoted by \[E:=\{(x,y)\in X\times X\mid x\sim y\}.\] We say that a graph is _locally finite_ if for all \(x\in X\) the _set of its neighbors_\(\{y\in X\mid x\sim y\}\) is finite. A locally finite graph is called a _bounded valence graph_ if the cardinality of the set of neighbors of each vertex is bounded by a universal constant. A _path_ in \(G\) is a (finite or infinite) sequence \((x_{1},x_{2},\ldots)\) of nodes such that \(x_{i}\sim x_{i+1}\), for \(i=1,2,\ldots\). We say that two points \(x,y\in X\) are _connected_ if there is a finite path \((x=x_{1},\ldots,x_{n}=y)\). This defines an equivalence relation on the set of vertices and the resulting equivalence classes are called _connected components_. From now on we will **generally assume that the graph \(G\) is connected without mentioning this explicitly.** This is not a real restriction because all our considerations in this text can be reduced to connected components. 
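For readers who prefer something concrete, here is a minimal Python sketch of the combinatorial setup above: a weighted graph \((X,b)\), the weighted vertex degree, and a connected component found by following finite paths. Vertex names and weights are invented purely for illustration.

```python
from collections import defaultdict

# A toy weighted graph (X, b): b is symmetric, vanishes on the diagonal,
# and deg(x) = sum_y b(x, y) is finite for every vertex.
edges = [("a", "b", 1.0), ("b", "c", 2.0), ("c", "d", 0.5), ("d", "a", 1.5)]

b = defaultdict(float)
X = set()
for x, y, w in edges:
    b[(x, y)] = b[(y, x)] = w      # symmetry of the edge weight function
    X |= {x, y}

def deg(x):
    """Weighted vertex degree deg(x) = sum_y b(x, y)."""
    return sum(b[(x, y)] for y in X)

def connected_component(x0):
    """All vertices joined to x0 by a finite path (simple graph search)."""
    seen, stack = {x0}, [x0]
    while stack:
        x = stack.pop()
        for y in X:
            if b[(x, y)] > 0 and y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

print({x: deg(x) for x in sorted(X)})
print(connected_component("a"))
```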
We equip \(X\) with the discrete topology and write \(C(X)\) for all real-valued functions on \(X\) (the continuous functions on \(X\)) and \(C_{c}(X)\) for the finitely supported real-valued functions on \(X\) (the continuous functions of compact support). Any function \(m:X\to(0,\infty)\) induces a Radon measure of full support on all subsets of \(X\) via \[m(A):=\sum_{x\in A}m(x),\quad A\subseteq X.\] In what follows we do not distinguish between such measures and strictly positive functions and simply call them _measures_ on \(X\). Every weighted graph \(G=(X,b)\) gives rise to a quadratic form \(Q\colon C(X)\to[0,\infty]\) that assigns to any function \(f\colon X\to\mathbb{R}\) its _Dirichlet energy_ \[Q(f):=\frac{1}{2}\sum_{x,y\in X}b(x,y)|f(x)-f(y)|^{2}.\] The space of _functions of finite energy_ is \[\mathfrak{D}(G):=\{f\colon X\to\mathbb{R}\mid Q(f)<\infty\},\] on which \(Q\) acts as a bilinear form by polarization, namely \[Q(f,g)=\frac{1}{2}\sum_{x,y\in X}b(x,y)(f(x)-f(y))(g(x)-g(y)).\] Here we abuse notation so that \(Q(f)=Q(f,f)\) for every \(f\in\mathfrak{D}(G)\). The form \(Q\) has the following fundamental semi-continuity property, which is a direct consequence of Fatou's lemma. **Proposition 2.1** (Semicontinuity of \(Q\)).: _Let \((X,b)\) be a graph. Let \((f_{n})\) be a sequence of functions on \(X\) converging pointwise to the function \(f\). Then,_ \[Q(f)\leq\liminf_{n\to\infty}Q(f_{n}),\] _where the value \(\infty\) is allowed._ A map \(C\colon\mathbb{R}\to\mathbb{R}\) is called _contraction_ if \(|C(s)-C(t)|\leq|s-t|\) holds for all \(s,t\in\mathbb{R}\). The Dirichlet energy has the important property that it is reduced by contractions. Specifically, the following proposition is a direct consequence of the definition. **Proposition 2.2** (Fundamental contraction property).: _Let \((X,b)\) be a graph. For all \(f\in\mathfrak{D}(G)\) and all contractions \(C\colon\mathbb{R}\to\mathbb{R}\) we have \(C\circ f\in\mathfrak{D}(G)\) and_ \[Q(C\circ f)\leq Q(f).\] **Remark**.: Two useful families of contractions that will be of use later are the following: For \(a,b\in\mathbb{R}\) we define the _clamping function_\(C_{[a,b]}\colon\mathbb{R}\to\mathbb{R}\) by \[C_{[a,b]}(x):=(x\wedge b)\lor a\] and for \(c\in\mathbb{R}\) we define the _slicing function_\(S_{c}\colon\mathbb{R}\to\mathbb{R}\) by \[S_{c}(x):=(x-c)_{+}\wedge 1.\] For \(o\in X\) we define the (pseudo-)norm \(\left\lVert\cdot\right\rVert_{o}:\mathfrak{D}(G)\to[0,\infty)\) by \[\left\lVert f\right\rVert_{o}:=\sqrt{Q(f)+|f(o)|^{2}}.\] We denote the closure of the space of functions of compact support \(C_{c}(X)\) with respect to \(\left\lVert\cdot\right\rVert_{o}\) by \(\mathfrak{D}_{0}(G)\). The following well-known lemma shows that \(\left\lVert\cdot\right\rVert_{o}\) is indeed a norm and that the space \(\mathfrak{D}_{0}(G)\) does not depend on the choice of \(o\in X\), see e.g. [19, Proposition 1.6]. Its proof relies on the connectedness of \(G\), which we always assume in this text (see above). **Lemma 2.3**.: _Let \(G=(X,b)\) be a graph and let \(o\in X\)._ 1. \(\left\lVert\cdot\right\rVert_{o}\) _is a norm and_ \((\mathfrak{D}(G),\left\lVert\cdot\right\rVert_{o})\) _is a Banach space._ 2. \(f_{n}\to f\) _with respect to_ \(\left\lVert\cdot\right\rVert_{o}\) _implies_ \(f_{n}\to f\) _pointwise._ _._ 3. 
_For_ \(o^{\prime}\in X\) _the norms_ \(\left\lVert\cdot\right\rVert_{o}\) _and_ \(\left\lVert\cdot\right\rVert_{o^{\prime}}\) _are equivalent._ A measure \(m\) on \(X\) induces the Hilbert space \[\ell^{2}(X,m)=\{f\in C(X)\mid\sum_{x\in X}m(x)|f(x)|^{2}<\infty\}\] with inner product \[\langle f,g\rangle_{m}=\sum_{x\in X}m(x)f(x)g(x)\] and corresponding norm \(\left\lVert\cdot\right\rVert_{m}\). We denote by \(H^{1}(G,m):=\ell^{2}(X,m)\cap\mathfrak{D}(G)\) the corresponding _first-order Sobolev space_. Equipped with the inner product \[\langle f,g\rangle_{Q,m}:=\langle f,g\rangle_{m}+Q(f,g)\] it is a Hilbert space. We denote the associated norm by \(\left\lVert\cdot\right\rVert_{Q,m}\). For a proof of the completeness of this space we refer the reader to [19, Section 1.3], where the completeness of \(H^{1}(G,m)\) is discussed as closedness of the Dirichlet form \(Q^{(N)}\) (in the notation used there). ### Intrinsic metrics In this section we introduce the (pseudo)metrics relevant for our considerations and discuss their properties. A symmetric function \(\sigma:X\times X\to[0,\infty)\) is called a _pseudometric_ if it satisfies the triangle inequality, i.e. if for all \(x,y,z\in X\) it satisfies \[\sigma(x,y)\leq\sigma(x,z)+\sigma(z,y).\] For \(r\geq 0\) and \(x\in X\) we denote the corresponding ball of radius \(r\) around \(x\) by \[B_{r}^{\sigma}(x):=\{y\in X\mid\sigma(x,y)\leq r\}.\] For \(x\in X\) the distance \(\sigma_{U}\) from a nonempty subset \(U\subseteq X\) is defined by \[\sigma_{U}(x):=\sigma(x,U):=\inf_{y\in U}\sigma(x,y)\] and the _diameter_ of \(U\) with respect to \(\sigma\) is \[\operatorname{diam}_{\sigma}(U):=\sup_{x,y\in U}\sigma(x,y).\] A function \(f\) on \(X\) is called _Lipschitz-function_ with respect to the pseudometric \(\sigma\) if there exists a \(C>0\) with \(|f(x)-f(y)|\leq C\sigma(x,y)\) for all \(x,y\in X\). We then also say that \(f\) is a \(C\)-Lipschitz function. The set of all Lipschitz-functions with respect to \(\sigma\) is denoted by \(\operatorname{Lip}_{\sigma}(X)\). For a graph \(G=(X,b)\) a pseudometric \(\sigma\) is called _intrinsic_ with respect to the measure \(m\) if for all \(x\in X\) it satisfies \[\frac{1}{2}\sum_{y\in X}b(x,y)\sigma(x,y)^{2}\leq m(x).\] We write \(\mathfrak{M}(G)\) for the set of pseudometrics that are intrinsic with respect to a finite measure. Clearly, a pseudometric \(\sigma\) belongs to \(\mathfrak{M}(G)\) if and only if \[\sum_{x,y\in X}b(x,y)\sigma(x,y)^{2}<\infty\] holds. **Remark** (Background on intrinsic metrics).: Intrinsic metrics have long proven to be a useful tool in spectral geometry of manifolds and, more generally, for strongly local Dirichlet spaces, see e.g. Sturm's seminal work [23, 24]. For general Dirichlet spaces, including graphs, a systematic approach was developed in [3]. A key point in [3] is a Rademacher type theorem. In the context of graphs this theorem says that a pseudometric \(\sigma\) is intrinsic if and only if for all \(1\)-Lipschitz functions \(f\colon X\to\mathbb{R}\) with respect to \(\sigma\) we have \(|\nabla f|^{2}\leq 1\). Here, for \(f\in C(X)\) and \(x\in X\) the quantity \[|\nabla f|^{2}(x):=\frac{1}{2m(x)}\sum_{y\in X}b(x,y)(f(x)-f(y))^{2}\] can be interpreted as the square of the norm of the discrete gradient of \(f\) at \(x\) (with respect to the measure \(m\)). For graphs with measure \(m\) for which the scaled degree \(\deg/m\) is uniformly bounded the combinatorial metric is an intrinsic metric (up to a constant). 
For graphs with unbounded degree this is not the case anymore. For such graphs, intrinsic metrics (rather than the combinatorial metric) have turned out to be the right metrics for various questions, see e.g. the survey [10]. The present article can also be seen as a point in case. There are strong ties between functions of finite Dirichlet energy and intrinsic pseudometrics with respect to a finite measure. These will be of relevance for some of our theorems below. **Lemma 2.4** (From \(\mathfrak{M}(G)\) to \(\mathfrak{D}(G)\)).: _Let \(G=(X,b)\) be a graph. Let \(\sigma\) be an intrinsic pseudometric with respect to the finite measure \(m\). Let \(U\) be a subset of \(X\). Then, the following statements hold:_ 1. _Any function_ \(f\) _that is_ \(C\)_-Lipschitz with respect to_ \(\sigma\) _and constant on_ \(U\) _satisfies_ \[Q(f)\leq C^{2}\min\{m(X),2m(X\setminus U)\}.\] 2. _The inequality_ \[Q(\sigma_{U})\leq\min\{m(X),2m(X\setminus U)\}\] _is valid and, in particular,_ \(\sigma_{U}\) _belongs to_ \(\mathfrak{D}(G)\)_._ 3. _The inequality_ \[Q(f)\leq C^{2}m(X)\] _holds for any_ \(C\)_-Lipschitz function_ \(f\) _with respect to_ \(\sigma\)_. In particular, any Lipschitz function with respect to_ \(\sigma\) _belongs to_ \(\mathfrak{D}(G)\) Proof.: Clearly, both (b) and (c) are immediate consequences of (a). Thus, we only show (a). It suffices to consider the case \(C=1\) as for general \(C>0\) the function \(g=\frac{1}{C}f\) is \(1\)-Lipschitz with \(Q(g)=\frac{1}{C^{2}}Q(f)\). So, let \(f\) be a \(1\)-Lipschitz function vanishing on \(U\). The bound \(Q(f)\leq m(X)\) follows easily from the estimate \[Q(f)=\frac{1}{2}\sum_{x,y}b(x,y)(f(x)-f(y))^{2}\leq\sum_{x\in X}\left(\frac{1} {2}\sum_{y\in X}b(x,y)\sigma(x,y)^{2}\right)\] and the fact that \(\sigma\) is intrinsic with respect to \(m\). The other estimate can be shown as follows: Using * \(f(x)-f(y)=0\) for all \(x,y\in U\) and * \(|f(x)-f(y)|\leq\sigma(x,y)\) for all \(x,y\in X\) we infer \[Q(f) =\frac{1}{2}\sum_{x,y\in X}b(x,y)(f(x)-f(y))^{2}\] \[=\frac{1}{2}\sum_{(x,y)\in X^{2}\setminus U^{2}}b(x,y)(f(x)-f(y) )^{2}\] \[\leq\frac{1}{2}\sum_{(x,y)\in X^{2}\setminus U^{2}}b(x,y)\sigma( x,y)^{2}\] \[=\sum_{y\in X\setminus U}\left(\frac{1}{2}\sum_{x\in X}b(x,y) \sigma(x,y)^{2}\right)+\sum_{x\in X\setminus U}\left(\frac{1}{2}\sum_{y\in U }b(x,y)\sigma(x,y)^{2}\right)\] \[\leq 2m(X\setminus U).\qed\] **Remark**.: Clearly, the estimates given in the previous lemma trivially continue to hold if \(m\) is not a finite measure. **Lemma 2.5** (From to \(\mathfrak{D}(G)\) to \(\mathfrak{M}(G)\)).: _Let \(G=(X,b)\) be a graph. Then, for any function \(f\) of finite energy the function_ \[\sigma_{f}\colon X\times X\to[0,\infty),\quad\sigma_{f}(x,y):=|f(x)-f(y)|\] _is an intrinsic pseudometric with respect to the finite measure \(m_{f}\) that is given by_ \[m_{f}(x)=\frac{1}{2}\sum_{y\in X}b(x,y)\sigma_{f}(x,y)^{2}.\] _The function \(f\) is \(1\)-Lipschitz with respect to \(\sigma_{f}\) and \(m_{f}(X)=Q(f)\)._ Proof.: This is already shown in [6, Proposition 3.11]. By the preceding lemma any \(f\) of finite energy comes with a pseudometric \(\sigma_{f}\) that is intrinsic with respect to a finite measure. In general, this \(\sigma_{f}\) will not be a metric (as values of \(f\) in different points need not be distinct). However, this can easily be achieved by an arbitrarily small perturbation as the next proposition shows. **Proposition 2.6** (Small perturbation).: _Let \(G=(X,b)\) be a graph._ 1. 
_For any_ \(\varepsilon>0\) _there exist_ \(s_{x}>0\)_,_ \(x\in X\)_, such that any function_ \(g\colon X\to\mathbb{R}\) _with_ \(g(x)\in(-s_{x},s_{x})\) _for all_ \(x\in X\) _satisfies_ \(Q(g)<\varepsilon\)_._ 2. _For any_ \(f\in\mathfrak{D}(G)\) _and any_ \(\varepsilon>0\) _there exists a function_ \(f_{\varepsilon}\in\mathfrak{D}(G)\) _with_ \(f_{\varepsilon}(x)\neq f_{\varepsilon}(y)\) _for all_ \(x,y\in X\) _with_ \(x\neq y\) _and_ \[\sup_{x\in X}|f(x)-f_{\varepsilon}(x)|<\varepsilon\text{ and }|Q(f-f_{ \varepsilon})|<\varepsilon.\] Proof.: (a): We write \(1_{x}\) for the characteristic function of \(x\in X\). Then \(Q(1_{x})=\deg(x)<\infty\). For \(x\in X\) we choose \(s_{x}>0\) with \[\sum_{x\in X}s_{x}Q(1_{x})^{1/2}<\sqrt{\varepsilon}.\] Choose a sequence \((F_{n})\) of finite subsets of \(X\) with \(F_{n}\subseteq F_{n+1}\) and \(X=\bigcup_{n}F_{n}\). Then, any \(g\colon X\to\mathbb{R}\) with \(g(x)\in(-s_{x},s_{x})\) is the pointwise limit of the functions \(g_{n}:=\sum_{x\in F_{n}}g(x)1_{x}\). The pointwise lower semicontinuity of \(Q\) together with Cauchy-Schwarz inequality yield \[Q(g)\leq\liminf_{n\to\infty}Q(g_{n}) \leq\liminf_{n\to\infty}\sum_{x,y\in F_{n}}|g(x)||g(y)|Q(1_{x})^{ 1/2}Q(1_{y})^{1/2}\] \[\leq\left(\sum_{x\in X}s_{x}Q(1_{x})^{1/2}\right)^{2}<\varepsilon.\] (b): This follows from (a). Let \(\varepsilon>0\) be given and chose \(s_{x}\), \(x\in X\), according to (a). Without loss of generality we can assume \(s_{x}<\frac{\varepsilon}{2}\). Now let real numbers \(u_{x}\), \(x\in X\), be given such that \(u_{x}-u_{y}\) is irrational for any \(x\neq y\). Then, for any \(x\in X\) we can choose an \(t_{x}\in(-s_{x},s_{x})\) such that \(f(x)-u_{x}-t_{x}\) is rational. Then, \(f_{\varepsilon}\) with \[f_{\varepsilon}(x)=f(x)-t_{x}\] for all \(x\in X\) satisfies \(\sup_{x}|f(x)-f_{\varepsilon}(x)|\leq\sup_{x}s_{x}<\varepsilon\) as well as \(Q(f-f_{\varepsilon})<\varepsilon\). Moreover, the values of \(f_{\varepsilon}\) are pairwise different as for \(x\neq y\) we have \[(f(x)-t_{x})-(f(y)-t_{y})=(u_{x}-u_{y})+(f(x)-u_{x}-t_{x})-(f(y)-u_{y}-t_{y})\] can not vanish (as it is the sum of an irrational number and a rational number.) For us a special class of pseudometrics will be particularly useful. They will be introduced next. Given a symmetric function \(w\colon X\times X\to[0,\infty)\) and a (possibly infinite) path \(\gamma=(x_{1},x_{2},\ldots)\) in \(G\) we define the length of \(\gamma\) with respect to \(w\) by \[L_{w}(\gamma):=\sum_{i}w(x_{i},x_{i+1})\in[0,\infty].\] Since we always assume connectedness, this induces the _path pseudo-metric_\(d_{w}\) on \(X\) via \[d_{w}(x,y)=\inf\{L_{w}(\gamma)\mid\gamma\text{ a path from }x\text{ to }y\}.\] We say that \(\sigma\) is a _path pseudometric_ on \(X\) if \(\sigma=d_{w}\) for some symmetric function \(w\). A symmetric function \(w\colon X\times X\to[0,\infty)\) is called _edge weight_ if \(w(x,y)>0\) for all \((x,y)\in E\). A symmetric function \(w\) is called _adapted_ with respect to the graph \(G=(X,b)\) and the measure \(m\) if for all \(x\in X\) it satisfies \[\frac{1}{2}\sum_{y\in X}b(x,y)w(x,y)^{2}\leq m(x).\] The following lemma summarizes some elementary properties of path pseudometrics. **Lemma 2.7** (Path pseudometrics).: _Let \(G=(X,b)\) be a graph and let \(w\colon X\times X\to[0,\infty)\) be a symmetric function. Then \(d_{w}\) is a pseudometric that satisfies \(d_{w}(x,y)\leq w(x,y)\) if \(x\sim y\). Moreover, the following are satisfied._ 1. _If_ \(w\) _is a pseudometric, then_ \(d_{w}\geq w\)_. 
In particular,_ \(d_{w}(x,y)=w(x,y)\) _for all_ \(x,y\in X\) _with_ \(x\sim y\)_._ 2. _If_ \(w\) _is adapted with respect to the measure_ \(m\)_, then_ \(d_{w}\) _is intrinsic with respect to the measure_ \(m\)_._ 3. _For_ \(\sigma=d_{w}\) _the equality_ \(d_{\sigma}=\sigma\) _holds._ Proof.: The trivial path \((x,y)\) is one of the paths over which the infimum in the definition of \(d_{w}\) is taken. As \(L_{w}((x,y))=w(x,y)\), the inequality \(d_{w}(x,y)\leq w(x,y)\) is immediate. (a): Given a path \(\gamma=(x=x_{1},\ldots,x_{n}=y)\), an iteration of the triangle inequality \(w(x_{1},x_{k+1})\leq w(x_{1},x_{k})+w(x_{k},x_{k+1})\) yields \[w(x,y)\leq\sum_{i=1}^{n-1}w(x_{i},x_{i+1})=L_{w}(\gamma).\] (b): This is an immediate consequence of the inequality \(d_{w}(x,y)\leq w(x,y)\) for all \(x,y\in X\) with \(x\sim y\). (c): As \(\sigma=d_{w}\) is a pseudometric, (a) gives \(\sigma\leq d_{\sigma}\) and, for \(x,y\in X\) with \(x\sim y\), even \(\sigma(x,y)=d_{\sigma}(x,y)\). For arbitrary \(x,y\in X\) let a path \(\gamma=(x=x_{1},\ldots,x_{n}=y)\) be given. Then a short computation involving what we have shown already and the triangle inequality gives \[L_{w}(\gamma)=\sum_{j=1}^{n-1}w(x_{j},x_{j+1})\geq\sum_{j=1}^{n-1}\sigma(x_{j},x_{j+1})=\sum_{j=1}^{n}d_{\sigma}(x_{j},x_{j+1})\geq d_{\sigma}(x,y).\] Taking the infimum over all \(\gamma\) we find \(\sigma(x,y)\geq d_{\sigma}(x,y)\). We note the following consequence of our considerations: If \(f\) is a function of finite energy on the graph \((X,b)\), then \(\sigma_{f}\) (defined in Lemma 2.5) is an intrinsic pseudometric with respect to \(m_{f}\). Now, we can also consider \(\sigma_{f}\) as a symmetric function (adapted to \(m_{f}\)). This induces the path pseudometric \(d_{f}:=d_{\sigma_{f}}\). The preceding lemma immediately gives the following. **Corollary 2.8**.: _Let \(G=(X,b)\) be a graph and let \(f\in\mathfrak{D}(G)\). Then, \(d_{f}=d_{\sigma_{f}}\) is an intrinsic metric with respect to \(m_{f}\) and_ \[d_{f}(x,y)=|f(x)-f(y)|\] _holds for all \(x,y\in X\) with \(b(x,y)>0\)._ Proof.: By the preceding lemma we have \(d_{f}=d_{\sigma_{f}}\leq\sigma_{f}\) as well as \(d_{f}(x,y)=\sigma_{f}(x,y)=|f(x)-f(y)|\) for all \(x,y\in X\) with \(b(x,y)>0\). **Remark**.: Let a graph \(G=(X,b)\) be given. Define \(\widetilde{Q}\) on the set of symmetric functions \(w\colon X\times X\to[0,\infty)\) by \[\widetilde{Q}(w):=\frac{1}{2}\sum_{x,y}b(x,y)w(x,y)^{2}\in[0,\infty].\] Then, part of our considerations can be understood in terms of \(\widetilde{Q}\). As this may be instructive we give a brief discussion in the present remark: For a symmetric \(w\colon X\times X\to[0,\infty)\) we define \[m_{w}\colon X\to[0,\infty],\quad m_{w}(x)=\frac{1}{2}\sum_{y\in X}b(x,y)w(x,y) ^{2}\] and \(m_{w}(X):=\sum_{x\in X}m_{w}(x)\). Finally, for \(f\colon X\to\mathbb{R}\) define the symmetric function \(\sigma_{f}\colon X\times X\to[0,\infty)\) with \(\sigma_{f}(x,y):=|f(x)-f(y)|\). Then, the following holds: 1. Let \(w\) be a symmetric weight. Then, \(\widetilde{Q}(w)=m_{w}(X)\), where the value \(\infty\) is allowed. If \(w\) is actually a pseudometric, then \(\widetilde{Q}(d_{w})=\widetilde{Q}(w)\) holds. 2. Let \(\sigma\) be a pseudometric on \(X\). Then, \(m_{\sigma}\) is a finite measure if and only if \(\sigma\) belongs to \(\mathfrak{M}(G)\). If \(m_{\sigma}\) is a finite measure it is the smallest measure with respect to which \(\sigma\) is an intrinsic metric. 3. 
For \(f\colon X\to\mathbb{R}\) the equality \(Q(f)=\widetilde{Q}(\sigma_{f})\) holds, where the value \(\infty\) is allowed. Moreover, \(f\) belongs to \(\mathfrak{D}(G)\) if and only if \(\sigma_{f}\) belongs to \(\mathfrak{M}(G)\). 4. The function \(f\colon X\to\mathbb{R}\) is \(1\)-Lipschitz with respect to the pseudometric \(\sigma\) if and only if \(\sigma_{f}\leq\sigma\) holds. In this case, \(Q(f)\leq\widetilde{Q}(\sigma)\) is valid. As mentioned already, we think of the space \(X\) underlying the graph \((X,b)\) as equipped with the discrete topology. Thus, metrics compatible with the discrete topology are of particular relevance for us. The following lemma ensures the existence of such metrics in \(\mathfrak{M}(G)\). **Lemma 2.9**.: _Let \(G=(X,b)\) be a graph. Then, there exists a metric in \(\mathfrak{M}(G)\) that induces the discrete topology._ Proof.: Let \(\mathbb{N}\to X,n\mapsto x_{n}\), be an enumeration of \(X\). We define \[f\colon X\to(0,\infty),\quad f(x_{n})=\frac{1}{\sqrt{2^{n}\mathrm{deg}(x_{n})}},\] and \[\sigma\colon X\times X\to[0,\infty),\quad\sigma(x,y)=\begin{cases}\max\{f(x),f(y)\}&\text{for $x\neq y$}\\ 0&\text{else}\end{cases}.\] It is readily verified that \(\sigma\) is a metric (and even an ultrametric). By \(\sigma(x,y)^{2}\leq f(x)^{2}+f(y)^{2}\), the symmetry of \(b\) and Fubini's theorem we find \[\sum_{x,y\in X}b(x,y)\sigma(x,y)^{2} \leq \sum_{x,y\in X}b(x,y)(f(x)^{2}+f(y)^{2})\] \[= 2\sum_{x,y\in X}b(x,y)f(x)^{2}\] \[= 2\sum_{x\in X}\mathrm{deg}(x)f(x)^{2}.\] Now, the definition of \(f\) gives \[2\sum_{x\in X}\mathrm{deg}(x)f(x)^{2}\leq 2\] and it follows that \(\sigma\) is an intrinsic metric with respect to a finite measure. The metric \(\sigma\) induces the discrete topology as the distance from any point to \(x\in X\) is bounded from below by \(f(x)>0\). ### Boundaries of graphs As outlined in the introduction, completions and boundaries of graphs will be most relevant for our considerations. Here we introduce the corresponding notions. Let \(X\) be a countable set. Let \(\sigma\) be a pseudometric on \(X\). The completion of \(X\) with respect to \(\sigma\) is defined as the set of equivalence classes of \(\sigma\)-Cauchy sequences in \(X\), where two such sequences \((a_{n})\) and \((b_{n})\) are considered to be equivalent if \[\lim_{n\to\infty}\sigma(a_{n},b_{n})=0.\] This set is denoted by \(\overline{X}^{\sigma}\) and contains a quotient of the vertex set \(X\) as the classes of the constant sequences. Clearly, \(\sigma\) can be extended to a pseudometric on \(\overline{X}^{\sigma}\) and this extension will, by a slight abuse of notation, also be denoted by \(\sigma\). Subsequently, the boundary is defined as \[\partial_{\sigma}X=\overline{X}^{\sigma}\setminus(X/\simeq),\] where \(x\simeq y\) if \(\sigma(x,y)=0\). A graph is called _metrically complete_ with respect to a pseudometric if the boundary is empty. Clearly, if \(\sigma\) is a metric then \(\overline{X}^{\sigma}\) contains a copy of \(X\), this copy is dense, and our definition of metric completeness agrees with the usual definition (that every Cauchy sequence converges). There are further notions of completeness relevant to us. Let \(G=(X,b)\) be a graph and let \(w\) be an edge weight. The pseudometric space \((X,d_{w})\) is called _geodesically complete_ if every infinite path has infinite length with respect to \(w\).
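As a sanity check of Lemma 2.9, the following Python sketch carries out the construction on a small toy graph (the graph and its weights are arbitrary choices, not taken from the text) and confirms the bound \(\sum_{x,y}b(x,y)\sigma(x,y)^{2}\leq 2\) from the proof.

```python
import numpy as np

# Toy graph: a weighted path on 6 vertices with arbitrary positive weights.
n = 6
b = np.zeros((n, n))
for i in range(n - 1):
    b[i, i + 1] = b[i + 1, i] = float(i + 1)

deg = b.sum(axis=1)

# Enumerate the vertices as x_1, ..., x_n and set f(x_k) = 1/sqrt(2^k deg(x_k)).
f = 1.0 / np.sqrt(2.0 ** np.arange(1, n + 1) * deg)

# The ultrametric sigma(x, y) = max{f(x), f(y)} for x != y, and 0 on the diagonal.
sigma = np.maximum.outer(f, f)
np.fill_diagonal(sigma, 0.0)

total = (b * sigma ** 2).sum()
print("sum_{x,y} b(x,y) sigma(x,y)^2 =", round(total, 6), "(should be <= 2)")

# Smallest finite measure making sigma intrinsic: m(x) = (1/2) sum_y b(x,y) sigma(x,y)^2.
m = 0.5 * (b * sigma ** 2).sum(axis=1)
print("total mass m(X) =", round(m.sum(), 6))
```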
For later purposes we recall the following discrete Hopf-Rinow type theorem that characterizes geodesic completeness, see [8, Theorem A.1] and, for further generalizations, see also [13]. **Theorem 2.10** (Hopf-Rinow type theorem).: _Let \(G=(X,b)\) be a locally finite graph and let \(w\) be an edge weight. Then \(d_{w}\) is a metric that induces the discrete topology on \(X\). Moreover, the following assertions are equivalent:_ 1. \((X,d_{w})\) _is a complete metric space._ 2. \((X,d_{w})\) _is geodesically complete._ 3. _Every distance ball is finite._ 4. _Every bounded and closed set is compact._ Boundaries of graphs can not only arise from metric completions but also from compactifications. In fact, they can arise whenever the set \(X\) underlying the graph is suitable extended. We finish this section with a short discussion of this aspect. Let \(X\) be a countable set. Let \(Y\) be a topological Hausdorff space. We say that \(X\)_embeds densely_ in the topological space \(Y\) if \(Y\) contains a copy of \(X\), the restriction of the topology of \(Y\) on \(X\) is the discrete topology, and \(X\) is dense in \(Y\). Clearly, \(Y\) must be separable whenever \(X\) embeds densely in it. Whenever \(X\) embeds densely in \(Y\) we define the _boundary_\(\partial_{Y}X\) of \(X\) in \(Y\) by \[\partial_{Y}X:=Y\setminus X.\] The complement (in \(Y\)) of any finite subset of \(X\) is open in \(Y\) (as any finite set is compact and then must be closed due to Hausdorffness). Hence, any such a complement is an open neighborhood of \(\partial_{Y}X\). In particular, any function \(h\) with finite support on \(X\) can be extended (by zero) to a continuous function on \(Y\). Clearly, \(X\) embeds densely in \(\overline{X}^{\sigma}\) whenever \(\sigma\) is a metric on \(X\) inducing the discrete topology. This is what we have discussed above. Then, \[\partial_{\sigma}X=\partial_{\overline{X}^{\sigma}}X\] holds. If \(X\) embeds densely in a compact \(Y\), then \(Y\) is called a _compactification_ of \(X\). In this case the open neighborhoods of \(\partial X\) are exactly given by the complements of finite sets of \(X\) (as the complement of any open neighborhood of \(\partial_{Y}X\) must be a closed, and hence, compact subset of \(X\)). A particular instance is given by the _one-point-compactification_. It is given by the set \(Y=X\cup\{\text{pt}\}\), where pt is an arbitrary additional point, and this set is equipped with topology given by the family of all subsets of \(Y\) that are either subsets of \(X\) or whose complement is finite. In this case the boundary of \(\partial_{Y}X\) is just pt. ## 3. Capacity of sets in the boundary and infinite paths In this section we introduce the capacity and study the capacity of sets in the boundary with respect to an intrinsic metric. Let \(G=(X,b)\) be a graph and let \(m\colon X\to(0,\infty)\) be a measure. The _capacity_ of a subset \(U\subseteq X\) is defined by \[\text{cap}_{m}(U):=\inf\{\left\|f\right\|_{Q,m}^{2}\mid f\in H^{1}(G,m)\text{ with }f\geq 1\text{ on }U\},\] with the convention that \(\text{cap}_{m}(U)=\infty\) if the set in the above definition is empty. Using the fundamental contraction property, we can assume \(0\leq f\leq 1\) in this definition, since \((f\wedge 1)_{+}\) satisfies the same constraints as \(f\) but reduces the \(\left\|\cdot\right\|_{Q,m}\)-norm compared to \(f\). 
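On a finite graph the infimum in the definition of the capacity can be computed explicitly, which may help build intuition for the boundary estimates that follow. The sketch below assumes the standard fact for finite graphs with \(m>0\) that the minimiser equals \(1\) on \(U\) and is determined by a linear system on the complement; graph, weights and measure are illustrative choices.

```python
import numpy as np

# Small finite graph: a path on 5 vertices with unit edge weights b
# and a strictly positive measure m (all values illustrative).
n = 5
b = np.zeros((n, n))
for i in range(n - 1):
    b[i, i + 1] = b[i + 1, i] = 1.0
m = np.full(n, 0.2)

L = np.diag(b.sum(axis=1)) - b        # weighted Laplacian: f @ L @ f = Q(f)
A = L + np.diag(m)                    # quadratic form Q(f) + ||f||_m^2

U = [0]                               # the set whose capacity we compute
C = [i for i in range(n) if i not in U]

# Minimiser ("equilibrium potential"): u = 1 on U and (A u) = 0 on C.
u = np.ones(n)
u[C] = np.linalg.solve(A[np.ix_(C, C)], -A[np.ix_(C, U)] @ np.ones(len(U)))

print("equilibrium potential u:", np.round(u, 3))
print("cap_m(U) =", u @ A @ u)        # = Q(u) + ||u||_m^2
```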
Whenever \(X\) embeds densely in \(Y\) we can extend the capacity to subsets of \(Y\) by setting \[\text{cap}_{m}(A):=\inf\{\text{cap}_{m}(O\cap X)\mid O\text{ open in }Y\text{ with }A\subseteq O\}.\] Since by assumption every subset \(A\subseteq X\) is open in \(Y\) (as the topology of \(Y\) induces the discrete topology on \(X\)) both definitions of capacity on \(X\) are compatible. This definition can in particular be applied to the completion \(\overline{X}^{\sigma}\), whenever the metric \(\sigma\) induces the discrete topology on \(X\). The capacity is an outer measure on the power set of \(Y\) with \(m(A)\leq\text{cap}_{m}(A)\) for all \(A\subseteq X\) and \(\text{cap}_{m}(Y)\leq m(X)\), see e.g. [4, Theorem 2.1.1 and Theorem A.1.2]. Next we discuss how the vanishing of the capacity of subsets of the boundary can be characterized with limits of functions of finite energy. **Definition 3.1** (Limes inferior).: Let \(X\) be a countable set, let \(\sigma\) be a metric on \(X\) that induces the discrete topology and let \(f\colon X\to\mathbb{R}\). For \(A\subseteq\overline{X}^{\sigma}\) the _limes inferior of \(f\) at \(A\) with respect to \(\sigma\)_ is defined by \[\liminf_{x\to A}f(x):=\sup\{\inf\{f(x)\mid x\in U\cap X\}\mid U\text{ open in }\overline{X}^{\sigma}\text{ with }A\subseteq U\}.\] Moreover, we define the _limes inferior at infinity_ by \[\liminf_{x\to\infty}f(x):=\sup\{\inf\{f(x)\mid x\in X\setminus F\}\mid F \subseteq X\text{ finite}\}.\] **Remark**.: For us the case where \(\liminf\) equals \(\infty\) and the case of compact \(\overline{X}^{\sigma}\) is particularly relevant. In this context we note the following. 1. We have \(\liminf_{x\to A}f(x)=\infty\) if and only if \(\lim_{n\to\infty}f(x_{n})=\infty\) for each sequence \((x_{n})\) in \(X\) with \(\sigma(x_{n},A)\to 0\), \(n\to\infty\). 2. We have \(\liminf_{x\to\infty}f(x)=\infty\) if and only if \(\lim_{n\to\infty}f(x_{n})=\infty\) for any sequence \((x_{n})\) converging to pt in the one-point-compactification of \(X\). In fact, this easily shows that \(\liminf_{x\to\infty}f(x)=\infty\) if and only if \(\lim_{n\to\infty}f(x_{n})=\infty\) for any sequence \((x_{n})\) converging to some \(y\in\partial_{Y}X\), where \(Y\) is a compactification of \(X\). 3. If \(\sigma\) induces the discrete topology on \(X\) and \((X,\sigma)\) is pre-compact, then every every open neighborhood \(U\) of \(\partial_{\sigma}X\) in \(\overline{X}^{\sigma}\) has the form \(U=X\setminus F\) for some finite \(F\subseteq X\). Hence, in this case \[\liminf_{x\to\infty}f(x)=\liminf_{x\to\partial_{\sigma}X}f(x).\] In this sense, the limes inferior at infinity is the limes inferior at the boundary for metric compactifications of \(X\). It turns out that \(\liminf_{x\to\infty}\) governs \(\liminf_{x\to\partial_{\sigma}X}\) in the following sense. **Proposition 3.2**.: _Let \(X\) be a countable set. Let \(f:X\to\mathbb{R}\) be given. Then,_ \[\liminf_{x\to\partial_{\sigma}X}f(x)\geq\liminf_{x\to\infty}f(x)\] _for any metric \(\sigma\) on \(X\) that induces the discrete topology._ Proof.: Any finite set \(F\) in \(X\) is compact in \(\overline{X}^{\sigma}\) and, hence, closed. Thus, for any finite set \(F\) in \(X\) the set \(\overline{X}^{\sigma}\setminus F\) is an open neighborhood of \(\partial_{\sigma}X\). 
We obtain \[\liminf_{x\to\partial_{\sigma}X}f(x) =\sup\{\inf\{f(x)\mid x\in U\cap X\}\mid U\text{ open with }\partial_{\sigma}X\subseteq U\}\] \[\geq\sup\{\inf\{f(x)\mid x\in X\setminus F\cap X\}\mid F\subseteq X \text{ finite}\}\] \[\geq\liminf_{x\to\infty}f(x).\qed\] With the help of the limes inferior we can characterize sets of capacity zero in the boundary. **Lemma 3.3** (Characterization of zero capacity sets in the boundary).: _Let \(G=(X,b)\) be an infinite graph and \(\sigma\) be a metric on \(G\) that induces the discrete topology. Further, let \(A\subseteq\partial_{\sigma}X\). The following assertions are equivalent:_ 1. _For one finite measure_ \(m\) _on_ \(X\) _we have_ \(\operatorname{cap}_{m}(A)=0\)_._ 2. _For all finite measures_ \(m\) _on_ \(X\) _we have_ \(\operatorname{cap}_{m}(A)=0\)_._ 3. _There exists_ \(f\in\mathfrak{D}(G)\) _with_ \(\liminf_{x\to A}f(x)=\infty\)_._ Proof.: (i) \(\Rightarrow\) (iii): The statement \(\operatorname{cap}_{m}(A)=0\) implies the existence of sequences of open sets \(U_{n}\supseteq A\) and functions \(f_{n}\geq 1_{U_{n}}\) that satisfy \[\lim_{n\to\infty}\left\|f_{n}\right\|_{Q,m}=\lim_{n\to\infty}\sqrt{Q(f_{n})+ \left\|f_{n}\right\|_{m}^{2}}=0.\] By restricting to a subsequence we can assume without loss of generality that \[\sum_{n\in\mathbb{N}}\left\|f_{n}\right\|_{Q,m}<\infty.\] This implies that the sum \(f:=\sum_{n\in\mathbb{N}}f_{n}\) converges in the Hilbert space \(H^{1}(G,m)\). In particular, \(f\in\mathfrak{D}(G)\). By the choice of the \(f_{n}\) we have \(f\geq N\) on the set \(\bigcap_{n=1}^{N}U_{n}\), which is an open set that contains \(A\). This proves that \(\liminf_{x\to A}f(x)\) is at least \(N\). Since this is true for all \(N\in\mathbb{N}\), the supremum has to be infinite. (iii) \(\Rightarrow\) (ii): Let \(f\in\mathfrak{D}(G)\) satisfy (iii). Without loss of generality we assume \(f\geq 0\), for otherwise we can replace \(f\) by \(\left|f\right|\), which also has finite energy due to Theorem 2.2. We slice \(f\) into the parts \[f_{n}:=(f-n)_{+}\wedge 1.\] First, we observe that \(0\leq f_{n}\leq 1\) and \(f=\sum_{n=0}^{\infty}f_{n}\) pointwise. Moreover, Theorem 2.2 yields \(f_{n}\in\mathfrak{D}(G)\) and since \(m\) is finite and the \(f_{n}\) are bounded, we have \(f_{n}\in H^{1}(G,m)\). Because \(\liminf_{x\to A}f(x)=\infty\), for every \(n\in\mathbb{N}_{0}\) there is an open set \(U_{n}\supseteq A\) such that \(f\geq n\) on \(U_{n}\cap X\). By construction, for \(x\in X\) the inequality \(f(x)\geq n+1\) implies \(f_{n}(x)=1\). Combining these observations we conclude \(f_{n}\geq 1_{X\cap U_{n+1}}\). Altogether, this shows that the functions \(f_{n}\) are usable in the definition of the capacity of \(A\) and \[\operatorname{cap}_{m}(A)\leq\left\|f_{n}\right\|_{Q,m}.\] We prove \(\left\|f_{n}\right\|_{Q,m}\to 0\), as \(n\to\infty\). It is readily verified that for \(n\neq m\) and \(x,y\in X\) the product \((f_{m}(x)-f_{m}(y))(f_{n}(x)-f_{n}(y))\) is always nonnegative, so that \[Q(f_{n},f_{m})=\frac{1}{2}\sum_{x,y\in X}b(x,y)(f_{m}(x)-f_{m}(y))(f_{n}(x)-f_{ n}(y))\geq 0.\] Recall that for \(N\in\mathbb{N}\) we defined \(C_{[0,N]}f=f_{+}\wedge N\). 
Using Proposition 2.2, the definition of \(f_{n}\) and the previous observation, we obtain \[\sum_{n=0}^{N-1}Q(f_{n}) \leq\sum_{n=0}^{N-1}Q(f_{n})+\sum_{\begin{subarray}{c}0\leq m,n \leq N-1\\ m\neq n\end{subarray}}Q(f_{m},f_{n})\] \[=Q\left(\sum_{n=0}^{N-1}f_{n}\right)=Q(C_{[0,N]}f)\leq Q(f).\] Since \(N\) was arbitrary and \(Q(f)<\infty\), we arrive at \(\lim_{n\to\infty}Q(f_{n})=0\). The convergence \(\lim_{n\to\infty}\left\|f_{n}\right\|_{m}=0\) follows from Lebesgue's dominated convergence theorem. This leads to \(\lim_{n\to\infty}\left\|f_{n}\right\|_{Q,m}=0\) and thus, \(\operatorname{cap}_{m}(A)=0\). (ii) \(\Rightarrow\) (i): This is clear. **Remark**.: (a) The lemma shows that having capacity zero does not depend on the choice of the finite measure. Indeed, we do not even need to assume that \(m\) is strictly positive. If we only assume that \(m(o)>0\) for one \(o\in X\), the space \(H^{1}(X,m)\) continuously embeds into \((\mathfrak{D}(G),\left\lVert\cdot\right\rVert_{o})\) and the proof can be carried out in the space \((\mathfrak{D}(G),\left\lVert\cdot\right\rVert_{o})\). The advantage of working in \(H^{1}(G,m)\) is that it is related to intrinsic metrics with respect to \(m\). 2. The inequality used in the proof of the implication (iii) \(\Rightarrow\) (ii) can be extended to a more general form. Let \(f:X\to\mathbb{R}\) be a function of finite energy and let \(C_{1},C_{2}:\mathbb{R}\to\mathbb{R}\) be two monotone increasing \(1\)-Lipschitz functions. Then \[Q(C_{1}\circ f+C_{2}\circ f)\geq Q(C_{1}\circ f)+Q(C_{2}\circ f).\] In the above proof this observation is applied to the monotone increasing contractions \(S_{n},n\in\mathbb{N}_{0}\), with \(S_{n}(x)=(x-n)_{+}\wedge 1\). One can understand the preceding result also as saying that the sets \(A\subseteq\partial_{\sigma}X\) with zero capacity are infinitely far away from any finite set. More specifically, the following holds. **Corollary 3.4** (Capacity zero sets in the boundary have infinite distance).: _Let \(G=(X,b)\) be an infinite graph and \(\sigma\) a metric on \(G\) that induces the discrete topology. Further, let \(A\subseteq\partial_{\sigma}X\). Then, \(\partial_{\sigma}A\) has zero capacity (with respect to any finite measure) if and only if there exists an intrinsic metric \(\varrho\in\mathfrak{M}(G)\) such that for any finite \(F\) in \(X\) and any \(r>0\) there exists an open neighborhood \(U\) of \(A\) with_ \[\varrho(U\cap X,F):=\inf\{\varrho(z,x)\mid z\in U\cap X,x\in F\}\geq r.\] Proof.: Assume that \(A\) has capacity zero (with respect to any finite measure). By the previous lemma, there exists an \(f\in\mathfrak{D}(G)\) with \(\liminf_{x\to A}f(x)=\infty\). Without loss of generality we can assume \(f(x)\neq f(y)\) for all \(x,y\in X\) with \(x\neq y\) (else we could add an arbitrary small perturbation by Proposition 2.6). Then, \(\varrho:=\sigma_{f}\) is an intrinsic metric with respect to a finite measure. By \(\liminf_{x\to A}f(x)=\infty\) the metric \(\varrho\) has the given property. Assume now that there exists an intrinsic metric \(\varrho\) with respect to the finite measure \(m\) that has the given property. Let an arbitrary finite set \(F\) be given. Then, there exists an open neighborhood \(U\) of \(A\) with \(\varrho(U\cap X,F)\geq 1\). Hence, \(g_{F}:=\varrho_{F}\wedge 1\) satisfies \(0\leq g_{F}\leq 1\), equals \(0\) on \(F\) and equals \(1\) on \(U\). 
Hence, \[\operatorname{cap}_{m}(A)\leq Q(g_{F})+\left\lvert g_{F}\right\rvert_{m}^{2} \leq Q(\varrho_{F})+m(X\setminus F)\leq 3m(X\setminus F)\] holds, where we used Lemma 2.4 in the last step. As this holds for arbitrary \(F\) we infer \(\operatorname{cap}_{m}(A)=0\). By the previous lemma this implies that the capacity of \(A\) vanishes with respect to any finite measure. **Remark**.: Replacing \(\varrho\) by \(d_{\varrho}\leq\varrho\) we can even take the intrinsic metric in the preceding corollary to be a path metric. **Remark**.: Lemma 3.3 and its corollary deal with metric completions of \(X\). However, they can directly be extended to any topological Hausdorff space \(Y\) in which \(X\) embeds densely. Indeed, both the definition of \(\liminf_{x\to A}\) and the proofs of the lemma and its corollary carry verbatim over to this more general situation. This means in particular that these considerations also holds for compactifications of \(X\). Next we discuss how the capacity of sets in the metric boundary is related to infinite paths. We recall the following standard notion for sets of infinite paths in a graph. **Definition 3.5** (Null set of paths).: A set of infinite paths \(\Gamma\) in \(G=(X,b)\) is called _null_ if there exists an edge weight \(w\) with \[\sum_{x,y\in X}b(x,y)w(x,y)^{2}<\infty\] such that \(L_{w}(\gamma)=\infty\) for all \(\gamma\in\Gamma\). Let \(\sigma\) be a metric on \(X\) that induces the discrete topology. For \(A\subseteq\partial_{\sigma}X\) we denote by \(\Gamma_{A,\sigma}\) the set of infinite paths which have at least one accumulation point with respect to \(\sigma\) lying in \(A\). With the help of our characterization of sets of capacity zero in the boundary we obtain the following relation between sets of capacity zero in the boundary and sets of paths with accumulation point in this set. This is the key observation relating our approach to the classical approach to recurrence by means of null sets of paths. As noted in the introduction, this observation was our motivation to write this paper. **Theorem 3.6** (Capacity and null sets of paths).: _Let \(G=(X,b)\) be an infinite graph and let \(\sigma\) be an intrinsic metric with respect to a finite measure \(m\) that induces the discrete topology. Let \(A\subseteq\partial_{\sigma}X\) with \(\operatorname{cap}_{m}(A)=0\). Then \(\Gamma_{A,\sigma}\) is null._ Proof.: According to the previous lemma there is \(f\in\mathfrak{D}(G)\) such that \(\liminf_{x\to A}f(x)=\infty.\) We consider the function \(w\colon X\times X\to\mathbb{R}\), \(w(x,y)=|f(x)-f(y)|\). Without loss of generality we can assume \(w(x,y)>0\) whenever \((x,y)\) is an edge (else at each vertex \(x\) add a small quantity to \(f(x)\) if necessary, see Proposition 2.6). Then \[\sum_{x,y\in X}b(x,y)w(x,y)^{2}\leq 2Q(f)<\infty.\] Let \(\gamma=(x_{1},x_{2},\ldots)\) be an infinite path with an accumulation point in \(A\). We obtain \[|f(x_{n})-f(x_{1})|\leq\sum_{k=1}^{n-1}|f(x_{k})-f(x_{k+1})|\leq L_{w}(\gamma).\] Since \(\liminf_{x\to A}f(x)=\infty\), the left hand side of this inequality diverges along a suitable subsequence and so we obtain \(L_{w}(\gamma)=\infty\). Hence, \(\Gamma_{A,\sigma}\) is null. The converse seems not to hold due to the complicated behavior of paths at metric boundaries of general graphs. For trees however we have the following converse for path metrics. Recall that \((X,b)\) is a tree if it does not have non-trivial cycles (injective paths \((x_{1},\ldots,x_{n})\) with \(x_{1}\sim x_{n}\)). 
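A simple toy example, added here for illustration and not taken from the text, may help with Definition 3.5: on the half-line graph with vertices \(x_{0},x_{1},x_{2},\ldots\), \(b(x_{n},x_{n+1})=1\) and the choice \(w(x_{n},x_{n+1})=\frac{1}{n+1}\), one has \[\sum_{x,y\in X}b(x,y)w(x,y)^{2}=2\sum_{n=0}^{\infty}\frac{1}{(n+1)^{2}}<\infty,\qquad\sum_{n=0}^{\infty}\frac{1}{n+1}=\infty.\] Any infinite path either stays in a finite set, and then repeats some edge of fixed positive \(w\)-length infinitely often, or leaves every finite set, and then picks up the divergent harmonic tail; in both cases \(L_{w}(\gamma)=\infty\), so the set of all infinite paths in this graph is null.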
**Proposition 3.7**.: _Let \(G=(X,b)\) be a tree and let \(\sigma\) be a path metric that induces the discrete topology on \(X\) and is intrinsic with respect to a finite measure \(m\). If for \(A\subseteq\partial_{\sigma}X\) the set of paths \(\Gamma_{A,\sigma}\) is null, then \(\operatorname{cap}_{m}(A)=0\)._

Proof.: Let \(w\) be an edge weight for \(\Gamma_{A,\sigma}\) as in the definition of null sets of paths. Fix \(o\in X\) and for \(x\in X\) let \(\gamma_{x}\) be the unique shortest path with respect to the combinatorial distance connecting \(o\) and \(x\) (uniqueness follows from \((X,b)\) being a tree). We define

\[f\colon X\to\mathbb{R},\quad f(x)=L_{w}(\gamma_{x}).\]

Since \((X,b)\) is a tree, for neighbors \(x,y\in X\) we have \(|f(x)-f(y)|=w(x,y)\) showing \(f\in\mathfrak{D}(G)\). Let \((x_{n})\) be a sequence in \(X\) with limit \(x\in A\). We construct a monotone path \(\gamma=(y_{1},y_{2},\ldots)\) (i.e. the combinatorial distance of \(y_{1}\) and \(y_{n+1}\) is greater than or equal to the combinatorial distance of \(y_{1}\) and \(y_{n}\)) starting in \(o\) such that \(y_{n}\to x\) and

\[f(x_{k})=f(y_{n_{k}})+d_{w}(x_{k},y_{n_{k}})\geq f(y_{n_{k}})\]

for a suitable subsequence \((y_{n_{k}})\). The monotonicity of \(\gamma\) and the fact that \((X,b)\) is a tree imply

\[\liminf_{k\to\infty}f(x_{k})\geq\liminf_{k\to\infty}f(y_{n_{k}})=L_{w}(\gamma)=\infty.\]

Construction of \(\gamma\): We consider \(o\) as a root for the graph and denote by \(|x|\) the combinatorial distance of \(x\) to \(o\). We say that \(y\) is an _ancestor_ of \(x\) if all paths from \(x\) to \(o\) pass through \(y\). Since \((X,b)\) is a tree, every \(A\subseteq X\) has a unique _greatest common ancestor_, i.e., there exists an element \(y\in X\) with:

* \(y\) is an ancestor of every element of \(A\).
* For every \(x\in X\) with \(|x|>|y|\) there exists an \(a\in A\) such that \(x\) is not an ancestor of \(a\).

We let \(z_{n}\) be the greatest common ancestor of \(\{x_{n},x_{n+1},\ldots\}\). This sequence is monotone as \(z_{n}\) is an ancestor of \(\{x_{n+1},x_{n+2},\ldots\}\) and hence an ancestor of \(z_{n+1}\). For every \(n\in\mathbb{N}\) there exists \(N>n\) such that the greatest common ancestor of \(\{x_{n},x_{N}\}\) is \(z_{n}\) (otherwise \(z_{n}\) would not be a greatest common ancestor). Every path from \(x_{n}\) to \(x_{N}\) passes through \(z_{n}\). Since \(\sigma\) is a path metric, this implies

\[\sigma(x_{n},x_{N})=\sigma(x_{n},z_{n})+\sigma(z_{n},x_{N})\]

and we obtain

\[\sigma(x_{n},z_{n})\leq\sup\{\sigma(x_{n},x_{N})\mid N>n\}.\]

Hence, \((z_{n})\) also converges to \(x\) but it need not be a path. We make it a path by inserting monotone paths from \(z_{n}\) to \(z_{n+1}\) (these exist since \(z_{n}\) is an ancestor of \(z_{n+1}\)). Using that \(\sigma\) is a path metric yields that any such additional point \(z\) lying between \(z_{n}\) and \(z_{n+1}\) satisfies

\[\sigma(z_{n},z_{n+1})=\sigma(z_{n},z)+\sigma(z,z_{n+1})\geq\sigma(z_{n},z).\]

Hence, also the so-constructed monotone path \((y_{n})\) converges to \(x\). We choose \(n_{k}\) such that \(y_{n_{k}}=z_{k}\).
Using that \(z_{k}\) is an ancestor of \(x_{k}\), we obtain

\[f(x_{k})=f(z_{k})+L_{w}(\gamma_{x_{k}})-L_{w}(\gamma_{z_{k}})=f(y_{n_{k}})+d_{w}(y_{n_{k}},x_{k}).\]

**Remark**.: In the previous proof we used the following observation utilizing that \((X,b)\) is a tree and \(\sigma\) is a path metric: For every \(x\in\partial_{\sigma}X\) and every sequence \((x_{n})\) in \(X\) converging to \(x\) there exists a monotone path \((y_{n})\) converging to \(x\) such that any \(x_{n}\) has an element from the path \((y_{n})\) as an ancestor.

## 4. Recurrence and intrinsic metrics

In this section we use techniques similar to those of Section 3 to give a new characterization of recurrence in terms of intrinsic metrics and to study the relation of recurrence to the vanishing of the capacity of the boundary. Moreover, we provide an alternative proof for a classical characterization of recurrence due to Yamasaki. For general background on recurrence we refer the reader to the textbooks [12, 26].

The word recurrence stems from the stochastic perspective. In this perspective the graph gives rise to a Markov process modeling a particle jumping between the points of \(X\). Recurrence then describes the phenomenon that the particle comes back to any point of \(X\) again and again. In the analytic description, which is our concern here, this is encoded by various forms of irrelevance of what is happening far away (i.e. outside of finite sets). We will see precise versions as we go along and this is the main topic of this section.

**Definition 4.1** (Recurrence).: A graph \(G=(X,b)\) is called _recurrent_ if the constant function \(1\) is contained in \(\mathfrak{D}_{0}(G)\). Graphs that are not recurrent are called _transient_.

**Remark**.: (a) The definition of recurrence means that there exists a sequence of functions \((f_{n})\) in \(C_{c}(X)\) with \(f_{n}\to 1\) pointwise and \(Q(f_{n})=Q(f_{n}-1)\to 0,n\to\infty\). As the \(f_{n}\) have finite support this can be seen as an instance of how the behaviour outside of compact sets (in this case the supports of the \(f_{n}\)) becomes irrelevant.

(b) Recurrence is equivalent to \(\mathfrak{D}_{0}(G)=\mathfrak{D}(G)\), i.e., \(C_{c}(X)\) being dense in \(\mathfrak{D}(G)\) with respect to the norm \(\left\|\cdot\right\|_{o}\), see e.g. [22, Theorem 3.63].

(c) For disconnected graphs transience is a stronger property than not being recurrent. Since all the graphs in this paper are assumed to be connected, we may well use the above definition. For further background on recurrence we refer the reader to [19].

Next we connect recurrence, vanishing of the capacity and finiteness of metric balls to the existence of certain unbounded functions of finite energy.

**Theorem 4.2** (Characterization of recurrence).: _Let \(G=(X,b)\) be an infinite graph. The following conditions are equivalent:_

(i) \(G\) _is recurrent._

(ii) _There is a function of finite energy_ \(f\in\mathfrak{D}(G)\) _that satisfies_ \[\liminf_{x\to\infty}f(x)=\infty.\]

(iii) _There is an intrinsic metric_ \(\sigma\in\mathfrak{M}(G)\) _that induces the discrete topology on_ \(X\) _such that distance balls with respect to_ \(\sigma\) _are finite._

(iii)' _There exists a finite measure_ \(m\) _and an edge weight_ \(w\) _adapted to it such that the distance balls with respect to_ \(d_{w}\) _are finite._

(iv) _For one (every) finite measure_ \(m\) _on_ \(X\) _and one (every) compactification_ \(Y\) _of_ \(X\) _the equality_ \(\operatorname{cap}_{m}(\partial_{Y}X)=0\) _holds._
(v) _One (every) finite measure \(m\) has the following feature: For any \(\varepsilon>0\) there exists a finite set \(F\) in \(X\) with \(\operatorname{cap}_{m}(X\setminus F)\leq\varepsilon\)._

Proof.: (i) \(\Rightarrow\) (iv): Let \(m\) be an arbitrary finite measure on \(X\) and \(Y\) a compactification of \(X\). By (i) there exists a sequence \((f_{n})\) in \(C_{c}(X)\) with \(f_{n}\to 1\) pointwise and \(Q(f_{n})\to 0\). Replacing \(f_{n}\) by \((f_{n}\lor 0)\wedge 1\) we can assume without loss of generality \(0\leq f_{n}\leq 1\) for each \(n\). Then, \(0\leq 1-f_{n}\leq 1\) holds and \(1-f_{n}\) is \(1\) outside the finite support of \(f_{n}\). Hence,

\[\operatorname{cap}_{m}(\partial_{Y}X)\leq Q(1-f_{n})+\left|1-f_{n}\right|_{m}^{2}\]

holds for each \(n\). It suffices to show that both terms on the right hand side converge to zero. The first term satisfies \(Q(1-f_{n})=Q(f_{n})\to 0,n\to\infty\). The second term satisfies \(\left|1-f_{n}\right|_{m}^{2}\to 0,n\to\infty\) by Lebesgue's theorem on dominated convergence (as \(0\leq 1-f_{n}\leq 1\) holds, \(1-f_{n}\) converges pointwise to \(0\) and \(m\) is a finite measure).

(iv) \(\Rightarrow\) (ii): This follows by a straightforward adaptation of the proof of (i) \(\Rightarrow\) (iii) of Lemma 3.3.

(ii) \(\Rightarrow\) (iii): Let \(f\) be a function satisfying (ii). By Proposition 2.6 we can assume without loss of generality that the values of \(f\) are pairwise distinct. Set \(\sigma(x,y)=\sigma_{f}(x,y)=\left|f(x)-f(y)\right|\) for all \(x,y\in X\). This yields a pseudometric that is intrinsic with respect to a finite measure, see Lemma 2.4. In fact, it is even a metric as the values of \(f\) are pairwise distinct. Its distance balls are given by

\[B_{r}^{\sigma}(o)=\{x\in X\mid f(o)-r\leq f(x)\leq r+f(o)\}.\]

Since \(\liminf_{x\to\infty}f(x)=\infty\), they are finite. In particular, this metric induces the discrete topology.

(iii) \(\Rightarrow\) (i): Let an intrinsic metric \(\sigma\) with respect to a finite measure \(m\) be given according to (iii). Hence, \(\sigma\) induces the discrete topology and its distance balls are finite. Now, let \(F\) be an arbitrary finite set. Then, \(\sigma_{F}:=\sigma(\cdot,F)\) satisfies

* \(\sigma_{F}=0\) on \(F\).
* \(\sigma_{F}\geq 1\) outside of \(F_{1}:=\{x\in X:\sigma(F,x)<1\}\) and \(F_{1}\) is finite (as \(F\) is a finite set and distance balls with respect to \(\sigma\) are finite).

Define \(g_{F}:=(1-\sigma_{F})_{+}\). Then \(g_{F}\) equals \(1\) on \(F\) (by the first bullet point) and has finite support contained in \(F_{1}\) (by the second bullet point). Moreover, as \(Q\) is compatible with contractions and \(Q(1)=0\) holds we find from Lemma 2.4 the estimate

\[Q(g_{F})\leq Q(1-\sigma_{F})=Q(\sigma_{F})\leq 2m(X\setminus F).\]

So, choosing an increasing sequence \((F_{n})\) of finite sets with \(\cup_{n}F_{n}=X\) we obtain a sequence \(f_{n}:=g_{F_{n}}\) in \(C_{c}(X)\) converging pointwise to \(1\) with

\[Q(f_{n})\leq 2m(X\setminus F_{n})\to 0,n\to\infty.\]

This shows (i).

The equivalence between (iv) and (v) is clear.

(iii)' \(\Rightarrow\) (iii): By Lemma 2.7 the metric \(d_{w}\) is intrinsic with respect to the finite measure \(m\). To show that it induces the discrete topology we note that finiteness of \(d_{w}\)-balls implies that for all \(R>0\) and \(x\in X\) the set

\[\{y\in X\mid y\sim x\text{ with }w(x,y)<R\}\]

is finite (otherwise the \(R\)-ball around \(x\) would contain infinitely many points).
This is known as essential local finiteness of \(w\) and, according to [13, Lemma 2.2], implies that \(d_{w}\) induces the discrete topology. (iii) \(\Rightarrow\) (iii)': We choose \(w:=\sigma\). Then, \(w\) is adapted to a finite measure \(m\) and \(d_{\sigma}\) is then an intrinsic metric with respect to \(m\) with \(\sigma\leq d_{\sigma}\) by Lemma 2.7. In particular, balls with respect to \(d_{w}\) are contained in the corresponding balls with respect to \(\sigma\) and are, hence, finite. **Remark**.: (a) The equivalence between (i) and (iv) can be seen as a special instance of the recurrence theory developed by the third author in his (unpublished) PhD thesis [21]. (b) Clearly, a metric with finite distance balls must induce the discrete topology. 3. In the proof of (ii) \(\Rightarrow\) (iii) we have seen that for \(f\) of finite energy with \(\liminf_{x\to\infty}f(x)=\infty\) the intrinsic (pseudo)metric \(\sigma_{f}\) has finite distance balls. There is a converse of sorts to this: Let \(\sigma\) be a metric and define for \(x\in X\) the function \(f_{x}\) by \(f_{x}(y):=\sigma(y,\{x\})\). Then, the distance balls around one \(x\in X\) are finite if and only if the distance balls around any \(x\in X\) are finite and this holds if and only if \(\liminf_{y\to\infty}f_{x}(y)=\infty\) holds for one (all) \(x\in X\). 4. The proof of (iii) \(\Rightarrow\) (i) only uses that the balls of radius \(1\) are finite. In fact, the number \(1\) is irrelevant. It suffices that there is an \(r>0\) such that all balls of radius \(r\) are finite. However, if \(\sigma\) is an intrinsic metric all of whose distance balls of radius \(r\) are finite then for any sequence \(F_{n}\) of finite sets in \(X\) with \(F_{n}\subseteq F_{n+1}\) and \(\cup F_{n}=X\) we can define \(f:=\sum_{n=1}^{\infty}\sigma(F_{n},\cdot)\). Then, \(f\) will be well-defined with \(\liminf_{x\to\infty}f(x)=\infty\) (by finiteness of \(r\)-balls). With a suitable choice of \(F_{n}\) then \(f\) will have finite energy and \(\sigma+\sigma_{f}\) will be an intrinsic metric with respect to a finite measure that has finite distance balls. 5. The existence of an intrinsic metric with respect to a (finite) measure \(m\) that has finite distance balls has strong consequences. In particular, as observed in [8], it implies that associated graph Laplacians on \(\ell^{2}(X,m)\) (and more general magnetic Schrodinger operators [7, 20]) are essentially self-adjoint. It is somewhat surprising that recurrence implies essential self-adjointness for a particular finite measure, as in general recurrence is strictly weaker than essential self-adjointness for all finite measures. We refer to discussion after Theorem 11.6.15 in the survey [20]. This survey contains a version of the previous theorem, which was first obtained in the second author's master's thesis [17]. We are now going to derive some consequence of the preceding theorem. As a a first consequence of it we obtain an alternative proof for the (by now) classical recurrence criterion of Yamasaki [28]. **Corollary 4.3** (Yamasaki's criterion).: _Let \(G=(X,b)\) be a locally finite graph. Then \(G\) is recurrent if and only if the set of all infinite paths is null._ Proof.: Let \(G\) be recurrent. According to Theorem 4.2 there exists a finite measure \(m\) and a weight \(w\) adapted to \(m\) such that the intrinsic path metric \(d_{w}\) has finite distance balls. 
As \(w\) is adapted to \(m\) we have \[\sum_{x,y}b(x,y)w(x,y)^{2}\leq 2m(X)<\infty.\] It suffices to show that the length \(L_{w}(\gamma)\) of any infinite path is \(\infty\). We consider two cases: Case 1: The path \(\gamma\) leaves any finite set. Then, the path leaves in particular any ball of finite radius (w.r.t. \(d_{w}\)). Hence, the path must have infinite length (as the metric is a path metric). Case 2: The path \(\gamma\) stays within a fixed finite set. Then, it must have infinite length anyway. Suppose that the set of all infinite paths is null and let \(w\) be a corresponding edge weight. The summability condition on \(w\) implies that \[m_{w}(x):=\frac{1}{2}\sum_{y\in X}b(x,y)w(x,y)^{2}\] is a finite measure. We consider the path metric \(d_{w}\) induced by \(w\). Lemma 2.7 (a) ensures that it is intrinsic with respect to \(m_{w}\). Thus, \(d_{w}\) is an intrinsic metric with respect to a finite measure. Since all infinite paths have infinite length, Theorem 2.10 implies that \(d_{w}\) has finite distance balls. This yields recurrence by Theorem 4.2. **Remark**.: For proving nullity of the set of all paths on recurrent graphs we did not use local finiteness. Another consequence of our characterization of recurrence is vanishing of the capacity of all boundaries of metric completions in the recurrent case. **Corollary 4.4**.: _Let \(G=(X,b)\) be a recurrent infinite graph. For any finite measure \(m\) on \(X\) and any metric \(\sigma\) on \(X\) that induces the discrete topology we have_ \[\operatorname{cap}_{m}(\partial_{\sigma}X)\ =\ 0.\] Proof.: This follows immediately from (v) of the previous theorem as \(\partial_{\sigma}X\) is contained in \(X\setminus F\) for any finite \(F\). In some cases the vanishing of the capacity of the boundary is equivalent to recurrence. For this one needs that \(\sigma\) is intrinsic with respect to the finite measure \(m\) and some more geometric data. In the following theorem we discuss two situations where this is the case. **Theorem 4.5** (Capacity criterion).: _Let \(G=(X,b)\) be a graph. Let \(\sigma\) be a metric that induces the discrete topology and is intrinsic with respect to a finite measure \(m\). Then \(G\) is recurrent if \(\operatorname{cap}_{m}(\partial_{\sigma}X)=0\) and one of the following conditions is satisfied:_ 1. \(G\) _is locally finite._ 2. \((X,\sigma)\) _is totally bounded._ Proof.: (a): According to Theorem 4.2 it suffices to construct an intrinsic metric \(e\) with respect to a finite measure that has finite distance balls and induces the discrete topology on \(X\). The metric \(e\) that we construct is a path metric. Since \(G\) is locally finite, it automatically induces the discrete topology. By the discrete Hopf-Rinow theorem, Theorem 2.10, the finiteness of distance balls is equivalent to the completeness of \((X,e)\). According to Lemma 3.3 the assumption \(\operatorname{cap}_{m}(\partial_{\sigma}X)=0\) yields a function \(f\in\mathfrak{D}(G)\) with \(\liminf_{x\to\partial_{\sigma}X}f(x)=\infty.\) We let \(e:=d_{\sigma+\sigma_{f}}\) be the path metric that is induced by the weight \(\sigma+\sigma_{f}\) with \(\sigma_{f}(x,y)=|f(x)-f(y)|\). Lemma 2.5 shows that the pseudometric \(\sigma_{f}\) is intrinsic with respect to a finite measure and so \(\sigma+\sigma_{f}\) is intrinsic with respect to a finite measure. We infer from Lemma 2.7 that also the induced path metric \(e\) is intrinsic with respect to a finite measure. It remains to show the completeness of \((X,e)\). 
Let \((x_{n})\) be Cauchy with respect to \(e\). Lemma 2.7 yields \(e\geq\sigma+\sigma_{f}\geq\sigma\), so that \((x_{n})\) must also be a Cauchy sequence with respect to \(\sigma\). Due to completeness it has a limit \(x\in\overline{X}^{\sigma}\). We show that \(x\in X\) and that \((x_{n})\) also converges to \(x\) with respect to \(e\) by considering two cases: Case 1: \(x\in\partial_{\sigma}X\): \(\liminf_{y\to\partial_{\sigma}X}f(y)=\infty\) yields \(\liminf_{n\to\infty}f(x_{n})=\infty\), so that for each \(m\in\mathbb{N}\) \[e(x_{m},x_{n})\geq\sigma_{f}(x_{n},x_{m})=|f(x_{n})-f(x_{m})|\] is unbounded in \(n\). In particular, this contradicts the assumption that \((x_{n})\) is Cauchy with respect to \(e\). Case 2: \(x\in X\): Since \(\sigma\) induces the discrete topology on \(X\), convergence with respect to \(\sigma\) to some point in \(X\) yields that \((x_{n})\) must eventually be constant. Hence, it also converges with respect to \(e\). (b): By assumption \(\overline{X}^{\sigma}\) is compact. Hence, vanishing capacity of \(\partial_{\sigma}X\) implies recurrence by Theorem 4.2. **Remark**.: 1. Part (a) of this Theorem is a generalization of [8, Theorem 3], which only treats certain path metrics. Note that for finite underlying measures the equality \(D(Q)=D(Q^{\max})\) discussed in this reference is equivalent to recurrence, see e.g. [19, Theorem 6.5]. 2. The condition of \((X,\sigma)\) being totally bounded means that it can be isometrically embedded into a compact metric space. Below we consider examples of bounded discrete \(X\subseteq\mathbb{R}^{2}\) equipped with the Euclidean metric. 3. For general graphs it remains an open question whether or not the previous theorem is true. 4. For locally finite \(G=(X,b)\) we established the equivalence of the following assertions. 1. \(G\) is recurrent. 2. For one/all intrinsic metrics \(\sigma\) with respect to a finite measure \(m\) that induce the discrete topology we have \[\operatorname{cap}_{m}(\partial_{\sigma}X)=0.\] 3. The set of all infinite paths is null. The implication (iii) \(\Rightarrow\) (ii) can be seen as a sort of converse to Theorem 3.6 when considering the set \(A=\partial_{\sigma}X\). ## 5. Resolvable graphs and harmonic functions In this section we turn to the study of transient graphs. By the results of the last section, transience is characterized by positivity of the capacity of (suitable) boundaries. Here, we turn to a different aspect based on a characterization of recurrence and transience in terms of superharmonic functions (read on for the precise definition). Specifically, a graph is transient if and only if it admits non-constant superharmonic functions of finite energy (see e.g. [12]). In general, these superharmonic functions will not be harmonic. This stimulates interest in those (transient) graphs which admit non-constant harmonic functions of finite energy. The aim of this section is to derive a capacity based sufficient condition for existence of such functions. To this end, we introduce strong resolvability for graphs, which is a somewhat stronger property than resolvability that was introduced in [2]. We prove that (most) transient strongly resolvable graphs admit harmonic functions. **Definition 5.1** (Resolvability).: A graph \(G=(X,b)\) is called _resolvable_ if there is an edge weight \(w\) with \[\sum_{x,y\in X}b(x,y)w(x,y)^{2}<\infty\] such that for every point \(x\in\partial_{d_{w}}X\) the set of paths converging to \(x\) with respect to \(d_{w}\) is null. 
In this case, \(w\) is called a _resolving weight_ for \(G\).

The previous definition relied on the concept of paths. We now aim at a path-free definition which captures essentially the same concept. This yields the following definition.

**Definition 5.2** (Strong resolvability).: A graph \(G=(X,b)\) is called _strongly resolvable_ if there exists an intrinsic metric \(\sigma\) with respect to a finite measure \(m\) that induces the discrete topology such that \(\operatorname{cap}_{m}(\{x\})=0\) for all \(x\in\partial_{\sigma}X\). In this case, \(\sigma\) is called a _resolving metric_ for \(G\).

**Proposition 5.3**.: _A strongly resolvable graph is resolvable._

Proof.: Let \(\sigma\) be a resolving metric for \(G\) that is intrinsic with respect to the finite measure \(m\). By Lemma 2.7, the path metric \(d_{\sigma}\) induced by \(\sigma\) satisfies \(\sigma\leq d_{\sigma}\). Hence, there is a continuous map \(\iota\colon\overline{X}^{d_{\sigma}}\to\overline{X}^{\sigma}\) that extends the identity on \(X\). Let \(x\in\partial_{d_{\sigma}}X\). We first show that \(\iota(x)\in\partial_{\sigma}X\). Suppose that this is not the case, i.e. \(\iota(x)\in X\). We choose a sequence \((x_{n})\) in \(X\) with \(x_{n}\to x\) with respect to \(d_{\sigma}\). Since \(x_{n}\to\iota(x)\) with respect to \(\sigma\) and \(\sigma\) induces the discrete topology, \((x_{n})\) must be eventually constant. Hence, \(x=\iota(x)\in X\), a contradiction.

Any path converging to \(x\) with respect to \(d_{\sigma}\) converges to \(\iota(x)\) with respect to \(\sigma\). Hence, the set of all such paths is contained in \(\Gamma_{\{\iota(x)\},\sigma}\), the set of paths having \(\iota(x)\) as an accumulation point with respect to \(\sigma\). It therefore suffices to show that the latter set is null. Since \(\iota(x)\in\partial_{\sigma}X\), we have \(\operatorname{cap}_{m}(\{\iota(x)\})=0\) by assumption. Theorem 3.6 implies that \(\Gamma_{\{\iota(x)\},\sigma}\) is null.

**Remark**.: 1. Strong resolvability transfers the geometric notion of resolvability introduced in [2] to a notion of potential theory. This has two advantages. First, with strong resolvability one can also treat non-locally finite graphs, as potential theory does not distinguish between locally finite and non-locally finite graphs. Indeed, for notions invoking infinite paths the non-locally finite case in general poses problems, as e.g. the discrete Hopf-Rinow theorem (Theorem 2.10) does not hold on non-locally finite graphs, see the discussion in [8, Appendix A]. Second, strong resolvability is also available on more general spaces that admit a potential theory, e.g. Riemannian manifolds, fractals or metric graphs.

2. As discussed after Theorem 3.6 we believe that \(\operatorname{cap}_{m}(\{x\})=0\) for all \(x\in\partial_{\sigma}X\) is strictly stronger than \(\Gamma_{\{x\},\sigma}\) being null for all \(x\in\partial_{\sigma}X\). Hence, strong resolvability seems strictly stronger than resolvability (even though we do not have concrete examples). However, below we shall see that planar graphs, the main examples for resolvable graphs in [2], are also strongly resolvable.

**Definition 5.4** ((Super)Harmonic functions).: Let \(G=(X,b)\) be a graph. A function \(f\colon X\to\mathbb{R}\) is called _superharmonic_ if for all \(x\in X\) it satisfies

\[f(x)\geq\frac{1}{\deg(x)}\sum_{y\in X}b(x,y)f(y),\]

where we assume absolute convergence of the sum on the right side of the equation.
The function \(f\) is called _harmonic_ if both \(f\) and \(-f\) are superharmonic. We write \(\mathfrak{H}(G)\) for the _space of harmonic functions_.

An important property of these functions is that on transient graphs functions of finite energy are uniquely represented as sums of harmonic functions of finite energy and functions in \(\mathfrak{D}_{0}(G)\), see Theorem 6.3 in [22] for reference.

**Theorem 5.5** (Royden decomposition).: _Let \(G=(X,b)\) be a transient graph. For all \(f\in\mathfrak{D}(G)\) there exists a unique \(f_{0}\in\mathfrak{D}_{0}(G)\) and a unique harmonic \(f_{h}\in\mathfrak{D}(G)\) such that_

\[f=f_{0}+f_{h}\]

_and_

\[Q(f)=Q(f_{0})+Q(f_{h}).\]

_The function \(f_{h}\) is the unique function in \(\mathfrak{D}(G)\) that satisfies_

\[Q(f_{h})=\inf\{Q(f-g)\mid g\in\mathfrak{D}_{0}(G)\}=\inf\{Q(f-g)\mid g\in C_{c}(X)\}.\]

_Moreover, if \(f\) is bounded, then \(f_{0}\) and \(f_{h}\) are bounded as well._

Resolvability was introduced to prove the existence of non-constant harmonic functions on transient locally finite resolvable graphs. This result of Benjamini and Schramm carries over to strongly resolvable graphs that need not be locally finite. Before we prove this we need a result on harmonic functions induced by Lipschitz functions.

Let \(\sigma\in\mathfrak{M}(G)\) be intrinsic with respect to the finite measure \(m\) and suppose now that \((X,b)\) is transient. Using the Royden decomposition we define the map

\[\Phi\colon\mathfrak{D}(G)\cap C_{b}(\overline{X}^{\sigma})\to\mathfrak{D}(G)\cap\mathfrak{H}(G),\quad f\mapsto f_{h}.\]

Since the Royden decomposition preserves boundedness and \(m\) is finite, we even obtain that \(\Phi\) maps to \(H^{1}(G,m)\). The following is the main observation in this section, which will be used to construct many non-constant harmonic functions of finite energy.

**Lemma 5.6**.: _Let \((X,b)\) be a graph and let \(\sigma\) be an intrinsic metric with respect to a finite measure \(m\) that induces the discrete topology such that \(\operatorname{cap}_{m}(\partial_{\sigma}X)>0\). Moreover, let_

\[S=\{x\in\partial_{\sigma}X\mid\operatorname{cap}_{m}(U\cap\partial_{\sigma}X)>0\text{ for all open }x\in U\subseteq\overline{X}^{\sigma}\}.\]

_Then_

\[\ker\Phi\subseteq\{f\in\mathfrak{D}(G)\cap C_{b}(\overline{X}^{\sigma})\mid f|_{S}=0\}.\]

_In particular, if \(\Phi(f)\) is constant, then \(f\) is constant on \(S\)._

Proof.: Let \(f\in\mathfrak{D}(G)\cap C_{b}(\overline{X}^{\sigma})\) with \(f|_{S}\neq 0\). We show \(f_{h}=\Phi(f)\neq 0\). Without loss of generality there exists \(\varepsilon>0\) such that \(f(x)\geq\varepsilon\) for some \(x\in S\). Using Theorem 5.5 we choose a sequence \((g_{n})\) in \(C_{c}(X)\) with \(Q(f_{h})=\inf_{n\geq 1}Q(f-g_{n})=\lim_{n\to\infty}Q(f-g_{n})\). As noted above we have \(f_{h}\in H^{1}(G,m)\).

_Claim:_ The sequence \((g_{n})\) can be chosen such that \(f-g_{n}\to f_{h}\) in \(H^{1}(G,m)\).

_Proof of the claim._ It suffices to show that \((g_{n})\) can be chosen such that \(f-g_{n}\to f_{h}\) in \(\ell^{2}(X,m)\). We can assume \(\left\|g_{n}\right\|_{\infty}\leq 2\left\|f\right\|_{\infty}\), as otherwise we could write

\[((f-g_{n})\wedge\left\|f\right\|_{\infty})\vee(-\left\|f\right\|_{\infty})=f-h_{n}\]

with appropriate \(h_{n}\in C_{c}(X)\).
These satisfy \(\left\|h_{n}\right\|_{\infty}\leq 2\left\|f\right\|_{\infty}\) and, using the compatibility of \(Q\) with contractions, also \[Q(f_{h})\leq Q(f-h_{n})\leq Q(f-g_{n}).\] Now assume \((g_{n})\) is chosen with \(\left\|g_{n}\right\|_{\infty}\leq 2\left\|f\right\|_{\infty}\). The Royden decomposition shows \(Q(f_{0}-g_{n})\to 0\). Moreover, our assumption \(\mathrm{cap}_{m}(\partial_{\sigma}X)>0\) implies transience of the graph, see Corollary 4.4. On transient graphs convergence on \(\mathfrak{D}_{0}(G)\) with respect to \(Q\) implies pointwise convergence, see e.g. [11, Theorem B.2]. Hence, we obtain \(f_{0}-g_{n}\to 0\) pointwise, which implies \(f-g_{n}\to f_{h}\) pointwise. Since the functions \(f-g_{n}\) are uniformly bounded by \(3\left\|f\right\|_{\infty}\) and since \(m\) is a finite measure, Lebesgue's dominated convergence theorem yields \(f-g_{n}\to f_{h}\) in \(\ell^{2}(X,m)\), which shows the claim. Let now \((g_{n})\) be a sequence as in the claim. Since \(\sigma\) induces the discrete topology, the compactly supported function \(g_{n}\) can be continuously extended to \(\overline{X}^{\sigma}\) by letting \(g_{n}=0\) on \(\partial_{\sigma}X\). Then \(f-g_{n}\) is continuous on \(\overline{X}^{\sigma}\) with \(f-g_{n}=f\) on \(\partial_{\sigma}X\). By the continuity of \(f\) there exists a relatively open neighborhood \(U_{x}\subseteq\partial_{\sigma}X\) of \(x\) in \(\partial_{\sigma}X\) with \(f-g_{n}=f\geq\varepsilon/2\) on \(U_{x}\) for all \(n\in\mathbb{N}\). Using that \(x\in S\) we obtain \(\mathrm{cap}_{m}(U_{x})>0\). By the continuity of \(f-g_{n}\) there exists an open \(O_{n}\subseteq\overline{X}^{\sigma}\) with \(U_{x}\subseteq O_{n}\) and \(f-g_{n}\geq\varepsilon/4\) on \(O_{n}\) such that \(4(f-g_{n})/\varepsilon\geq 1\) on \(X\cap O_{n}\). The way the capacity is defined for subsets of the boundary yields \[\frac{16}{\varepsilon^{2}}\left\|f-g_{n}\right\|_{Q,m}^{2}\geq\mathrm{cap}_{m }(X\cap O_{n})\geq\mathrm{cap}_{m}(U_{x}).\] Using \(f-g_{n}\to f_{h}\) in \(H^{1}(G,m)\), we obtain \[\left\|f_{h}\right\|_{Q,m}^{2}=\lim_{n\to\infty}\left\|f-g_{n}\right\|_{Q,m}^{ 2}\geq\frac{\varepsilon^{2}}{16}\mathrm{cap}_{m}(U_{x})>0\] and arrive at \(f_{h}\neq 0\). For the 'In particular'-part assume that \(\Phi(f)\) is constant equal to \(C\). Since the harmonic part of a constant function is just the constant function itself, we obtain \(\Phi(f-C)=\Phi(f)-C=0\). Hence, what we previously proved shows \(f=C\) on \(S\). **Remark**.: The set \(S\) in this lemma is the support of the outer measure \(\mathrm{cap}_{m}\) restricted to subsets of \(\partial_{\sigma}X\). Assume \(\sigma\in\mathfrak{M}(G)\) is intrinsic with respect to the finite measure \(m\). We denote the set of bounded Lipschitz functions with respect to \(\sigma\) by \(\mathrm{Lip}_{b}(X)=\mathrm{Lip}_{b,\sigma}(X)\). If \(f\in\mathrm{Lip}_{b}(X)\), then Lemma 2.4 shows that \(f\in H^{1}(G,m)\) (the lemma implies \(f\in\mathfrak{D}(G)\) and the boundedness of \(f\) yields \(f\in\ell^{2}(X,m)\)). Moreover, \(f\) can be uniquely extended to a Lipschitz function \(\overline{X}^{\sigma}\), which we also denote by \(f\) with a slight abuse of notation. Hence, \(\mathrm{Lip}_{b}(X)\subseteq\mathfrak{D}(G)\cap C(\overline{X}^{\sigma})\). This observation is used in the proof of the following theorem. 
**Theorem 5.7**.: _Let \((X,b)\) be a graph and let \(\sigma\) be an intrinsic metric with respect to a finite measure \(m\) that induces the discrete topology _such that \(\operatorname{cap}_{m}(\partial_{\sigma}X)>0\). Moreover, let_ \[S=\{x\in\partial_{\sigma}X\mid\operatorname{cap}_{m}(U\cap\partial_{\sigma}X)>0 \text{ all open }x\in U\subseteq\overline{X}^{\sigma}\}.\] _Then \(\dim(\mathfrak{D}(G)\cap\mathfrak{H}(G))\geq|S|\). In particular, if \(|S|\geq 2\), then the graph admits a non-constant harmonic function of finite energy._ Proof.: Without loss of generality we can assume \(|S|\geq 2\) for otherwise the statement is trivial because constant functions belong to \(\mathfrak{D}(G)\cap\mathfrak{H}(G)\). Let \(f_{1},\dots,f_{n}\in\operatorname{Lip}_{b}(X)\). The previous lemma shows that if \(f_{1}|_{S},\dots,f_{n}|_{S}\) are linearly independent, then \(\Phi(f_{1}),\dots,\Phi(f_{n})\) are linearly independent in \(\mathfrak{D}(G)\cap\mathfrak{H}(G)\). With this at hand the statement follows from \(\dim\operatorname{Lip}_{b}(X)\geq\dim\operatorname{Lip}_{b}(S)\geq|S|\) (use that any bounded Lipschitz function on \(S\) can be extended to a bounded Lipschitz function on \(\overline{X}^{\sigma}\)). **Corollary 5.8** (Existence of non-constant harmonic functions).: _Let \(G=(X,b)\) be a strongly resolvable graph and let \(\sigma\) be a resolving metric that is intrinsic with respect to the finite measure \(m\). If \(\operatorname{cap}_{m}(\partial_{\sigma}X)>0\), then the space of bounded harmonic functions of finite energy is infinite dimensional._ Proof.: Let \(S\) be the support of the capacity on the boundary introduced in the previous theorem. Its complement is given by \[\partial_{\sigma}X\backslash S=\{x\in\partial_{\sigma}X\mid\text{ ex. open }x\in U\subseteq\overline{X}^{\sigma}\text{ with }\operatorname{cap}_{m}(U\cap\partial_{\sigma}X)=0\}.\] Since \((\overline{X}^{\sigma},\sigma)\) is separable, its topology has a countable basis. Hence, the \(\sigma\)-sub-additivity of \(\operatorname{cap}_{m}\) yields \(\operatorname{cap}_{m}(\partial_{\sigma}X\setminus S)=0\). Using the subadditivity of the capacity again shows \[\operatorname{cap}_{m}(S)=\operatorname{cap}_{m}(\partial_{\sigma}X)>0.\] By our assumption every point in \(S\) has capacity \(0\). Hence, \(S\) must be uncountable for otherwise \(\sigma\)-subadditivity would imply \(\operatorname{cap}_{m}(S)=0\), which contradicts our previous considerations. With this at hand the claim follows from the previous theorem. This theorem is a version of [2, Theorem 3.1] for strongly resolvable but possibly non-locally finite graphs. We had to replace the transience assumption of [2] by the stronger \(\operatorname{cap}_{m}(\partial_{\sigma}X)>0\). As discussed in Theorem 4.5, for some classes of graphs transience implies this condition. We mention these situations in the following corollary. **Corollary 5.9**.: _Let \(G=(X,b)\) be a transient, strongly resolvable graph and let one of the following conditions be fulfilled:_ _(a)_ \(G\) _is locally finite._ _(b)_ \(\sigma\) _is a resolving metric and_ \((X,\sigma)\) _is totally bounded._ _Then the space of bounded harmonic functions of finite energy is infinite dimensional._ Proof.: By Theorem 4.5 both conditions imply \(\mathrm{cap}_{m}(\partial_{\sigma}X)>0\) with respect to a resolving metric \(\sigma\). Hence, the claim follows from the previous corollary. 
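On a finite graph the harmonic part produced by the map \(\Phi\) can be computed by plain linear algebra, which may help to visualize the statements above. The following sketch (an illustration of ours with an arbitrary small example graph; it is not a substitute for the infinite-volume arguments) prescribes values on a "boundary" set and solves for the function that is harmonic at the remaining vertices.

```python
import numpy as np

# Finite-graph analogue (our own toy example): prescribe values on a "boundary" set B
# and solve for a function that is harmonic at the remaining vertices, i.e.
# h(x) = (1/deg(x)) * sum_y b(x, y) h(y) for x outside B.

b = np.array([  # symmetric edge weights b(x, y) on 5 vertices (0 means no edge)
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 0, 1],
    [0, 1, 0, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

boundary = {3: 0.0, 4: 1.0}                       # boundary vertices and prescribed values
interior = [v for v in range(5) if v not in boundary]

deg = b.sum(axis=1)
L = np.diag(deg) - b                              # weighted graph Laplacian

# Harmonicity at interior vertices means (L h)|_interior = 0, so solve
# L[int, int] h_int = -L[int, bnd] phi_bnd for the interior values.
phi = np.array([boundary[v] for v in boundary])
A = L[np.ix_(interior, interior)]
rhs = -L[np.ix_(interior, list(boundary))] @ phi
h_int = np.linalg.solve(A, rhs)

h = np.zeros(5)
h[interior] = h_int
for v, val in boundary.items():
    h[v] = val

print("h =", np.round(h, 4))
print("residual of harmonicity at interior vertices:", np.round((L @ h)[interior], 10))
```

The solvability of this linear system for connected finite graphs with non-empty boundary is elementary; the point of Lemma 5.6 and Theorem 5.7 is that a comparable construction survives passage to the metric boundary when the capacity does not vanish.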
Constructing harmonic functions from functions on a potential theoretic boundary (the support of the capacity on the metric boundary) is reminiscent of solving the Dirichlet problem. Under suitable additional conditions on the graph and on the function on the boundary this can be made precise.

**Remark** (Solving the Dirichlet problem on \(S\) for uniformly transient graphs).: Let \((X,b)\) be a graph with \(\mathfrak{D}_{0}(G)\subseteq C_{0}(X)\), where \(C_{0}(X)\) denotes the uniform closure of \(C_{c}(X)\). Graphs with this property are called _uniformly transient_. As the name suggests, uniformly transient graphs are transient, see [11] for this fact and further background on uniform transience. Let \(\sigma\in\mathfrak{M}(G)\) be intrinsic with respect to the finite measure \(m\) and let \(S\) denote the support of the capacity on the boundary discussed above. Then for any bounded Lipschitz function \(\varphi\colon S\to\mathbb{R}\) the Dirichlet problem

\[\begin{cases}h\in\mathfrak{H}(G)\cap\mathfrak{D}(G)\\ h\in C_{b}\left(\overline{X}^{\sigma}\right)\text{ with }h|_{S}=\varphi\end{cases}\]

has a unique solution.

Uniqueness: This follows directly from Lemma 5.6.

Existence: The bounded Lipschitz function \(\varphi\colon S\to\mathbb{R}\) can be extended to a bounded Lipschitz function \(f\colon\overline{X}^{\sigma}\to\mathbb{R}\). Consider the Royden decomposition \(f=f_{0}+f_{h}\) with \(f_{0}\in\mathfrak{D}_{0}(G)\subseteq C_{0}(X)\) and harmonic \(f_{h}\in\mathfrak{D}(G)\). Any sequence in \(X\) converging to a point in \(\partial_{\sigma}X\) must eventually leave any finite set. Hence, \(f_{0}\) can be extended to a continuous function on \(\overline{X}^{\sigma}\) by letting \(f_{0}=0\) on \(\partial_{\sigma}X\). This shows that also \(f_{h}=f-f_{0}\) has a bounded continuous extension to \(\overline{X}^{\sigma}\) with \(f_{h}|_{\partial_{\sigma}X}=f|_{\partial_{\sigma}X}\). By construction this yields \(f_{h}|_{S}=\varphi\).

## 6. Planar and canonically compactifiable graphs

In this section we show that graphs of bounded geometry which are subordinate to circle packings, and hence locally finite planar graphs of bounded geometry, are always strongly resolvable. Moreover, we prove that canonically compactifiable graphs are never strongly resolvable, showing that planar graphs of bounded geometry can never be canonically compactifiable.

First we recall the notion of circle packings and their contact graphs. For an extensive background on these topics we refer to the book [15].

**Definition 6.1** (Circle packing and subordinated graphs).: A _circle packing_ is a set \(X\neq\emptyset\) and two maps \(r\colon X\to(0,\infty)\) and \(\varphi\colon X\to\mathbb{R}^{2}\) such that the collection of closed circles \(C_{x}=B_{r(x)}(\varphi(x))\), \(x\in X,\) in \(\mathbb{R}^{2}\) satisfies \(C_{x}^{\circ}\cap C_{y}^{\circ}=\emptyset\) whenever \(x\neq y\). It is called _bounded_ if \(\bigcup_{x\in X}C_{x}\) is a bounded set. An edge weight \(b\) on \(X\) is called _subordinate_ to the circle packing if \(b(x,y)>0\) implies \(C_{x}\cap C_{y}\neq\emptyset.\)

**Remark**.: In what follows we simply write \(C_{x},x\in X,\) to denote a circle packing.

The _contact graph_ or _nerve_ of a circle packing \(C_{x},x\in X,\) is the combinatorial graph on \(X\) with \(x\sim y\) if \(C_{x}\cap C_{y}\neq\emptyset\). Hence, an edge weight \(b\) on \(X\) is subordinate to the circle packing if and only if the induced discrete graph is a subgraph of the contact graph. The following is our main observation in this section.
**Theorem 6.2**.: _Let \(C_{x},x\in X,\) be a bounded circle packing and suppose \((X,b)\) is subordinate to the circle packing and has bounded geometry. Then \((X,b)\) is strongly resolvable. In particular, if \((X,b)\) is transient, then it possesses a non-constant harmonic function of finite energy._

Proof.: Let \(\Omega=\sup_{x\in X}\deg(x)\). Then \(\Omega<\infty\) due to \((X,b)\) having bounded geometry. As above we let \(r,\varphi\) denote the maps inducing the circle packing. We consider the metric \(\sigma\) on \(X\) defined by \(\sigma(x,y)=|\varphi(x)-\varphi(y)|.\) We first show that \(\sigma\) is an intrinsic metric with respect to a finite measure inducing the discrete topology. The assumption \(C_{x}^{\circ}\cap C_{y}^{\circ}=\emptyset\) for all \(x\neq y\) implies

\[\sigma(x,y)\geq r(x)+r(y)>r(x)\]

for all \(y\neq x\). Hence, \(\sigma\) induces the discrete topology. Since \(b\) is subordinate to the circle packing, we also have \(C_{x}\cap C_{y}\neq\emptyset\) whenever \(x\sim y\). For \(x\sim y\) this implies \(\sigma(x,y)=r(x)+r(y)\). We infer

\[\sum_{x,y\in X}b(x,y)\sigma(x,y)^{2} \leq 2\sum_{x,y\in X}b(x,y)(r(x)^{2}+r(y)^{2})\] \[\leq 4\Omega\sum_{x\in X}r(x)^{2}\] \[\leq\frac{4\Omega}{\pi}\lambda(A)<\infty,\]

with \(A=\bigcup_{x\in X}C_{x}\) and \(\lambda\) the Lebesgue measure. This shows that \(\sigma\) is intrinsic with respect to a finite measure \(m\).

Using that \(\varphi\) is an isometry we identify \((X,\sigma)\) with \(\varphi(X)\) in \(\mathbb{R}^{2}\). In particular, the boundary with respect to \(\sigma\) is just the Euclidean boundary. Given \(w\in\partial X\) we show \(\operatorname{cap}_{m}(\{w\})=0\). For \(r>0\) we consider the function

\[f_{r}\colon X\to\mathbb{R},\quad f_{r}(x)=\left(2-\frac{|x-w|}{r}\right)_{+}\wedge 1.\]

It satisfies \(f_{r}=1\) on \(B_{r}(w)\cap X\) and \(f_{r}=0\) on \(X\setminus B_{2r}(w)\). Moreover, for \(x\sim y\) we have

\[|f_{r}(x)-f_{r}(y)|^{2}\leq\frac{|x-y|^{2}}{r^{2}}=\frac{(r(x)+r(y))^{2}}{r^{2}}.\]

Next we compare \((r(x)+r(y))^{2}\) with \(\lambda((C_{x}\cup C_{y})\cap B_{2r}(w))\) as long as \(x\sim y\) and \(x,y\in B_{2r}(w)\). The boundary point \(w\) does not belong to the interior of the discs \(C_{x},C_{y}\). This leads to \(r(x),r(y)\leq 2r\). Using this observation and that \(C_{x},C_{y}\) are tangent, we obtain

\[\lambda((C_{x}\cup C_{y})\cap B_{2r}(w)) =\lambda(C_{x}\cap B_{2r}(w))+\lambda(C_{y}\cap B_{2r}(w))\] \[\geq C(r(x)^{2}+r(y)^{2})\]

for some constant \(C>0\) independent of \(x,y\) and \(r\) (for the last inequality we simply estimated the area of the intersection of two discs with the given parameters). Combining these estimates we infer

\[Q(f_{r}) \leq\frac{1}{2r^{2}}\sum_{x,y\in X}b(x,y)(r(x)+r(y))^{2}\] \[\leq\frac{1}{Cr^{2}}\sum_{x,y\in X}b(x,y)\lambda((C_{x}\cup C_{y})\cap B_{2r}(w))\] \[\leq\frac{2\Omega}{Cr^{2}}\lambda(B_{2r}(w))\] \[\leq\frac{8\pi\Omega}{C}.\]

Since \(m\) is finite, we also have \(\left\|f_{r}\right\|_{m}\to 0\), as \(r\to 0+\). Both observations combined imply that \((f_{r})\) is bounded in the Hilbert space \(H^{1}(G,m)\). Using the Banach-Saks theorem we obtain a decreasing sequence \(r_{k}\to 0\) such that

\[g_{n}=\frac{1}{n}\sum_{k=1}^{n}f_{r_{k}}\]

converges in \(H^{1}(G,m)\) to some \(g\in H^{1}(G,m)\). Since convergence in \(H^{1}(G,m)\) implies \(\ell^{2}(X,m)\)-convergence and since \(f_{r_{k}}\to 0\) in \(\ell^{2}(X,m)\), we obtain \(g=0\).
By construction we also have \(g_{n}\geq 1\) on \(B_{r_{n}}(w)\), which leads to

\[\operatorname{cap}_{m}(\{w\})\leq\inf_{n\in\mathbb{N}}\left(Q(g_{n})+\left\|g_{n}\right\|_{m}^{2}\right)=0.\]

The 'In particular'-part follows from Corollary 5.9 and the observation that \((X,\sigma)\) is totally bounded as it is isometric to a bounded and hence totally bounded subset of \(\mathbb{R}^{2}\).

**Remark**.: We do not assume local finiteness in the previous theorem. If \(\bigcup_{x\in X}C_{x}\) is not dense in \(\mathbb{R}^{2}\), then the assumption on the boundedness of the circle packing can be dropped. In this case, one just uses inversion at a circle in the complement of \(\bigcup_{x\in X}C_{x}\) to obtain a bounded circle packing with isomorphic contact graph. For more details see also the proof of the following corollary.

In the following corollary we call a weighted graph planar if the induced combinatorial graph is planar (for a precise definition of the latter see e.g. [14, Section 2.1]).

**Corollary 6.3**.: _Let \(G=(X,b)\) be a locally finite planar graph of bounded geometry. Then \(G\) is strongly resolvable. In particular, if \(G\) is transient, then \(G\) possesses a non-constant harmonic function of finite energy._

Proof.: According to Claim 4.3 in [15] any locally finite planar graph is isomorphic to the contact graph of a circle packing. We show that the circle packing can be chosen to be bounded. With this at hand the claim follows from the previous theorem.

We add one additional point \(o\) and one edge from \(o\) to a point in \(X\) such that the resulting graph \((X^{\prime},b^{\prime})\) is still planar. According to Claim 4.3 in [15] the graph \((X^{\prime},b^{\prime})\) is isomorphic to a contact graph of a circle packing \(C_{x},x\in X^{\prime}\). In order to make this circle packing bounded, we use inversion at the circle \(C_{o}\) corresponding to the new vertex \(o\). We denote the inversion map by \(\psi\). Since inversions map circles to circles, \(\psi(C_{x}),x\in X\), is a circle packing inside the bounded set \(C_{o}\). By construction its contact graph is the combinatorial graph underlying \((X,b)\).

**Remark**.: The existence of non-trivial harmonic functions on transient planar graphs of bounded geometry was one of the main results of [2]. Subsequently, even more explicit descriptions of all harmonic functions of planar graphs were given via boundaries of sphere packings [1] or square tilings [5]. For a unified approach we refer to [9].

Recently the class of canonically compactifiable graphs (see below for a definition) has gathered some attention. Our previous considerations allow us to show that locally finite planar graphs of bounded geometry are never canonically compactifiable. According to [6] a graph \(G=(X,b)\) is called _canonically compactifiable_ if \(\mathfrak{D}(G)\subseteq\ell^{\infty}(X)\) (see [18] for different equivalent characterizations as well). Examples are \(\mathbb{Z}^{n}\) with \(n\geq 3\), see [11, Section 6], or graphs \((X,b)\) for which

\[\sum_{x,y\in X}\frac{1}{b(x,y)}<\infty,\]

see [6, Example 4.6]. Note that the latter condition implies very large vertex degrees. We note the following.

**Theorem 6.4**.: _Infinite canonically compactifiable graphs are not strongly resolvable. In particular, locally finite infinite planar graphs of bounded geometry are not canonically compactifiable._

Proof.: Let \((X,b)\) be an infinite canonically compactifiable graph and let \(m\) be a finite measure on \(X\).
Canonical compactifiability yields \(H^{1}(G,m)\subseteq\ell^{\infty}(X)\). The closed graph theorem implies the existence of \(C>0\) such that

\[\left\|f\right\|_{\infty}^{2}\leq C(Q(f)+\left\|f\right\|_{m}^{2})\]

for all \(f\in H^{1}(G,m)\). This implies \(\operatorname{cap}_{m}(U)\geq 1/C\) for any \(\emptyset\neq U\subseteq X\), so that points in any metric boundary have capacity at least \(1/C\).

It remains to prove that for any intrinsic metric \(\sigma\) with respect to \(m\), which induces the discrete topology, the space \((X,\sigma)\) is not complete (and hence it has at least one boundary point). According to [18], \((X,b)\) being canonically compactifiable and \(\sigma\) being an intrinsic metric with respect to a finite measure imply that \((X,\sigma)\) is totally bounded. Hence, \(\overline{X}^{\sigma}\) is compact. But \((X,\sigma)\) is not compact as an infinite set with the discrete topology. This shows \(\partial_{\sigma}X=\overline{X}^{\sigma}\setminus X\neq\emptyset\).

The 'In particular'-part follows from the previous corollary.
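To close this section, here is a small numerical sanity check (a toy computation of ours, assuming unit edge weights and the \(n\times n\) grid packing of circles of radius \(1/2\)) of the key estimate from the proof of Theorem 6.2, namely that the total weighted square length of the edges is controlled by \(\frac{4\Omega}{\pi}\lambda\big(\bigcup_{x}C_{x}\big)\).

```python
import math
import itertools

# Toy check of the estimate  sum_{x,y} b(x,y) sigma(x,y)^2 <= (4*Omega/pi) * area(union of circles)
# for the n x n grid of unit-diameter circles (radius 1/2 at integer centers) with b = 1 on
# tangencies.  This is only an illustration of the inequality, not part of the proof.

n = 20
centers = list(itertools.product(range(n), repeat=2))   # phi(x) for each vertex x
radius = 0.5

# Ordered pairs of tangent neighbours (each undirected edge counted twice, as in the sum).
edges = [(p, q) for p in centers for q in centers
         if abs(p[0] - q[0]) + abs(p[1] - q[1]) == 1]

sigma_sq_sum = sum((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 for p, q in edges)

Omega = max(sum(1 for q in centers
                if abs(p[0] - q[0]) + abs(p[1] - q[1]) == 1) for p in centers)
area = len(centers) * math.pi * radius ** 2              # interiors are pairwise disjoint

rhs = 4 * Omega / math.pi * area
print(f"lhs = {sigma_sq_sum},  (4*Omega/pi)*area = {rhs:.1f},  inequality holds: {sigma_sq_sum <= rhs}")
```

For the \(20\times 20\) grid this gives \(1520\leq 1600\); the gap between the two sides shrinks (relatively) as the grid grows.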
2304.12688
Leveraging Audio-Tagging Assisted Sound Event Detection using Weakified Strong Labels and Frequency Dynamic Convolutions
Jointly learning from a small labeled set and a larger unlabeled set is an active research topic under semi-supervised learning (SSL). In this paper, we propose a novel SSL method based on a two-stage framework for leveraging a large unlabeled in-domain set. Stage-1 of our proposed framework focuses on audio-tagging (AT), which assists the sound event detection (SED) system in Stage-2. The AT system is trained utilizing a strongly labeled set converted into weak predictions referred to as weakified set, a weakly labeled set, and an unlabeled set. This AT system then infers on the unlabeled set to generate reliable pseudo-weak labels, which are used with the strongly and weakly labeled set to train a frequency dynamic convolutional recurrent neural network-based SED system at Stage-2 in a supervised manner. Our system outperforms the baseline by 45.5% in terms of polyphonic sound detection score on the DESED real validation set.
Tanmay Khandelwal, Rohan Kumar Das, Andrew Koh, Eng Siong Chng
2023-04-25T09:43:40Z
http://arxiv.org/abs/2304.12688v1
Leveraging Audio-Tagging Assisted Sound Event Detection using Weakified Strong Labels and Frequency Dynamic Convolutions ###### Abstract Jointly learning from a small labeled set and a larger unlabeled set is an active research topic under semi-supervised learning (SSL). In this paper, we propose a novel SSL method based on a two-stage framework for leveraging a large unlabeled in-domain set. Stage-1 of our proposed framework focuses on audio-tagging (AT), which assists the sound event detection (SED) system in Stage-2. The AT system is trained utilizing a strongly labeled set converted into weak predictions referred to as weakified set, a weakly labeled set, and an unlabeled set. This AT system then infers on the unlabeled set to generate reliable pseudo-weak labels, which are used with the strongly and weakly labeled set to train a frequency dynamic convolutional recurrent neural network-based SED system at Stage-2 in a supervised manner. Our system outperforms the baseline by 45.5% in terms of polyphonic sound detection score on the DESED real validation set. semi-supervised learning, sound event detection, two-stage setup, pseudo-labels ## I Introduction Sound aids us in perceiving environmental changes and comprehending our surroundings. Humans have an in-built system for detecting and categorizing sound events in our various environments. The SED applications include audio surveillance in a variety of environments, such as smartphones [1, 2] and cities [3, 4]. The models developed for SED require strongly labeled data to accurately predict the temporal onset and offset. The manual annotation process for generating strong labels is expensive and time-consuming, and the annotations vary greatly due to the subjective judgment of the annotators. On the other hand, annotating the entire clip with audio labels to generate weak labels is much easier. Furthermore, collecting unlabeled datasets in-domain is equally simple. To leverage this readily available unlabeled set with a small amount of labeled set, several previous works have employed semi-supervised learning (SSL) techniques. The authors of [5] used mean-teacher (MT) learning method that employs exponential moving average, whereas an unsupervised data-augmentation is used in [6], which enforces the model to be consistent with respect to the noise added using data-augmentation (DA) techniques. In [7], the authors used interpolation consistency training (ICT) and shift consistency training, whereas in [8], they self-trained to produce pseudo-labels and train on them. To use the unlabeled set for supervised learning, the model generates pseudo-labels [9, 10, 11]. The pseudo-labeling process is similar to entropy minimization [12] and helps in cases where it can recover the cluster structure among the various classes [13]. It requires a sufficient number of labeled points to effectively learn the differentiation between the clusters. The labels for pseudo-labels are determined by the confidence threshold. The clip-wise labels above the confidence threshold are used as true labels for clips [11] in the typical supervised loss function. The model is then trained using labeled and unlabeled sets simultaneously. In addition to SSL techniques, past works have employed various DA techniques like SpecAugment [14], time-shift [9], pitch-shift [15], and mixup [16] to increase diversity and reduce overfitting. 
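To make the pseudo-labeling step described above concrete, the following is a minimal sketch (not the exact recipe of any of the cited works): clip-level predictions of a trained model are turned into weak pseudo-labels only where they exceed a confidence threshold. The model interface, tensor shapes, and the threshold value of 0.5 are illustrative assumptions.

```python
import torch

# Sketch of confidence-based pseudo-labelling for audio tagging.  The model is assumed
# to return clip-level logits of shape (n_clips, n_classes); this interface and the
# threshold are our own illustrative choices.

@torch.no_grad()
def make_pseudo_weak_labels(model, unlabeled_clips, threshold=0.5):
    model.eval()
    probs = torch.sigmoid(model(unlabeled_clips))        # clip-level probabilities
    pseudo = (probs >= threshold).float()                 # multi-hot pseudo-weak labels
    confident = probs.max(dim=1).values >= threshold      # keep clips with at least one confident tag
    return pseudo[confident], confident

# Usage sketch: pseudo, mask = make_pseudo_weak_labels(at_model, features)
# The retained clips are then added to the weakly labelled pool for supervised training.
```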
The detection and classification of acoustic scenes and events (DCASE) 2022 Task 4 focuses on SED-based SSL to utilize labeled and unlabeled data. We make the following contributions in this work to effectively exploit the unlabeled in-domain set provided in DCASE 2022 Task 4 by generating pseudo-labels: * Proposal of a two-stage framework [17] that performs audio-tagging (AT) at the first stage to estimate reliable pseudo-labels on unlabeled data used to train the SED system at the second stage in a supervised manner. * A novel weak training strategy to create weakified labels, where the strong labels are converted to weak predictions. The objective is to supply more weak labels for Stage-1 system training to lessen the model's inclination to predict inactive frames [18] when trained with strong labels. * Utilize pre-trained audio neural networks (PANNs) for Stage-1 to further improve the reliability of the pseudo-weak labels used to train the Stage-2 system. We used several DA techniques, pooling functions, and adaptive post-processing to improve the robustness of the developed systems, evaluate the proposed method, and make fair comparisons with other state-of-the-art methods. ## II Sound Event Detection System In this section, we briefly review the baseline system and the proposed, two-stage framework for sound event detection. ### _Baseline_ The baseline [19] architecture is a combination of convolutional neural network (CNN) and recurrent neural network (RNN) called convolutional recurrent neural network (CRNN), as depicted in Fig. 1 (a). The CNN part is made up of 7-blocks, each with 16, 32, 64, 128, 128, 128, and 128 filters, respectively. It has a kernel size of \(3\times 3\) and an average-pooling of [2, 2], [2, 2], [2, 1], [2, 1], [2, 1], [2, 1], [2, 1] per layer. The RNN is composed of two layers of 128 bidirectional gated recurrent units (Bi-GRU) [20]. The RNN block is followed by an attention pooling layer, which is a multiplication of a linear layer with softmax activations and a linear layer with sigmoid activations. The baseline employs the MT [5] strategy, which is a hybrid of two models: the student model and the teacher model (both having the same architecture). The student model is the final model used for inference, whereas the teacher model is designed to help the student model during training. Its weights are an exponential moving average of the student model's weights. ### _Proposed two-stage framework_ The objective of any SSL algorithm is to utilize labeled and unlabeled data to learn the underlying structure of the dataset effectively. The small amount of labeled data helps the model learn discrete or non-overlapping clusters for different labels. The cluster assumption [7] states that close points have the same class and points in different classes are more widely separated, therefore true decision boundaries flow through low-density input space. As training progresses, these clusters improve their cluster boundary, improving model predictions on the unlabeled in-domain set. To mitigate the problem of a small labeled set and to utilize the unlabeled set by learning the discrete clusters for each class, we propose a two-stage framework [17], shown in Fig. 2. Stage-1 utilizes the proposed weak training method to focus on AT, and Stage-2 then utilizes the reliable pseudo-labels generated from Stage-1 to have an improved SED performance. Furthermore, each stage makes use of MT adopted from the baseline. 
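As a concrete reference for the MT strategy just described, here is a minimal sketch of the exponential-moving-average teacher update (the decay value is an illustrative choice, not necessarily the baseline's exact setting).

```python
import torch

# Mean-teacher sketch: the teacher's parameters are an exponential moving average (EMA)
# of the student's parameters.  Both models are assumed to share the same architecture.

@torch.no_grad()
def update_teacher(student: torch.nn.Module, teacher: torch.nn.Module, decay: float = 0.999):
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(decay).add_(s_param, alpha=1.0 - decay)

# During training, the student is updated by back-propagation on the supervised and
# consistency losses, and update_teacher(student, teacher) is called after every optimizer step.
```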
In addition to MT, we use another SSL method, called ICT [21], in both stages of the two-stage framework. ICT substitutes input samples with interpolated samples, helping the model improve its generalization ability. A detailed description of the models used in each stage is given in the following subsections.

#### III-B1 Stage-1

To achieve effective AT in Stage-1, we follow [9], which showed the importance of deeper neural network models compared to the baseline CRNN. As the feature extractor, we used CNN-14-based PANNs [22] with 118M parameters for pre-trained embeddings. The parameters of the PANNs-based embeddings are unfrozen and trained. The 14-layer CNN feature extractor consists of 6 convolutional blocks. Each convolutional block consists of 2 convolutional layers with a kernel size of \(3\times 3\). In addition, each convolutional layer is followed by batch normalization and rectified linear unit [23] non-linearity to stabilize the training. Average pooling [24] of \(2\times 2\) is applied to each convolutional block for down-sampling. The feature extractor is followed by 2 layers of Bi-GRU with 1024 hidden units. For frame-level predictions, the RNN output is multiplied by a dense layer with sigmoid activation, and for clip-level predictions, the linear layer is multiplied by a dense layer with softmax activation.

Based on previous work [18] that uses only the weakly labeled data, we suggest a weak training strategy to improve Stage-1 AT systems. We converted the strongly labeled set into a weakly labeled set by removing the onset and offset and keeping the event labels, which we refer to as _weakified labels_. Then we trained the AT system using the weakified labels, weakly labeled set, and unlabeled set as illustrated in Fig. 2.

#### III-B2 Stage-2

In this work, we used the AT (Stage-1) based system to make predictions on the unlabeled set and use them as pseudo-weak labels in Stage-2 training, as shown in Fig. 2. We believe that in this way we can generate reliable pseudo-labels, which can help the SED model at Stage-2. The baseline CRNN's standard 2D convolutional block enforces translation equivariance on sound events along both the time and frequency axes, despite the fact that frequency is not a shift-invariant dimension. To focus on frequency-dependent patterns and to further improve the SED performance, we employed frequency dynamic (FDY) convolutions proposed in [25], as they apply frequency adaptive kernels to enforce frequency dependency on 2D convolution.

Fig. 1: Architecture of (a) CRNN (Baseline) (b) FDY-CRNN.

Fig. 2: Proposed two-stage learning setup, with Stage-1 focusing on AT and Stage-2 focusing on SED.

We replaced the baseline's standard 2D convolutional blocks with FDY-convolutional blocks, which have the same number of layers and feature maps as the baseline, as illustrated in Fig. 1 (b). The resulting network was then trained on the pseudo-weakly labeled set, in addition to the strongly labeled set and the weakly labeled set, in a supervised manner.

## III Additional Methods

### _Pooling function_

Motivated by a prior work [26], we used exponential softmax to replace the attention pooling used in the baseline. The exponential softmax function assigns a weight of \(\exp(y_{i})\) to the frame-level probability \(y_{i}\) as given below:

\[y=\frac{\sum_{i}y_{i}\exp(y_{i})}{\sum_{i}\exp(y_{i})} \tag{1}\]

where \(y_{i}\) is the predicted probability of an event occurring in the \(i^{\text{th}}\) frame.
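The pooling in Eq. (1) can be written in a few lines; the sketch below assumes frame-level probabilities arranged as (batch, frames, classes), which is our own convention rather than something fixed by the paper.

```python
import torch

# Exponential-softmax pooling (Eq. 1): frame-level probabilities y_i are aggregated into
# a clip-level probability per class with weights exp(y_i).

def exp_softmax_pool(frame_probs: torch.Tensor) -> torch.Tensor:
    weights = torch.exp(frame_probs)                          # exp(y_i), computed per class
    return (frame_probs * weights).sum(dim=1) / weights.sum(dim=1)

# Example: clip_probs = exp_softmax_pool(frame_probs) with frame_probs of shape
# (batch, n_frames, n_classes) and values in [0, 1] from a sigmoid output layer.
```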
This implies that the higher the prediction probability, the larger the exponential weight assigned to the frame-level probability. Hence, it is better suited to evaluation criteria that are stringent about the correctness of the predicted category. ### _Asymmetric focal loss (AFL)_ The AFL [27] function is used to control the training weight depending on the ease and difficulty of the model training. The AFL function for each \(k^{\text{th}}\) data point with target sound event \(y_{k}\) and predicted sound event \(p_{k}\) is given below: \[l_{AFL}(p,y)=-\sum_{k=1}^{K}[(1-p_{k})^{\gamma}y_{k}\ln p_{k}+(p_{k})^{\zeta}(1-y_{k})\ln(1-p_{k})] \tag{2}\] where the parameters \(\gamma\) and \(\zeta\) are weighting hyperparameters given as input to the function that control the weights of active and inactive frames. ### _Data-augmentation (DA)_ We used several DA techniques during the training in both stages, such as time-masking [14], frame-shifting [18], mixup [16], addition of Gaussian noise, and filter augmentation [18]. Time-masking masks sequential time frames (i.e., replaces their elements by zeros or other values), whereas frame-shifting shifts the features and labels along the time axis. Mixup randomly mixes pairs of selected samples with a mixing parameter, so that training on linear interpolations of samples improves the robustness of the model. In addition, filter augmentation, which applies varying weights to random frequency regions, has been shown to significantly improve SED performance. ### _Adaptive post-processing_ We used adaptive post-processing [28] in all trials, where the median filter window sizes \((Win)\) differ for each event category \(c\) based on real-life event lengths as shown below: \[Win_{c}=duration_{c}\times\beta_{c} \tag{3}\] For event categories with high duration variance, we used the median event duration as \(duration_{c}\). Here, we used \(\beta_{c}=\frac{1}{3}\) and then slightly adjusted the window sizes on the validation set. ## IV Experimental Setup The next subsections outline our experimental setup to demonstrate the efficacy of the proposed methods. ### _Dataset_ The DCASE 2022 Task 4 dataset used in this work is composed of 10-second audio clips that simulate a domestic environment. The development training set is divided into 3 major subsets: * 1,578 real recordings with weak annotations. * 14,412 real recordings in the unlabeled in-domain training set. * 10,000 synthetic recordings with strong annotations [29]. * An additional subset of 3,470 real recordings with strong annotations, drawn from the recently released strongly labeled AudioSet [30], is released as external data. The development validation set has 1,168 real recordings with strong annotations, and the public evaluation ("YouTube") set has 692 YouTube clips. ### _Pre-processing_ The audio clips are re-sampled at 16 kHz to a mono channel. Then, log-mel spectrograms are produced using mel-filters in the frequency domain from 0 to 8 kHz with a window size of 2048 samples and a hop size of 256 samples. Stage-1 employed 64 mel-filters, while Stage-2 used 128 mel-filters. Clips shorter than 10 seconds are padded with silence. ### _Training process_ The batch size for all the experiments is 48 (1/4 strong set, 1/4 weak set, 1/2 unlabeled set). We employed the Adam optimizer [31] with a learning rate of 0.001 and an exponential warmup for the first 50 epochs, with no early stopping.
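As an illustration of the adaptive post-processing in Eq. (3), per-class median-filter windows could be derived as in the sketch below. The frame rate of 62.5 frames per second follows from the stated 16 kHz sampling rate and 256-sample hop, but the function name and the example durations are placeholders, not values reported in the paper.

```python
def adaptive_windows(median_durations_sec: dict, beta: float = 1 / 3,
                     frames_per_sec: float = 62.5) -> dict:
    # Eq. (3): Win_c = duration_c * beta_c, expressed here in frames.
    return {cls: max(1, round(dur * beta * frames_per_sec))
            for cls, dur in median_durations_sec.items()}


# Placeholder example: windows = adaptive_windows({"Speech": 1.4, "Blender": 5.3})
```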
### _Evaluation metrics_ We used polyphonic sound event detection scores (PSDS) [32] as a performance metric in our studies. The PSDS is more resistant to labeling subjectivity, allowing for ground truth interpretation and temporal structure detection. The single PSDS is computed using polyphonic receiver operating characteristic curves, allowing comparison independent of the operating point. Additionally, it can be adapted for various applications to ensure the appropriate user experience. As a result, it overcomes the limitations of traditional collars-based event F-scores. Using hyperparameters values adopted from the DCASE 2022 Task 4 for the Detection Tolerance Criterion (\(\rho_{DTC}\)) and Ground Truth intersection Criterion (\(\rho_{GTC}\)) mentioned in Table I, we compute the PSDS on two scenarios that stress distinct system features. The system must react fast to event detection in Scenario-1, hence it focuses on sound event temporal localization. Scenario-2 focuses less on reaction time and more on class confusion. ### _Developed system_ The models, DA methods, and experimental settings of our two-stage system are given in Table II. In our two-stage study, Stage-1 uses PANNs while Stage-2 uses FDY-CRNN. ## V Results and Analysis In this section, we report the studies of the proposed two-stage framework in a stage-wise manner, with ablation studies. ### _Stage-1 comparison_ We are first interested in assessing the contribution of each component to our system at Stage-1. Table III shows the SED performance of Stage-1 trained on a real strong set, synthetic strong set, weak set, unlabeled set, and using CNN-14-based PANNs as the pre-trained embeddings. Experimental results show that pre-trained models as feature extractors trained on larger datasets exceed the DCASE 2022 Task 4 organizers' baseline (Baseline), which uses an external dataset. We also observe that our weak training with the PANNs method significantly improves PSDS2 from 0.552 to 0.831 compared to the baseline and drastically decreases PSDS1 from 0.351 to 0.057. PSDS2 increases due to low tolerance in \(\rho_{DTC}\) and \(\rho_{GTC}\)[32], as seen in Table I. As per the parameters for PSDS2, the tolerance value is 0.1, thus the prediction is regarded as true positive even when there is at least one ground truth greater than 1 second out of the 10 seconds clip [18]. Thus, having a higher PSDS2 is equivalent to having a better AT system. This relation is extended to train Stage-2 using weak pseudo-labels from PANNs-based Stage-1 with a higher PSDS2. Further, the decrease in PSDS1 can be attributed to it specifically focusing on temporal localization with a tolerance value of 0.7. Using a pre-trained model also sped up training because the model converged faster with optimized weights. The results show that DA approaches mentioned in Table II and adaptive post-processing improve performance slightly. We also demonstrate the performance of our Stage-1 on the public evaluation set later in Table VI. ### _Stage-2 comparison_ To assess the two-stage framework's importance, we constructed the baseline (CRNN) system from the organizers in a two-stage setup. Table IV demonstrates a 5.8% improvement in total PSDS (PSDS1 + PSDS2) over its result without a two-stage setup in Table III. We then used the best Stage-1 CNN-14-based PANNs model (PSDS2 = 0.840) to infer on the unlabeled data to create pseudo-weak labels for Stage-2. Using CRNN in Stage-2, resulted in a PSDS1 of 0.437 and PSDS2 of 0.681. 
Replacing with FDY-convolutions in Stage-2 improved the SED performance, resulting in a PSDS1 of 0.450, and with additional methods resulted in a PSDS1 of 0.472, a 45.5% improvement in overall PSDS (PSDS1 + PSDS2) for the two-stage setup. Table V shows the comparison of our system with other single systems (without ensembling) on the real validation set. The same Stage-2's PSDS1 was 0.479 and PSDS2 was 0.733 on the public evaluation set, as shown in Table VI.
2310.19642
Consistent Query Answering for Primary Keys on Rooted Tree Queries
We study the data complexity of consistent query answering (CQA) on databases that may violate the primary key constraints. A repair is a maximal subset of the database satisfying the primary key constraints. For a Boolean query q, the problem CERTAINTY(q) takes a database as input, and asks whether or not each repair satisfies q. The computational complexity of CERTAINTY(q) has been established whenever q is a self-join-free Boolean conjunctive query, or a (not necessarily self-join-free) Boolean path query. In this paper, we take one more step towards a general classification for all Boolean conjunctive queries by considering the class of rooted tree queries. In particular, we show that for every rooted tree query q, CERTAINTY(q) is in FO, NL-hard $\cap$ LFP, or coNP-complete, and it is decidable (in polynomial time), given q, which of the three cases applies. We also extend our classification to larger classes of queries with simple primary keys. Our classification criteria rely on query homomorphisms and our polynomial-time fixpoint algorithm is based on a novel use of context-free grammar (CFG).
Paraschos Koutris, Xiating Ouyang, Jef Wijsen
2023-10-30T15:31:54Z
http://arxiv.org/abs/2310.19642v1
# Consistent Query Answering for Primary Keys on Rooted Tree Queries ###### Abstract We study the data complexity of consistent query answering (CQA) on databases that may violate the primary key constraints. A repair is a maximal subset of the database satisfying the primary key constraints. For a Boolean query \(q\), the problem \(\mathsf{CERTAINTY}(q)\) takes a database as input, and asks whether or not each repair satisfies \(q\). The computational complexity of \(\mathsf{CERTAINTY}(q)\) has been established whenever \(q\) is a self-join-free Boolean conjunctive query, or a (not necessarily self-join-free) Boolean path query. In this paper, we take one more step towards a general classification for all Boolean conjunctive queries by considering the class of rooted tree queries. In particular, we show that for every rooted tree query \(q\), \(\mathsf{CERTAINTY}(q)\) is in **FO**, **NL**-hard \(\cap\)**LFP**, or **coNP**-complete, and it is decidable (in polynomial time), given \(q\), which of the three cases applies. We also extend our classification to larger classes of queries with simple primary keys. Our classification criteria rely on query homomorphisms and our polynomial-time fixpoint algorithm is based on a novel use of context-free grammar (CFG). ## 1 Introduction A relational database is _inconsistent_ if it violates one or more integrity constraints that are supposed to be satisfied. Database inconsistency is a common issue when integrating datasets from heterogeneous sources. In this paper, we focus on what are probably the most commonly imposed integrity constraints on relational databases: primary keys. A primary key constraint enforces that no two distinct tuples in the same relation agree on all primary key attributes. A _repair_ of such an inconsistent database instance is naturally defined as a maximal consistent subinstance of the database. Two approaches are then possible. In _data cleaning_, the objective is to single out the "best" repair, which however may not be practically possible. In _consistent query answering_ (CQA) [2], instead of cleaning the inconsistent database instance, we attempt to query _every_ possible repair of the database and obtain the _consistent_ (or _certain_) answers that are returned across all repairs. In computational complexity studies, consistent query answering is commonly defined as the following decision problem, for a fixed Boolean query \(q\) and fixed primary keys for all relation names occurring in \(q\): **PROBLEM \(\mathsf{CERTAINTY}(q)\)** **Input**: A database instance **db**. **Question**: Does \(q\) evaluate to true on every repair of **db**? The CQA problem for queries \(q(\vec{x})\) with free variables is to find all sequences of constants \(\vec{c}\), of the same length as \(\vec{x}\), such that \(q(\vec{c})\) is true in every repair. We often do not need separate treatment for different constants, in which case we can handle \(q(\vec{x})\) as Boolean by treating free variables as if they were constants [17, 27]. The problem \(\mathsf{CERTAINTY}(q)\) is obviously in **coNP** for every Boolean first-order query \(q\). It has been extensively studied for \(q\) in the class of Boolean conjunctive queries, denoted \(\mathsf{BCQ}\). Despite significant research efforts (see Section 2), the following dichotomy conjecture remains notoriously open. 
**Conjecture 1.1**.: _For every query \(q\) in \(\mathsf{BCQ}\), \(\mathsf{CERTAINTY}(q)\) is either in **PTIME** or **coNP**-complete._ An even stronger conjecture is that the dichotomy of Conjecture 1.1 extends to unions of conjunctive queries. Fontaine [19] showed that this stronger conjecture implies the dichotomy theorem for conservative _Constraint Satisfaction Problems_ (CSP) [7, 54]. On the other hand, for self-join-free queries \(q\) in \(\mathsf{BCQ}\), the complexity of \(\mathsf{CERTAINTY}(q)\) is well established by the next theorem. **Theorem 1.1** ([39]).: _For every self-join-free query \(q\) in \(\mathsf{BCQ}\), \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{FO}\), \(\mathbf{L}\)-complete, or \(\mathbf{coNP}\)-complete, and it is decidable in polynomial time in the size of \(q\) which of the three cases applies._ Past research has indicated that the tools for proving Theorem 1.1 largely fall short in dealing with difficulties caused by self-joins. A notable example concerns _path queries_, i.e., queries of the form: \[\exists x_{1}\cdots\exists x_{k+1}(R_{1}(\underline{x}_{1},x_{2})\wedge R_{2}(\underline{x}_{2},x_{3})\wedge\cdots\wedge R_{k}(\underline{x}_{k},x_{k+1})).\] If a query of this form is self-join-free (i.e., if \(R_{i}\neq R_{j}\) whenever \(i\neq j\)), then the "attack graph" tool [39] immediately tells us that \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{FO}\). However, for path queries \(q\) with self-joins, \(\mathsf{CERTAINTY}(q)\) exhibits a tetrachotomy between \(\mathbf{FO}\), \(\mathbf{NL}\)-complete, \(\mathbf{PTIME}\)-complete, and \(\mathbf{coNP}\)-complete [32], and the complexity classification requires sophisticated tools. Note incidentally that self-join-freeness is a simplifying assumption that is also frequent outside \(\mathrm{CQA}\) (e.g., [20, 5, 21, 1]). A natural question is to extend the complexity classification for path queries to queries that are syntactically less constrained. In particular, while path queries are restricted to binary relation names, we aim for unrestricted arities, as in practical database systems, which brings us to the construct of tree queries. A query \(q\) in \(\mathsf{BCQ}\) is a _rooted (ordered) tree query_ if it is uniquely (up to a variable renaming) representable by a rooted ordered tree in which each non-leaf vertex is labeled by a relation name, and each leaf vertex is labeled by a unary relation name, a constant, or \(\bot\). The query \(q\) is read from this tree as follows: each vertex labeled by either a relation name or \(\bot\) is first associated with a fresh variable, and each vertex labeled by a constant is associated with that same constant; then, a vertex labeled with relation name \(R\) and associated with variable \(x\) represents the query atom \(R(\underline{x},y_{1},\ldots,y_{n})\), where \(y_{1},\ldots,y_{n}\) are the symbols (variables or constants) associated with the left-to-right ordered children of the vertex \(x\). The underlined position is the primary key. Note that a vertex labeled with a relation name of arity \(n+1\) must have \(n\) children. For example, consider the rooted tree in Fig. 1(a) and associate fresh variables to its vertices as depicted in Fig. 1(b). The rooted tree thus represents a query \(q_{1}\) that contains, among others, the atoms \(C(\underline{x},y,z)\) and \(R(\underline{y},u_{1},v_{1})\). It is easy to see that every path query is a rooted tree query. The class of all rooted tree queries is denoted \(\mathsf{TreeBCQ}\). We can now present our main results.
**Theorem 1.2**.: _For every query \(q\) in \(\mathsf{TreeBCQ}\), \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{FO}\), \(\mathbf{NL}\)-hard \(\cap\)\(\mathbf{LFP}\), or \(\mathbf{coNP}\)-complete, and it is decidable in polynomial time in the size of \(q\) which of the three cases applies._ Here \(\mathbf{LFP}\) denotes least fixed point logic as defined in [40, p. 181] (a.k.a. \(\mathbf{FO}[\mathbf{LFP}]\)), and \(\mathbf{NL}\) denotes the class of problems decidable by a non-deterministic Turing machine using only logarithmic space. The classification criteria implied in Theorem1.2 are explicitly stated in Theorem4.1. It will turn out that subtree homomorphisms play a crucial role in the complexity classification of \(\mathsf{CERTAINTY}(q)\) for queries \(q\) in \(\mathsf{TreeBCQ}\). For example, our results show that for the queries \(q_{1}\) and \(q_{2}\) represented in, respectively, Fig.1(a) and (c), \(\mathsf{CERTAINTY}(q_{1})\) is \(\mathbf{coNP}\)-complete, while \(\mathsf{CERTAINTY}(q_{2})\) is in \(\mathbf{FO}\). The difference occurs because the two ordered subtrees rooted at \(R\) are isomorphic in \(q_{2}\) (\(A\) precedes \(B\) in both subtrees), but not in \(q_{1}\). Another novel and useful tool in the complexity classification is a context-free grammar (CFG) that generalizes the NFA for path queries used in [32]. Once Theorem 1.2 is proved, it is natural to generalize rooted tree queries further by allowing queries that can be represented by graphs that are not trees. We thereto define GraphBCQ (Definition 9.1), a subclass of BCQ that extends TreeBCQ. In GraphBCQ queries, two distinct atoms can share a variable occurring at non-primary-key positions, which requires representations by DAGs rather than trees. Moreover, GraphBCQ gives up on the acyclicity requirement that is cooked into TreeBCQ. Significantly, we were able to establish the **FO**-boundary in the set \(\{\textsf{CERTAINTY}(q)\mid q\in\textsf{GraphBCQ}\}\). **Theorem 1.3**.: _For every query \(q\) in GraphBCQ, it is decidable whether or not \(\textsf{CERTAINTY}(q)\) is in **FO**; and when it is, a first-order rewriting can be effectively constructed._ So far, we have not achieved a fine-grained complexity classification of all problems in \(\{\textsf{CERTAINTY}(q)\mid q\in\textsf{GraphBCQ}\}\). However, we were able to do so for the set of Berge-acyclic queries in GraphBCQ, denoted \(\textsf{GraphBergeBCQ}\). Recall that a conjunctive query is Berge-acyclic if its incidence graph (i.e., the undirected bipartite graph that connects every variable \(x\) to all query atoms in which \(x\) occurs) is acyclic. **Theorem 1.4**.: _For every query \(q\) in \(\textsf{GraphBergeBCQ}\), the problem \(\textsf{CERTAINTY}(q)\) is in **FO**, **NL**-hard \(\cap\)_**LFP**_, or **coNP**-complete, and it is decidable in polynomial time in the size of \(q\) which of the three cases applies._ Since TreeBCQ\(\subsetneqq\)GraphBergeBCQ\(\subsetneqq\)GraphBCQ, Theorem 1.2 is subsumed by Theorem 1.4. We nevertheless provide Theorem 1.2 explicitly, because its proof makes up the main part of this paper. In Section 9.2, we will discuss the challenges in extending Theorem 1.4 beyond \(\textsf{GraphBergeBCQ}\). ## 2 Related Work Inconsistency management has been studied in various database contexts (e.g., graph databases [4, 3], medical databases [25], online databases [26], spatial databases [47]), and under different repair semantics (e.g., [13, 41, 50]). Arenas, Bertossi, and Chomicki initiated Consistent Query Answering (CQA) in 1999 [2]. 
Twenty years later, their contribution was acknowledged in a _Gems of PODS session_ [6]. An overview of complexity classification results in CQA appeared in the _Database Principles_ column of SIGMOD Record [53]. The term \(\textsf{CERTAINTY}(q)\) was coined in [51] to refer to CQA for Boolean queries \(q\) on databases that violate primary keys, one per relation, which are fixed by \(q\)'s schema. The complexity classification of \(\textsf{CERTAINTY}(q)\) for the class of self-join-free Boolean conjunctive queries underwent a series of efforts [22, 30, 33, 34, 37], until it was revealed that the complexity of \(\textsf{CERTAINTY}(q)\) for self-join-free conjunctive queries displays a trichotomy between **FO**, **L**-complete, and **coNP**-complete [35, 39]. A few extensions beyond this trichotomy result are known. Under the requirement of self-join-freeness, it remains decidable whether or not \(\textsf{CERTAINTY}(q)\) is in **FO** in the presence of negated atoms [36], multiple keys [38], and unary foreign keys [24]. Little is known concerning the complexity classification of the problem \(\textsf{CERTAINTY}(q)\) beyond self-join-free conjunctive queries. For the restricted class of Boolean path queries \(q\), \(\textsf{CERTAINTY}(q)\) already exhibits a tetrachotomy between **FO**, **NL**-complete, **PTIME**-complete, and **coNP**-complete [32]. Figueira et al. [18] have recently discovered a simple fixpoint algorithm that solves \(\textsf{CERTAINTY}(q)\) when \(q\) is a self-join-free conjunctive query or a path query such that \(\textsf{CERTAINTY}(q)\) is in **PTIME**. As already discussed in Section 1, relationships have been found between CQA and CSP [19, 42]. The counting variant of the problem \(\textsf{CERTAINTY}(q)\), denoted \(\sharp\textsf{CERTAINTY}(q)\), asks to count the number of repairs that satisfy some Boolean query \(q\). For self-join-free Boolean conjunctive queries, \(\sharp\textsf{CERTAINTY}(q)\) exhibits a dichotomy between **FP** and \(\sharp\textsf{PTIME}\)-complete [45]. This dichotomy has been shown to extend to queries with self-joins if primary keys are singletons [46], and to functional dependencies [11]. Calautti, Console, and Pieris present in [8] a complexity analysis of these counting problems under many-one logspace reductions and conducted an experimental evaluation of randomized approximation schemes for approximating the percentage of repairs that satisfy a given query [9]. CQA is also studied under different notions of repairs like operational repairs [10, 12] and preferred repairs [48, 29]. CQA has also been studied for queries with aggregation, in both theory and practice [16, 28]. Theoretical research in CQA has stimulated implementations and experiments in prototype systems, using different target languages and engines: SAT [15], ASP [44, 43, 27], BIP [31], SQL [17], logic programming [23], and hypergraph algorithms [14]. ## 3 Preliminaries We assume disjoint sets of _variables_ and _constants_. A _valuation_ over a set \(U\) of variables is a total mapping \(\theta\) from \(U\) to the set of constants. **Atoms and key-equal facts**. Every relation name has a fixed arity, and a fixed set of primary-key positions. We will underline primary-key positions and assume w.l.o.g. that all primary-key positions precede all other positions. An _atom_ is then an expression \(R(s_{1},\ldots,s_{k},s_{k+1},\ldots,s_{n})\) where each \(s_{i}\) is a variable or a constant for \(1\leq i\leq n\).
The sequence \(s_{1},\ldots,s_{k}\) is called the _primary key_ (of the atom). This primary key is called _simple_ if \(k=1\), and _constant-free_ if no constant occurs in it. An atom without variables is a _fact_. Two facts are _key-equal_ if they use the same relation name and agree on the primary key. **Database instances, blocks, and repairs**. A _database schema_ is a finite set of relation names. All constructs that follow are defined relative to a fixed database schema. A _database instance_ (or _database_ for short) is a finite set \(\mathbf{db}\) of facts using only the relation names of the schema. We write \(\mathsf{adom}(\mathbf{db})\) for the active domain of \(\mathbf{db}\) (i.e., the set of constants that occur in \(\mathbf{db}\)). A _block_ of \(\mathbf{db}\) is a maximal set of key-equal facts of \(\mathbf{db}\). Whenever a database instance \(\mathbf{db}\) is understood, we write \(R(\vec{c},*)\) for the block that contains all facts with relation name \(R\) and primary-key value \(\vec{c}\), where \(\vec{c}\) is a sequence of constants. A database instance \(\mathbf{db}\) is _consistent_ if it contains no two distinct facts that are key-equal (i.e., if no block of \(\mathbf{db}\) contains more than one fact). A _repair_ of \(\mathbf{db}\) is an inclusion-maximal consistent subset of \(\mathbf{db}\). **Boolean conjunctive queries**. A _Boolean conjunctive query_ is a finite set \(q=\{R_{1}(\vec{x}_{1},\vec{y}_{1}),\,\ldots,\,R_{n}(\vec{x}_{n},\vec{y}_{n})\}\) of atoms. The set \(q\) represents the first-order sentence with no free-variables \[\exists u_{1}\cdots\exists u_{k}(R_{1}(\vec{x}_{1},\vec{y}_{1})\wedge\cdots \wedge R_{n}(\vec{x}_{n},\vec{y}_{n})),\] and we denote \(\mathbf{vars}(q)=\{u_{1},\ldots,u_{k}\}\), the set of variables that occur in \(q\) and denote \(\mathsf{const}(q)\) as the set of constants that occur in \(q\). We write \(\mathsf{BCQ}\) for the class of Boolean conjunctive queries. Let \(q\) be a query in \(\mathsf{BCQ}\). We say that \(q\) has a _self-join_ if some relation name occurs more than once in \(q\). If \(q\) has no self-joins, it is called _self-join-free_. We say that \(q\) is _minimal_ if it is not equivalent to a query in \(\mathsf{BCQ}\) with a strictly smaller number of atoms. **Consistent query answering**. For every query \(q\) in \(\mathsf{BCQ}\), the decision problem \(\mathsf{CERTAINTY}(q)\) takes as input a database instance \(\mathbf{db}\), and asks whether \(q\) is satisfied by every repair of \(\mathbf{db}\). It is straightforward that \(\mathsf{CERTAINTY}(q)\) is \(\mathsf{coNP}\) for every \(q\in\mathsf{BCQ}\). **Rooted relation trees**. A _rooted relation tree_ is a (directed) rooted ordered tree where each internal vertex is labeled by a relation name, and each leaf vertex is labeled with either a unary relation name, a constant, or \(\bot\), such that every two vertices sharing the same label have the same number of children. We denote by \(\tau^{u}_{\Delta}\) the subtree rooted at vertex \(u\) in \(\tau\). Any rooted relation tree \(\tau\) has a string representation recursively defined as follows: the string representation of a tree with only one vertex is the label of that vertex; otherwise, if the root of \(\tau\) is labeled \(R\) and has the following ordered children \(v_{1},v_{2},\ldots,v_{n}\), then \(\tau\)'s string representation is \(R(s_{1},s_{2},\ldots,s_{n})\), where \(s_{i}\) is the string representation of \(\tau^{v_{i}}_{\Delta}\). For example, the tree in Fig. 
1(a) has string representation \(C(R(A,B),R(B,A))\). We will often blur the distinction between rooted relation trees and their string representation. **Rooted tree query and rooted tree sets**. A _querification_ of a rooted relation tree \(\tau\) is a total function \(f\) with domain \(\tau\)'s vertex set that maps each vertex labeled by a constant to that same constant, and injectively maps all other vertices to variables. Such a querification naturally extends to a mapping \(f(\tau)\) of the entire tree: if \(u\) is a vertex in \(\tau\) with label \(R\) and children \(v_{1}\), \(v_{2}\),..., \(v_{n}\), then \(f(\tau)\) contains the atom \(R(\underline{f(u)},f(v_{1}),f(v_{2}),\ldots,f(v_{n}))\). A Boolean conjunctive query is a _rooted tree query_ if it is equal to \(f(\tau)\) for some querification \(f\) of some rooted relation tree \(\tau\). If \(q=f(\tau)\), we also say that \(q\) is _represented_ by \(\tau\), in which case we often blur the distinction between \(q\) and \(\tau\). We write \(R[x]\) for the unique \(R\)-atom in \(q\) with primary key variable \(x\). \(\mathsf{TreeBCQ}\) denotes the class of rooted tree queries. It can be verified that every rooted tree query is minimal. Every query \(q\) in \(\mathsf{TreeBCQ}\) is represented by a unique rooted relation tree. Conversely, every rooted relation tree represents a query in \(\mathsf{TreeBCQ}\) that is unique up to a variable renaming. When \(f(\tau)=q\), by a slight abuse of terminology, we may use \(q\) to refer to \(\tau\), and use the query variable \(x\) (or the expression \(R[x]\)) to refer to the vertex \(u\) in \(\tau\) that satisfies \(f(u)=x\) and whose label is \(R\). The variable \(r\) is the _root variable_ of a query \(q\) in \(\mathsf{TreeBCQ}\) if \(r\) is the root vertex of \(q\)'s rooted relation tree. For two distinct vertices \(x\) and \(y\), we write \(x<_{q}y\) if the vertex \(x\) is an ancestor of \(y\) in \(q\), and write \(x\parallel_{q}y\) if neither \(x<_{q}y\) nor \(y<_{q}x\). When \(x\) and \(y\) have the same label \(R\), we can also write \(R[x]<_{q}R[y]\) and \(R[x]\parallel_{q}R[y]\) instead of \(x<_{q}y\) and \(x\parallel_{q}y\) respectively. For every variable \(x\) in a rooted tree query \(q\), we write \(q^{x}_{\Delta}\) for the subquery of \(q\) whose rooted relation tree is the subtree rooted at vertex \(x\) in \(q\). A variable \(x\) is a leaf variable in \(q\) if \(q^{x}_{\Delta}=\bot\), \(q^{x}_{\Delta}=c\), or \(q^{x}_{\Delta}=A\), for some constant \(c\) or unary relation name \(A\). An _instantiation_ of a rooted relation tree \(\tau\) is a total function \(g\) from \(\tau\)'s vertex set to constants such that each vertex labeled by a constant \(c\) is mapped to \(c\). Such an instantiation naturally extends to a mapping \(g(\tau)\) of the entire tree: if \(u\) is a vertex in \(\tau\) with label \(R\) and children \(v_{1}\), \(v_{2}\),..., \(v_{n}\), then \(g(\tau)\) contains the fact \(R(\underline{g}(u),g(v_{1}),g(v_{2}),\ldots,g(v_{n}))\). A subset \(S\) of \(\mathbf{db}\) is a _rooted tree set in \(\mathbf{db}\) starting in \(c\)_ if \(S=g(\tau)\) for some instantiation \(g\) of \(\tau\) that maps \(\tau\)'s root to \(c\). A case of particular interest is when \(\mathbf{db}\) is consistent, in particular, when \(\mathbf{db}\) is a repair. 
It can be verified that a rooted tree set in a repair \(\mathbf{r}\) is uniquely determined by a constant \(c\) and a rooted tree \(\tau\) (because only one instantiation is possible); by overloading terminology, \(\tau\) is also called a rooted tree set in \(\mathbf{r}\) starting in \(c\). For convenience, an empty rooted tree set, denoted by \(\bot\), starts in any constant \(c\). **Homomorphism**. Let \(p,q\in\mathsf{BCQ}\). We write \(p\leq_{\to}q\) if there exists a homomorphism from \(p\) to \(q\), i.e., a mapping \(h:\mathsf{vars}(p)\rightarrow\mathsf{vars}(q)\cup\mathsf{const}(q)\) that acts as identity when applied on constants, such that for every atom \(R(\underline{x},\overline{y})\) in \(p\), \(R(h(\underline{x}),h(\overline{y}))\) is an atom of \(q\). For \(u\in\mathsf{vars}(p)\) and \(v\in\mathsf{vars}(q)\), we write \(p\leq_{u\to v}q\) if there exists a homomorphism \(h\) from \(p\) to \(q\) with \(h(u)=v\). It can now be verified that for rooted tree queries \(p\) and \(q\), there is a homomorphism \(h\) from \(p\) to \(q\) if and only if there is a label-preserving graph homomorphism from the rooted relation tree of \(p\) to that of \(q\) (we assume that a leaf vertex with label \(\bot\) can map to a vertex with any label). Since rooted relation trees are _ordered_ trees, graph homomorphisms must evidently be order-preserving. For example, there is no homomorphism between the trees \(R(A,B)\) and \(R(B,A)\). **Example 3.1**.: The following rooted tree query and its rooted relation tree are depicted in Fig. 2: \[q=\{ A(\underline{x_{0}},x_{1},x_{2}),R(\underline{x_{1}},x_{3},x_{4}),R( \underline{x_{2}},x_{5},x_{6}),\] \[R(\underline{x_{3}},x_{7},x_{8}),U(\underline{x_{7}}),\] \[X(\underline{x_{4}},c_{1}),Y(\underline{x_{5}},x_{9}),Z( \underline{x_{6}},c_{2},x_{10})\}.\] We have: \[q^{x_{1}}_{\triangle} =R(\underline{x_{1}},x_{3},x_{4}),R(\underline{x_{3}},x_{7},x_{8} ),U(\underline{x_{7}}),X(\underline{x_{4}},c_{1})\] \[=R(R(U,\bot),X(c_{1})),\] \[q^{x_{2}}_{\triangle} =R(\underline{x_{2}},x_{5},x_{6}),Y(\underline{x_{5}},x_{9}),Z( \underline{x_{6}},c_{2},x_{10})\] \[=R(Y(\bot),Z(c_{2},\bot)),\] \[q^{x_{3}}_{\triangle} =R(\underline{x_{3}},x_{7},x_{8}),U(\underline{x_{7}})\] \[=R(U,\bot).\] In this query \(q\), we have \(R[x_{1}]\parallel_{q}R[x_{2}]\), \(R[x_{1}]<_{q}R[x_{3}]\), and \(R[x_{2}]\parallel_{q}R[x_{3}]\). ## 4 The Complexity Classification Our classification focuses on rooted tree queries (TreeBCQ). We will extend to GraphBergeBCQ and GraphBCQ in Section 9. The classification of path queries in [32] uses a notion of "rewinding" to deal with self-joins: a path query \(u\cdot Rv\cdot Rw\) rewinds to \(u\cdot Rv\cdot Rv\cdot Rw\). Very informally, rewinding captures that query atoms with the same relation name can be "confused" with one another (or "rewind" to one another in our terminology) during query evaluation: in \(u\cdot Rv\cdot Rw\), once we have evaluated the prefix \(u\cdot Rv\cdot R\), the last \(R\) can be confused with the first one, in which case we continue with the suffix \(Rv\cdot Rw\) (instead of merely \(Rw\)). We generalize the notion of rewinding from path queries to rooted tree queries. Figure 2: An example rooted relation tree, where \(c_{1}\) and \(c_{2}\) are constants. **Definition 4.1** (Rewinding).: Let \(q\) be a query in \(\mathsf{TreeBCQ}\). Let \(R(\underline{x},\dots)\) and \(R(\underline{y},\dots)\) be two (not necessarily distinct) atoms in \(q\). 
We define \(q^{R:y\loop x}\) as the following rooted tree query \[q^{R:y\loop x}:=(q\setminus q_{\triangle}^{y})\cup f(q_{\triangle}^{x}),\] for some isomorphism \(f\) that maps \(x\) to \(y\) (i.e., \(f(x)=y\)), and maps every other variable in \(q_{\triangle}^{x}\) to a fresh variable. Intuitively, the rooted tree query \(q^{R:y\loop x}\) can be obtained by replacing \(q_{\triangle}^{y}\) with a fresh copy of \(q_{\triangle}^{x}\). Fig. 3 presents some rooted tree queries obtained from rewinding on the rooted tree \(q\) in Fig. 2. The classification criteria in [32] uses the notions of factors and prefixes that are specific to words, which can be generalized using homomorphism on rooted tree queries. Consider the following syntactic conditions on a rooted tree query \(q\) with root variable \(r\): * \(\mathsf{C}_{2}\) : for every two atoms \(R(\underline{x},\dots)\) and \(R(\underline{y},\dots)\) in \(q\), either \(q\leq_{\to}q^{R:y\loop x}\) or \(q\leq_{\to}q^{R:x\loop y}\). * \(\mathsf{C}_{1}\) : for every two atoms \(R(\underline{x},\dots)\) and \(R(\underline{y},\dots)\) in \(q\), either \(q\leq_{r\to r}q^{R:y\loop x}\) or \(q\leq_{r\to r}q^{R:x\loop y}\). It is easy to see that conditions \(\mathsf{C}_{1}\) and \(\mathsf{C}_{2}\) are decidable in polynomial time in the size of the query. We may restate \(\mathsf{C}_{2}\) and \(\mathsf{C}_{1}\) using more fine-grained syntactic conditions below. * \(\mathsf{C}_{\mathsf{branch}}\) : for every two atoms \(R[x]\parallel_{q}R[y]\) in \(q\), either \(q_{\triangle}^{y}\leq_{y\to x}q_{\triangle}^{x}\) or \(q_{\triangle}^{x}\leq_{x\to y}q_{\triangle}^{y}\). * \(\mathsf{C}_{\mathsf{factor}}\) : for every two atoms \(R[x]<_{q}R[y]\) in \(q\), we have \(q\leq_{\to}q^{R:y\loop x}\). * \(\mathsf{C}_{\mathsf{prefix}}\) : for every two atoms \(R[x]<_{q}R[y]\) in \(q\), we have \(q\leq_{r\to r}q^{R:y\loop x}\). **Lemma 4.1**.: _For every two atoms \(R[x]\parallel_{q}R[y]\) in a rooted tree query \(q\), we have \(q\leq_{\to}q^{R:y\loop x}\) if and only if \(q_{\triangle}^{y}\leq_{y\to x}q_{\triangle}^{x}\)._ For the sake of simplicity, we postpone the proof of Lemma 4.1 to Appendix A. Lemma 4.1 implies the following connections among the syntactic conditions. **Proposition 4.1**.: \(\mathsf{C}_{2}=\mathsf{C}_{\mathsf{factor}}\wedge\mathsf{C}_{\mathsf{branch}}\)_, \(\mathsf{C}_{1}=\mathsf{C}_{\mathsf{prefix}}\wedge\mathsf{C}_{\mathsf{branch}}\)._ **Example 4.1**.: Let \(q\) be as in Fig. 2. We have that \(q\) violates \(\mathsf{C}_{\mathsf{branch}}\) (and therefore \(\mathsf{C}_{2}\)), since there is no homomorphism from \(q\) to neither \(q^{R:x_{1}\loop x_{2}}\) nor \(q^{R:x_{2}\loop x_{1}}\). Fig. 4 shows some example rooted relation trees annotated with the syntactic conditions they satisfy or violate. Our main classification result can now be stated. **Theorem 4.1** (Trichotomy Theorem).: _For every query \(q\) in \(\mathsf{TreeBCQ}\),_ * _if_ \(q\) _satisfies_ \(\mathsf{C}_{2}\)_, then the problem_ \(\mathsf{CERTAINTY}(q)\) _is in_ \(\mathbf{LFP}\)_; otherwise it is_ \(\mathbf{coNP}\)_-complete; and_ * _if_ \(q\) _satisfies_ \(\mathsf{C}_{1}\)_, then the problem_ \(\mathsf{CERTAINTY}(q)\) _is in_ \(\mathbf{FO}\)_; otherwise it is_ \(\mathbf{NL}\)_-hard._ Figure 3: An illustration of rewinding for the query of Fig. 2; the modified subtrees are highlighted in red. Let us provide some intuitions behind Theorem 4.1. 
Both \(\mathsf{C}_{\mathsf{prefix}}\) and \(\mathsf{C}_{\mathsf{factor}}\) concern the homomorphism from \(q\) to the rooted tree query obtained by rewinding from a subtree to its ancestor subtree, which resembles the case on path queries. The condition \(\mathsf{C}_{\mathsf{branch}}\) is vacuously satisfied for path queries, but is crucial to the classification of rooted tree queries. For the complexity lower bound, if \(q\) violates \(\mathsf{C}_{\mathsf{branch}}\), then \(\mathsf{CERTAINTY}(q)\) is \(\mathbf{coNP}\)-hard. For the upper bounds, \(\mathsf{C}_{\mathsf{branch}}\) makes atoms of \(q\) with the same relation name pairwise comparable: for two such atoms \(R[x]\) and \(R[y]\), we write \(R[x]\preceq_{q}R[y]\) if \(R[x]<_{q}R[y]\) or \(q^{y}_{\triangle}\leq_{y\to x}q^{x}_{\triangle}\). **Proposition 4.2**.: _Let \(q\) be a query in \(\mathsf{TreeBCQ}\) satisfying \(\mathsf{C}_{\mathsf{branch}}\). Then \(\preceq_{q}\) is a total preorder on the atoms of \(q\) that share the same relation name._ Proof.: Reflexivity is immediate, and totality follows from the definition of \(\preceq_{q}\) together with \(\mathsf{C}_{\mathsf{branch}}\). For transitivity, assume \(R[x]\preceq_{q}R[y]\) and \(R[y]\preceq_{q}R[z]\); we show \(R[x]\preceq_{q}R[z]\) by the following case analysis (the case where \(R[x]<_{q}R[y]\) and \(R[y]<_{q}R[z]\) is immediate). * Case that \(q^{y}_{\triangle}\leq_{y\to x}q^{x}_{\triangle}\) and \(q^{z}_{\triangle}\leq_{z\to y}q^{y}_{\triangle}\). Then we have \(q^{z}_{\triangle}\leq_{z\to x}q^{x}_{\triangle}\), as desired. * Case that \(R[x]<_{q}R[y]\) and \(q^{z}_{\triangle}\leq_{z\to y}q^{y}_{\triangle}\).
The claim follows if \(R[x]<_{q}R[z]\). Suppose for contradiction that \(R[z]<_{q}R[x]\). Then \(R[z]<_{q}R[y]\), and \(q^{z}_{\triangle}\) contains more atoms than \(q^{y}_{\triangle}\). However, we have \(q^{z}_{\triangle}\leq_{z\to y}q^{y}_{\triangle}\), a contradiction. It then must be that \(R[x]\parallel_{q}R[z]\). Suppose for contradiction that \(q^{x}_{\triangle}\leq_{x\to z}q^{z}_{\triangle}\). Then, by composing homomorphisms, we have \(q^{x}_{\triangle}\leq_{x\to y}q^{y}_{\triangle}\), but \(R[x]<_{q}R[y]\) implies that \(q^{x}_{\triangle}\) contains more atoms than \(q^{y}_{\triangle}\), a contradiction. Since \(q\) satisfies \(\mathsf{C_{branch}}\), we have \(q^{z}_{\triangle}\leq_{z\to x}q^{x}_{\triangle}\), as desired. * Case that \(q^{y}_{\triangle}\leq_{y\to x}q^{x}_{\triangle}\) and \(R[y]<_{q}R[z]\). The claim follows if \(R[x]<_{q}R[z]\). Suppose for contradiction that \(R[z]<_{q}R[x]\). Then \(R[y]<_{q}R[x]\), and \(q^{y}_{\triangle}\) contains more atoms than \(q^{x}_{\triangle}\). However, we have \(q^{y}_{\triangle}\leq_{y\to x}q^{x}_{\triangle}\), a contradiction. It then must be that \(R[x]\parallel_{q}R[z]\). Suppose for contradiction that \(q^{x}_{\triangle}\leq_{x\to z}q^{z}_{\triangle}\). Then, by composing homomorphisms, we have \(q^{y}_{\triangle}\leq_{y\to z}q^{z}_{\triangle}\), but \(R[y]<_{q}R[z]\) implies that \(q^{y}_{\triangle}\) contains more atoms than \(q^{z}_{\triangle}\), a contradiction. Since \(q\) satisfies \(\mathsf{C_{branch}}\), it follows that \(q^{z}_{\triangle}\leq_{z\to x}q^{x}_{\triangle}\). This concludes the proof. The remainder of this paper is organized as follows. Section 5 defines a context-free grammar \(\mathsf{CFG}^{\clubsuit}(q)\) for each \(q\in\mathsf{TreeBCQ}\), and the problem \(\mathsf{CERTAIN}_{\mathsf{tr}}(q)\) that concerns \(\mathsf{CFG}^{\clubsuit}(q)\). Lemma 5.1 establishes the equivalence of \(\mathsf{CERTAINTY}(q)\) and \(\mathsf{CERTAIN}_{\mathsf{tr}}(q)\) if \(q\) satisfies \(\mathsf{C_{2}}\) (or \(\mathsf{C_{1}}\)). In Section 6, we show that \(\mathsf{CERTAIN}_{\mathsf{tr}}(q)\) is in \(\mathbf{LFP}\) (and in \(\mathbf{PTIME}\)) if \(q\) satisfies \(\mathsf{C_{branch}}\). In Sections 7 and 8, we show the upper bounds and lower bounds in Theorem 4.1, respectively. In Section 9, we prove Theorems 1.3 and 1.4. ## 5 Context-Free Grammar We first generalize NFAs used in the study of path queries [32] to context-free grammars (CFGs). **Definition 5.1** (\(\mathsf{CFG}^{\clubsuit}(q)\)).: Let \(q\) be a query in \(\mathsf{TreeBCQ}\) with root variable \(r\). We define a context-free grammar \(\mathsf{CFG}^{\clubsuit}(q)\) over the string representations of rooted relation trees for each rooted tree query \(q\). The alphabet \(\Sigma\) of \(\mathsf{CFG}^{\clubsuit}(q)\) contains every relation symbol and constant in \(q\), open/close parentheses, \(\bot\), and comma. Whenever \(v\) is a variable or a constant in \(q\), there is a nonterminal symbol \(S_{v}\). Every symbol in \(\Sigma\) is a terminal symbol. The rules of \(\mathsf{CFG}^{\clubsuit}(q)\) are as follows: * for each atom \(R[y]=R(\underline{y},y_{1},y_{2},\ldots,y_{n})\) in \(q\), there is a forward production rule \[S_{y}\to_{q}R(S_{y_{1}},S_{y_{2}},\ldots,S_{y_{n}})\] (1) * whenever \(R[x]\) and \(R[y]\) are atoms in \(q\) such that \(R[x]<_{q}R[y]\), there is a backward production rule \[S_{y}\to_{q}S_{x}\] (2) * for every leaf variable \(u\) whose label \(L\) is either \(\bot\) or a unary relation name, there is a rule \[S_{u}\to_{q}L\] (3) * for each constant \(c\) in \(q\), there is a rule \[S_{c}\to_{q}c\] (4) The starting symbol of \(\mathsf{CFG}^{\clubsuit}(q)\) is \(S_{r}\) where \(r\) is the root variable of \(q\).
A rooted relation tree \(\tau\) is accepted by \(\mathsf{CFG}^{\clubsuit}(q)\), denoted as \(\tau\in\mathsf{CFG}^{\clubsuit}(q)\), if the string representation of \(\tau\) can be derived from \(S_{r}\), written as \(S_{r}\stackrel{{*}}{{\to}}_{q}\tau\). **Example 5.1**.: Let \(q\) be as in Fig. 2(a) with variables labeled as in Fig. 2(b). The rooted relation tree \(\tau\) in Fig. 3(c) has string representation \(\tau=A(\tau_{1},\tau_{2})\) where \[\tau_{1} =R(R(R(U,\bot),X(c_{1})),X(c_{1})),\] \[\tau_{2} =R(Y(\bot),Z(c_{2},\bot)).\] We have \(S_{x_{2}}\stackrel{{*}}{{\to}}_{q}\tau_{2}\) by applying only forward rewrite rules. We show next \(S_{x_{1}}\stackrel{{*}}{{\to}}_{q}\tau_{1}\), using a backward rewrite rule \(S_{x_{3}}\to_{q}S_{x_{1}}\) at some point: \[S_{x_{1}} \rightarrow_{q}R(S_{x_{3}},S_{x_{4}})\] \[\rightarrow_{q}R(S_{x_{1}},X(S_{c_{1}}))\] \[\rightarrow_{q}R(R(S_{x_{3}},S_{x_{4}}),X(c_{1}))\] \[\rightarrow_{q}R(R(R(S_{x_{7}},S_{x_{8}}),X(S_{c_{1}})),X(c_{1}))\] \[\rightarrow_{q}R(R(R(U,\bot)),X(c_{1})),X(c_{1}))=\tau_{1}.\] Thus \(S_{x_{0}}\rightarrow_{q}A(S_{x_{1}},S_{x_{2}})\stackrel{{*}}{{ \rightarrow}}_{q}A(\tau_{1},\tau_{2})=\tau\). So it is correct to conclude that \(\tau\) is accepted by \(\mathsf{CFG}^{\clubsuit}(q)\). Recall from Section 3 that a rooted tree set in a repair \(\mathbf{r}\) is uniquely determined by a rooted tree \(\tau\) and a constant \(c\); such a rooted tree set is said to be accepted by \(\mathsf{CFG}^{\clubsuit}(q)\) if \(\tau\in\mathsf{CFG}^{\clubsuit}(q)\). For our technical treatment later, we next define modifications of \(\mathsf{CFG}^{\clubsuit}(q)\) by changing its starting terminal. **Definition 5.2** (\(\mathsf{S-CFG}^{\clubsuit}(q,u)\)).: For a query \(q\) in \(\mathsf{TreeBCQ}\) and a variable \(u\) in \(q\), we define \(\mathsf{S-CFG}^{\clubsuit}(q,u)\) as the context-free grammar that accepts a rooted relation tree \(\tau\) if and only if \(S_{u}\stackrel{{*}}{{\rightarrow}}_{q}\tau\). We now introduce the _certain trace problem_. For each \(q\) in \(\mathsf{TreeBCQ}\), \(\mathsf{CERTAIN}_{\mathbf{r}}(q)\) is defined as the following decision problem: **PROBLEM**\(\mathsf{CERTAIN}_{\mathbf{r}}(q)\) **Input**: A database instance \(\mathbf{db}\). **Question**: Is there a constant \(c\in\mathbf{db}\), such that for every repair \(\mathbf{r}\) of \(\mathbf{db}\), there is a rooted tree set \(\tau\) in \(\mathbf{r}\) starting in \(c\) with \(\tau\in\mathsf{CFG}^{\clubsuit}(q)\)? The problems \(\mathsf{CERTAINITY}(q)\) and \(\mathsf{CERTAIN}_{\mathbf{r}}(q)\) reduce to each other if \(q\) satisfies \(\mathsf{C}_{2}\). **Lemma 5.1**.: _Let \(q\) be a query in \(\mathsf{TreeBCQ}\) satisfying \(\mathsf{C}_{2}\). Let \(\mathbf{db}\) be a database instance. Then the following statements are equivalent:_ 1. \(\mathbf{db}\) _is a "yes"-instance of_ \(\mathsf{CERTAINITY}(q)\)_; and_ 2. \(\mathbf{db}\) _is a "yes"-instance of_ \(\mathsf{CERTAIN}_{\mathbf{r}}(q)\)_._ The proof of Lemma 5.1 is deferred to Section 7 since it requires some useful results to be developed in the subsequent sections. ## 6 Membership of \(\mathsf{CERTAIN}_{\mathbf{r}}(q)\) in LFP In this section, we show that the problem \(\mathsf{CERTAIN}_{\mathbf{r}}(q)\) is expressible in \(\mathbf{LFP}\) (and thus in \(\mathbf{PTIME}\)) if \(q\) satisfies \(\mathsf{C}_{\mathsf{branch}}\). Let \(\mathbf{db}\) be a database instance. Consider the algorithm in Fig. 6, following a dynamic programming fashion. 
The algorithm iteratively computes a set \(B\) of pairs \(\langle c,y\rangle\) until it reaches a fixpoint, ensuring that whenever \(\langle c,y\rangle\) is added to \(B\), then every repair of \(\mathbf{db}\) contains a rooted tree set starting in \(c\) that is accepted by \(\mathsf{S-CFG}^{\clubsuit}(q,y)\). Intuitively, this holds true because \(\langle c,y\rangle\) is added to \(B\) if for every possible fact \(f=R(\underline{c},\vec{d})\) that can be chosen by a repair of \(\mathbf{db}\), the context-free grammar \(\mathsf{S-CFG}^{\clubsuit}(q,y)\) can proceed by firing forward rule with nonterminal \(S_{y}\) that consumes \(f\) from the rooted tree set, or by non-deterministically firing some backward rule of the form \(S_{y}\rightarrow_{q}S_{x}\). The formal semantics for each pair \(\langle c,y\rangle\) is stated in Lemma 6.1. **Lemma 6.1**.: _Let \(q\) be a query in \(\mathsf{TreeBCQ}\) satisfying \(\mathsf{C}_{\mathsf{branch}}\). Let \(\mathbf{db}\) be a database instance. Let \(B\) be the output of the algorithm in Fig. 6. Then for any constant \(c\in\mathsf{adom}(\mathbf{db})\) and a variable \(y\) in \(q\), the following statements are equivalent:_ 1. \(\langle c,y\rangle\in B\)_; and_ 2. _for every repair_ \(\mathbf{r}\) _of_ \(\mathbf{db}\)_, there exists a rooted tree set_ \(\tau\) _in_ \(\mathbf{r}\) _starting in_ \(c\) _such that_ \(\tau\in\mathsf{S-CFG}^{\clubsuit}(q,y)\)_._ The crux in the proof of Lemma 6.1 relies on the existence of repairs called _frugal_: to show item (2) of Lemma 6.1, it will be sufficient to show that it holds true for frugal repairs. Frugal repairs also turn out to be useful in proving Lemma 5.1 and offer an alternative perspective to the algorithm, as stated in Corollary 7.1. ### Frugal repairs We first show that the evaluation result of the predicate "fact" and the membership in \(B\) in the algorithm of Fig. 6 propagate along the total preorder \(\preceq_{q}\). **Lemma 6.2**.: _Let \(q\) be a query in \(\mathsf{TreeBCQ}\) satisfying \(\mathsf{C_{branch}}\), and \(\mathbf{db}\) a database instance. Let \(R[x],R[y]\) be two atoms of \(q\). Then for every fact \(R(\underline{c},\vec{d})\) in \(\mathbf{db}\) and two atoms \(R[x]\preceq_{q}R[y]\),_ 1. _if_ \(\mathsf{fact}(R(\underline{c},\vec{d}),x)\) _is true, then_ \(\mathsf{fact}(R(\underline{c},\vec{d}),y)\) _is true, where_ \(\mathsf{fact}\) _is defined by the algorithm of Fig._ 6_; and_ 2. _if_ \(\langle c,x\rangle\in B\)_, then_ \(\langle c,y\rangle\in B\)_, where_ \(B\) _is the output of the algorithm of Fig._ 6_; and_ The technical proof of Lemma 6.2 is deferred to Appendix B. **Definition 6.1** (Frugal Set).: Let \(q\) be a query in \(\mathsf{TreeBCQ}\) satisfying \(\mathsf{C_{branch}}\), and \(\mathbf{db}\) a database instance. Let \(f=R(\underline{c},\vec{d})\) be an \(R\)-fact in \(\mathbf{db}\). We define the frugal set of \(f\) in \(\mathbf{db}\) with respect to \(q\) as \[\mathsf{FrugalSet}_{q}(f,\mathbf{db})=\{R[x]\in\mathsf{atoms}(q)\mid\mathsf{ fact}(R(\underline{c},\vec{d}),x)\text{ is true}\}.\] **Lemma 6.3**.: _Let \(q\) be a query in \(\mathsf{TreeBCQ}\) satisfying \(\mathsf{C_{branch}}\), and \(\mathbf{db}\) a database instance. 
For every two key-equal facts \(f\) and \(g\) in \(\mathbf{db}\), the sets \(\mathsf{FrugalSet}_{q}(f,\mathbf{db})\) and \(\mathsf{FrugalSet}_{q}(g,\mathbf{db})\) are comparable by \(\subseteq\)._ Proof.: Suppose for contradiction that there exists two key-equal facts \(f=R(\underline{c},\vec{d_{1}})\) and \(g=R(\underline{c},\vec{d_{2}})\) in \(\mathbf{db}\) such that \(R[x]\in\mathsf{FrugalSet}_{q}(f,\mathbf{db})\setminus\mathsf{FrugalSet}_{q}( g,\mathbf{db})\) and \(R[y]\in\mathsf{FrugalSet}_{q}(g,\mathbf{db})\setminus\mathsf{FrugalSet}_{q}( f,\mathbf{db})\). By Proposition 4.2, assume without loss of generality that \(R[x]\preceq_{q}R[y]\). Then since \(R[x]\in\mathsf{FrugalSet}_{q}(f,\mathbf{db})\), we have \(\mathsf{fact}(R(\underline{c},\vec{d_{1}}),x)\) is true, and thus \(\mathsf{fact}(R(\underline{c},\vec{d_{1}}),y)\) is true by Lemma 6.2, and hence \(R[y]\in\mathsf{FrugalSet}_{q}(f,\mathbf{db})\), a contradiction. A similar contradiction can also be reached if \(R[y]\preceq_{q}R[x]\). This completes the proof. Informally, by Lemma 6.3, among all facts of a non-empty block \(R(\underline{c},*)\) in \(\mathbf{db}\), there is a (not necessarily unique) fact \(R(\underline{c},\vec{d})\) with a \(\subseteq\)-minimal frugal set in \(\mathbf{db}\). The repair \(\mathbf{r}^{*}\) of \(\mathbf{db}\) containing all such facts is frugal in the sense that each fact in it satisfies as few \(R\)-atoms as possible; and if \(\mathbf{r}^{*}\) contains a rooted tree set \(\tau\) starting in \(c\) accepted by \(\mathsf{S}\mathsf{-CFG}^{\clubsuit}(q,y)\), so should every repair of \(\mathbf{db}\). We now formalize this idea. **Definition 6.2** (Frugal repair).: Let \(q\) be a query in \(\mathsf{TreeBCQ}\) satisfying \(\mathsf{C_{branch}}\). Let \(\mathbf{db}\) be a database instance. A _frugal repair_\(\mathbf{r}^{*}\) of \(\mathbf{db}\) with respect to \(q\) is constructed by picking, from each block \(R(\underline{c},*)\) of \(\mathbf{db}\), a fact \(R(\underline{c},\vec{d})\) which \(\subseteq\)-minimizes \(\mathsf{FrugalSet}_{q}(R(\underline{c},\vec{d}),\mathbf{db})\). Lemma 6.4 is then straightforward by construction of a frugal repair. **Lemma 6.4**.: _Let \(q\) be a rooted tree query satisfying \(\mathsf{C_{branch}}\). Let \(\mathbf{db}\) be a database instance. Let \(\mathbf{r}^{*}\) be a frugal repair of \(\mathbf{db}\) with respect to \(q\) and let \(R(\underline{c},\vec{d})\in\mathbf{r}^{*}\). Let \(R[u]\) be an atom in \(q\). If \(\mathsf{fact}(R(\underline{c},\vec{d}),u)\) is true, then \(\langle c,u\rangle\in B\)._ Figure 6: A fixpoint algorithm for computing a set \(B\), for a fixed rooted tree \(q\). Proof.: Let \(R(\underline{c},\underline{b})\) be an arbitrary factor in the block \(R(\underline{c},*)\) in \(\mathbf{db}\). By construction of a frugal repair, we have that \(\mathsf{FrugalSet}_{q}(R(\underline{c},\vec{d}),\mathbf{db})\subseteq\mathsf{ FrugalSet}_{q}(R(\underline{c},\vec{b}),\mathbf{db})\). Since \(R(\underline{c},\vec{d})\in\mathbf{r}^{*}\) and \(\mathsf{fact}(R(\underline{c},\vec{d}),u)\) is true, we have \(R[u]\in\mathsf{FrugalSet}_{q}(R(\underline{c},\vec{d}),\mathbf{db})\). Thus, \(R[u]\in\mathsf{FrugalSet}_{q}(R(\underline{c},\vec{b}),\mathbf{db})\) and \(\mathsf{fact}(R(\underline{c},\vec{b}),u)\) is true. Hence \(\langle c,u\rangle\in B\). Lemma 6.5 shows a desirable property of frugal repairs. **Lemma 6.5**.: _Let \(q\) be a query in \(\mathsf{TreeBCQ}\) satisfying \(\mathsf{C_{branch}}\). Let \(\mathbf{db}\) be a database instance. 
Let \(\mathbf{r}^{*}\) be a frugal repair of \(\mathbf{db}\) with respect to \(q\). If there is a rooted tree set \(\tau\) in \(\mathbf{r}^{*}\) starting in \(c\) such that \(\tau\in\mathsf{S}\mbox{-}\mathsf{CFG}^{\clubsuit}(q,y)\), then \(\langle c,y\rangle\in B\)._ Proof.: Let \(\tau\) be a rooted tree set starting in \(c\) in \(\mathbf{r}^{*}\) such that \(\tau\in\mathsf{S}\mbox{-}\mathsf{CFG}^{\clubsuit}(q,y)\). We recursively define a tree trace \(\mathcal{T}\) on nodes of the form \((c,x)\), where \(c\in\mathsf{adom}(\mathbf{r}^{*})\) and \(x\) is a variable in \(q\), as follows: * the root node of \(\mathcal{T}\) is \((c,y,\tau)\); and * whenever \((a,u,\sigma)\) is a node in \(\mathcal{T}\) with a rooted tree set \(\sigma\) starting in \(a\) in \(\mathbf{r}^{*}\) for an atom \(R(\underline{u},u_{1},u_{2},\ldots,u_{n})\) in \(q\) and fact \(R(\underline{a},b_{1},b_{2},\ldots,b_{n})\) in \(\mathbf{r}^{*}\), 1. if \(\mathsf{S}\mbox{-}\mathsf{CFG}^{\clubsuit}(q,y)\) invokes a forward production rule \[S_{u}\to_{q}R(S_{u_{1}},S_{u_{2}},\ldots,S_{u_{n}}),\] then the node \((a,u,\sigma)\) has \(n\) outgoing \(R\)-edges to its children \((b_{1},u_{1},\tau_{1})\), \((b_{2},u_{2},\tau_{2})\),..., \((b_{n},u_{n},\tau_{n})\); or 2. if \(\mathsf{S}\mbox{-}\mathsf{CFG}^{\clubsuit}(q,y)\) invokes a backward production rule \(S_{u}\to_{q}S_{v}\), then the node \((a,u,\sigma)\) has a single outgoing \(\varepsilon\)-edge to its only child \((a,v,\sigma)\). The tree trace \(\mathcal{T}\) succinctly records the rule productions that witness \(\tau\in\mathsf{S}\mbox{-}\mathsf{CFG}^{\clubsuit}(q,y)\) in \(\mathbf{r}^{*}\). We use a structural induction to show that for every node \((a,u,\sigma)\) in \(\mathcal{T}\), \(\langle a,u\rangle\in B\). * Basis. The claim holds for every leaf node \((a,u,\sigma)\) in \(\mathcal{T}\), since if \(\sigma=\bot\), then \(\langle a,u\rangle\in B\), or otherwise \(\sigma=L\) starting in \(a\) in \(\mathbf{r}^{*}\) for some unary relation name \(L\), and we have \(L(a)\) is in \(\mathbf{db}\). * Inductive step. Let \((a,u,\sigma)\) be a node in \(\mathcal{T}\). Assume that for any child node \((b,w,\sigma^{\prime})\) of \((a,u)\) in \(\mathcal{T}\) (possibly \(b=a\)), \(\langle b,w\rangle\in B.\) It suffices to argue that for the atom \(R[u]=R(\underline{u},u_{1},u_{2},\ldots,u_{n})\) in \(q\), \(\langle a,u\rangle\in B\). 1. Case that \((a,u,\sigma)\) has child nodes \((b_{1},u_{1},\tau_{1})\), \((b_{2},u_{2},\tau_{2})\),..., \((b_{n},u_{n},\tau_{n})\) in \(\mathcal{T}\) with \(\sigma=R(\tau_{1},\tau_{2},\ldots,\tau_{n})\). By the inductive hypothesis \(\langle b_{i},u_{i}\rangle\in B\) for every \(1\leq i\leq n\), which yields that \(\mathsf{fact}(R(\underline{a},\vec{b}),u)\) is true, where \(\vec{b}=\langle b_{1},b_{2},\ldots,b_{n}\rangle\). Then by Lemma 6.4, \(\langle a,u\rangle\in B\). 2. Case that \((a,u,\sigma)\) has a child node \((a,v,\sigma)\) in \(\mathcal{T}\) connected with an \(\varepsilon\)-edge. Then there is some atom \(R[v]\) with \(R[v]<_{q}R[u]\). By the inductive hypothesis on the child \((a,v,\sigma)\), \(\langle a,v\rangle\in B\). Hence \(\langle a,u\rangle\in B\) by Lemma 6.2. This completes the proof. The proof of Lemma 6.1 can now be given. Proof of Lemma 6.1.: [2 \(\Longrightarrow\) 1] Let \(\mathbf{r}^{*}\) be a frugal repair of \(\mathbf{db}\) with respect to \(q\). Then there is a rooted tree set \(\tau\) starting in \(c\) in \(\mathbf{r}^{*}\) with \(\tau\in\mathsf{S}\mbox{-}\mathsf{CFG}^{\clubsuit}(q,y)\). 
The claim follows by Lemma 6.5. [1 \(\Longrightarrow\) 2] Assume that \(\langle c,y\rangle\in B\). We use induction on \(k\) to show that if \(\langle c,y\rangle\) is added to \(B\) at the \(k\)-th iteration, then for any repair \(\mathbf{r}\) of \(\mathbf{db}\), there exists a rooted tree set \(\tau\) starting in \(c\) in \(\tau\) with \(\tau\in\mathsf{S}\mbox{-}\mathsf{CFG}^{\clubsuit}(q,y)\). * Basis \(k=0\). Then every \(\langle c,u\rangle\) is added to \(B\) for every leaf variable \(u\) of \(q\) such that either the label of \(u\) in \(q\) is \(\bot\), or a unary relation name \(L\). If the label of \(u\) is \(\bot\), the empty rooted tree set \(\tau=\emptyset\) starting in \(c\) with string representation \(\bot\) is accepted by \(\mathsf{S}\mbox{-}\mathsf{CFG}^{\clubsuit}(q,u)\), Otherwise, we must have \(L(c)\in\mathbf{db}\), and the rooted tree set \(\tau=L\) starting in \(c\) is accepted by \(\mathsf{S}\mbox{-}\mathsf{CFG}^{\clubsuit}(q,u)\). * Inductive step. Assume that \(\langle c,y\rangle\) is added to \(B\) in the \(k\)-th iteration, and for any tuple \(\langle b,x\rangle\) added to \(B\) prior to the addition of \(\langle c,y\rangle\), any repair of \(\mathbf{db}\) contains a rooted tree set \(\tau\in\mathsf{S}\mbox{-}\mathsf{CFG}^{\clubsuit}(q,x)\) starting in \(b\). Let \(\mathbf{r}\) be any repair of \(\mathbf{db}\). It suffices to construct a rooted tree set \(\tau\) in \(\mathbf{r}\) starting in \(c\) such that \(\tau\in\mathsf{S}\mbox{-}\mathsf{CFG}^{\clubsuit}(q,y)\). Let \(R[y]=R(\underline{y},y_{1},y_{2},\ldots,y_{n})\). Let \(R(\underline{c},d_{1},d_{2},\ldots,d_{n})\in\mathbf{r}\) and let \(\vec{d}=\langle d_{1},d_{2},\ldots,d_{n}\rangle\). Since \(\langle c,y\rangle\in B\), \(\mathsf{fact}(R(\underline{c},\vec{d}),y)\) is true. Consider two cases. Case that \(\langle d_{i},y_{i}\rangle\in B\) for every \(1\leq i\leq n\). Since each \(\langle d_{i},y_{i}\rangle\) was added to \(B\) in an iteration \(<k\), by the inductive hypothesis, there is a rooted tree set \(\tau_{i}\) starting in \(d_{i}\) in \(\mathbf{r}\) with \(\tau_{i}\in\mathsf{S\mbox{-}CFG}^{\clubsuit}(q,y_{i})\), i.e., \(S_{y_{i}}\stackrel{{*}}{{\rightarrow}}_{q}\tau_{i}\). Consider the rooted tree set \(\tau=\{R(\underline{c},\vec{d})\}\cup\bigcup_{1\leq i\leq n}\tau_{i}\), starting in \(c\) in \(\mathbf{r}\) with a string representation \(\tau=R(\tau_{1},\tau_{2},\ldots,\tau_{n})\). From \[S_{y} \rightarrow_{q}R(S_{y_{1}},S_{y_{2}},\ldots,S_{y_{n}})\] \[\stackrel{{*}}{{\rightarrow}}_{q}R(\tau_{1},\tau_{2 },\ldots,\tau_{n})=\tau,\] we conclude that \(\tau\in\mathsf{S\mbox{-}CFG}^{\clubsuit}(q,y)\). * Case that \(\mathsf{fact}(R(\underline{c},\vec{d}),x)\) is true for some \(R[x]<_{q}R[y]\). Without loss of generality, we assume that \(x\) is the smallest with respect to \(<_{q}\) for the atom \(R(\underline{x},x_{1},x_{2},\ldots,x_{n})\). Hence we must have \(\langle d_{i},x_{i}\rangle\in B\) for every \(1\leq i\leq n\), and by the previous case, there exists a rooted tree set \(\tau_{i}\) starting in \(d_{i}\) such that \(\tau_{i}\in\mathsf{S\mbox{-}CFG}^{\clubsuit}(q,x_{i})\), i.e., \(S_{x_{i}}\stackrel{{*}}{{\rightarrow}}_{q}\tau_{i}\). Since \(R[x]<_{q}R[y]\), we have \[S_{y} \rightarrow_{q}S_{x}\] \[\rightarrow_{q}R(S_{x_{1}},S_{x_{2}},\ldots,S_{x_{n}})\] \[\stackrel{{*}}{{\rightarrow}}_{q}R(\tau_{1},\tau_{2 },\ldots,\tau_{n})=\tau,\] and therefore \(\tau\in\mathsf{S\mbox{-}CFG}^{\clubsuit}(q,y)\). The proof is hence complete. 
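Since Fig. 6 is only referenced above and not reproduced in full, the following Python sketch reconstructs the fixpoint computation of \(B\) from the semantics just established; the recursive case of \(\mathsf{fact}\) mirrors Lemma 6.7 and the inductive step in the proof of Lemma 6.1. The data representation and all identifiers are our own illustrative assumptions (in particular, we assume that every \(R\)-atom keys a non-leaf variable and that all \(R\)-atoms have the same arity), so this is a sketch of the idea rather than the paper's pseudocode.

```python
from collections import defaultdict

def compute_B(query_atoms, leaf_labels, smaller, db):
    """query_atoms: dict  y -> (R, [y_1, ..., y_n])  for every non-leaf variable y
       leaf_labels: dict  u -> "_|_" or a unary relation name, for every leaf u
       smaller:     dict  y -> list of variables x with R[x] <_q R[y] (same relation R)
       db:          set of facts (R, key_value, tuple_of_other_values)"""
    blocks = defaultdict(list)                    # (R, c) -> non-key tuples of the block R(c, *)
    for (R, c, vals) in db:
        blocks[(R, c)].append(vals)
    adom = {c for (_, c, _) in db} | {v for (_, _, vals) in db for v in vals}

    B = set()
    # Initialization (basis of Lemma 6.1): a leaf labelled _|_ holds for every
    # constant; a leaf labelled with a unary relation L holds for c whenever L(c) is in db.
    for u, lab in leaf_labels.items():
        for c in adom:
            if lab == "_|_" or (lab, c, ()) in db:
                B.add((c, u))

    def fact_true(R, c, vals, y):
        # fact(R(c, vals), y): every child pair is already in B, or some smaller
        # R-atom R[x] <_q R[y] can consume the fact instead (the backward rule).
        _, children = query_atoms[y]
        if all((d, yi) in B for d, yi in zip(vals, children)):
            return True
        return any(fact_true(R, c, vals, x) for x in smaller.get(y, []))

    changed = True
    while changed:                                # fixpoint iteration
        changed = False
        for y, (R, _) in query_atoms.items():
            for c in adom:
                if (c, y) in B or not blocks[(R, c)]:
                    continue
                # every fact a repair could pick from the block R(c, *) must work:
                if all(fact_true(R, c, vals, y) for vals in blocks[(R, c)]):
                    B.add((c, y))
                    changed = True
    return B
```

The decision procedure then simply tests whether \(\langle c,r\rangle\in B\) for some constant \(c\), where \(r\) is the root variable of \(q\).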
### Expressibility in LFP and FO **Lemma 6.6**.: _For every query \(q\) in \(\mathsf{TreeBCQ}\) that satisfies \(\mathsf{C_{branch}}\), \(\mathsf{CERTAIN}_{\mathsf{r}}(q)\) is expressible in \(\mathbf{LFP}\) (and thus is in \(\mathbf{PTIME}\))._ Proof.: Let \(r\) be the root variable of \(q\). Our algorithm first computes the set \(B\), and then checks \(\exists c:\langle c,r\rangle\in B\). The algorithm is correct by Lemma 6.1. For a rooted tree query \(q\), define the following formula in \(\mathbf{LFP}\)[40]: \[\psi_{q}(s,t):=\left[\mathbf{Ifp}_{B,x,z}\varphi_{q}(B,x,z)\right](s,t), \tag{5}\] where we have \[\varphi_{q}(B,x,z):=\left(\begin{array}{rl}&\alpha(x)\wedge(z=R[u])\\ \wedge&\exists yR(\underline{x},\vec{y})\\ \wedge&\forall\vec{y}\left(R(\underline{x},\vec{y})\rightarrow\mathsf{fact}(R( \underline{x},\vec{y}),u)\right)\end{array}\right)\] and the formula \(\mathsf{fact}(R(\underline{x},\vec{y}),u)\) is defined in Fig. 6. The initialization step of \(B\) in Fig. 6 is expressible in \(\mathbf{FO}\). Herein, \(\alpha(x)\) denotes a first-order query that computes the active domain. That is, for every database instance \(\mathbf{db}\) and constant \(c\), \(\mathbf{db}\models\alpha(c)\) if and only if \(c\in\mathsf{adom}(\mathbf{db})\). It is easy to verify that the LFP formula in (5) computes the set \(B\) in Fig. 6. We now show that if \(q\) satisfies \(\mathsf{C}_{1}\), we can safely remove the recursion from the algorithm in Fig. 6. **Lemma 6.7**.: _Let \(q\) be a rooted tree query satisfying \(\mathsf{C}_{1}\). Let \(\mathbf{db}\) be a database. Let \(R[y]=R(\underline{y},y_{1},y_{2},\ldots,y_{n})\) be an atom in \(q\) and let \(R(\underline{c},\vec{d})=R(\underline{c},d_{1},d_{2},\ldots,d_{n})\) be a fact in \(\mathbf{db}\). Then \(\mathsf{fact}(R(\underline{c},\vec{d}),y)\) is true if and only if for every atom \(T_{i}[y_{i}]\) in \(q\), \(\langle d_{i},y_{i}\rangle\in B\)._ Proof.: Immediate by definition of \(\mathsf{fact}(R(\underline{c},\vec{d}),y)\). Assume that \(\mathsf{fact}(R(\underline{c},\vec{d}),y)\) is true. Let \(R[x]\) be a minimal atom with respect to \(<_{q}\) such that \(R[x]<_{q}R[y]\) and \(\mathsf{fact}(R(\underline{c},\vec{d}),x)\) is true. If such an atom \(R[x]\) does not exist, then the claim follows by definition of \(\mathsf{fact}(R(\underline{c},\vec{d}),y)\). Otherwise, since \(R[x]\) is minimal with respect to \(<_{q}\), for every atom \(T_{i}[x_{i}]\) in \(q\), \(\langle d_{i},x_{i}\rangle\in B\), where \(R(\underline{x},\vec{x})=R(\underline{x},x_{1},x_{2},\ldots,x_{n})\). It suffices to show that for every atom \(T_{i}[y_{i}]\) in \(q\), \(\langle d_{i},y_{i}\rangle\in B\). Let \(T_{i}[y_{i}]\) be an atom in \(q\). From \(\mathsf{C}_{1}\) and \(R[x]<_{q}R[y]\), \(q_{\triangle}^{y_{i}}\leq_{y_{i}\to x_{i}}q_{\triangle}^{x_{i}}\). Thus there is some atom \(T_{i}[x_{i}]\) in \(q\) with \(T_{i}[x_{i}]<_{q}T_{i}[y_{i}]\). Since \(\langle d_{i},x_{i}\rangle\in B\), by Lemma B.1, \(\langle d_{i},y_{i}\rangle\in B\) **Lemma 6.8**.: _For every \(q\) in \(\mathsf{TreeBCQ}\) that satisfies \(\mathsf{C}_{1}\), \(\mathsf{CERTAIN}_{\mathsf{tr}}(q)\) is in \(\mathbf{FO}\)._ Proof.: Consider the following variant of the algorithm in Fig. 6, where we simply have \[\mathsf{fact}(R(\underline{c},\vec{d}),y)=\bigwedge_{1\leq i\leq n}\langle d_{ i},y_{i}\rangle\in B.\] The variant algorithm is correct for \(\mathsf{CERTAIN}_{\mathsf{tr}}(q)\) by Lemma 6.7. 
Since the size of the query \(q\) is fixed, for every constant \(c\) and variable \(y\) in \(q\), deciding whether \(\langle c,y\rangle\in B\) is in \(\mathbf{FO}\), since the algorithm in Fig. 6 can be expanded into a sentence of fixed size. So is our algorithm, which checks \(\exists c:\langle c,r\rangle\in B\).

## 7 Complexity Upper Bounds

In this section, we prove the upper bound results in Theorem 4.1. First, we shall prove Lemma 5.1.

**Lemma 7.1**.: _Let \(q\) be a rooted tree query. Then \(q\) satisfies \(\mathsf{C}_{\mathsf{factor}}\) if and only if \(q\leq_{\rightarrow}\tau\) for every \(\tau\in\mathsf{CFG}^{\clubsuit}(q)\)._

Proof of Lemma 7.1.: Consider two directions.

\([\Longleftarrow]\) Let \(R[x]\) and \(R[y]\) be two atoms in \(q\) with \(R[x]<_{q}R[y]\). It suffices to show that \(q^{R:y\diamond x}\in\mathsf{CFG}^{\clubsuit}(q)\). Indeed, there is an execution of \(S_{r}(q^{R:y\diamond x})\) that follows exactly \(S_{r}(q)\), until it invokes \(S_{y}(q_{\triangle}^{x})\) instead of \(S_{y}(q_{\triangle}^{y})\) in \(S_{r}(q)\). Note that \(S_{y}\rightarrow_{q}S_{x}\stackrel{{*}}{{\rightarrow}}_{q}q_{\triangle}^{x}\). Thus \(S_{r}\stackrel{{*}}{{\rightarrow}}_{q}q^{R:y\diamond x}\), concluding that \(q^{R:y\diamond x}\in\mathsf{CFG}^{\clubsuit}(q)\).

\([\Longrightarrow]\) Let \(\tau\in\mathsf{CFG}^{\clubsuit}(q)\) with \(S_{r}\stackrel{{*}}{{\rightarrow}}_{q}\tau\). We use an induction on the number \(k\) of backward transitions in \(S_{r}\stackrel{{*}}{{\rightarrow}}_{q}\tau\) to show that \(q\leq_{\rightarrow}\tau\).

* Basis \(k=0\). We have \(\tau=q\), and the claim follows.
* Inductive step \(k\to k+1\). Assume that if \(S_{r}\stackrel{{*}}{{\rightarrow}}_{q}\sigma\) uses \(k\) backward transitions, then \(q\leq_{\rightarrow}\sigma\). Let \(\tau\in\mathsf{CFG}^{\clubsuit}(q)\) such that \(S_{r}\stackrel{{*}}{{\rightarrow}}_{q}\tau\) uses \(k+1\) backward transitions. Let \(\sigma\) be a subtree of \(\tau\) such that the execution of \(S_{r}(\sigma)\) invokes exactly \(1\) backward transition \(S_{y}\rightarrow_{q}S_{x}\stackrel{{*}}{{\rightarrow}}_{q}\sigma\). Hence \(\sigma=q_{\triangle}^{x}\). Consider the rooted tree \(\tau^{*}\), obtained by replacing \(\sigma=q_{\triangle}^{x}\) with \(\sigma^{*}=q_{\triangle}^{y}\). We have \(\tau^{*}\in\mathsf{CFG}^{\clubsuit}(q)\), since \(S_{r}\stackrel{{*}}{{\rightarrow}}_{q}\tau^{*}\) would invoke \(S_{y}\stackrel{{*}}{{\rightarrow}}_{q}\sigma^{*}\) and use exactly \(k\) backward transitions. By the inductive hypothesis, there is a homomorphism \(h\) from \(q\) to \(\tau^{*}\). If \(h(q)\cap\sigma^{*}=\emptyset\), then \(h(q)\) is still present in \(\tau\), and thus \(q\leq_{\rightarrow}\tau\). Otherwise, assume that the homomorphism \(h\) maps \(q_{\triangle}^{z}\) in \(q\) to \(\sigma^{*}\). Hence \(R[x]<_{q}R[y]<_{q}R[z]\). Since \(q\) satisfies \(\mathsf{C}_{\mathsf{factor}}\), there is a homomorphism \(g\) from \(q\) to \(q^{R:z\diamond x}\), and thus a homomorphism from \(q\) to \(\tau\).

The proof is now complete.

The following definition is helpful in our exposition.

**Definition 7.1**.: Let \(q\) be a rooted tree query. Let \(\mathbf{db}\) be a database. For each repair \(\mathbf{r}\) of \(\mathbf{db}\), we define \(\mathsf{start}(q,\mathbf{r})\) as the set containing all (and only) constants \(c\in\mathsf{adom}(\mathbf{r})\) such that there is a rooted tree set \(\tau\) in \(\mathbf{r}\) starting in \(c\) with \(\tau\in\mathsf{CFG}^{\clubsuit}(q)\).
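Definition 7.1 is phrased in terms of membership of a rooted tree set in \(\mathsf{CFG}^{\clubsuit}(q)=\mathsf{S-CFG}^{\clubsuit}(q,r)\). For illustration, the following sketch checks whether a tree set, given by its string representation, can be derived from the nonterminal \(S_{y}\) by forward and backward productions. The encoding of tree sets as nested tuples, and the dictionaries carried over from the sketch above, are our own assumptions.

```python
def accepted(tau, y, query_atoms, leaf_labels, smaller):
    """Does some derivation from S_y produce tau?  tau is None (the empty set,
    string representation _|_), a unary relation name (a leaf), or a nested
    tuple (R, child_1, ..., child_n)."""
    if y in leaf_labels:                          # S_y -> _|_   or   S_y -> L
        lab = leaf_labels[y]
        return tau is None if lab == "_|_" else tau == lab
    R, children = query_atoms[y]
    if isinstance(tau, tuple) and tau[0] == R and len(tau) - 1 == len(children):
        if all(accepted(t, yi, query_atoms, leaf_labels, smaller)
               for t, yi in zip(tau[1:], children)):
            return True                           # forward rule S_y -> R(S_y1, ..., S_yn)
    return any(accepted(tau, x, query_atoms, leaf_labels, smaller)
               for x in smaller.get(y, []))       # backward rule S_y -> S_x
```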
The problem \(\mathsf{CERTAIN}_{\mathsf{tr}}(q)\) essentially asks whether there is some constant \(c\) such that for every repair \(\mathbf{r}\) of \(\mathbf{db}\), \(c\in\mathsf{start}(q,\mathbf{r})\). Surprisingly, the frugal repair \(\mathbf{r}^{*}\) of \(\mathbf{db}\) minimizes \(\mathsf{start}(q,\cdot)\) across all repairs of \(\mathbf{db}\).

**Lemma 7.2**.: _Let \(q\) be a rooted tree query satisfying \(\mathsf{C}_{\mathsf{branch}}\). Let \(\mathbf{db}\) be a database. Let \(\mathbf{r}^{*}\) be a frugal repair of \(\mathbf{db}\). Then for any repair \(\mathbf{r}\) of \(\mathbf{db}\), \(\mathsf{start}(q,\mathbf{r}^{*})\subseteq\mathsf{start}(q,\mathbf{r})\)._

Proof of Lemma 7.2.: Let \(B\) be the output of the algorithm in Fig. 6. Let \(\mathbf{r}^{*}\) be a frugal repair of \(\mathbf{db}\), and let \(\mathbf{r}\) be any repair of \(\mathbf{db}\). We show that \(\mathsf{start}(q,\mathbf{r}^{*})\subseteq\mathsf{start}(q,\mathbf{r})\). Let \(r\) be the root variable of \(q\). Assume that \(c\in\mathsf{start}(q,\mathbf{r}^{*})\). Then there exists a rooted tree set \(\tau\) starting in \(c\) in \(\mathbf{r}^{*}\) with \(\tau\in\mathsf{CFG}^{\clubsuit}(q)=\mathsf{S-CFG}^{\clubsuit}(q,r)\). By Lemma 6.5, we have \(\langle c,r\rangle\in B\). By Lemma 6.1, there exists a rooted tree set \(\tau^{\prime}\) starting in \(c\) in \(\mathbf{r}\) with \(\tau^{\prime}\in\mathsf{S-CFG}^{\clubsuit}(q,r)=\mathsf{CFG}^{\clubsuit}(q)\). Thus \(c\in\mathsf{start}(q,\mathbf{r})\).

The proof of Lemma 5.1 can now be given.

Proof of Lemma 5.1.: \(\left[1\Longrightarrow 2\right]\) Assume (1). Let \(\mathbf{r}^{*}\) be a frugal repair of \(\mathbf{db}\). Since \(\mathbf{r}^{*}\) satisfies \(q\), there is a constant \(c\) such that a rooted tree set starting in \(c\) and isomorphic to \(q\) is contained in \(\mathbf{r}^{*}\). Since \(q\in\mathsf{CFG}^{\clubsuit}(q)\), we have \(c\in\mathsf{start}(q,\mathbf{r}^{*})\). By Lemma 7.2, for every repair \(\mathbf{r}\) of \(\mathbf{db}\), \(\mathsf{start}(q,\mathbf{r}^{*})\subseteq\mathsf{start}(q,\mathbf{r})\). It follows that \(c\in\mathsf{start}(q,\mathbf{r})\) for every repair \(\mathbf{r}\) of \(\mathbf{db}\).

\(\left[2\Longrightarrow 1\right]\) Let \(\mathbf{r}\) be any repair of \(\mathbf{db}\). By our hypothesis that (2) holds true, there is some \(c\in\mathsf{start}(q,\mathbf{r})\). Let \(\tau\) be a rooted tree set in \(\mathbf{r}\) starting in \(c\) with \(\tau\in\mathsf{CFG}^{\clubsuit}(q)\). Since \(q\) satisfies \(\mathsf{C}_{2}\) by the hypothesis of the current lemma, it follows by Lemma 7.1 that \(q\leq_{\rightarrow}\tau\). Consequently, \(\mathbf{r}\) satisfies \(q\).

The upper bounds in Theorem 4.1 thus follow.

**Proposition 7.1**.: _For every \(q\) in \(\mathsf{TreeBCQ}\),_

1. _if \(q\) satisfies \(\mathsf{C}_{2}\), then \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{LFP}\); and_
2. _if \(q\) satisfies \(\mathsf{C}_{1}\), then \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{FO}\)._

Proof.: Immediate from Lemmas 5.1, 6.6, and 6.8 by noting that \(\mathsf{C}_{1}\) implies \(\mathsf{C}_{2}\).

Interestingly, for each query \(q\) in \(\mathsf{TreeBCQ}\) satisfying \(\mathsf{C}_{2}\), "checking the frugal repair is all you need". A repair with this property is known as a "universal repair" in [49].

**Corollary 7.1**.: _Let \(q\) be a query in \(\mathsf{TreeBCQ}\), and let \(\mathbf{db}\) be a database instance. Let \(\mathbf{r}^{*}\) be a frugal repair of \(\mathbf{db}\) with respect to \(q\).
If \(q\) satisfies \(\mathsf{C}_{2}\), then \(\mathbf{db}\) is a "yes"-instance of \(\mathsf{CERTAINTY}(q)\) if and only if \(\mathbf{r}^{*}\) satisfies \(q\)._

Proof.: Let \(\mathbf{r}^{*}\) be a frugal repair of \(\mathbf{db}\) with respect to \(q\).

\([\Longrightarrow]\) This direction is straightforward: every repair of a "yes"-instance satisfies \(q\), and \(\mathbf{r}^{*}\) is a repair.

\([\Longleftarrow]\) Assume that \(\mathbf{r}^{*}\) satisfies \(q\). Let \(r\) be the root variable of \(q\). Hence there exists a constant \(c\) in \(\mathbf{db}\) such that some rooted tree set \(\tau\) in \(\mathbf{r}^{*}\) starting in \(c\) is isomorphic to \(q\), and is therefore accepted by \(\mathsf{S}\)-\(\mathsf{CFG}^{\clubsuit}(q,r)\). Then by Lemma 6.5, \(\langle c,r\rangle\in B\), where \(B\) is the output of the algorithm in Fig. 6. Hence \(\mathbf{db}\) is a "yes"-instance for \(\mathsf{CERTAIN}_{\mathsf{tr}}(q)\), and by Lemma 5.1, a "yes"-instance for \(\mathsf{CERTAINTY}(q)\).

## 8 Complexity Lower Bounds

In this section, we show the hardness results in Theorem 4.1. We define a _canonical copy_ of a query \(q\) as a set of facts \(\mu(q)\), where \(\mu\) maps each variable in \(q\) to a unique constant. The following notation will be central in all our reductions. For a query \(q\), variables \(x_{i}\) in \(q\) and distinct constants \(c_{i}\), we denote \[\langle q,[x_{1},x_{2},\dots,x_{n}\to c_{1},c_{2},\dots,c_{n}]\rangle\] as the canonical copy \(\mu(q)\), where \[\mu(z)=\begin{cases}c_{i}&\text{if }z=x_{i}\\ \text{a fresh distinct constant}&\text{otherwise.}\end{cases}\]

**Lemma 8.1**.: \(\mathsf{CERTAINTY}(q)\) _is \(\mathbf{coNP}\)-hard for each \(q\) in \(\mathsf{TreeBCQ}\) that violates \(\mathsf{C}_{2}\)._

Proof.: Since \(q\) violates \(\mathsf{C}_{2}\), there exist two atoms \(R(\underline{p},\dots)\) and \(R(\underline{n},\dots)\) in \(q\) such that there is no homomorphism from \(q\) to either \(q^{R:p\diamond n}\) or \(q^{R:n\diamond p}\). Consider now the root atom \(A(\underline{r},\dots)\). It must be that \(r\neq p\), since otherwise there would be a homomorphism from \(q\) to \(q^{R:n\diamond p}\), a contradiction. Similarly, we have that \(r\neq n\). Hence, the root atom is distinct from \(R(\underline{p},\dots)\) and \(R(\underline{n},\dots)\). We also have that \(r<_{q}p\) and \(r<_{q}n\).

We present a reduction from MonotoneSAT: Given a monotone CNF formula \(\varphi\), i.e., a CNF formula in which each clause contains either all positive literals or all negative literals, does \(\varphi\) have a satisfying assignment? Let \(\varphi\) be a monotone CNF formula. We construct an instance \(\mathbf{db}\) for \(\mathsf{CERTAINTY}(q)\) as follows:

* for each variable \(z\) in \(\varphi\), we introduce the facts \(\langle q^{p}_{\triangle},[p\to z]\rangle\) and \(\langle q^{n}_{\triangle},[n\to z]\rangle\);
* for each positive literal \(z\) in a clause \(C\), we introduce the facts \(\langle q\setminus q^{p}_{\triangle},[r,p\to C,z]\rangle\); and
* for each negative literal \(z\) in a clause \(\overline{C}\), we introduce the facts \(\langle q\setminus q^{n}_{\triangle},[r,n\to\overline{C},z]\rangle\).

Observe that the instance \(\mathbf{db}\) has two types of inconsistent blocks. For relation \(A\), we have a block for each positive or negative clause, whose primary-key value is the clause. For relation \(R\), for every variable \(z\) we have a block of size two, which corresponds to choosing a true/false assignment for \(z\). All the other relations are consistent.
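The construction just described can be made concrete with a small sketch. The helper canonical_copy below implements the \(\langle q,[x_{1},\dots,x_{n}\to c_{1},\dots,c_{n}]\rangle\) notation (fresh constants for all unmapped variables); the atom representation, the clause encoding, and the use of the strings "r", "p", "n" for the query variables are our own illustrative assumptions, not the paper's.

```python
import itertools

_fresh = itertools.count()

def canonical_copy(atoms, assignment):
    """Instantiate a set of atoms (R, key_var, other_vars) as facts, mapping the
    listed variables to the given constants and every other variable to a fresh
    distinct constant."""
    local = dict(assignment)
    def conv(v):
        if v not in local:
            local[v] = ("fresh", next(_fresh))
        return local[v]
    return {(R, conv(k), tuple(conv(v) for v in vs)) for (R, k, vs) in atoms}

def monotone_sat_instance(pos_clauses, neg_clauses, q, q_p, q_n):
    """pos_clauses / neg_clauses: lists of sets of variables of phi;
       q: atoms of the query; q_p, q_n: the atoms of q^p_triangle and q^n_triangle."""
    db = set()
    for z in set().union(*pos_clauses, *neg_clauses):      # variable gadgets
        db |= canonical_copy(q_p, {"p": z})
        db |= canonical_copy(q_n, {"n": z})
    for i, C in enumerate(pos_clauses):                     # positive clauses
        for z in C:
            db |= canonical_copy([a for a in q if a not in q_p],
                                 {"r": ("pos", i), "p": z})
    for i, C in enumerate(neg_clauses):                     # negative clauses
        for z in C:
            db |= canonical_copy([a for a in q if a not in q_n],
                                 {"r": ("neg", i), "n": z})
    return db
```

A repair of the resulting instance chooses one fact per variable block (a truth value for \(z\)) and one fact per clause block (a witnessing literal), which is exactly the structure exploited in the correctness argument below.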
Additionally, for a positive literal \(z\in C\), the set of facts \(\langle q^{p}_{\triangle},[p\to z]\rangle\cup\langle q\setminus q^{p}_{\triangle},[r,p\to C,z]\rangle\) makes \(q\) true; similarly, for a negative literal \(z\in\overline{C}\), the facts \(\langle q^{n}_{\triangle},[n\to z]\rangle\cup\langle q\setminus q^{n}_{\triangle},[r,n\to\overline{C},z]\rangle\) make \(q\) true. Note also that \(\langle q^{n}_{\triangle},[n\to z]\rangle\cup\langle q\setminus q^{p}_{\triangle},[r,p\to C,z]\rangle\) is a canonical copy of \(q^{R:p\diamond n}\) (and hence cannot satisfy \(q\)), while \(\langle q^{p}_{\triangle},[p\to z]\rangle\cup\langle q\setminus q^{n}_{\triangle},[r,n\to\overline{C},z]\rangle\) is a canonical copy of \(q^{R:n\diamond p}\) (which also cannot satisfy \(q\)).

Now we argue that \(\varphi\) has a satisfying assignment \(\chi\) if and only if \(\mathbf{db}\) has a repair \(\mathbf{r}\) that does not satisfy \(q\).

\([\Longrightarrow]\) Assume that \(\varphi\) has a satisfying assignment \(\chi\). Consider the repair \(\mathbf{r}\) of \(\mathbf{db}\) that

* for each variable \(z\), if \(\chi(z)=\mathsf{true}\), picks \(\langle q^{n}_{\triangle},[n\to z]\rangle\), or otherwise \(\langle q^{p}_{\triangle},[p\to z]\rangle\);
* for each positive clause \(C\), picks \(\langle q\setminus q^{p}_{\triangle},[r,p\to C,z]\rangle\) where \(z\) is a positive literal in \(C\) with \(\chi(z)=\mathsf{true}\); and
* for each negative clause \(\overline{C}\), picks \(\langle q\setminus q^{n}_{\triangle},[r,n\to\overline{C},z]\rangle\) where \(z\) is a negative literal in \(\overline{C}\) with \(\chi(z)=\mathsf{false}\).

We show that \(\mathbf{r}\) does not satisfy \(q\). Indeed, for each positive clause \(C\), there is a literal \(z\in C\) with \(\chi(z)=\mathsf{true}\), and thus \(\langle q\setminus q^{p}_{\triangle},[r,p\to C,z]\rangle\subseteq\mathbf{r}\). However, we have \(\langle q^{n}_{\triangle},[n\to z]\rangle\subseteq\mathbf{r}\) rather than \(\langle q^{p}_{\triangle},[p\to z]\rangle\subseteq\mathbf{r}\), so these facts only form a canonical copy of \(q^{R:p\diamond n}\), and thus \(q\) is not satisfied. Similarly, for each negative clause \(\overline{C}\), there is a literal \(z\in\overline{C}\) with \(\chi(z)=\mathsf{false}\), and thus \(\langle q\setminus q^{n}_{\triangle},[r,n\to\overline{C},z]\rangle\subseteq\mathbf{r}\). However, we have \(\langle q^{p}_{\triangle},[p\to z]\rangle\subseteq\mathbf{r}\), and hence this part also cannot satisfy \(q\). Hence \(\mathbf{r}\) does not satisfy \(q\).

\([\Longleftarrow]\) Now assume that \(\mathbf{db}\) has a repair \(\mathbf{r}\) that does not satisfy \(q\). Consider the assignment \(\chi\) that sets \(\chi(z)=\mathsf{true}\) if \(\langle q^{n}_{\triangle},[n\to z]\rangle\subseteq\mathbf{r}\), or otherwise \(\chi(z)=\mathsf{false}\). We argue that \(\varphi\) is satisfied. For each positive clause \(C\), there exists some \(z\in C\) such that \(\langle q\setminus q^{p}_{\triangle},[r,p\to C,z]\rangle\subseteq\mathbf{r}\). Since \(\mathbf{r}\) does not satisfy \(q\), it must be that \(\langle q^{p}_{\triangle},[p\to z]\rangle\nsubseteq\mathbf{r}\) and thus \(\langle q^{n}_{\triangle},[n\to z]\rangle\subseteq\mathbf{r}\). By construction, \(z\) is true and the clause \(C\) is satisfied. Similarly, the negative clauses are all satisfied by the assignment.

**Lemma 8.2**.: _Let \(q\) be a rooted tree query.
If there exist two distinct atoms \(R(\underline{x},\dots)\) and \(R(\underline{y},\dots)\) such that \(x<_{q}y\) and there is no root homomorphism from \(q^{y}_{\triangle}\) to \(q^{x}_{\triangle}\) (i.e., it does not hold that \(q^{y}_{\triangle}\leq_{y\to x}q^{x}_{\triangle}\)), then \(\mathsf{CERTAINTY}(q)\) is \(\mathbf{NL}\)-hard._

Proof.: We may assume without loss of generality two things: \((i)\) there is no atom \(R(\underline{z},\dots)\) such that \(z\notin\{x,y\}\) and \(x<_{q}z<_{q}y\) (we then say that the two \(R\)-atoms are consecutive); and \((ii)\) for any \(y<_{q}z\) with \(z\neq y\), we have \(q^{z}_{\triangle}\leq_{z\to y}q^{y}_{\triangle}\). Indeed, we can pick \(R(\underline{x},\dots)\) and \(R(\underline{y},\dots)\) to be the pair of consecutive \(R\)-atoms that violates the root homomorphism condition and occurs lowest in the rooted tree. Such a pair must always exist, since the root homomorphism property is transitive, i.e., if \(q^{y}_{\triangle}\leq_{y\to z}q^{z}_{\triangle}\) and \(q^{z}_{\triangle}\leq_{z\to x}q^{x}_{\triangle}\), then we also have that \(q^{y}_{\triangle}\leq_{y\to x}q^{x}_{\triangle}\).

We present a reduction from the complement of the REACHABILITY problem, which is \(\mathbf{NL}\)-hard: Given a directed acyclic graph \(G=(V,E)\) and \(s,t\in V\), is there a directed path from \(s\) to \(t\) in \(G\)? We construct an instance \(\mathbf{db}\) for \(\mathsf{CERTAINTY}(q)\) as follows. First, we introduce two new constants \(s^{\prime}\) and \(t^{\prime}\). Then:

* for each \(u\in V\cup\{s^{\prime}\}\), introduce \(\langle q\setminus q^{x}_{\triangle},[x\to u]\rangle\);
* for every edge \((u,v)\in E\cup\{(s^{\prime},s),(t,t^{\prime})\}\), introduce \(\langle q^{x}_{\triangle}\setminus q^{y}_{\triangle},[x,y\to u,v]\rangle\);
* for every vertex \(u\in V\), introduce \(\langle q^{y}_{\triangle},[y\to u]\rangle\).

Note that the above construction guarantees that only \(R\) has inconsistent blocks. We now argue that there is a directed path \((u_{1},u_{2},\dots,u_{k})\) with \((u_{i},u_{i+1})\in E\), \(u_{1}=s\) and \(u_{k}=t\) in \(G\) if and only if there is a repair of \(\mathbf{db}\) that does not satisfy \(q\).

\([\Longrightarrow]\) Assume that there exists a directed path \((u_{1},u_{2},\dots,u_{k})\) with \((u_{i},u_{i+1})\in E\), \(u_{1}=s\) and \(u_{k}=t\) in \(G\). Denote \(u_{0}=s^{\prime}\) and \(u_{k+1}=t^{\prime}\). Let \(\mathbf{r}\) be the repair that picks \(\langle q^{x}_{\triangle}\setminus q^{y}_{\triangle},[x,y\to u_{i},u_{i+1}]\rangle\) for every \(1\leq i\leq k\), and \(\langle q^{y}_{\triangle},[y\to u]\rangle\) for any other vertex \(u\). Suppose for contradiction that \(\mathbf{r}\) satisfies \(q\) with a valuation \(\theta\). It is not possible that \(\theta(q)\subseteq\langle q^{y}_{\triangle},[y\to u]\rangle\) for any \(u\in V\) since the size does not fit. We argue that we must have \(\theta(x)=u_{i}\) and \(\theta(y)=u_{i+1}\) for some \(0\leq i<k\). If \(\theta(x)=u_{i}\in\{u_{0},u_{1},\dots,u_{k}\}\), then we must have \(\theta(y)=u_{i+1}\) since \(\langle q^{x}_{\triangle}\setminus q^{y}_{\triangle},[x,y\to u,v]\rangle\) is a canonical copy. Suppose for contradiction that \(\theta(x)\notin\{u_{0},u_{1},\dots,u_{k}\}\). It is not possible that \(\theta(x)=u_{k+1}=t^{\prime}\) since, by construction, there is no rooted tree set rooted at \(t^{\prime}\). Note that there is no atom \(R(\underline{z},\dots)\) such that \(z\notin\{x,y\}\) and \(x<_{q}z<_{q}y\).
Hence \(\theta(x)\) cannot fall on the path connecting any \(u_{i}\) and \(u_{i+1}\), and \(\theta(q^{x}_{\triangle})\) must be contained in some \(\langle q^{x}_{\triangle}\setminus q^{y}_{\triangle},[x,y\to u_{i},u_{i+1}]\rangle\). Then, there must be an atom \(R(\underline{z},\dots)\) such that (i) \(x<_{q}z\), (ii) \(z\parallel_{q}y\) and (iii) \(\theta(q_{\Delta}^{x})\) is contained in \(\langle q_{\Delta}^{z},[z\to\theta(x)]\rangle\), which is impossible since the sizes do not fit. By construction, there is a canonical copy of \(q_{\Delta}^{y}\) rooted at \(u_{i+1}\). If this canonical copy is contained in \(\langle q_{\Delta}^{x}\setminus q_{\Delta}^{y},[x,y\to u_{i+1},u_{i+2}]\rangle\), then there is a root homomorphism from \(q_{\Delta}^{y}\) to \(q_{\Delta}^{x}\setminus q_{\Delta}^{y}\), and so from \(q_{\Delta}^{y}\) to \(q_{\Delta}^{x}\), a contradiction. Otherwise, there exists some atom \(R(\underline{z},\dots)\) such that \((i)\)\(y<_{q}z\) and \((ii)\)\(q_{\Delta}^{y}\setminus q_{\Delta}^{z}\) has a root homomorphism to \(q_{\Delta}^{x}\setminus q_{\Delta}^{y}\). Recall now that from our initial assumption we must have that \(q_{\Delta}^{z}\leq_{z\to y}q_{\Delta}^{y}\). This implies that we can now generate a root homomorphism from \(q_{\Delta}^{y}\) to \(q_{\Delta}^{x}\), a contradiction. Assume that there is no directed path from \(s\) to \(t\) in \(G\). Consider any repair \(\mathbf{r}\) of \(\mathbf{db}\). Since \(G\) is acyclic, there exists a maximal sequence \(u_{0},u_{1},\dots,u_{k}\) with \(k\geq 1\) such that \(u_{0}=s^{\prime}\), \(u_{1}=s\), \(\langle q_{\Delta}^{x}\setminus q_{\Delta}^{y},[x,y\to u_{i},u_{i+1}]\rangle \subseteq\mathbf{r}\) for \(0\leq i<k\) and \(\langle q_{\Delta}^{y},[y\to u_{k}]\rangle\subseteq\mathbf{r}\). Then, the following set of facts satisfies \(q\): \[\langle q\setminus q_{\Delta}^{x},[x\to u_{k-1}]\rangle\cup\] \[\langle q_{\Delta}^{x}\setminus q_{\Delta}^{y},[x,y\to u_{k-1},u_{k}]\rangle\cup\] \[\langle q_{\Delta}^{y},[y\to u_{k}]\rangle.\] This shows that \(\mathsf{CERTAINTY}(q)\) is \(\mathbf{NL}\)-hard since \(\mathbf{NL}\) is closed under complement. **Lemma 8.3**.: \(\mathsf{CERTAINTY}(q)\) _is \(\mathbf{NL}\)-hard for each \(q\) in \(\mathsf{TreeBCQ}\) that violates \(\mathsf{C}_{1}\)._ Proof.: Assume that \(q\) violates \(\mathsf{C}_{1}\). Then there exist two distinct atoms \(R(\underline{x},\dots)\) and \(R(\underline{y},\dots)\) in \(q\) such that there is no root homomorphism from \(q_{\Delta}^{y}\) to \(q_{\Delta}^{x}\) or from \(q_{\Delta}^{x}\) to \(q_{\Delta}^{y}\). If \(x\parallel_{q}y\), Lemma 4.1 implies that \(\mathsf{C}_{2}\) is also violated, so \(\mathsf{CERTAINTY}(q)\) is \(\mathbf{coNP}\)-hard from Lemma 8.1. Otherwise, \(\mathsf{CERTAINTY}(q)\) is \(\mathbf{NL}\)-hard by Lemma 8.2. We conclude with the desired lower bounds. **Proposition 8.1**.: _For every \(q\) in \(\mathsf{TreeBCQ}\),_ 1. _if_ \(q\) _violates_ \(\mathsf{C}_{2}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is_ \(\mathbf{coNP}\)_-hard; and_ 2. _if_ \(q\) _violates_ \(\mathsf{C}_{1}\)_, then_ \(\mathsf{CERTAINTY}(q)\) _is_ \(\mathbf{NL}\)_-hard._ Proof.: Immediate from Lemma 8.1 and 8.3. ## 9 Extending the Trichotomy In this section, we extend the complexity classification for rooted tree queries to larger classes of Boolean conjunctive queries. We postpone most proofs to Appendix C. ### From \(\mathsf{TreeBCQ}\) to \(\mathsf{GraphBCQ}\) We define \(\mathsf{GraphBCQ}\), a subclass of \(\mathsf{BCQ}\) that extends \(\mathsf{TreeBCQ}\). 
**Definition 9.1** (\(\mathsf{GraphBCQ}\)).: \(\mathsf{GraphBCQ}\) is the class of Boolean conjunctive queries \(q\) satisfying the following conditions: 1. every atom in \(q\) is of the form \(R(\underline{x},y_{1},\dots,y_{n})\) where \(x\) is a variable and \(y_{1},\dots,y_{n}\) are symbols (variables or constants) such that no variable occurs twice in the atom; and 2. if \(R(\underline{x},y_{1},\dots,y_{n})\) and \(S(\underline{u},v_{1},\dots,v_{m})\) are distinct atoms of \(q\), then \(x\neq u\). Note that \(R\) and \(S\) need not be distinct. For a query \(q\) in \(\mathsf{BCQ}\), we define \(\mathcal{G}(q)\) as the undirected graph whose vertices are the atoms of \(q\); two atoms are adjacent if they have a variable in common. The connected components of \(q\) are the connected components of \(\mathcal{G}(q)\). Note that queries in \(\mathsf{GraphBCQ}\), unlike \(\mathsf{TreeBCQ}\), can have more than one connected component. The following lemma implies that the complexity of \(\mathsf{CERTAINTY}(q)\) is equal to the highest complexity of \(\mathsf{CERTAINTY}(q^{\prime})\) over every connected component \(q^{\prime}\) of \(q\). **Lemma 9.1**.: _Let \(q\) be a minimal query in \(\mathsf{BCQ}\) with connected components \(q_{1}\), \(q_{2}\),..., \(q_{n}\). Then:_ 1. _for every_ \(1\leq i\leq n\)_, there exists a first order reduction from the problem_ \(\mathsf{CERTAINTY}(q_{i})\) _to_ \(\mathsf{CERTAINTY}(q_{j})\) 2. _for every database instance_ \(\mathbf{db}\)_,_ \(\mathbf{db}\) _is a_ "_yes_"-instance of the problem_ \(\mathsf{CERTAINTY}(q)\) _if and only if for every_ \(1\leq i\leq n\)_,_ \(\mathbf{db}\) _is a "yes"-instance of_ \(\mathsf{CERTAINTY}(q_{i})\)_._ **Proposition 9.1**.: _If \(q\) is a connected minimal conjunctive query in \(\mathsf{GraphBCQ}\setminus\mathsf{TreeBCQ}\), then \(\mathsf{CERTAINTY}(q)\) is \(\mathbf{L}\)-hard (and not in \(\mathbf{FO}\)); if \(q\) is also Berge-acyclic, then \(\mathsf{CERTAINTY}(q)\) is \(\mathbf{coNP}\)-hard._ Proof of Theorems 1.3 and 1.4.: Let \(q\) be a query in \(\mathsf{GraphBCQ}\). Then the minimal query \(q^{*}\) of \(q\) is also in \(\mathsf{GraphBCQ}\). If every connected component of \(q^{*}\) is in \(\mathsf{TreeBCQ}\) and satisfies \(\mathsf{C}_{1}\), then \(\mathsf{CERTAINTY}(q)\) is in \(\mathbf{FO}\). Otherwise, there exists some connected component \(q^{\prime}\) of \(q^{*}\) that is either not in \(\mathsf{TreeBCQ}\), or violates \(\mathsf{C}_{1}\), and \(\mathsf{CERTAINTY}(q)\) is \(\mathbf{L}\)-hard or \(\mathbf{NL}\)-hard by Lemma 9.1, Proposition 9.1, and Theorem 4.1. Assume that \(q\) is also Berge-acyclic. If some connected component \(q^{\prime}\) of \(q^{*}\) is not in \(\mathsf{TreeBCQ}\), then \(\mathsf{CERTAINTY}(q)\) is \(\mathbf{coNP}\)-complete; or otherwise, \(\mathsf{CERTAINTY}(q)\) exhibits a trichotomy by Theorem 4.1. Lemma 9.2 (adapted from [52]) is essential to the proof of Proposition 9.1, but is of independent interest. It relates the complexity of \(\mathsf{CQA}\) on queries with self-joins to that on self-join-free queries. Given a query \(q\) in \(\mathsf{BCQ}\), a _self-join-free version of \(q\)_, denoted \(q^{\mathsf{sif}}\), is any self-join-free Boolean conjunctive query obtained from \(q\) by (only) renaming relation names. For example, a self-join-free version of \(\{R(\underline{x},y),R(\underline{y},x)\}\) is \(\{R(\underline{x},y),S(\underline{y},x)\}\). **Lemma 9.2** (Bridging Lemma).: _Let \(q\) be a minimal query in \(\mathsf{BCQ}\) and \(\mathcal{C}\) a complexity class. 
If \(\mathsf{CERTAINTY}(q^{\mathsf{sif}})\) is \(\mathcal{C}\)-hard, then \(\mathsf{CERTAINTY}(q)\) is \(\mathcal{C}\)-hard._ The Bridging Lemma is illustrated by Example 9.1. **Example 9.1**.: For \(q_{1}=\{R(\underline{x},y,z),R(\underline{z},x,y)\}\), we have \(q_{1}\mathsf{sif}=\{R_{1}(\underline{x},y,z),R_{2}(\underline{z},x,y)\}\). By Theorem 1.1[34], \(\mathsf{CERTAINTY}(q_{1}\mathsf{sif})\) is \(\mathbf{L}\)-complete, and thus \(\mathsf{CERTAINTY}(q_{1})\) is \(\mathbf{L}\)-hard by Lemma 9.2. For \(q_{2}=\{R(\underline{x},z),R(\underline{y},z)\}\), we have \(q_{2}\mathsf{sif}=\{R_{1}(\underline{x},z),R_{2}(\underline{y},z)\}\). Although by Theorem 1.1[34], \(\mathsf{CERTAINTY}(q_{2}\mathsf{sif})\) is \(\mathbf{coNP}\)-complete, \(\mathsf{CERTAINTY}(q_{2})\) is in \(\mathbf{FO}\) because \(q_{2}\equiv q_{2}^{\prime}\) where \(q_{2}^{\prime}=\{R(\underline{x},z)\}\). Lemma 9.2 does not apply here because \(q_{2}\) is not minimal. ### Open Challenges So far, we have established the \(\mathbf{FO}\)-boundary of \(\mathsf{CERTAINTY}(q)\) for all queries \(q\) in \(\mathsf{GraphBCQ}\), and a fine-grained complexity classification for all Berge-acyclic queries in \(\mathsf{GraphBCQ}\), which include all rooted tree queries. We briefly discuss the remaining syntactic restrictions. The complexity classification of \(\mathsf{CERTAINTY}(q)\) for queries \(q\) in \(\mathsf{GraphBCQ}\) that are not Berge-acyclic is likely to impose new challenges. In particular, Figueira et al. [18] showed that for \(q_{1}\) in Example 9.1 (that is not Berge-acyclic), the complement of \(\mathsf{CERTAINTY}(q_{1})\) is complete for \(\mathsf{Bipartite}\) Matching under \(\mathsf{LOGSPACE}\)-reductions. The restriction imposed by \(\mathsf{GraphBCQ}\) that every variable occurs at most once at a primary-key position allows for an elegant graph representation. We found that dropping this requirement imposes serious challenges. The following Proposition 9.2 hints at the difficulty of having to "correlate two rooted tree branches" that share the same primary-key variable. **Proposition 9.2**.: _Consider the following queries:_ * \(q_{1}=\{R(\underline{u},x_{1}),R(\underline{x_{1}},x_{2}),S(\underline{u},y_{1 }),S(\underline{y_{1}},y_{2})\}\)_;_ * \(q_{2}=q_{1}\cup\{X(\underline{x_{2}},x_{3})\}\)_; and_ * \(q_{3}=q_{1}\cup\{X(\underline{x_{2}},x_{3}),Y(\underline{y_{2}},y_{3})\}\)_._ _Then we have \(\mathsf{CERTAINTY}(q_{1})\) is in \(\mathbf{FO}\), \(\mathsf{CERTAINTY}(q_{2})\) is in \(\mathbf{NL}\)-hard \(\cap\)\(\mathbf{LFP}\), and \(\mathsf{CERTAINTY}(q_{3})\) is \(\mathbf{coNP}\)-complete._ The restrictions that no atom contains repeated variables, and that no constant occurs at a primary-key position ease the technical treatment, but it is likely that they can be dropped at the price of some technical involvement. On the other hand, all our techniques fundamentally rely on the restriction that primary keys are simple. Conclusion We established a fine-grained complexity classification of the problem \(\mathsf{CERTAINTY}(q)\) for all rooted tree queries \(q\). We then lifted our complexity classification to a larger class of queries. A notorious open problem in consistent query answering is Conjecture 1.1, which conjectures that for every query \(q\) in \(\mathsf{BCQ}\), \(\mathsf{CERTAINTY}(q)\) is either in \(\mathbf{PTIME}\) or \(\mathbf{coNP}\)-complete. Despite our progress, this problem remains open even under the restriction that all primary keys are simple. 
**Acknowledgements.** The authors thank the anonymous reviewers for their constructive feedback and comments. This work is supported by the National Science Foundation under grant IIS-1910014 and the Anthony Klug NCR Fellowship.
2310.16571
On the average hitting times of weighted Cayley graphs
In the present paper, we give the exact formula for the average hitting time (HT, as an abbreviation) of random walks from one vertex to any other vertex on the weighted Cayley graph Cay($Z_{2n},\{\pm1\}$). Furthermore, we also give the exact formula for the HT's of random walks on the weighted Cayley graph Cay($Z_N,\{+1,+2\}$).
Yuuho Tanaka
2023-10-25T11:58:11Z
http://arxiv.org/abs/2310.16571v1
# On the average hitting times of weighted Cayley graphs ###### Abstract. In the present paper, we give the exact formula for the average hitting time (HT, as an abbreviation) of random walks from one vertex to any other vertex on the weighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{2n},\{\pm 1\})\). Furthermore, we also give the exact formula for the HT's of random walks on the weighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{N},\{+1,+2\})\). Keywords: random walk, hitting time, weighted graph, Cayley graph ## 1. Introduction Let \(G\) be a graph. A _weight_ on a graph \(G\) is a function \(w:V(G)\times V(G)\to[0,+\infty]\) such that for \(u,v\in V(G)\) \[\begin{cases}w(u,v)>0&\text{if $u$ is adjacent to $v$},\\ w(u,v)=0&\text{otherwise}.\end{cases}\] We then say \(G\) is a _weighted graph_. If a weight \(w(u,v)\) on a graph \(G\) is given by \[w(u,v)=\begin{cases}1&\text{if $u$ is adjacent to $v$},\\ 0&\text{otherwise},\end{cases}\] then we say that \(G\) is a _unweighted graph_. A random walk on a graph \(G\) is a discrete stochastic model such that a random walker on a vertex \(u\in V(G)\) moves to a vertex \(v\) adjacent to the vertex \(u\) at the next step with the probability of \(\frac{w(u,v)}{W(u)}\), where \(W(u)=\sum_{x\in V(G)}w(u,x)\). The number of steps required until the random walker starting from a vertex \(u\) of \(G\) will first arrive at a vertex \(v\) of \(G\) is called _the hitting time_ from \(u\) to \(v\) of the random walk on \(G\). _The average hitting time_ (HT, as an abbreviation) from \(u\) to \(v\) on \(G\) which is denoted by \(h(G;u,v)\), means the expected value of the hitting times from \(u\) to \(v\) of random walks on \(G\). Let \(G\) be a finite group and \(S\subseteq G\) be a subset. The corresponding _Cayley graph_\(\operatorname{Cay}(G,S)\) has vertex set equal to \(G\). Two vertices \(g,h\in G\) are joined by a directed edge from \(g\) to \(h\) if and only if there exists \(s\in S\) such that \(g=sh\). Y. Doi et al. [2] gave the exact formula for the HT's of random walks on the unweighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{N},\{\pm 1,\pm 2\})\). Since the symmetry of the Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{N},\{\pm 1,\pm 2\})\), they have that \[h(\operatorname{Cay}(\mathbb{Z}_{N},\{\pm 1,\pm 2\});0,\ell) =h(\operatorname{Cay}(\mathbb{Z}_{N},\{\pm 1,\pm 2\});k,k+\ell)\] \[=h(\operatorname{Cay}(\mathbb{Z}_{N},\{\pm 1,\pm 2\});k+\ell,k),\] for all \(k,\ell\in\mathbb{Z}_{N}\). Note that \(h(\operatorname{Cay}(\mathbb{Z}_{N},\{\pm 1,\pm 2\});0,0)=0\). By using an elementary method, they gave the exact formula for the HT's of random walks on the unweighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{N},\{\pm 1,\pm 2\})\), as follows: **Theorem 1.1** (Y. Doi et al., 2022 [2]).: _The exact formula for the HT's of random walks on the unweighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{N},\{\pm 1,\pm 2\})\)\((N\geq 5)\) is,_ \[h(\operatorname{Cay}(\mathbb{Z}_{N},\{\pm 1,\pm 2\});0,\ell)=\frac{2}{5}\left( \ell\left(N-\ell\right)+2N\frac{F_{\ell}F_{N-\ell}}{F_{N}}\right),\] _where \(F_{i}\) is the \(i\)-th Fibonacci number._ Y. Tanaka [3] gave the exact formula for the HT's of random walks on the unweighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{N},\{+1,+2\})\) likewise. 
Since the symmetry of the Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{N},\{+1,+2\})\), they have that \[h(\operatorname{Cay}(\mathbb{Z}_{N},\{+1,+2\});0,\ell) =h(\operatorname{Cay}(\mathbb{Z}_{N},\{+1,+2\});k,k+\ell)\] \[=h(\operatorname{Cay}(\mathbb{Z}_{N},\{+1,+2\});k+N-\ell,k),\] for all \(k,\ell\in\mathbb{Z}_{N}\). Note that \(h(\operatorname{Cay}(\mathbb{Z}_{N},\{+1,+2\});0,0)=0\). By using the same method, they gave the exact formula for the HT's of random walks on the unweighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{N},\{+1,+2\})\), as follows: **Theorem 1.2** (Y. Tanaka, 2023 [3]).: _The exact formula for the HT's of random walks on the unweighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{N},\{+1,+2\})\) is,_ \[h(\operatorname{Cay}(\mathbb{Z}_{N},\{+1,+2\});0,\ell)=\frac{1}{3J_{N}}(2J_{ \ell-1}(3\ell J_{N-\ell-1}+2\ell J_{N-\ell})\] \[+J_{\ell}((N+\ell+3)J_{N-\ell-1}+(N+3\ell+1)J_{N-\ell})),\] _where \(J_{i}\) is the \(i\)-th Jacobsthal number._ The purpose of the present paper is to give the exact formula for the HT's of random walk on weighted Cayley graphs by using the same method. X. Chang et al. [1] derived explicit formulas of HT's of random walks on weighted undirected graphs through the enumeration of spanning trees. We give the exact formula for the HT's of random walk on weighted Cayley graphs \(\operatorname{Cay}(\mathbb{Z}_{2n},\{\pm 1\})\) and \(\operatorname{Cay}(\mathbb{Z}_{N},\{+1,+2\})\) in a special case. Our proof is considerably short and fully combinatorial. In particular, it has no-need of any spectral graph theoretical arguments. This paper is organized as follows. In Section 2, we give some definitions of weighted graphs. In Section 3, we give the exact formula for the HT's of random walk on the weighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{2n},\{\pm 1\})\) in a special case. Finally, in Section 4, we give the exact formula for the HT's of random walk on the weighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{N},\{+1,+2\})\) in a special case. ## 2. Preliminary The laplacian matrix of a weighted graph \(G\) is a matrix \(L_{G}\) whose entries \(L_{G}(i,j)\) are given by \[L_{G}(i,j)=\begin{cases}W(i)&\text{if $i=j$,}\\ -w(i,j)&\text{if $i\neq j$ and $i$ is adjacent to $j$,}\\ 0&\text{otherwise.}\end{cases}\] Let \(L^{\prime}_{G}\) (resp. \(L^{\prime\prime}_{G}\)) denote the matrix obtained from \(L_{G}\) by deleting its first (resp. last) row and column. Let \(\vec{1}\) be a all one vector. ## 3. Weighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{2n},\{\pm 1\})\) We defined a weight \(w(u,v)\) (\(u,v\in\mathbb{Z}_{2n}\)) on the weighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{2n},\{\pm 1\})\) as follows. \[w(u,v)=\begin{cases}p&\text{if $v-u\equiv 1$}\pmod{2n},\,u\text{ is even,}\\ &\text{$v-u\equiv-1$}\pmod{2n},\,u\text{ is odd,}\\ q&\text{if $v-u\equiv 1$}\pmod{2n},\,u\text{ is odd,}\\ &\text{$v-u\equiv-1$}\pmod{2n},\,u\text{ is even,}\\ 0&\text{otherwise,}\end{cases}\] Let us denote by \(h_{2n}(k,\ell)\) the HT for random walks from the vertex \(k\) to the vertex \(\ell\) of the weighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{2n},\{\pm 1\})\). Note that \(h_{2n}(0,0)=0\). Since the symmetry of the weighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{2n},\{\pm 1\})\), we have that \[h_{2n}(0,\ell) =h_{2n}(2k,2k+\ell)=h_{2n}(2k,2k-2n+\ell),\] \[h_{2n}(1,\ell) =h_{2n}(2k+1,2k+\ell)=h_{2n}(2k+1,2k-2n+\ell),\] for all \(k,\ell\in\mathbb{Z}_{2n}\). 
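As a quick numerical sanity check of these symmetry relations, and of the closed-form expressions derived below in Theorem 3.2, one can compute the HT's exactly by first-step analysis or estimate them by simulation. The following Python sketch (using numpy) does both for small \(n\); it is an illustrative aid under the normalization \(p+q=1\) adopted below, not part of the derivation.

```python
import numpy as np

def hitting_times_exact(n, p):
    """Exact HT's h(u, ell) on the weighted Cayley graph Cay(Z_{2n}, {+-1}) with
    the alternating weights above (q = 1 - p), via first-step analysis:
    h(u, ell) = 1 + sum_v P(u, v) h(v, ell) for u != ell, and h(ell, ell) = 0."""
    N, q = 2 * n, 1.0 - p
    P = np.zeros((N, N))
    for u in range(N):
        w_plus = p if u % 2 == 0 else q        # weight of the edge u -> u + 1
        w_minus = q if u % 2 == 0 else p       # weight of the edge u -> u - 1
        P[u, (u + 1) % N] = w_plus / (w_plus + w_minus)
        P[u, (u - 1) % N] = w_minus / (w_plus + w_minus)
    h = np.zeros((N, N))
    for ell in range(N):
        idx = [u for u in range(N) if u != ell]
        A = np.eye(N - 1) - P[np.ix_(idx, idx)]
        h[idx, ell] = np.linalg.solve(A, np.ones(N - 1))
    return h

def hitting_time_mc(n, p, start, target, trials=20000, seed=0):
    """Monte Carlo estimate of h_{2n}(start, target), for cross-checking."""
    rng = np.random.default_rng(seed)
    N, q = 2 * n, 1.0 - p
    total = 0
    for _ in range(trials):
        u, steps = start, 0
        while u != target:
            w_plus = p if u % 2 == 0 else q
            w_minus = q if u % 2 == 0 else p
            u = (u + 1) % N if rng.random() < w_plus / (w_plus + w_minus) else (u - 1) % N
            steps += 1
        total += steps
    return total / trials
```

For example, hitting_times_exact(4, 0.3)[0, 3] can be compared with the formula of Theorem 3.2 below, and the identities \(h_{2n}(0,\ell)=h_{2n}(2k,2k+\ell)\) can be read off the returned matrix.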
Let \(\vec{h}\) be the column vector whose \(i\)-th entry is \[\begin{cases}h_{2n}(1,2n-i+1)&\text{if $1\leq i\leq 2n-1$, $i$ is odd},\\ h_{2n}(0,2n-i)&\text{if $1\leq i\leq 2n-1$, $i$ is even},\\ h_{2n}(1,4n-i-1)&\text{if $2n\leq i\leq 2(2n-1)$, $i$ is odd},\\ h_{2n}(0,4n-i)&\text{if $2n\leq i\leq 2(2n-1)$, $i$ is even}.\end{cases}\] For the case of a random walk on the weighted Cayley graph \(\text{Cay}(\mathbb{Z}_{2n},\{\pm 1\})\), a random walker moves, with the probability of \(\frac{p}{p+q}\) or \(\frac{q}{p+q}\), to an arbitrary vertex adjacent to the vertex where the walker is. Hence, we have that \(h_{2n}(0,\ell)=\frac{1}{p+q}\left(q(1+h_{2n}(-1,\ell))+p(1+h_{2n}(1,\ell))\right)\) for all \(\ell\in\mathbb{Z}_{2n}\setminus\{0\}\), and hence \((p+q)h_{2n}(0,\ell)-qh_{2n}(-1,\ell)-ph_{2n}(1,\ell)=p+q\) for all \(\ell\in\mathbb{Z}_{2n}\setminus\{0\}\). Thus, we have \(L^{\prime}_{\text{Cay}(\mathbb{Z}_{2n},\{\pm 1\})}\vec{h}=(p+q)\vec{1}\). Similarly, we have that \(h_{2n}(1,\ell)=\frac{1}{p+q}\left(p(1+h_{2n}(0,\ell))+q(1+h_{2n}(2,\ell))\right)\) for all \(\ell\in\mathbb{Z}_{2n}\setminus\{1\}\), and hence \((p+q)h_{2n}(1,\ell)-ph_{2n}(0,\ell)-qh_{2n}(2,\ell)=p+q\) for all \(\ell\in\mathbb{Z}_{2n}\setminus\{1\}\). Thus, we have \(L^{\prime\prime}_{\text{Cay}(\mathbb{Z}_{2n},\{\pm 1\})}\vec{h}=(p+q)\vec{1}\). Let \(L^{\prime}(i,j)\) denote the \((i,j)\)-th entry of \(L^{\prime}_{\text{Cay}(\mathbb{Z}_{2n},\{\pm 1\})}\). Let \(L^{\prime\prime}(i,j)\) denote the \((i,j)\)-th entry of \(L^{\prime\prime}_{\text{Cay}(\mathbb{Z}_{2n},\{\pm 1\})}\). Let \(H_{N}\) be the \(2(2n-1)\times 2(2n-1)\)-matrix whose \((i,j)\)-th entry is \[\begin{cases}L^{\prime}(i,j)&\text{if $1\leq i,j\leq 2n-1$},\\ L^{\prime\prime}(i,j)&\text{if $2n\leq i,j\leq 2(2n-1)$},\\ 0&\text{otherwise}.\end{cases}\] Then we have \(H_{N}\vec{h}=(p+q)\vec{1}\). For our problem, it suffices to solve this matrix equation. Let \(p+q=1\). Let \(U_{2n}\) be the upper triangular matrix whose \((i,j)\)-th entry is \(1\) if \(j\geq i\) and \(R_{2n}\) be the following matrix: \[R_{2n}(i,j):=\begin{cases}2&\text{if $i$ is odd, $1\leq i=j<2n$},\\ 3-2p&\text{if $i$ is even, $1\leq i=j<2n$},\\ 2p&\text{if $i$ is odd, $2n\leq i=j\leq 2(2n-1)$},\\ 2-p&\text{if $i\neq j$, $1\leq i,j<2n$},\\ p&\text{if $i\neq j$, $2n<i\leq 2(2n-1)$},\\ &\text{$i\neq j$, $2n<j\leq 2(2n-1)$},\\ 1&\text{otherwise}.\end{cases}\] Then its inverse \(R_{2n}^{-1}\) is as follows: \[R_{2n}^{-1}(i,j)=\begin{cases}\frac{n-(1-p)}{np}&\text{if $i$ is odd, $i=j$,}\\ \frac{n-p}{n(1-p)}&\text{if $i$ is even, $i=j\neq 2n$,}\\ \frac{2(n-p)}{n(1-p)}&\text{if $i=j=2n$,}\\ -\frac{1-p}{np}&\text{if $i,j$ are odd, $i<2n<j$,}\\ &\text{$i,j$ are odd, $j<2n<i$,}\\ -\frac{p}{n(1-p)}&\text{if $i,j$ are even, $i\neq j$, $i\leq 2n\leq j$,}\\ &\text{$i,j$ are even, $i\neq j$, $j\leq 2n\leq i$,}\\ -\frac{1}{n}&\text{if $i\not\equiv j$ \pmod 2$, $i\leq 2n\leq j$,}\\ &\text{$i\not\equiv j$ \pmod 2$, $j\leq 2n\leq i$,}\\ 0&\text{otherwise.}\end{cases}\] Then we have the following: **Theorem 3.1**.: \(H_{2n}=U_{2n}^{-1}\,R_{2n}\,^{t}U_{2n}^{-1}\) Proof.: Let \(S_{2n}\) denote the matrix \(U_{2n}^{-1}{R_{2n}}^{t}U_{2n}^{-1}\). By simple matrix-computation, the entries of \(S_{2n}\) can be calculated as follows. 
\[S_{2n}(i,j)=\begin{cases}2-(2-p)-(2-p)+3-2p&\text{if $1\leq i=j<2n-1$,$\,$i$ is odd},\\ 2-1-1+1&\text{if $i=j=2n-1$}\\ 2p-p-p+1&\text{if $2n-1<i=j<2(2n-1)$,$\,$i$ is odd},\\ 3-2p-(2-p)-(2-p)+2&\text{if $1\leq i=j<2n$,$\,$i$ is even},\\ 1-p-p+2p&\text{if $2n\leq i=j<2(2n-1)$,$\,$i$ is even},\\ 1&\text{if $i=j=2(2n-1)$,}\\ 2-p-(2-p)-(3-2p)+2-p&\text{if $i=j-1,\,1\leq i<2n-2$,$\,$i$ is odd},\\ 2-p-(2-p)-2+2-p&\text{if $i=j-1,\,1\leq i<2n-2$,$\,$i$ is even},\\ 2-p-1-2+1&\text{if $(i,j)=(2n-2,2n-1)$},\\ 1-p-1+p&\text{if $(i,j)=(2n-1,2n)$},\\ p-p-1+p&\text{if $i=j-1,\,2n-1<i<4n-3$,$\,$i$ is odd},\\ p-p-2p+p&\text{if $i=j-1,\,2n-1<i\leq 2(2n-1)$,$\,$i$ is even},\\ p-1&\text{if $(i,j)=(4n-3,4n-2)$},\\ 2-p-2-(2-p)+2-p&\text{if $i=j+1,\,1\leq i<2n-1$,$\,$i$ is odd},\\ 2-p-(3-2p)-(2-p)+2-p&\text{if $i=j+1,\,1\leq i<2n-1$,$\,$i$ is even},\\ 2-p-2-1+1&\text{if $(i,j)=(2n-1,2n-2)$},\\ 1-1-p+p&\text{if $(i,j)=(2n,2n-1)$},\\ p-2p-p+p&\text{if $i=j+1,2n<i\leq 2(2n-1)$,$\,$i$ is odd},\\ p-1-p+p&\text{if $i=j+1,2n<i<2(2n-1)$,$\,$i$ is even},\\ p-1&\text{if $(i,j)=(2(2n-1),4n-3)$},\\ 2-p-(2-p)-(2-p)+(2-p)&\text{if $|i-j|>1,\,1\leq i,j<2n-1$},\\ 2-p-(2-p)-1+1&\text{if $i-j>1,\,i=2n-1$},\\ 2-p-1-(2-p)+1&\text{if $j-i>1,\,j=2n-1$},\\ 1-p-1+p&\text{if $j-i>1,\,\,,j=2n$},\\ 1-1-p+p&\text{if $i-j>1,\,i=2n$},\\ p-p-p+p&\text{if $|i-j|>1,\,2n<j<2(2n-1)$},\\ &\text{$|i-j|>1,\,2n<i<2(2n-1)$},\\ p-p&\text{if $|i-j|>1,\,i=2(2n-1)$},\\ &\text{$|i-j|>1,\,j=2(2n-1)$},\\ 0&\text{otherwise}.\end{cases}\] Therefore, we have \(S_{2n}=H_{2n}\). Now let us define the new variable vector \(\vec{h^{\prime}}={}^{t}U_{2n}^{-1}\vec{h}\). Then the matrix equation \(H_{2n}\vec{h}=\vec{1}\) can be expressed by the following single matrix equation. \[\vec{h^{\prime}}=\begin{bmatrix}h_{1}^{\prime}\\ h_{2}^{\prime}\\ \vdots\\ h_{2n-1}^{\prime}\\ h_{2n}^{\prime}\\ h_{2n+1}^{\prime}\\ \vdots\\ h_{2(2n-1)-1}^{\prime}\\ h_{2(2n-1)}^{\prime}\end{bmatrix}:=\begin{bmatrix}h_{1,2n}\\ h_{0,2n-2}-h_{1,2n}\\ \vdots\\ h_{1,2}-h_{0,2}\\ h_{0,2n-1}-h_{1,2}\\ h_{1,2n-1}-h_{0,2n-1}\\ \vdots\\ h_{1,3}-h_{0,3}\\ h_{0,1}-h_{1,3}\end{bmatrix}=R_{2n}^{-1}\begin{bmatrix}2(2n-1)\\ 2(2n-1)-1\\ \vdots\\ 2n\\ 2n-1\\ \vdots\\ 2\\ 1\end{bmatrix}.\] By solving this matrix equation, we obtain the following. 
**Lemma 3.1**.: _For \(1\leq\ell\leq 2(2n-1)\),_ \[h_{\ell}^{\prime}=\begin{cases}\frac{n-\ell+p}{p}&\text{if $1\leq\ell\leq 2n-1$, $\ell$ is odd,}\\ \frac{n-\ell+p}{1-p}&\text{if $1\leq\ell\leq 2n-1$, $\ell$ is even,}\\ 0&\text{if $\ell=2n$,}\\ \frac{3n-\ell-p}{p}&\text{if $2n+1\leq\ell\leq 2(2n-1)$, $\ell$ is odd,}\\ \frac{3n-\ell-p}{1-p}&\text{if $2n+1\leq\ell\leq 2(2n-1)$, $\ell$ is even.}\end{cases}\] Proof.: For the case that \(1\leq\ell\leq 2n-1\) (\(\ell\) is odd), we have \[h_{\ell}^{\prime} =R_{2n}^{-1t}\left[2(2n-1),4n-3,\cdots,1\right]\] \[=\sum_{k=1}^{2n-1}R_{2n}^{-1}(\ell,k)(4n-k-2)\] \[=\frac{n-(1-p)}{np}(4n-\ell-1)-\frac{1-p}{np}\sum_{\begin{subarray} {c}1\leq k\leq n\\ k\neq\frac{\ell+1}{2}\end{subarray}}2(2n-k)-\frac{1}{n}\sum_{k=1}^{n}(2(2n-k)-1)\] \[=\frac{4n-\ell-1}{p}-\frac{1-p}{np}\sum_{k=1}^{n}2(2n-k)-\frac{1}{ n}\sum_{k=1}^{n}(2(2n-k)-1)\] \[=\frac{4n-\ell-1}{p}-\frac{1-p}{np}(4n^{2}-n(n+1))-\frac{1}{n}(n( 4n-1)-n(n+1))\] \[=\frac{4n-\ell-1}{p}-\frac{(3n-1)(1-p)}{p}-(3n-2)\] \[=\frac{n-\ell+p}{p}.\] For the case that \(1\leq\ell\leq 2n-1\) (\(\ell\) is even), we have \[h^{\prime}_{\ell} =R_{2n}^{-1t}\left[2(2n-1),4n-3,\cdots,1\right]\] \[=\sum_{k=1}^{2n-1}R_{2n}^{-1}(\ell,k)(4n-k-2)\] \[=\frac{n-p}{n(1-p)}(4n-\ell-1)-\frac{p}{n(1-p)}\sum_{\begin{subarray} {c}1\leq k\leq n\\ k\neq\frac{\ell}{2}\end{subarray}}(2(2n-k)-1)-\frac{1}{n}\sum_{k=1}^{n}2(2n-k)\] \[=\frac{4n-\ell-1}{1-p}-\frac{p}{n(1-p)}\sum_{k=1}^{n}(2(2n-k)-1)- \frac{1}{n}\sum_{k=1}^{n}2(2n-k)\] \[=\frac{4n-\ell-1}{1-p}-\frac{p}{n(1-p)}(n(4n-1)-n(n+1))-\frac{1}{ n}(4n^{2}-n(n+1))\] \[=\frac{4n-\ell-1}{1-p}-\frac{(3n-2)p}{1-p}-(3n-1)\] \[=\frac{n-\ell+p}{1-p}\] In the case of \(\ell=2n\), we have \[h^{\prime}_{\ell} =R_{2n}^{-1t}\left[2(2n-1),4n-3,\cdots,1\right]\] \[=\sum_{k=1}^{4n-3}R_{2n}^{-1}(2n,k)(4n-k-2)\] \[=\frac{2(n-p)}{n(1-p)}(2n-1)-\frac{p}{n(1-p)}\sum_{\begin{subarray} {c}1\leq k\leq 2n-1\\ k\neq n\end{subarray}}(2(2n-k)-1)-\frac{1}{n}\sum_{k=1}^{2n-1}2(2n-k)\] \[=\frac{(2n-1)(2n-p)}{n(1-p)}-\frac{p}{n(1-p)}((2n-1)(4n-1)-2n(2n- 1))\] \[-\frac{1}{n}(4n(2n-1)-2n(2n-1))\] \[=\frac{(2n-1)(2n-p)}{n(1-p)}-\frac{(2n-1)^{2}p}{n(1-p)}-2(2n-1)\] \[=0.\] For the case that \(2n+1\leq\ell\leq 2(2n-1)\) (\(\ell\) is even), we have \[h^{\prime}_{\ell} =R_{2n}^{-1t}\left[2(2n-1),4n-3,\cdots,1\right]\] \[=\sum_{k=2n-1}^{4n-3}R_{2n}^{-1}(\ell,k)(4n-k-2)\] \[=\frac{n-(1-p)}{np}(4n-\ell-1)-\frac{1-p}{np}\sum_{\begin{subarray} {c}n+1\leq k\leq 2n-1\\ k\neq\frac{\ell}{2}\end{subarray}}2(2n-k)-\frac{1}{n}\sum_{k=n}^{2n-1}(2(2n-k )-1)\] \[=\frac{4n-\ell-1}{p}-\frac{1-p}{np}\sum_{k=n+1}^{2n-1}2(2n-k)- \frac{1}{n}\sum_{k=n}^{2n-1}(2(2n-k)-1)\] \[=\frac{4n-\ell-1}{p}-\frac{1-p}{np}\sum_{k=1}^{n-1}2(n-k)-\frac{1 }{n}\sum_{k=0}^{n-1}(2(n-k)-1)\] \[=\frac{4n-\ell-1}{p}-\frac{1-p}{np}(2n(n-1)-n(n-1))-\frac{1}{n}( n(2n-1)-n(n-1))\] \[=\frac{4n-\ell-1}{p}-\frac{(n-1)(1-p)}{p}-n\] \[=\frac{3n-\ell-p}{p}.\] For the case that \(2n+1\leq\ell\leq 2(2n-1)\) (\(\ell\) is even), we have \[h^{\prime}_{\ell} =R_{2n}^{-1t}\left[2(2n-1),4n-3,\cdots,1\right]\] \[=\sum_{k=2n-1}^{4n-3}R_{2n}^{-1}(\ell,k)(4n-k-2)\] \[=\frac{n-p}{n(1-p)}(4n-\ell-1)-\frac{p}{n(1-p)}\sum_{\begin{subarray} {c}n\leq k\leq 2n-1\\ k\neq\frac{\ell}{2}\end{subarray}}(2(2n-k)-1)-\frac{1}{n}\sum_{k=n+1}^{2n-1}2(2 n-k)\] \[=\frac{4n-\ell-1}{1-p}-\frac{p}{n(1-p)}\sum_{k=n}^{2n-1}(2(2n-k)- 1)-\frac{1}{n}\sum_{k=n+1}^{2n-1}2(2n-k)\] \[=\frac{4n-\ell-1}{1-p}-\frac{p}{n(1-p)}\sum_{k=0}^{n-1}(2(n-k)-1) -\frac{1}{n}\sum_{k=1}^{n-1}2(n-k)\] 
\[=\frac{4n-\ell-1}{1-p}-\frac{p}{n(1-p)}(n(2n-1)-n(n-1))-\frac{1}{ n}(2n(n-1)-n(n-1))\] \[=\frac{4n-\ell-1}{1-p}-\frac{np}{1-p}-(n-1)\] \[=\frac{3n-\ell-p}{1-p}.\] Combining Lemmas 3.1 and the equations \(h_{0,\ell}=\begin{cases}\sum_{i=1}^{2n-\ell}h_{i}^{\prime}&\text{if $\ell$ is even,}\\ \sum_{i=1}^{4n-\ell-1}h_{i}^{\prime}&\text{if $\ell$ is odd,}\end{cases}\) and \(h_{1,\ell+1}=\begin{cases}\sum_{i=1}^{4n-\ell-1}h_{i}^{\prime}&\text{if $\ell$ is even,}\\ \sum_{i=1}^{2n-\ell}h_{i}^{\prime}&\text{if $\ell$ is odd,}\end{cases}\) for \(1\leq\ell\leq 2n-1\), we obtain the exact formula for \(h_{2n}(0,\ell)\), \(h_{2n}(1,\ell+1)\) as follows: **Theorem 3.2**.: _For \(1\leq\ell\leq 2n-1\), the exact formula for the HT's of random walks on the weighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{2n},\{\pm 1\})\) is,_ \[h_{2n}(0,\ell) =\frac{1}{4p(1-p)}\begin{cases}\ell(2n-\ell)&\text{if $\ell$ is even,}\\ (\ell-1)(2n-\ell+1)+4(1-p)(n-\ell+p)&\text{if $\ell$ is odd,}\end{cases}\] \[h_{2n}(1,\ell+1) =\frac{1}{4p(1-p)}\begin{cases}\ell(2n-\ell)&\text{if $\ell$ is even,}\\ (\ell-1)(2n-\ell+1)+4p(n-\ell+1-p)&\text{if $\ell$ is odd.}\end{cases}\] Proof.: In the case of \(\ell\) is even, we have \[h_{2n}(0,\ell) =\sum_{i=1}^{\frac{2n-\ell}{2}}\frac{n-(2i-1)+p}{p}+\sum_{i=1}^{ \frac{2n-\ell}{2}}\frac{n-2i+p}{1-p}\] \[=\sum_{i=1}^{\frac{2n-\ell}{2}}\frac{1}{p}+\sum_{i=1}^{\frac{2n- \ell}{2}}\frac{n-2i+p}{p}+\sum_{i=1}^{\frac{2n-\ell}{2}}\frac{n-2i+p}{1-p}\] \[=\sum_{i=1}^{\frac{2n-\ell}{2}}\frac{1}{p}+\sum_{i=1}^{\frac{2n- \ell}{2}}\frac{n-2i+p}{p(1-p)}\] \[=\frac{2n-\ell}{2p}+\frac{(2n-\ell)(n+p)}{2p(1-p)}-\frac{(2n- \ell)(2n-\ell+2)}{4p(1-p)}\] \[=\frac{\ell(2n-\ell)}{4p(1-p)}.\] In the case of \(\ell\) is odd (\(\ell\neq 1,2n-1\)), we have \[h_{2n}(0,\ell) =\sum_{i=1}^{n}\frac{n-(2i-1)+p}{p}+\sum_{i=1}^{n-1}\frac{n-2i+p}{1-p}\] \[\quad+\sum_{i=n+1}^{\frac{4n-\ell-1}{2}}\frac{3n-(2i-1)-p}{p}+ \sum_{i=n+1}^{\frac{4n-\ell-1}{2}}\frac{3n-2i-p}{1-p}\] \[=\sum_{i=1}^{n}\frac{n-2i+1+p}{p}+\sum_{i=1}^{n-1}\frac{n-2i+p}{1-p}\] \[\quad+\sum_{i=1}^{\frac{2n-\ell-1}{2}}\frac{n-2i+1-p}{p}+\sum_{i= 1}^{\frac{2n-\ell-1}{2}}\frac{n-2i-p}{1-p}\] \[=\sum_{i=1}^{n}\frac{1}{p}-\frac{n-p}{p}+\sum_{i=1}^{n-1}\frac{n- 2i+p}{p(1-p)}+\sum_{i=1}^{\frac{2n-\ell-1}{2}}\frac{1}{p}+\sum_{i=1}^{\frac{2 n-\ell-1}{2}}\frac{n-2i-p}{p(1-p)}\] \[=\frac{n}{p}-\frac{n-p}{p}+\frac{(n-1)(n+p)-n(n-1)}{p(1-p)}+\frac {2n-\ell-1}{2p}\] \[\quad+\frac{2(2n-\ell-1)(n-p)-(2n-\ell+1)(2n-\ell-1)}{4p(1-p)}\] \[=\frac{p}{p}+\frac{n-1}{1-p}+\frac{2n-\ell-1}{2p}+\frac{(2n-\ell -1)(\ell-1-2p)}{4p(1-p)}\] \[=\frac{4p(n-p)}{4p(1-p)}+\frac{2(2n-\ell-1)-2p(2n-\ell-1)}{4p(1-p)}\] \[\quad+\frac{(2n-\ell-1)(\ell-1)-2p(2n-\ell-1)}{4p(1-p)}\] \[=\frac{(2n-\ell-1)(\ell+1)-4p(n-\ell-1+p)}{4p(1-p)}\] \[=\frac{(\ell-1)(2n-\ell+1)+4(n-\ell+p)-4p(n-\ell+p)}{4p(1-p)}\] \[=\frac{(\ell-1)(2n-\ell+1)+4(1-p)(n-\ell+p)}{4p(1-p)}.\] In the case of \(\ell=2n-1\), we have \[h_{2n}(0,2n-1) =\sum_{i=1}^{n}\frac{n-(2i-1)+p}{p}+\sum_{i=1}^{n-1}\frac{n-2i+p}{1 -p}\] \[\quad+\sum_{i=n+1}^{n}\frac{3n-(2i-1)-p}{p}+\sum_{i=n+1}^{n}\frac{ 3n-2i-p}{1-p}\] \[=\sum_{i=1}^{n}\frac{n-2i+1+p}{p}+\sum_{i=1}^{n-1}\frac{n-2i+p}{1 -p}\] \[=\sum_{i=1}^{n}\frac{1}{p}-\frac{n-p}{p}+\sum_{i=1}^{n-1}\frac{n-2 i+p}{p(1-p)}\] \[=\frac{n}{p}-\frac{n-p}{p}+\frac{(n-1)(n+p)-n(n-1)}{p(1-p)}\] \[=\frac{n-p}{1-p}\] \[=\frac{(n-1)-(n-1)+p+p(n-1-p)}{p(1-p)}\] \[=\frac{(n-1)-(1-p)(n-1-p)}{p(1-p)}\] \[=\frac{2(2n-2)+4(1-p)(-n+1+p)}{4p(1-p)}\] \[=\frac{(2n-1-1)(2n-(2n-1)+1)+4(1-p)(n-(2n-1)+p)}{4p(1-p)}.\] In 
the case of \(\ell=1\), we have \[h_{2n}(0,1)= \sum_{i=1}^{n}\frac{n-(2i-1)+p}{p}+\sum_{i=1}^{n-1}\frac{n-2i+p}{1-p}\] \[+\sum_{i=n+1}^{2n-1}\frac{3n-(2i-1)-p}{p}+\sum_{i=n+1}^{2n-1}\frac {3n-2i-p}{1-p}\] \[=\sum_{i=1}^{n}\frac{n-2i+1+p}{p}+\sum_{i=1}^{n-1}\frac{n-2i+p}{1-p}\] \[+\sum_{i=1}^{n-1}\frac{n-2i+1-p}{p}+\sum_{i=1}^{n-1}\frac{n-2i-p} {1-p}\] \[=\sum_{i=1}^{n}\frac{1}{p}-\frac{n-p}{p}+\sum_{i=1}^{n-1}\frac{n- 2i+p}{p(1-p)}+\sum_{i=1}^{n-1}\frac{1}{p}+\sum_{i=1}^{n-1}\frac{n-2i-p}{p(1-p)}\] \[=\frac{n}{p}-\frac{n-p}{p}+\frac{2n(n-1)}{p(1-p)}-\frac{2n(n-1)} {p(1-p)}+\frac{n-1}{p}\] \[=\frac{n-1+p}{p}\] \[=\frac{0\cdot 2n+4(1-p)(n-1+p)}{4p(1-p)}\] \[=\frac{(1-1)(2n-1+1)+4(1-p)(n-1+p)}{4p(1-p)}.\] In the case of \(\ell\) is even, we have \[h_{2n}(1,\ell+1) =\sum_{i=1}^{n}\frac{n-(2i-1)+p}{p}+\sum_{i=1}^{n-1}\frac{n-2i+p}{1-p}\] \[\quad+\sum_{i=n+1}^{\frac{4n-\ell}{2}}\frac{3n-(2i-1)-p}{p}+\sum_ {i=n+1}^{\frac{4n-\ell-2}{2}}\frac{3n-2i-p}{1-p}\] \[=\sum_{i=1}^{n}\frac{n-2i+1+p}{p}+\sum_{i=1}^{n-1}\frac{n-2i+p}{1 -p}\] \[\quad+\sum_{i=1}^{\frac{2n-\ell}{2}}\frac{n-2i+1-p}{p}+\sum_{i=1} ^{\frac{2n-\ell-2}{2}}\frac{n-2i-p}{1-p}\] \[=\sum_{i=1}^{n}\frac{1}{p}-\frac{n-p}{p}+\sum_{i=1}^{n-1}\frac{n -2i+p}{p(1-p)}\] \[\quad+\sum_{i=1}^{\frac{2n-\ell}{2}}\frac{1}{p}+\frac{n-(2n-\ell )+1-p}{p}+\sum_{i=1}^{\frac{2n-\ell-2}{2}}\frac{n-2i-p}{p(1-p)}\] \[=\frac{n}{p}-\frac{n-p}{p}+\frac{(n-1)(n+p)-n(n-1)}{p(1-p)}+\frac {2n-\ell}{2p}\] \[\quad-\frac{n-\ell+p}{p}+\frac{2(2n-\ell-2)(n-p)-(2n-\ell-2)(2n- \ell)}{4p(1-p)}\] \[=\frac{n-1}{1-p}+\frac{\ell}{2p}+\frac{(2n-\ell-2)(\ell-2p)}{4p(1 -p)}\] \[=\frac{4p(n-1)}{4p(1-p)}+\frac{2\ell(1-p)}{4p(1-p)}\] \[\quad+\frac{\ell(2n-\ell-2)-2p(2n-\ell-2)}{4p(1-p)}\] \[=\frac{\ell(2n-\ell)}{4p(1-p)}.\] In the case of \(\ell\) is odd, we have \[h_{2n}(1,\ell+1) =\sum_{i=1}^{\frac{2n-\ell+1}{2}}\frac{n-(2i-1)+p}{p}+\sum_{i=1}^{ \frac{2n-\ell-1}{2}}\frac{n-2i+p}{1-p}\] \[=\sum_{i=1}^{\frac{2n-\ell+1}{2}}\frac{1}{p}+\sum_{i=1}^{\frac{2n -\ell+1}{2}}\frac{n-2i+p}{p}+\sum_{i=1}^{\frac{2n-\ell-1}{2}}\frac{n-2i+p}{1-p}\] \[=\sum_{i=1}^{\frac{2n-\ell+1}{2}}\frac{1}{p}+\frac{n-(2n-\ell+1) +p}{p}+\sum_{i=1}^{\frac{2n-\ell-1}{2}}\frac{n-2i+p}{p(1-p)}\] \[=\frac{2n-\ell+1}{2p}-\frac{n-\ell+1-p}{p}+\frac{(2n-\ell-1)(n+p )}{2p(1-p)}\] \[-\frac{(2n-\ell-1)(2n-\ell+1)}{4p(1-p)}\] \[=\frac{\ell-1+2p}{2p}+\frac{(2n-\ell-1)(\ell-1)+2p(2n-\ell-1)}{4 p(1-p)}\] \[=\frac{(\ell-1)(2n-\ell+1)+4p(n-\ell+1-p)}{4p(1-p)}.\] \(\square\) From Theorem 3.1 we have \(H_{2n}^{-1}={}^{t}U_{2n}\,R_{2n}^{-1}\,U_{2n}\). 
Hence the entries of \(H_{2n}^{-1}\) also can be calculated as follows: For \(1\leq i,j\leq 2(2n-1)\), \[H_{2n}^{-1}(i,j)=\frac{1}{4np(1-p)}\] \[\begin{cases}(2n-i-1+2p)(j+1-2p)&\text{ if $1\leq j\leq i\leq 2n-1 $,$ i$ is odd, $j$ is odd,}\\ (2n-i)(j+1-2p)&\text{ if $1\leq j\leq i\leq 2n-1$, $i$ is even, $j$ is odd,}\\ (2n-i-1+2p)j&\text{ if $1\leq j\leq i\leq 2n-1$, $i$ is odd, $j$ is even,}\\ (2n-i)j&\text{ if $1\leq j\leq i\leq 2n-1$, $i$ is even, $j$ is even,}\\ (2n-j-1+2p)(i+1-2p)&\text{ if $1\leq i<j\leq 2n-1$, $i$ is odd, $j$ is odd,}\\ (2n-j)(i+1-2p)&\text{ if $1\leq i<j\leq 2n-1$, $i$ is even, $j$ is odd,}\\ (2n-j-1+2p)i&\text{ if $1\leq i<j\leq 2n-1$, $i$ is odd, $j$ is even,}\\ (2n-j)i&\text{ if $1\leq i<j\leq 2n-1$, $i$ is even, $j$ is even,}\\ (2n-j)i&\text{ if $1\leq i<j\leq 2n-1$, $i$ is even, $j$ is even,}\\ (2n-4n+i+2p)(4n-j-2p)&\text{ if $2n\leq j\leq i\leq 2(2n-1)$, $i$ is odd, $j$ is odd,}\\ (2n-4n+i+1)(4n-j-2p)&\text{ if $2n\leq j\leq i\leq 2(2n-1)$, $i$ is even, $j$ is odd,}\\ (2n-4n+i+2p)(4n-j-1)&\text{ if $2n\leq j\leq i\leq 2(2n-1)$, $i$ is odd, $j$ is even,}\\ (2n-4n+i+1)(4n-j-1)&\text{ if $2n\leq j\leq i\leq 2(2n-1)$, $i$ is even, $j$ is even,}\\ (2n-j-1+2p)(4n-i-2p)&\text{ if $2n\leq i<j\leq 2(2n-1)$, $i$ is odd, $j$ is odd,}\\ (2n-4n+j+1)(4n-i-2p)&\text{ if $2n\leq i<j\leq 2(2n-1)$, $i$ is even, $j$ is odd,}\\ (2n-4n+j+2p)(4n-i-1)&\text{ if $2n\leq i<j\leq 2(2n-1)$, $i$ is odd, $j$ is even,}\\ (2n-4n+j+1)(4n-i-1)&\text{ if $2n\leq i<j\leq 2(2n-1)$, $i$ is even, $j$ is even,}\\ 0&\text{ otherwise.}\end{cases}\] ## 4. Weighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{N},\{+1,+2\})\) We defined a weight \(w(u,v)\) (\(u,v\in\mathbb{Z}_{N}\)) on the weighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{N},\{+1,+2\})\) as follows. \[w(u,v)=\begin{cases}p&\text{if $v-u\equiv\pm 1\pmod{2}$},\\ q&\text{if $v-u\equiv 0\pmod{2}$},\\ 0&\text{otherwise.}\end{cases}\] Let us denote by \(h_{N}(k,\ell)\) the HT for random walks from the vertex \(k\) to the vertex \(\ell\) of a weighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{N},\{+1,+2\})\). Note that \(h_{N}(0,0)=0\). Since the symmetry of the weighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{N},\{+1,+2\})\), we have that \(h_{N}(0,\ell)=h_{N}(k,k+\ell)=h_{N}(k+N-\ell,k)\) for all \(k,\ell\in\mathbb{Z}_{N}\). Let \(\vec{h}\) be the column vector whose \(i\)-th entry is \(h_{N}(0,i)\). For the case of a random walk on the weighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{2n},\{+1,+2\})\), a random walker moves, with the probability of \(\frac{p}{p+q}\) or \(\frac{q}{p+q}\), to an arbitrary vertex adjacent to the vertex where the walker is. Hence, we have that \(h_{N}(0,\ell)=\frac{1}{p+q}\left(p(1+h_{N}(1,\ell))+q(1+h_{N}(2,\ell))\right)\) for all \(\ell\in\mathbb{Z}_{N}\setminus\{0\}\), and hence \((p+q)h_{N}(0,\ell)-ph_{N}(1,\ell)-qh_{N}(2,\ell)=p+q\) for all \(\ell\in\mathbb{Z}_{N}\setminus\{0\}\). Thus, we have \({}^{t}L_{\text{Cay}(\mathbb{Z}_{N},\{+1,+2\})}^{\prime\prime}\vec{h}=(p+q)\vec{1}\). Let \(L^{\prime\prime}(i,j)\) denote the \((i,j)\)-th entry of \(L_{\text{Cay}(\mathbb{Z}_{N},\{+1,+2\})}^{\prime\prime}\). Let \(H_{N}\) be the \(N-1\times N-1\)-matrix whose \((i,j)\)-th entry is \({}^{t}L^{\prime\prime}(i,j)\). Then we have \(H_{N}\vec{h}=(p+q)\vec{1}\). For our problem, it suffices to solve this matrix equation. Let \(p+q=1\). 
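As a quick numerical illustration of this matrix equation (before turning to its explicit factorization below), the hitting times can also be obtained by solving the first-step equations \(h_{N}(k,\ell)=1+p\,h_{N}(k+1,\ell)+q\,h_{N}(k+2,\ell)\) directly. The following minimal Python sketch is not part of the original derivation; the function name and the sample values of \(N\) and \(p\) are illustrative only.

```python
import numpy as np

def hitting_times_plus1_plus2(N, p, target=0):
    """Mean hitting times h_N(k, target) for the walk on Cay(Z_N, {+1, +2}) that
    steps +1 with probability p and +2 with probability q = 1 - p, obtained by
    solving the first-step equations h(k) = 1 + p*h(k+1) + q*h(k+2), h(target) = 0."""
    q = 1.0 - p
    states = [k for k in range(N) if k != target]
    idx = {k: i for i, k in enumerate(states)}
    A = np.eye(N - 1)
    for k in states:
        for step, prob in ((1, p), (2, q)):
            nxt = (k + step) % N
            if nxt != target:
                A[idx[k], idx[nxt]] -= prob
    h = np.linalg.solve(A, np.ones(N - 1))
    return {k: h[idx[k]] for k in states}

N, p = 7, 0.4
h = hitting_times_plus1_plus2(N, p, target=0)
# By translation invariance, h_N(0, l) = h_N(N - l, 0).
print([round(h[N - l], 3) for l in range(1, N)])
```

For instance, setting \(p=1\) makes the walk step \(+1\) deterministically and the sketch returns \(h_{N}(0,\ell)=\ell\), as expected.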
Let \(P_{N}\) be the following matrix: \[P_{N}:=\begin{bmatrix}0&1&0&0&\cdots&0\\ 1&0&0&0&\cdots&0\\ 0&0&1&0&\cdots&0\\ \vdots&\ddots&\ddots&\ddots&\ddots&\vdots\\ 0&\cdots&0&0&1&0\\ 0&\cdots&0&0&0&1\end{bmatrix}\] Let \(L_{N}\) be the following matrix: \[L_{N}:=\begin{bmatrix}1&0&0&0&\cdots&0&0\\ -\frac{1}{p}&1&0&0&0&\cdots&0\\ -\frac{-1+p}{p}&-1+p-p^{2}&1&0&0&\cdots&0\\ 0&p(-1+p)&-p&1&0&\ddots&\vdots\\ 0&0&-1+p&-p&1&\ddots&\vdots\\ \vdots&\vdots&\ddots&\ddots&\ddots&\ddots&0\\ 0&0&\cdots&0&-1+p&-p&1\end{bmatrix}\] Then its inverse \(L_{N}^{-1}\) is as follows: \[L_{N}^{-1}=\begin{bmatrix}1&0&0&0&\cdots&0&0\\ \frac{1}{p}&1&0&0&0&\cdots&0\\ \frac{(p-1)^{2}-1}{p-2}&\frac{(p-1)^{3}-1}{p-2}&1&0&0&\cdots&0\\ \frac{(p-1)^{3}-1}{p-2}&\frac{(p-1)^{4}-1}{p-2}&\frac{(p-1)^{2}-1}{p-2}&1&0& \ddots&\vdots\\ \frac{(p-1)^{4}-1}{p-2}&\frac{(p-1)^{5}-1}{p-2}&\frac{(p-1)^{3}-1}{p-2}&\frac{ (p-1)^{2}-1}{p-2}&1&\ddots&\vdots\\ \vdots&\vdots&\ddots&\ddots&\ddots&\ddots&0\\ \frac{(p-1)^{N-2}-1}{p-2}&\frac{(p-1)^{N-1}-1}{p-2}&\frac{(p-1)^{N-3}-1}{p-2}& \cdots&\frac{(p-1)^{3}-1}{p-2}&\frac{(p-1)^{2}-1}{p-2}&1\end{bmatrix}\] Let \(U_{N}\) be the following matrix: \[U_{N}:=\begin{bmatrix}-p&1&0&\cdots&0&0\\ 0&\frac{1}{p}&0&\cdots&0&-1+p\\ 0&0&1&\ddots&\vdots&\frac{(p-1)^{4}-p+1}{p-2}\\ 0&0&0&\ddots&0&\vdots\\ \vdots&\vdots&\ddots&\ddots&1&\frac{(p-1)^{N-1}-p+1}{p-2}\\ 0&0&\cdots&0&0&\frac{(p-1)^{N}-1}{p-2}\end{bmatrix}\] Then its inverse \(U_{N}^{-1}\) is as follows: \[U_{N}^{-1}=\begin{bmatrix}-\frac{1}{p}&1&0&\cdots&0&\frac{(p-1)^{2}-p+1}{1-(p-1) ^{N}}\\ 0&p&0&\cdots&0&\frac{(p-1)^{3}-p+1}{1-(p-1)^{N}}\\ 0&0&1&\ddots&\vdots&\frac{(p-1)^{4}-p+1}{1-(p-1)^{N}}\\ 0&0&0&\ddots&0&\vdots\\ \vdots&\vdots&\ddots&\ddots&1&\frac{(p-1)^{N-1}-p+1}{1-(p-1)^{N}}\\ 0&0&\cdots&0&0&\frac{p-2}{(p-1)^{N}-1}\end{bmatrix}\] Then we have the following: **Theorem 4.1**.: \(H_{N}=P_{N}L_{N}U_{N}\)_._ Proof.: Let \(S_{N}\) denote the matrix \(P_{N}L_{N}U_{N}\). By simple matrix-computation, the entries of \(S_{N}\) can be calculated as follows. 
\[S_{N}(i,j)=\begin{cases}1&\text{if $i=j\neq N-1$},\\ -p&\text{if $i=j+1$},\,4\leq i\leq N-1,\\ -1+p&\text{if $i=j+2$},\,3\leq i\leq N-1,\\ &(i,j)=(1,N-1),\\ -\frac{1}{p}+\frac{1}{p}&\text{if $(i,j)=(1,2)$},\\ -\frac{-1+p}{p}+\frac{-1+p-p^{2}}{p}&\text{if $(i,j)=(3,2)$},\\ (-1+p)(-1+p-p^{2})+\frac{(p-1)^{4}-p+1}{p-2}&\text{if $(i,j)=(3,N-1)$},\\ p(-1+p)^{2}-\frac{p((p-1)^{4}-p+1)}{p-2}+\frac{(p-1)^{5}-p+1}{p-2}&\text{if $(i,j)=(4,N-1)$},\\ (-1+p)\frac{(p-1)^{i-1}-p+1}{p-2}-p\frac{(p-1)^{i}-p+1}{p-2}+\frac{(p-1)^{i+1} -p+1}{p-2}&\text{if $j=N-1$},\,5\leq i\leq N-2,\\ (-1+p)\frac{(p-1)^{N-2}-p+1}{p-2}-p\frac{(p-1)^{N-1}-p+1}{p-2}+\frac{(p-1)^{N} -1}{p-2}&\text{if $i=j=N-1$},\\ 0&\text{otherwise}.\end{cases}\] In the case of \((i,j)=(1,2)\), we have \[S_{N}(i,j)=-\frac{1}{p}+\frac{1}{p}=0.\] In the case of \((i,j)=(3,2)\), we have \[S_{N}(i,j)=-\frac{-1+p}{p}+\frac{-1+p-p^{2}}{p}=-p.\] In the case of \((i,j)=(3,N-1)\), we have \[S_{N}(i,j) =(-1+p)(-1+p-p^{2})+\frac{(p-1)^{4}-p+1}{p-2}\] \[=\frac{(p-1)^{2}(p-2)-p^{2}(p-1)(p-2)}{p-2}+\frac{(p-1)^{4}-p+1}{p -2}\] \[=\frac{(p-1)(p^{2}-3p+2-(p^{3}-2p^{2})+(p^{3}-3p^{2}+3p-1)-1)}{p-2}\] \[=0.\] In the case of \((i,j)=(4,N-1)\), we have \[S_{N}(i,j) =p(-1+p)^{2}-\frac{p((p-1)^{4}-p+1)}{p-2}+\frac{(p-1)^{5}-p+1}{p-2}\] \[=\frac{p(p-1)^{2}(p-2)}{p-2}+\frac{-p(p-1)^{4}+p(p-1)+(p-1)^{5}-( p-1)}{p-2}\] \[=\frac{p(p-1)^{2}(p-2)-(p-1)^{4}+(p-1)^{2}}{p-2}\] \[=0.\] In the case of \(j=N-1\), \(5\leq i\leq N-2\), we have \[S_{N}(i,j) =(-1+p)\frac{(p-1)^{i-1}-p+1}{p-2}-p\frac{(p-1)^{i}-p+1}{p-2}+ \frac{(p-1)^{i+1}-p+1}{p-2}\] \[=\frac{(p-1)^{i}-(p-1)^{2}-p(p-1)^{i}+p(p-1)+(p-1)^{i+1}-(p-1)}{p -2}\] \[=\frac{-(p-1)^{i+1}-(p-1)^{2}+(p-1)^{2}+(p-1)^{i+1}}{p-2}\] \[=0.\] In the case of \(i=j=N-1\), we have \[S_{N}(i,j) =(-1+p)\frac{(p-1)^{N-2}-p+1}{p-2}-p\frac{(p-1)^{N-1}-p+1}{p-2}+ \frac{(p-1)^{N}-1}{p-2}\] \[=\frac{(p-1)^{N-1}-(p-1)^{2}-p(p-1)^{N-1}+p(p-1)+(p-1)^{N}-1}{p-2}\] \[=\frac{-(p-1)^{N}-(p^{2}-2p+1)+p^{2}-p+(p-1)^{N}-1}{p-2}\] \[=1.\] Therefore, we have \(S_{N}=H_{N}\) By using the above theorem 4.1, the matrix equation \(H_{N}\vec{h}=\vec{1}\) can be expressed by the following single matrix equation. \[\vec{h}=\begin{bmatrix}h_{N}(0,1)\\ h_{N}(0,2)\\ \vdots\\ h_{N}(0,\ell)\\ \vdots\\ h_{N}(0,N-1)\end{bmatrix}=U_{N}^{-1}L_{N}^{-1}\begin{bmatrix}1\\ 1\\ \vdots\\ 1\\ \vdots\\ 1\end{bmatrix}.\] By solving this matrix equation, we obtain the following. 
**Theorem 4.2**.: _For \(1\leq\ell\leq N-1\), the exact formula for the HT's of random walks on weighted Cayley graph \(\operatorname{Cay}(\mathbb{Z}_{N},\{+1,+2\})\) is,_ \[h_{N}(0,\ell)=\frac{N(p-1)((p-1)^{\ell}-1)-\ell((p-1)^{N}-1)}{(p-2)((p-1)^{N}-1)}.\] Proof.: In the case of \(\ell=1\), we have \[h_{N}(0,1) =1+\frac{(p-1)^{2}-p+1}{1-(p-1)^{N}}\sum_{i=1}^{N-1}\frac{(p-1)^{i }-1}{p-2}\] \[=1+\frac{(p-1)^{2}-p+1}{(p-2)(1-(p-1)^{N})}(\frac{(p-1)(1-(p-1)^{N -1})}{1-(p-1)}-N+1)\] \[=1-\frac{p-1}{(p-1)^{N}-1}(\frac{(p-1)^{N}-1}{p-2}-N)\] \[=\frac{(p-2)((p-1)^{N}-1)-(p-1)((p-1)^{N}-1)+N(p-1)(p-2)}{(p-2)(( p-1)^{N}-1)}\] \[=\frac{N(p-1)((p-1)^{1}-1)-((p-1)^{N}-1)}{(p-2)((p-1)^{N}-1)}.\] In the case of \(\ell=2\), we have \[h_{N}(0,2) =1+p+\frac{(p-1)^{3}-p+1}{1-(p-1)^{N}}\sum_{i=1}^{N-1}\frac{(p-1)^{i} -1}{p-2}\] \[=1+p+\frac{(p-1)^{3}-p+1}{(p-2)(1-(p-1)^{N})}(\frac{(p-1)(1-(p-1)^{ N-1})}{1-(p-1)}-N+1)\] \[=1+p-\frac{(p-1)((p-1)^{2}-1)}{(p-2)((p-1)^{N}-1)}(\frac{(p-1)^{N} -1}{p-2}-N)\] \[=\frac{(p+1)(p-2)^{2}}{(p-2)^{2}}-\frac{(p-1)((p-1)^{2}-1)}{(p-2)^ {2}}+\frac{N(p-1)((p-1)^{2}-1)}{(p-2)((p-1)^{N}-1)}\] \[=\frac{(p+1)(p-2)^{2}}{(p-2)^{2}}-\frac{p(p-1)(p-2)}{(p-2)^{2}}+ \frac{N(p-1)((p-1)^{2}-1)}{(p-2)((p-1)^{N}-1)}\] \[=\frac{(p+1)(p-2)-p(p-1)}{(p-2)}+\frac{N(p-1)((p-1)^{2}-1)}{(p-2) ((p-1)^{N}-1)}\] \[=\frac{N(p-1)((p-1)^{2}-1)-2((p-1)^{N}-1)}{(p-2)((p-1)^{N}-1)}.\] In the case of \(3\leq\ell\leq N-2\), we have \[h_{N}(0,\ell) =\sum_{i=1}^{\ell}\frac{(p-1)^{i}-1}{p-2}+\frac{(p-1)^{\ell+1}-p+ 1}{1-(p-1)^{N}}\sum_{i=1}^{N-1}\frac{(p-1)^{i}-1}{p-2}\] \[=\frac{1}{p-2}(\sum_{i=1}^{\ell}(p-1)^{i}-l)+\frac{(p-1)^{\ell+1} -p+1}{(p-2)(1-(p-1)^{N})}(\sum_{i=1}^{N-1}(p-1)^{i}-N+1)\] \[=\frac{1}{p-2}(\frac{(p-1)(1-(p-1)^{\ell})}{1-(p-1)}-\ell\] \[+\frac{(p-1)^{\ell+1}-p+1}{1-(p-1)^{N}}(\frac{(p-1)(1-(p-1)^{N-1 })}{1-(p-1)}-N+1))\] \[=\frac{1}{p-2}(\frac{(p-1)^{\ell+1}-p+1}{p-2}-\ell\] \[-\frac{(p-1)^{\ell+1}-p+1}{(p-1)^{N}-1}(\frac{(p-1)^{N}-p+1}{p-2 }-N+1))\] \[=\frac{1}{p-2}(\frac{(p-1)^{\ell+1}-p+1}{p-2}-\ell-\frac{(p-1)^{ \ell+1}-p+1}{(p-1)^{N}-1}(\frac{(p-1)^{N}-1}{p-2}-N))\] \[=\frac{N(p-1)((p-1)^{\ell}-1)-\ell((p-1)^{N}-1)}{(p-2)((p-1)^{N}- 1)}.\] In the case of \(\ell=N-1\), we have \[h_{N}(0,N-1) =\frac{p-2}{(p-1)^{N}-1}\sum_{i=1}^{N-1}\frac{(p-1)^{i}-1}{p-2}\] \[=\frac{1}{(p-1)^{N}-1}(\frac{(p-1)(1-(p-1)^{N-1})}{1-(p-1)}-N+1)\] \[=\frac{-N(p-2)+(p-1)^{N}-1}{(p-2)((p-1)^{N}-1)}\] \[=\frac{N((p-1)^{N}-p+1)-N((p-1)^{N}-1)+(p-1)^{N}-1}{(p-2)((p-1)^{N }-1)}\] \[=\frac{N(p-1)((p-1)^{N-1}-1)-(N-1)((p-1)^{N}-1)}{(p-2)((p-1)^{N}-1)}.\] From Theorem 4.1 we have \(H_{N}^{-1}=U_{N}^{-1}L_{N}^{-1}P_{N}^{-1}\). Hence the entries of \(H_{N}^{-1}\) also can be calculated as follows: For \(1\leq i,j\leq N-1\), \[H_{N}^{-1}(i,j)=\frac{1}{(p-2)((p-1)^{N}-1)}\] \[\begin{cases}(p-2)^{2}&\text{if $i=j=1$,}\\ (p-2)((p-1)^{N}-p(p-1)^{N-1}+p^{2}-p-1)&\text{if $i=j=2$,}\\ -(p-2)((p-1)^{N-1}-p+1)&\text{if $(i,j)=(1,2)$,}\\ p(p-2)^{2}&\text{if $(i,j)=(2,1)$,}\\ -(p-2)((p-1)^{N-j+1}-p+1)&\text{if $i=1,\;3\leq j\leq N-1$,}\\ -p(p-2)((p-1)^{N-j+1}-p+1)&\text{if $i=2,\;3\leq j\leq N-1$,}\\ (p-2)((p-1)^{i}-1)&\text{if $3\leq i\leq N-2,\;j=1$,}\\ (p-2)(p(p-1)^{i-1}-(p-1)^{N-1}-1)&\text{if $3\leq i\geq N-2,\;j=2$,}\\ (p-2)((p-1)^{i}-1)+((p-1)^{-j+1}-1)((p-1)^{N}-(p-1)^{i})&\text{if $3\leq i,j\leq N-1,\;i \neq N-1$,}\\ (p-2)((p-1)^{N-j}-1)&\text{if $i=N-1$.}\end{cases}\] ## Acknowledgements The authors thank Tsuyoshi Miezaki for his helpful discussions and comments to this research. 
This work was supported by JSPS Grant-in-Aid for JSPS Fellows (23KJ2020) and by the Waseda Research Institute for Science and Engineering, Grant-in-Aid for Young Scientists (Early Bird).
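As an independent sanity check of the closed-form expressions above, Theorems 3.2 and 4.2 can be compared against a direct numerical solution of the first-step equations. The sketch below is not part of the paper; it assumes the edge-weight convention for \(\operatorname{Cay}(\mathbb{Z}_{2n},\{\pm 1\})\) in which the edge \(\{2k,2k+1\}\) carries weight \(p\) and \(\{2k+1,2k+2\}\) carries weight \(1-p\) (a convention consistent with Theorem 3.2 for small \(n\)), and the chosen values of \(n\), \(N\) and \(p\) are arbitrary.

```python
import numpy as np

def hit_time(P, start, target):
    """h(start, target) for transition matrix P, via (I - P_SS) h_S = 1 on S = states != target."""
    N = P.shape[0]
    S = [k for k in range(N) if k != target]
    h = np.linalg.solve(np.eye(N - 1) - P[np.ix_(S, S)], np.ones(N - 1))
    return h[S.index(start)]

n, p = 5, 0.35
N = 2 * n

# Cay(Z_{2n}, {+/-1}) with alternating edge weights (assumed: weight p on {2k, 2k+1},
# weight 1-p on {2k+1, 2k+2}); a walker crosses an incident edge with probability equal
# to its weight, since the two weights at every vertex sum to one.
P1 = np.zeros((N, N))
for k in range(N):
    w_up = p if k % 2 == 0 else 1 - p
    P1[k, (k + 1) % N] = w_up
    P1[k, (k - 1) % N] = 1 - w_up

def thm_3_2(l):
    if l % 2 == 0:
        return l * (2 * n - l) / (4 * p * (1 - p))
    return ((l - 1) * (2 * n - l + 1) + 4 * (1 - p) * (n - l + p)) / (4 * p * (1 - p))

assert all(np.isclose(hit_time(P1, 0, l), thm_3_2(l)) for l in range(1, N))

# Cay(Z_N, {+1, +2}): step +1 with probability p, +2 with probability 1 - p.
M = 9
P2 = np.zeros((M, M))
for k in range(M):
    P2[k, (k + 1) % M], P2[k, (k + 2) % M] = p, 1 - p

def thm_4_2(l):
    return (M * (p - 1) * ((p - 1) ** l - 1) - l * ((p - 1) ** M - 1)) / ((p - 2) * ((p - 1) ** M - 1))

assert all(np.isclose(hit_time(P2, 0, l), thm_4_2(l)) for l in range(1, M))
print("Theorems 3.2 and 4.2 match the direct first-step solution.")
```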
2301.10899
Large fluorescence enhancement via lossless all-dielectric spherical mesocavities
Nano- and microparticles are popular media to enhance optical signals, including fluorescence from a dye proximal to the particle. Here we show that homogeneous, lossless, all-dielectric spheres with diameters in the mesoscale range, between nano- ($\lesssim 100$~nm) and micro- ($\gtrsim 1$ $\mu$m) scales, can offer surprisingly large fluorescence enhancements, up to $F\sim 10^4$. With the absence of nonradiative Ohmic losses inherent to plasmonic particles, we show that $F$ can increase, decrease or even stay the same with increasing intrinsic quantum yield $q_0$, for suppressed, enhanced or intact radiative decay rates of a fluorophore, respectively. Further, the fluorophore may be located inside or outside the particle, providing additional flexibility and opportunities to design fit for purpose particles. The presented analysis with simple dielectric spheres should spur further interest in this less-explored scale of particles and experimental investigations to realize their potential for applications in imaging, molecular sensing, light coupling, and quantum information processing.
Vadim I. Zakomirnyi, Alexander Moroz, Rohit Bhargava, Ilia L. Rasskazov
2023-01-26T02:00:21Z
http://arxiv.org/abs/2301.10899v4
# Fluorescence enhancement via lossless all-dielectric spherical mesocavities ###### Abstract Metal nanoparticles have traditionally been used to enhance optical signals, including fluorescence from a dye proximal to the particle. Here, we sought to examine whether appreciable enhancement was possible using relatively simple all-dielectric particles. Using rapid numerical simulations, we deduce that lossless all-dielectric spherical particles with mesoscale sizes, between nanoscale (\(\lesssim 100\) nm) and microscale (\(\gtrsim 1\)\(\mu\)m), are surprisingly suitable for fluorescence enhancement demonstrating up to \(F\sim 10^{4}\) enhancement factors. We discuss both the enhancement possible as well as limiting losses, which are distinct from those observed in metal particles. For a given sphere of a specific refractive index size matters: much larger fluorescence enhancement can be achieved with meso-sized particles than with particles of smaller size. The enhancement originates from multipolar (\(4\lesssim\ell\lesssim 10\)) resonances which induce much stronger electric field enhancement within spheres. The order, \(\ell\), of these resonances is larger than conventionally utilized dipolar or quadrupolar Mie resonances in nanoparticles and smaller than of typical whispering gallery modes (\(\ell\gtrsim 20\)) in microparticles. With the absence of non-radiative Ohmic losses inherent to plasmonic particles, \(F\) can increase, decrease or even stay the same with increasing intrinsic quantum yield \(q_{0}\), for suppressed, enhanced or intact radiative decay rates of fluorophore. This work serves to draw attention to the use of dielectric particles for engineering strong fluorescence enhancement, pointing to possible applications of these materials in imaging, molecular sensing, light coupling, and quantum information processing. 
## 1 Introduction Fluorescence underpins a wide range of applications, from imaging and molecular sensing to light-emitting diodes (LEDs) for high-resolution displays [15, 16] and single photon sources for quantum photonics [17, 18]. Here we address the problem of enhancing fluorescence in order to further its present and potentially promising applications. Though large fields around plasmonic NPs are beneficial for fluorescence excitation, unavoidable Ohmic losses reduce quantum yield. It is intriguing to examine all-dielectric particles for fluorescence enhancement as they do not suffer from limiting losses. However, as widely believed and revealed quantitatively in recent comparative studies [19, 20], all-dielectric particles cannot enhance fluorescence in a manner comparable to plasmonic NPs, owing to the relatively weaker electric fields they support locally and, consequently, lower enhancement factors. The giant fluorescence enhancement factors reported thus far are therefore mostly based on the paradigm of _metal_-enhanced fluorescence (MEF), where the use of dielectrics is generally limited to acting as a spacer that keeps an emitter at an optimal distance from a metal surface.
After almost two decades of intensive development of MEF [21], the highest recorded \(F\) of 910 - 2300 has been achieved with Ag nanocubes [22, 23], which may well be an upper bound of MEF for single particles since Ag is the most favorable plasmonic material for near-field enhancement [24, 25]. Although Mie theory [26] has been known for more than a century, rising interest in resonant optical properties of all-dielectric nanostructures has been triggered by recent advances in the fabrication of individual dielectric particles with a controlled geometry [6]. Resonant behavior of high-index (GaAs, Si, Ge, TiO\({}_{2}\)) particles not only enables realization of low-loss nonplasmonic metamaterials and metasurfaces with rich optical functionalities, but also paves the way to enhanced light-matter interactions and advanced linear and nonlinear light manipulations. Despite an impressive number of obvious advantages of all-dielectric particles, relatively weaker electric field enhancement compared to plasmonic NPs represents their major drawback, seemingly limiting their utilization. Here we re-examine the role of all-dielectric particles in enhancing fluorescence. Specifically, we sought to examine whether lossless all-dielectric particles may be suitable for fluorescence enhancement by careful tuning of size and choice of material (refractive index). Instead of focusing attention on the traditional route of careful synthesis of a minimum metal particle size with controlled nanoscale structure to support plasmons, or tuning NP geometry to obtain designer hotspots, we focused attention on simple geometries commonly obtainable with dielectrics. We also expanded the size parameter to consider the so-called _meso_-range (with radius \(50\lesssim r\lesssim 500\) nm), between nano- and micro-particle sizes. Such mesoparticles can possess resonances of much higher quality factor (up to \(\approx 10^{4}\)) than a typical plasmonic NP (\(\approx 20\)) and are more appropriately termed _mesocavities_ (MCs). The rich modal composition of high-index MCs requires a delicate search for suitable resonances. This optimization can be complicated to accomplish, as both (i) the electric field localization and (ii) the decay rates need to be optimized for fluorescence enhancement applications, Eq. (1). The complexity of this problem is a key reason why previous works [19, 20] did not find all-dielectric particles useful for fluorescence enhancement. Addressing this complexity in our method development and obtaining predictions that demonstrate a feasible path for high-enhancement dielectric MCs is the key goal of this study. ## 2 Method Development Consider a fluorescence emitter as an oscillating electric dipole (ED) [27, 28, 29, 30, 31, 32]. The enhanced fluorescence can be described in two steps, as shown in Fig. 1. In this model, first, a particle locally enhances an electric field, \(\mathbf{E}_{\text{loc}}\). This, in turn, amplifies the excitation rate, \(\gamma_{\text{exc}}\) (\(\propto|\mathbf{E}_{\text{loc}}|^{2}\)), of the fluorescence emitter. Second, after being excited, the mutual interaction with the particle modifies the radiative decay rate, \(\Gamma_{\text{rad}}\), of the dipole emitter. The presence of dissipative components also induces changes in the nonradiative decay rate, \(\Gamma_{\text{nrad}}\). Figure 1: Conventional two-level model for a particle enhanced fluorescence: excitation process under a plane wave illumination and the emission process with ED radiation.
The resulting fluorescence enhancement factor is then determined by [19, 30, 33, 34] \[F=\frac{\gamma_{\rm exc}}{\gamma_{\rm exc;0}}\frac{q}{q_{0}}=\frac{\gamma_{\rm exc }}{\gamma_{\rm exc;0}}\frac{\tilde{\Gamma}_{\rm rad}}{\tilde{\Gamma}_{\rm rad}+ \tilde{\Gamma}_{\rm nrad}+(1-q_{0})/q_{0}}\frac{1}{q_{0}}, \tag{1}\] where the subscript "0" indicates the respective quantity in the free space, the rates with tilde are normalized to the _radiative_ decay rate in free space (e.g. \(\tilde{\Gamma}_{\rm rad}=\Gamma_{\rm rad}/\Gamma_{\rm rad;0}\), \(\tilde{\Gamma}_{\rm nrad}=\Gamma_{\rm nrad}/\Gamma_{\rm rad;0}\)), and \(q_{0}\) is the intrinsic quantum yield. \(\tilde{\Gamma}_{\rm rad}\) is determined by the local density of states, whereas \(\tilde{\Gamma}_{\rm nrad}\) is determined by particle losses. The fluorescence excitation and emission processes are treated independently (i.e., weak coupling) and, in general, there is a Stokes shift between excitation, \(\lambda_{\rm exc}\), and emission, \(\lambda_{\rm ems}\), wavelengths, i.e. \(\Delta\lambda_{\rm s}=\lambda_{\rm ems}-\lambda_{\rm exc}\neq 0\). The emitter is assumed to be below saturation [33, 35]. According to Eq. (1), achieving an optimal fluorescence enhancement factor requires a delicate balance of \(\gamma_{\rm exc}\) on one hand, and of \(\tilde{\Gamma}_{\rm rad}\) and \(\tilde{\Gamma}_{\rm nrad}\) on the other hand [33, 35, 36, 37, 38, 39]. Light-matter interaction of a generic process involving an ED characterized by its ED moment \({\bf d}\) is proportional to \(\sim{\bf E}\cdot{\bf d}\). In order to facilitate ensuing calculations, the following common averaging over the ED orientation and its position on a spherical shell of fixed radius \(r\) are performed: * \({\bf E}\) in the scalar product \({\bf E}\cdot{\bf d}\) is averaged over the spherical surface of fixed radius \(r\); * While performing the above averaging, the ED moment \({\bf d}\) in the scalar product \({\bf E}\cdot{\bf d}\) is simultaneously averaged over all possible orientations of \({\bf d}\) at a given fixed \({\bf r}\). As the result of the above averaging it is possible to reformulate the scalar product into the respective averages of \(|{\bf E}|\) and \(|{\bf d}|\) [33]. The excitation rate, \(\gamma\) is proportional to \(|{\bf E}\cdot{\bf d}|^{2}\). After the above averaging, \(\gamma\) becomes proportional to the surface averaged intensity of the electric field, \(\tilde{\gamma}\propto\oint|{\bf E}|^{2}\,d{\bf S}\). The latter can be determined by closed-form analytical formulas of Ref. [40]. The averaged \(\tilde{\gamma}\) is obviously \(r\)-dependent and does not depend on spherical angular coordinates. The corresponding decay rates become orientation averaged decay rates [32, 33] after the above averaging, \(\Gamma_{\rm(n)rad}=(\Gamma_{\rm(n)rad}^{\perp}+2\Gamma_{\rm(n)rad}^{\parallel} )/3\), where superscripts "\(\perp\)" and "\(\parallel\)" denote the radial and tangential orientations of ED, respectively. All the above steps are implemented in freely available MATLAB code Stratify [41] employed in present simulations. In essence, surface integrals of the electric field intensity can be performed analytically and _the calculation of average intensity costs the same computational time as determining intensity at a given single point_. This strategy allows determining potentially promising designs at a low computational cost, without the need for time-expensive calculations of electric fields at multiple points inside or outside a particle. 
Once configurations of MCs exhibiting large _averaged_ values of \(|\mathbf{E}_{\text{loc}}|^{2}\) are identified, further detailed calculations of electric fields for these designs are easily implemented (cf. Fig. 3). ## 3 Results and discussions The _averaged_ fluorescence enhancement factor (Eq. 1) in the presence of a lossless (\(\bar{\Gamma}_{\text{nrad}}=0\)) all-dielectric particle reduces to \[\bar{F}=\frac{\bar{\gamma}_{\text{exc}}}{\bar{\gamma}_{\text{exc};0}}\frac{ \bar{\Gamma}_{\text{rad}}}{1-q_{0}+q_{0}\bar{\Gamma}_{\text{rad}}}. \tag{2}\] To relate to practical applications, we consider widely used and readily fabricated TiO\({}_{2}\) spherical particles with real refractive index \(n_{c}=2.7\), which are lossless in the visible range [6], and vary their radii from 10 nm to 500 nm, thus sampling both the "nano" (radii from 10 nm to 50 nm ) and "meso" regimes (radii from 50 nm to 500 nm). For the sake of illustration, either standard Nile Blue dye with \(\lambda_{\text{exc}}=633\) nm and \(\lambda_{\text{exc}}=663\) nm, or Er\({}^{3+}\) is used as ED emitter. It is also worth noting that modern fabrication techniques can embed emitters into spheres with nanometer precision in radial direction for a number of different materials [42, 43]. A controlled positioning of emitters outside the sphere is possible by attaching emitters to (nano)particles via molecular techniques such as using single-stranded DNA (ssDNA) spacers [44] or DNA origami [45]. Therefore, we consider situations with both an emitter inside and outside a TiO\({}_{2}\) sphere. Figures 2(a-c) show excitation and radiative decay rates for two different ED orientations. Extinction spectra for the magnetic (TE) and electric (TM) polarizations are superimposed on the top of graphs to show the nature of resonances. The peaks in the radiative decay rate correlate nicely with the peaks in the modes having dominant electric field component along the ED direction, i.e. the TE (TM) mode for the tangential (radial) ED orientation. In the interstitial case described by the virtual-cavity model [46], the local field, \(\mathbf{E}_{\text{loc}}\), felt by Figure 2: Color maps of different quantities as a function of sphere radius, \(r_{c}\) on the \(y\)-axis and relative ED radial position, \(r_{d}/r_{c}\), on the \(x\)-axis for TiO\({}_{2}\) MC in air: (a) Averaged excitation rate enhancement. (b)-(c) Normalized radiative decay rate for ED with the respective (b) tangential and (c) radial orientation. (d)-(e) Respective averaged fluorescence enhancement \(\tilde{F}\) for ED with quantum yield (d) \(q_{0}=0.1\) and (e) \(q_{0}=1\). Dashed vertical lines at \(r_{d}/r_{c}=1\) in all plots denote the sphere surface. Curves in (a)-(c) show normalized extinction of corresponding TiO\({}_{2}\) spheres for the TM and TE (with horizontal offset for clarity) polarizations at \(\lambda_{\text{exc}}=633\) nm in (a) and at \(\lambda_{\text{ems}}=663\) nm in (b), (c). The peaks in the radiative decay rate correlate nicely with the peaks in the modes having dominant electric field component along the ED direction, i.e. the TE (TM) mode for the tangential (radial) ED orientation. Circles in (d), (e) correspond to configurations considered in details in Figs. 3 and 4. 
emitters _inside_ the particle in the presence of a macroscopic field \(\mathbf{E}\) is \(\mathbf{E}_{\text{loc}}=L_{\text{vc}}\mathbf{E}\), where \[L_{\text{vc}}=\frac{n_{c}^{2}+2}{3} \tag{3}\] is the so-called Lorentz local-field correction [46] (see Supplementary Material for its general derivation from the Maxwell's equations). This correction is particularly interesting for high-index dielectrics, because \(L_{\text{vc}}\) linearly increases with the host dielectric constant, and it can become large. Averaged fluorescence enhancements \(\bar{F}\) displayed in Figures 2d-e for ED for two different values of \(q_{0}\) (0.1 and 1) are shown with the Lorentz local-field correction included (contributing for \(n_{c}=2.7\) the factor of \(L_{\text{vc}}^{2}\approx 9.6\) to \(\bar{\gamma}_{\text{exc}}\)). Figure 3 shows extinction efficiency and normalized intensity of the electric field, \(|\mathbf{E}_{\text{loc}}|^{2}/|\mathbf{E}_{0}|^{2}\) (at \(\lambda_{\text{exc}}\)), inside and outside TiO\({}_{2}\) sphere with radius (a) \(r_{c}=430\) nm and (b) \(r_{c}=299\) nm, on the background of the respective TE and TM resonances. The results correspond to the incident plane-wave amplitude \(\mathbf{E}_{0}\) oscillating along the \(x\) axis and propagation along the \(z\) axis, as indicated in Fig. 1. Similarly to our previous study [47], we observe that for enhancing \(F\) it is more beneficial to tune a suitable resonance to an emitter excitation wavelength than to its emission wavelength. Tuning a suitable _magnetic_ (TE) MC resonance to the excitation wavelength turned up to be the most beneficial. The so-called decay rate engineering advocated within the MEF framework as means of enhancing \(F\)[21, 48] plays a smaller role in the present case of MCs. If in the lossless case described by Eq. 2: (i) \(\bar{\Gamma}_{\text{rad}}>1\) then \(q/q_{0}\) (the second fraction in Eq. 2) grows as \(q_{0}\) decreases; (ii) \(\bar{\Gamma}_{\text{rad}}<1\) then \(q/q_{0}\) grows as \(q_{0}\) increases; (iii) \(\bar{\Gamma}_{\text{rad}}=1\) then \(q/q_{0}\) does not depend on \(q_{0}\). As shown in Fig. 4, one can encounter all three cases in practice. Note in passing that \(\bar{F}\) does not depend on \(\bar{\Gamma}_{\text{rad}}\) for \(q_{0}=1\). Contrary to the case of plasmonic particles characterized by significant \(\bar{\Gamma}_{\text{nrad}}\) in their close proximity due to large Ohmic losses leading to a scaling \(\bar{F}\sim 1/q_{0}\) behaviour in \(q_{0}\)[49], we have \(\bar{\Gamma}_{\text{nrad}}=0\) here and, consequently, there is no scaling in \(q_{0}\). In fact, the maximum achievable \(\bar{F}\) can be increasing (i.e. not decreasing) with increasing as can be seen from Fig. 4. While we have considered a single emitter thus far, other emitters and species may also be proximally present. First, we consider the effect of multiple emitters. Once an emitter is excited, it can transfer its excitation to a nearby emitter via an energy transfer mechanism. With multiple emitters present this transfer process increases the probability of a decay of emitter excitation. Similarly, if the excitation is transferred to an emitter in a proximity of an impurity, the excitation can be quenched (e.g., OH bonds in silica), whereby the excitation disappears without ever contributing to radiation. This is the essence of the so-called concentration quenching, which does contribute to nonradiative decay rate but whose origin is different from the Ohmic losses of the emitter. 
Figure 3: The TE and TM components of the extinction efficiency as a function of wavelength, and normalized intensity of the electric field, \(|\mathbf{E}_{\text{loc}}|^{2}/|\mathbf{E}_{0}|^{2}\), at \(\lambda_{\text{exc}}\) inside and outside TiO\({}_{2}\) sphere with radius (a) \(r_{c}=430\) nm and (b) \(r_{c}=299\) nm. The results correspond to the incident plane-wave amplitude \(\mathbf{E}_{0}\) oscillating along the \(x\) axis and propagation along the \(z\) axis as indicated in Fig. 1. Vertical dashed lines in extinction spectra correspond to excitation and emission wavelengths of Nile Blue dye. metal particles. Concentration quenching can be accounted by [53] \[\Gamma_{\rm nrad}=8\pi C_{\rm Er-Er}[{\rm Er}^{3+}][Q], \tag{4}\] where [Er\({}^{3+}\)] ([Q]) is the erbium (quencher) concentration in at. %, and \(C_{\rm Er-Er}\) is a coupling constant. Using a typical coupling value for \(C_{\rm Er-Er}\) from the literature (\(10^{-39}\) cm\({}^{6}\) s\({}^{-1}\)), we also assume a quencher concentration of 100 ppm for representative calculations. For example, assuming the dominant quencher in SiO\({}_{2}\) is OH, this is a reasonable value for the colloids prepared in a wet-chemical reaction [53]. The effect of Er\({}^{3+}\) concentration on the averaged fluorescence enhancement \(\bar{F}\) is highlighted in Fig. 5. Interestingly, even relatively small nonradiative losses have the effect similar to that in the MEF [49], namely larger \(\bar{F}\) are attainable with smaller \(q_{0}\). In the case of non-zero nonradiative losses, utilization of the emitters with the Stokes shift, \(\Delta\lambda_{s}\), smaller than the resonance linewidth can be beneficial allowing one to achieve large \(\gamma_{\rm exc}\) and \(\bar{\Gamma}_{\rm rad}\) simultaneously, thus counteracting any emergent quenching induced by \(\bar{\Gamma}_{\rm nrad}\neq 0\). Recently, we have reported that metal core is essential to get extraordinary fluorescence en Figure 4: Averaged fluorescence enhancement as a function of \(q_{0}\) for three different TiO\({}_{2}\)@Nile Blue configurations highlighted in Figures 2(d),(e): (i) \(r_{c}=430\) nm and \(r_{d}/r_{c}=0.8644\), (ii) \(r_{c}=430\) nm and \(r_{d}/r_{c}=1.001\), (iii) \(r_{c}=299\) nm and \(r_{d}/r_{c}=0.7885\). \(\bar{\Gamma}_{\rm nrad}\equiv 0\) implies the absence of familiar \(\bar{F}\sim 1/q_{0}\) scaling within the MEF framework [49]. For comparison, the most representative cases of fluorescence enhancements and respective \(q_{0}\) described in previous studies are shown: (iv) bow-tie Au antenna [50], (v) Ag nanocubes on Au film [22], (vi) Ag nanocubes on Ag film [23], (vii) Ag@dielectric core-shell [51], (viii) Au@dielectric@Au matryoshkas [52], (ix) Au@dielectric mesoscale core-shell [47] (the result therein has been multiplied by the factor of 22.56, which corresponds to \(L_{\rm vc}^{2}\) for the dielectric shell with \(n_{s}=3.5\) considered in Ref. [47]). hancements exceeding 3000 (results presented therein are shown without taking into account the local field corrections) [47]. This has been confirmed here in that reported fluorescence enhancement is in general smaller than that reported in Ref. [47] (cf. Fig. 4). Nevertheless, the fluorescence enhancement in the presence of all-dielectric MCs is still remarkably high. ## 4 Conclusions In this study, we examined the potential for fluorescence enhancement using all-dielectric particles. 
The homogeneous MCs suggested here have a number of distinctive advantages: (i) they are generally much easier to fabricate, (ii) do not require costly noble metals, and (iii) allow one to embed emitters in their entire volume (or realize other options [42]) potentially leading to brighter fluorescence sources. By using a fast calculation approach, modeling showed that _size matters_ and, for a given sphere of specific refractive index, much larger fluorescence enhancement can be achieved with meso-sized particles than with particles of smaller size. All of that can be beneficial for fluorescence applications to imaging, sensing, strong coupling, and quantum information processing. ## 5 Supporting Information Available A general derivation of the Lorentz local-field correction directly from the Maxwell's equations. Figure 5: Averaged fluorescence enhancement \(\bar{F}\) for TiO\({}_{2}\)@Er\({}^{3+}\) composites as a function of Er\({}^{3+}\) concentration and intrinsic quantum yield \(q_{0}\). TiO\({}_{2}\) sphere of radius \(r_{c}=489\) nm embedded in water (\(n_{h}=1.33\)) with Er\({}^{3+}\) emitters located at fixed \(r_{d}=438\) nm distance from sphere center.
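To make the lossless enhancement formula concrete, the following minimal Python sketch (not from the Stratify code used in the paper) evaluates Eq. (2) together with the Lorentz local-field correction of Eq. (3). The excitation-rate enhancement used below is an arbitrary placeholder, and the three values of \(\bar{\Gamma}_{\text{rad}}\) simply illustrate the suppressed, intact and enhanced radiative-rate regimes discussed in connection with Fig. 4.

```python
import numpy as np

def lorentz_correction(n_c):
    """Virtual-cavity (Lorentz) local-field factor of Eq. (3): L_vc = (n_c^2 + 2) / 3."""
    return (n_c ** 2 + 2.0) / 3.0

def averaged_enhancement_lossless(exc_enhancement, gamma_rad_bar, q0):
    """Averaged fluorescence enhancement of Eq. (2) for a lossless particle."""
    return exc_enhancement * gamma_rad_bar / (1.0 - q0 + q0 * gamma_rad_bar)

# For TiO2 with n_c = 2.7 the correction contributes L_vc^2 ~ 9.6 to the excitation rate.
L_vc = lorentz_correction(2.7)
print(f"L_vc = {L_vc:.3f}, L_vc^2 = {L_vc ** 2:.2f}")

# Illustrative placeholder excitation enhancement; only the trend with q0 matters here.
exc = 100.0
for gamma_rad_bar in (0.2, 1.0, 5.0):
    F = [averaged_enhancement_lossless(exc, gamma_rad_bar, q0) for q0 in (0.1, 0.5, 1.0)]
    print(gamma_rad_bar, np.round(F, 1))
```

Consistent with the discussion above, \(\bar{F}\) decreases with \(q_{0}\) when \(\bar{\Gamma}_{\text{rad}}>1\), increases when \(\bar{\Gamma}_{\text{rad}}<1\), and is independent of \(q_{0}\) when \(\bar{\Gamma}_{\text{rad}}=1\).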
2308.15232
Classification-Aware Neural Topic Model Combined With Interpretable Analysis -- For Conflict Classification
A large number of conflict events are affecting the world all the time. In order to analyse such conflict events effectively, this paper presents a Classification-Aware Neural Topic Model (CANTM-IA) for Conflict Information Classification and Topic Discovery. The model provides a reliable interpretation of classification results and discovered topics by introducing interpretability analysis. At the same time, interpretation is introduced into the model architecture to improve the classification performance of the model and to allow interpretation to focus further on the details of the data. Finally, the model architecture is optimised to reduce the complexity of the model.
Tianyu Liang, Yida Mu, Soonho Kim, Darline Larissa Kengne Kuate, Julie Lang, Rob Vos, Xingyi Song
2023-08-29T11:40:24Z
http://arxiv.org/abs/2308.15232v1
Classification-Aware Neural Topic Model Combined With Interpretable Analysis - For Conflict Classification ###### Abstract A large number of conflict events are affecting the world all the time. In order to analyse such conflict events effectively, this paper presents a Classification-Aware Neural Topic Model (CANTM-IA) for Conflict Information Classification and Topic Discovery. The model provides a reliable interpretation of classification results and discovered topics by introducing interpretability analysis. At the same time, interpretation is introduced into the model architecture to improve the classification performance of the model and to allow interpretation to focus further on the details of the data. Finally, the model architecture is optimised to reduce the complexity of the model. ## 1 Introduction Hundreds of conflicts break out every day around the world, many of which have a major impact on the world's political and economic situation. A recent example is Ukraine Crisis, which has caused energy scarcity in Europe, a reduction in world food production and many other repercussions. For governments and institutions such as the IFPRI, the impact of conflict events can be greatly reduced if they are classified, analysed and responded to in the shortest possible time. Our goal is to develop a deep learning model suitable for the classification of conflict information. This model should be able to classify conflict categories and discover category-related topics. Most importantly, the model must have high reliability as the consequences of conflicting information are often very serious. We therefore want to combine text classification, topic modelling and interpretable analysis to solve the problem. Text classification assigns category labels to different texts for the purpose of distinguishing textual information. Recurrent neural networks (RNNs), convolutional neural networks (CNNs) and graph neural networks (GNNs) have all been applied to text classification tasks (Tai et al., 2015; Zhu et al., 2015; Cheng et al., 2016; Kalchbrenner et al., 2014; Kim, 2014; Johnson and Zhang, 2015; Peng et al., 2018). More recently, Sun et al. (2019) provides a fine-tuned BERT-based pre-training model (Devlin et al., 2019) for text classification tasks generic solution with new state-of-the-art results on eight extensively studied text classification datasets. The topic model is designed to automatically find a range of topics and topic words from a collection of documents. One of the most classic topic models is latent dirichlet allocation (LDA) (Blei et al., 2003), which is an unsupervised, non-hierarchical model. Many subsequent research has been based on LDA, such as the Hierarchical Latent Dirichlet Allocation (HLDA) proposed by Griffiths et al. (2003). In 2016, Miao et al. (2016) proposed a generative neural variational document model (NVDM), which models the likelihood of documents using a Variational Auto-Encoder (VAE) (Kingma and Welling, 2013). In order to purposefully uncover topic words related to the target (e.g. sentiment), many researchers have also proposed alternative approaches. For example, Ding et al. (2018) added topic consistency to the training as part of the loss as well, thus making the latent variables dependent on the topic target as well. Neural network-based deep learning can be described as a black box, and humans are not yet able to fully explain or peer into the entire deep learning process. 
So the question arises whether humans can trust the decision-making mechanisms of such data-driven AI systems. The lack of interpretability leads to a reduction in the reliability of deep learning, hence the importance of interpretable analysis. In an earlier study, Koh and Liang (2017) hoped to find parts of the training data/training points that could be used as a basis for interpretation by introducing the influence function. Some researchers, on the other hand, have tried to find explanations for the prediction results from the test data itself. Such explanations can be found by perturbing the data (Li et al., 2016), extracting attention weights (Wiegreffe and Pinter, 2019) or calculating the saliency scores of the input sequences (Jain et al., 2020), etc. Lei et al. (2016); Jain et al. (2020) used a combination of generators and encoders to extract rationales. ## 2 Preliminary Works Text classification and topic modelling have been important areas of research in natural language processing. These two areas are extremely interrelated, but few studies have effectively integrated them into a unified system. One successful example is the CANTM model proposed by Song et al. (2021) for topic modelling of online text messages during the Covid-19 epidemic, which is able to effectively identify disinformation related to Covid-19 and simultaneously classify the information, helping to address issues such as citizens' distrust of government and healthcare. The architecture of CANTM is shown in Figure 1. The model is divided into three parts: BERT embedding, the classifier-regularised VAE (M1), and the classifier-aware VAE (M2), where the VAE architectures are used as topic models. The model first uses a BERT pre-trained model to extract segment embeddings \(h\) from the input text sequence \(x\). In the encoder part of M1, \(h\) is transformed into the parameters \(\mu\) and \(\sigma\) of a Gaussian distribution via the linear layers \(linear_{\mu}\) and \(linear_{\sigma}\), respectively. The aim of the M1 encoder is to generate the latent variable \(z\), which can be considered as hidden topics. The M1 decoder part uses the latent variable \(z\) as input to reconstruct the bag of words of the input text. The M1 classifier also uses the latent variable \(z\) as input, and generates classification probabilities after passing through a fully connected layer containing a softmax activation function. Note that since the classifier uses hidden topics as the basis for classification, it never sees the real data directly, which can reduce overfitting of the model. The architecture of M2 is similar to M1, except that it takes the classification probabilities \(\hat{y}\) output from M1 as input as well, in order to generate hidden topics \(z_{s}\) guided by the classification information. Furthermore, the M2 classifier is not used to output the final classification, but only to compute the joint loss during training. The joint loss function of CANTM is a combination of the loss functions of its subcomponents and is calculated as \[\begin{split}\mathcal{L}=\lambda\mathcal{L}_{cls}-ELBO_{x_{bow}} -ELBO_{x_{bow},\hat{y}}\\ -\mathbb{E}_{\hat{y}}[\log p(x_{bow}|\hat{y})]\end{split} \tag{1}\] CANTM has good classification and topic discovery capabilities, but it is not fully suitable for conflict information. Firstly, it does not introduce interpretability analysis to demonstrate the reliability of the model.
Secondly, the topics discovered by CANTM are to some extent disturbed by a large number of neutral words present in the input text, thus making the relevance of the discovered topic words to the category information reduced. Moreover, the CANTM architecture has redundant parts, which affects its computational efficiency. ## 3 Methodology Our model is based on an improvement of CANTM, which we call Classification-Aware Neural Topic Model Combined With Interpretable Analysis (CANTM-IA). CANTM is used as the base model because it combines text classification and topic modelling, which aligns with our goals. Secondly, the stacked VAE architecture of CANTM effectively allows us to discover the hidden topics of the target categories. In addition, topics can also be seen as an interpretation of the classification model, which facilitates our interpretability analysis and improvement of the model in conjunction with rationale. We introduced interpretability analysis specifically by calculating the attention weights of the last layer in the BERT pre-trained model corresponding to the CLS labels and averaging them into the saliency score of the corresponding word piece. The magnitude of the saliency score is used as a visual representation of the importance of different parts of the original sample, and the parts with high saliency scores are used as the rationales of the sample. BERT parameters are frozen during training and only the last transformer encoding layer weights are unlocked for fine-tuning. Afterwards, we use the saliency scores of the rationales instead of the bag of words of the entire input sequence as the reconstruction target in the VAE architecture. This has several advantages. First, using rationales (the part of the input sequence with high contribution) as the reconstruction target allows the topic model to focus more on the important information of the input category-irrelevant words by the topic words and indirectly improve the classification performance of the model. Second, since the decoder uses rationales to guide the discovery of hidden topics and the classifier uses hidden topics for classification, it can be argued that these rationales explain both the hidden topics and the classification results. In addition, there is a redundant structure in the M2 decoder part of the CANTM model. As shown in Figure 1, \(m\), as the variable that combines the input \(h\) with the classification result \(\hat{y}\), already introduces classification information for the rest of M2. That is, the process of generating the variable \(z_{s}\) has been guided by the classification information, which generates the class-aware topics. Therefore, there is no need to reintroduce the classification result \(\hat{y}\) in the decoder part of M2, and the purpose can be achieved by directly reconstructing the target using \(z_{s}\) as the hidden topic variable. Combining the above optimisation methods, the modified CANTM-IA model is shown in Figure 2. ## 4 Experiments ### Dataset We use The Armed Conflict Location & Event Data Project (ACLED), a disaggregated data collection, analysis, and crisis mapping project, as our source dataset (Raleigh et al., 2010). The ACLED dataset collects six types of events. We use data spanning a full 3 years between 25 June 2019 and 24 June 2022 as experimental data. Of these, the volume of data for the conflict category Protests is 415,588, which far exceeds the volume of data for the other categories. 
In order to ensure a balanced dataset, a quarter of the data, i.e. 103,897 items, are randomly selected as the data of category Protests for the experiment. In addition, 50,000 texts from WMT News Crawl Dataset 1 are used as the out-of-domain data. The details of the experimental dataset are shown in table 1, with 90.43% of the ACLED data and 9.57% of the regular news data. The training set, validation set and test set are sampled from the original dataset in a 7:1:2 ratio Footnote 1: [http://www.statmt.org/wmt13/training-monolingual-news-2012.tgz](http://www.statmt.org/wmt13/training-monolingual-news-2012.tgz) ### Experimental Setup We compare our CANTM-IA model with two strong baseline models: **BERT** and **CANTM**. For BERT model, a linear layer of dimension 300 is connected to BERT [CLS] Token output and uses a fully-connected layer with a softmax activation function as a classifier to output the classification results. For CANTM model, we using a bag of words of size 500 and a hidden topic variable of dimension 100. Three sets of experiments are conducted to compare the choice of parameters and the impact on **CANTM-IA**. The first set uses rationales with a ratio of 10% of the number of tokens in the input text as the reconstruction target, denoted as CANTM-IA (ratio 0.1). In the second set, this proportion is 50% and is denoted as CANTM-IA (ratio 0.5). In addition, a fine-tuning experiment is carried out to fine-tune the model parameters using the CANTM-IA architecture on the trained CANTM model for only 1 epoch. The rationales used for the fine-tuning experiment are scaled to 50% and the model is denoted as CANTM-IA (fine-tune). Other model parameters are kept consistent with Figure 1: The architecture of the CANTM. the CANTM baseline system. We use BERT-base-uncased in experiments, only the last transformer encoding layer is unlocked for fine-tuning, and remaining BERT parameters are frozen during training. ### Results The overall classification results are shown in Table 2. BERT is a strong baseline with a solid classification accuracy (0.9738). On this basis, CANTM and CANTM-IA still obtained better classification performance by using hidden topics as the basis for classification. The best performing CANTM-IA (ratio 0.5) model achieved an accuracy of 0.9780 and an F1 score of 0.9791, which demonstrates the effectiveness of using hidden topics as a basis for classification. Furthermore, the classification performance of the CANTM-IA (fine-tune) is improved over the CANTM model, even after only 1 fine-tuning. This suggests a positive contribution of the topic model guided by rationale to the effectiveness of text classification. The F1 scores for each sub-category in the dataset are given in Table 3. It should be noted that since the ACLED data is cleaned by a professional data agency, the content of the data is to a large extent highly normative and accurate. As a result, classification performance can be extremely good even for the baseline model. This makes it appear that the improved model cannot outperform the baseline model by much in terms of experimental results. However, in this case, due to the large amount of data in the dataset, even a subtle advantage is evident in the face of the number of accurate predictions. We show the top 10 topic words for each conflict category for the CANTM and CANTM-IA (ratio 0.5) models in table 4. As can be seen, the category-related topic words extracted by CANTM already provide a good overview for each conflict category. 
However, there are still many neutral words such as'report', 'city' and 'unknown' in the CANTM results. This is due to the fact that CANTM uses a bag of words from the complete input sequence for topic reconstruction, which makes a large number of neutral words that appear in the conflict text influential in the reconstruction process. The weights of the reconstruction matrix with respect to this token is strengthened during training, and the relevance of this word to the relevant category is increased. In contrast, CANTM-IA cleverly reduces the influence of such neutral words. Because CANTM-IA uses rationales as the reconstruction target, this allows the model to focus more on the conflict-related information itself and thus ignore irrelevant neutral words. This results in a greater concentration of topic words that are relevant to the classification results and more representative of the categories. It also demonstrates the effectiveness of \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline Type of conflict & Battles & Explosions/Remote violence & Protests & Riots & \begin{tabular}{c} Strategic \\ developments \\ \end{tabular} & \begin{tabular}{c} Violence again- \\ nst civilians \\ \end{tabular} & \begin{tabular}{c} Out of \\ domain \\ \end{tabular} & Total \\ \hline Train & 76202 & 61222 & 72727 & 34097 & 30294 & 56329 & 35000 & 365871 \\ Valid & 10887 & 8747 & 10390 & 4872 & 4328 & 8047 & 5000 & 52271 \\ Test & 21772 & 17493 & 20780 & 9744 & 8656 & 16094 & 10000 & 104539 \\ \hline Total & 108861 & 87462 & 103897 & 48713 & 43278 & 80470 & 50000 & 522681 \\ \hline \hline \end{tabular} \end{table} Table 1: Information of the experimental data set. \begin{table} \begin{tabular}{c c c} \hline \hline Model & Accuracy & F-I \\ \hline BERT (baseline) & 0.9738 & 0.9749 \\ CANTM (baseline) & 0.9751 & 0.9760 \\ CANTM-IA (fine-tune) & 0.9766 & 0.9775 \\ CANTM-IA (ratio 0.1) & 0.9774 & 0.9787 \\ CANTM-IA (ratio 0.5) & **0.9780** & **0.9791** \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison of the classification performance. Figure 2: The architecture of the CANTM-IA. category-related topic words extraction and ensures the possibility of subsequent analysis of the model. We show rationale examples in table 5 that are extracted by the different models from the same sample. It can be observed that while the rationales extracted by CANTM from can focus on conflict-related information ("soldiers", "destroyed", "church"), there is also some irrelevant information that is focused on ("as", "reported", "of", "zone"). CANTM-IA (ratio 0.5), on the other hand, focuses precisely and intently on the conflict information itself ("the", "soldiers", "destroyed", "a", "church"). Note that although the words "the" and "a" appear to be meaningless and category-independent words on their own. However, the model incorporates contextual information. Therefore, it can be argued that the CANTM-IA model also combines and pays some attention to coherent semantics, which makes the rationales extracted by CANTM-IA more coherent than previous rationales, and allows for better interpretation of the model's classification decisions and topic selection. The rationales comparison experiment shows that the rationales extracted by CANTM-IA focuses on the conflict information itself and can reasonably and effectively explain the model's conflict type classification results. 
This ensures the reliability of the model's classification decisions and allows CANTM-IA to provide humans with reliable results for further analysis of conflict information to a certain extent. ## 5 Conclusion We proposed a Classification-Aware Neural Topic Model (CANTM-IA) for Conflict Information Classification and Topic Discovery in this paper. The classification results and topic models of CANTM-IA can be reliably interpreted using rationales. Also, rationales are introduced into the topic model to improve model performance. Finally, the model architecture has been optimised. Compared to the baseline systems, CANTM-IA has improved predictive performance, reliability and efficiency. Our future work will be to adapt the model to other types of data and to refine the way in which interpretable analysis is introduced. \begin{table} \begin{tabular}{c l} \hline \hline Model & Note of event (with highlighted rationales) \\ \hline \multirow{4}{*}{BERT (baseline)} & Property destruction: Around 13 May 2022 \\ & (as reported), in Ta Tang Ku village of Pinlaung township (coded as Pinlaung) (Pa-O \\ Self-Administered Zone, Shan-South state), the PNO soldiers destroyed the Catholic \\ & \\ & \\ & \\ & \\ & \\ \hline \multirow{4}{*}{CANTM (baseline)} & Property destruction: Around 13 May 2022 \\ & (as reported), in Ta Tang Ku village of Pinlaung township (coded as Pinlaung) (Pa-O \\ Self-Administered Zone, Shan-South state), the PNO soldiers destroyed the Catholic \\ & \\ & \\ & \\ & \\ & \\ & \\ \hline \hline \end{tabular} \end{table} Table 4: Top 10 topic words for each conflict type. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline Model & Battles & \begin{tabular}{c} Explosions/ \\ Remote violence \\ \end{tabular} & Protests & Riots & \begin{tabular}{c} Strategic \\ developments \\ \end{tabular} & \begin{tabular}{c} Violence against- \\ net civilians \\ \end{tabular} & \begin{tabular}{c} Out of \\ domain \\ \end{tabular} \\ \hline BERT (baseline) & 0.9583 & 0.9817 & 0.9904 & 0.9754 & 0.9693 & 0.9501 & 0.9994 \\ CANTM (baseline) & 0.9628 & 0.9836 & 0.9879 & 0.9707 & 0.9736 & 0.9540 & **0.9998** \\ CANTM-IA (fine-tune) & 0.9646 & **0.9854** & 0.9885 & 0.9704 & 0.9769 & 0.9575 & 0.9996 \\ CANTM-IA (ratio 0.1) & 0.9633 & 0.9849 & 0.9910 & 0.9769 & **0.9777** & 0.9570 & 0.9997 \\ CANTM-IA (ratio 0.5) & **0.9655** & 0.9847 & **0.9911** & **0.9772** & 0.9774 & **0.9584** & 0.9995 \\ \hline \hline \end{tabular} \end{table} Table 3: F1 scores of the models for different categories of classification results. 
\begin{table} \begin{tabular}{c l c c c c c} \hline \hline Type of conflict & \multicolumn{3}{c}{Topic words in CANTM} & \multicolumn{3}{c}{Topic words in CANTM-IA (ratio 0.5)} \\ \hline Battles & \begin{tabular}{c} forces military fatalities killed positions \\ \end{tabular} & \begin{tabular}{c} clashed killed clashes fire attacked \\ \end{tabular} \\ & \begin{tabular}{c} clashed militants taliban coded Azerbaijan \\ \end{tabular} & \begin{tabular}{c} clash fired attack small militants \\ \end{tabular} \\ \hline Explosions/ & \begin{tabular}{c} shelled forces fatalities injuries unknown \\ positions artillery smom once airstrikes \\ \end{tabular} & \begin{tabular}{c} shelled casualties fired targeted fatalities \\ total airstrikes artillery killed carried \\ \end{tabular} \\ \hline Protests & \begin{tabular}{c} report protest people city government \\ members held workers gathered protested \\ \end{tabular} & \begin{tabular}{c} protest protested demonstrated held protesters \\ gathered staged demonstration workers demanding \\ \end{tabular} \\ \hline Riots & \begin{tabular}{c} report police roters demonstrators demonstrators demonstrators clashed demonstration \\ \end{tabular} & \begin{tabular}{c} rotters demonstrators clashed demonstration \\ set attacked beaten beat burning fire \\ \end{tabular} \\ \hline Strategic & \begin{tabular}{c} property destruction forces military arrested \\ township district seized movement security \\ \end{tabular} & \begin{tabular}{c} arrested set destroyed looted fire seized \\ military destruction burned forces \\ \end{tabular} \\ \hline Violence & \begin{tabular}{c} killed shot man men fatality armed \\ against civilians \\ \end{tabular} & \begin{tabular}{c} killed shot man attacked armed \\ people found beat abducted dead \\ \end{tabular} \\ \hline \hline \end{tabular} \end{table} Table 3: F1 scores of the models for different categories of classification results. Ethics and Broader Impact Statement ### Ethics Only publicly available dataset2 is used in this paper Raleigh et al. (2010). No ethical approval is required for this work. Footnote 2: [https://www.prio.org/misc/Download.aspx?file=82fccsv%2frd82fReplication+Data%2fReplication+data_Raleigh+et+al+47](https://www.prio.org/misc/Download.aspx?file=82fccsv%2frd82fReplication+Data%2fReplication+data_Raleigh+et+al+47)(5).zip ### Implications Our work has several potential practical implications: * Our model outperforms two strong baselines, BERT and CANTM, in terms of predictive performance. It can also serve as a competitive baseline for future research. * Our explainable neural topic model, CANTM-IA, can be utilized for other NLP downstream tasks, such as stance detection (Mu et al., 2023) and rumor verification (Derczynski et al., 2017), providing **interpretable predictions**. ## Acknowledgments This research is partially supported by an European Union Horizon 2020 Project (Agreement no.871042 under the scheme "INFRAIA-01-2018-2019 - Integrating Activities for Advanced Communities": "SoBigData++: European Integrated Infrastructure for Social Mining and Big Data Analytics" ([http://www.sobigdata.eu](http://www.sobigdata.eu)).
2303.10038
Semilinear Feynman-Kac Formulae for $B$-Continuous Viscosity Solutions
We prove the existence of a $B$-continuous viscosity solution for a class of infinite dimensional semilinear partial differential equations (PDEs) using probabilistic methods. Our approach also yields a stochastic representation formula for the solution in terms of a scalar-valued backward stochastic differential equation. Uniqueness is proved under additional assumptions using a comparison theorem for viscosity solutions. Our results constitute the first nonlinear Feynman-Kac formula using the notion of $B$-continuous viscosity solutions and thus introduce a framework allowing for generalizations to the case of fully nonlinear PDEs.
Lukas Wessels
2023-03-17T15:07:59Z
http://arxiv.org/abs/2303.10038v1
# Semilinear Feynman-Kac Formulae for \(B\)-Continuous Viscosity Solutions ###### Abstract We prove the existence of a \(B\)-continuous viscosity solution for a class of infinite dimensional semilinear partial differential equations (PDEs) using probabilistic methods. Our approach also yields a stochastic representation formula for the solution in terms of a scalar-valued backward stochastic differential equation. The uniqueness is proved under additional assumptions using a comparison theorem for viscosity solutions. Our results constitute the first nonlinear Feynman-Kac formula using the notion of \(B\)-continuous viscosity solutions and thus introduces a framework allowing for generalizations to the case of fully nonlinear PDEs. ## 1 Introduction The classical Feynman-Kac formula going back to Richard Feynman [13] and Mark Kac [22] gives a stochastic representation for the solution of a linear partial differential equation (PDE) in terms of a path integral over a diffusion process. This result and generalizations thereof have since found many applications in various fields such as quantitative finance and stochastic optimal control, see e.g. [32]. Recent years have shown a rising interest in control problems associated with stochastic partial differential equations (SPDEs), see e.g. [11]. Therefore, generalizations of the Feynman-Kac formula to infinite dimensions are of increasing importance. Another emerging field that relies on nonlinear generalizations of the Feynman-Kac formula is numerical analysis for high-dimensional PDEs. The stochastic representation for solutions of PDEs can be utilized to develop numerical schemes based on machine learning methods, see e.g. [3, 19, 21]. In this paper, we prove a generalization of the classical Feynman-Kac formula to semilinear PDEs in infinite dimensional spaces using the theory of \(B\)-continuous viscosity solutions and backward stochastic differential equations (BSDEs). For finite initial and terminal times \(0\leq t<T<\infty\), respectively, and a separable Hilbert space \(H\), we consider the SPDE \[\begin{cases}\mathrm{d}X_{s}^{t,x}=[AX_{s}^{t,x}+b(s,X_{s}^{t,x})]\mathrm{d}s+ \sigma(s,X_{s}^{t,x})\mathrm{d}W_{s},\quad s\in[t,T]\\ X_{t}^{t,x}=x\in H,\end{cases} \tag{1}\] where \(A:\mathcal{D}(A)\subset H\to H\) is an unbounded linear operator, and \(b:[0,T]\times H\to H\) and \(\sigma:[0,T]\times H\to L_{2}(\Xi,H)\) are the drift and noise coefficient, respectively. Moreover, \((W_{s})_{s\in[t,T]}\) is a cylindrical Wiener process on some separable Hilbert space \(\Xi\) and defined on some probability space \((\Omega,\mathcal{F},(\mathcal{F}_{s})_{s\in[t,T]},\mathbb{P})\) with \((\mathcal{F}_{s})\) being its natural filtration augmented by all \(\mathbb{P}\)-null sets. Furthermore, we consider the infinite dimensional PDE \[\begin{cases}v_{t}(t,x)+\langle Ax+b(t,x),Dv(t,x)\rangle_{H}\\ +\frac{1}{2}\text{tr}(\sigma^{*}(t,x)D^{2}v(t,x)\sigma(t,x))-f(t,x,v(t,x), \sigma^{*}Dv(t,x))=0,\quad(t,x)\in[0,T]\times H\\ v(T,x)=g(x),\quad x\in H,\end{cases} \tag{2}\] where \(f:[0,T]\times H\times\mathbb{R}\times\Xi\to\mathbb{R}\) and \(g:H\to\mathbb{R}\) are given. In order to introduce the main ideas, let us assume that equation (2) admits a smooth solution \(v\) with \(Dv(t,x)\in\mathcal{D}(A^{*})\), where \(A^{*}\) denotes the adjoint of \(A\). 
Then, applying Ito's formula and plugging in equation (2) immediately yields that \[\begin{cases}Y_{s}^{t,x}=v(s,X_{s}^{t,x})\\ Z_{s}^{t,x}=\sigma^{*}Dv(s,X_{s}^{t,x})\end{cases} \tag{3}\] solves the BSDE \[\begin{cases}\mathrm{d}Y_{s}^{t,x}=f(s,X_{s}^{t,x},Y_{s}^{t,x},Z_{s}^{t,x}) \mathrm{d}s+\langle Z_{s}^{t,x},\mathrm{d}W_{s}\rangle_{\Xi},\quad s\in[t,T]\\ Y_{T}^{t,x}=g(X_{T}^{t,x}).\end{cases} \tag{4}\] In particular, for \(s=t\), equations (3) and (4) yield the following stochastic representation of \(v\): \[\begin{split} v(t,x)&=Y_{t}^{t,x}\\ &=g(X_{T}^{t,x})-\int_{t}^{T}f(s,X_{s}^{t,x},v(s,X_{s}^{t,x}), \sigma^{*}Dv(s,X_{s}^{t,x}))\mathrm{d}s\\ &\quad-\int_{t}^{T}\langle\sigma^{*}Dv(s,X_{s}^{t,x}),\mathrm{d}W_{s} \rangle_{\Xi}.\end{split} \tag{5}\] The preceding discussion is only formal since the PDE (2) in general, does not admit a smooth solution and therefore, the solution of the BSDE (4) can not be given in terms of the derivative of \(v\) as in (3). However, the stochastic representation (5) remains valid even if the solution of equation (2) is not differentiable. The objective of this paper is to prove that the stochastic representation \[u(t,x):=Y_{t}^{t,x} \tag{6}\] yields a \(B\)-continuous viscosity solution for the PDE (2), see Theorem 4. Furthermore, we prove under additional assumptions that \(u\) as given in (6) is the unique solution of the PDE (2). **Remark 1.1**.: The representation (6) can be viewed as a nonlinear generalization of the classical Feynman-Kac formula: If \(f(s,x,y,z)=k(s,x)y+l(s,x)\) for some functions \(k,l:[t,T]\times H\to\mathbb{R}\), then the BSDE (4) admits an explicit solution formula for \(Y^{t,x}\) in terms of \(X^{t,x}\) and \(Z^{t,x}\). The \(Z^{t,x}\)-dependency drops out when taking the expectation in (5), yielding the classical Feynman-Kac formula, see e.g. [23, Chapter 5, Theorem 7.6]. In the finite dimensional case, nonlinear Feynman-Kac formulae using BSDEs go back to Pardoux and Peng [28]. Following their seminal work, there were many generalizations of this relationship between PDEs and BSDEs in various directions such as relaxing the assumptions on the coefficients [1, 24], and deriving a similar relationship for integral-partial differential equations and BSDEs involving a forward jump-diffusion process [2], for nonlocal PDEs and mean-field BSDEs [5], for quasilinear PDEs and forward-backward SDEs [12, 30, 31], for reflected BSDEs and related obstacle problems for PDEs [10] and, more recently, even for fully nonlinear PDEs and second order BSDEs [33, 7, 34]. In the infinite dimensional case, the first nonlinear Feynman-Kac formula using BSDEs was obtained by Fuhrman and Tessitore in [15], again followed by generalizations relaxing the assumptions on the coefficients [4, 17, 25, 27], and deriving similar relationships for elliptic PDEs and infinite horizon BSDEs [16, 18, 20], for PDEs on a space of continuous functions and stochastic delay differential equations [14, 26], and for quasilinear PDEs and foward-backward SDEs [6]. All of the results in the infinite dimensional case study the PDE (2) in the framework of mild solutions. In this work, our objective is to prove such a relationship between BSDEs and PDEs in infinite dimensional spaces usind the framework of \(B\)-continuous viscosity solutions. As a byproduct, our probabilistic approach yields a new method to prove existence of \(B\)-continuous viscosity solutions for infinite dimensional PDEs involving unbounded terms. 
There are various methods in the existing literature for proving the existence of solutions, see [11, Chapter 3] and the references therein. These existing approaches construct solutions even for fully nonlinear PDEs exploiting the connection with infinite dimensional stochastic control problems, using finite dimensional approximations, or by extending Perron's method to the infinite dimensional case. However, in our setting, we do not impose any structural assumptions on the nonlinearity \(f\) and therefore, in general one cannot introduce an associated stochastic control problem to construct a solution to the PDE (2). Furthermore, the method using finite dimensional approximations imposes the additional assumption that the unbounded operator \(A\) satisfies the strong \(B\)-condition for a compact operator \(B\) (see Assumption (A2) for a definition of the strong \(B\)-condition). Finally, Perron's method requires an additional coercivity assumption on \(A\). Finally, let us mention that our work introduces a framework for the extension of the classical Feynman-Kac formula to the case of infinite dimensional fully nonlinear PDEs using second order BSDEs in the spirit of the corresponding results in finite dimensions, see [33, 7, 34]. This problem will be investigated in future work. The remainder of the paper is organized as follows: In Section 2, we introduce the notation and the assumptions we are working with in the following sections. In Section 3, we prove some results for scalar-valued BSDEs driven by an infinite dimensional Wiener process. Section 4 contains our main result, the existence of a \(B\)-continuous viscosity solution of equation (2) and its stochastic representation (5). Furthermore, the uniqueness is also discussed in Section 4. ## 2 Assumptions We impose the following assumptions on the unbounded operator \(A\). **Assumption 2.1**.: 1. _Let_ \(A:\mathcal{D}(A)\subset H\to H\) _be a linear, densely defined, maximal dissipative operator._ 2. _Let_ \(B\in L(H)\) _be a strictly positive, self-adjoint operator that satisfies the strong_ \(B\)_-condition for_ \(A\)_, i.e.,_ \(A^{*}B\in L(H)\) _and_ \[-A^{*}B+c_{0}B\geq I\] (7) _for some_ \(c_{0}\geq 0\)_._ Under Assumption (A1), the operator \(A\) is the generator of a \(C_{0}\)-semigroup which we denote in the following by \((S(s))_{s\in[t,T]}\). Furthermore, using the operator \(B\), for \(\alpha>0\), we define the space \(H_{-\alpha}\) as the completion of the space \(H\) with respect to the norm \[\|x\|_{H_{-1}}\!\!:=\langle Bx,x\rangle_{H}. \tag{8}\] We impose the following assumptions on \(b\) and \(\sigma\). **Assumption 2.2**.: 1. _Let_ \(b:[0,T]\times H\to H\) _be_ \(\mathcal{B}([0,T])\otimes\mathcal{B}(H)/\mathcal{B}(H)\)_-measurable and let there be a constant_ \(C>0\) _such that_ \[\|b(s,x)-b(s,x^{\prime})\|_{H} \leq C\|x-x^{\prime}\|_{H}\] (9) \[\|b(s,x)\|_{H} \leq C(1+\|x\|_{H})\] (10) _for all_ \(s\in[0,T]\)_,_ \(x,x^{\prime}\in H\) 2. _Let_ \(\sigma:[0,T]\times H\to L_{2}(\Xi;H)\) _be_ \(\mathcal{B}([0,T])\otimes\mathcal{B}(H)/\mathcal{B}(L_{2}(\Xi,H))\)_-measurable and let there be a constant_ \(C>0\) _such that_ \[\|\sigma(s,x)-\sigma(s,x^{\prime})\|_{L_{2}(\Xi,H)} \leq C\|x-x^{\prime}\|_{H_{-1}}\] (11) \[\|\sigma(s,x)\|_{L_{2}(\Xi,H)} \leq C(1+\|x\|_{H}),\] (12) _for all_ \(s\in[0,T],\,x,x^{\prime}\in H\)_._ Finally, we impose the following assumptions on the coefficients of the BSDE. **Assumption 2.3**.: 1. 
_Let_ \(f:[0,T]\times H\times\mathbb{R}\times\Xi\to\mathbb{R}\) _be uniformly continuous on bounded subsets of_ \((0,T)\times H\times\mathbb{R}\times\Xi\) _and satisfy the following conditions:_ * _There exists a constant_ \(C>0\) _such that_ \[|f(s,x,y,z)-f(s,x^{\prime},y^{\prime},z^{\prime})|\leq C(\|x-x^{\prime}\|_{H} +|y-y^{\prime}|+\|z-z^{\prime}\|_{\Xi})\] (13) _for all_ \(s\in[0,T],\,x,x^{\prime}\in H\)_,_ \(y,y^{\prime}\in\mathbb{R}\)_,_ \(z,z^{\prime}\in\Xi\)__ * _It holds_ \[\int_{0}^{T}|f(s,0,0,0)|^{2}\mathrm{d}s<\infty.\] (14) 2. _For_ \(g:H\to\mathbb{R}\)_, let there be a constant_ \(C>0\) _such that_ \[|g(x)-g(x^{\prime})|\leq C\|x-x^{\prime}\|_{H}\] (15) _for all_ \(x,x^{\prime}\in H\)_._ **Remark 2.4**.: Note that the SPDE (1) as well as the BSDE (4) are well-posed under these assumptions, see e.g. [9, Theorem 7.2] and [11, Proposition 6.20], respectively. ## 3 BSDEs In this section, we prove some basic results for scalar-valued BSDEs with infinite dimensional driving noise. For the corresponding theory when the driving noise is finite dimensional, see e.g. [10, 29, 35]. ### Explicit Solution of Linear BSDE First, we need the following explicit solution formula for linear BSDEs. Consider the equation \[\begin{cases}\mathrm{d}Y_{s}=-[a_{s}Y_{s}+b_{s}+\langle c_{s},Z_{s}\rangle_{ \Xi}]\mathrm{d}s+\langle Z_{s},\mathrm{d}W_{s}\rangle_{\Xi}.\quad s\in[t,T]\\ Y_{T}=\eta\in L^{2}(\Omega),\end{cases} \tag{16}\] where \(a,b:[t,T]\times\Omega\to\mathbb{R}\) and \(c:[t,T]\times\Omega\to\Xi\) are progressively measurable processes and \(\eta\) is \(\mathcal{F}_{T}\)-measurable. Let \[\Gamma_{s}=\exp\left[\int_{t}^{s}a_{r}-\frac{1}{2}\|c_{r}\|_{\Xi}^{2}\mathrm{ d}r+\int_{t}^{s}\langle c_{r},\mathrm{d}W_{r}\rangle_{\Xi}\right]. \tag{17}\] Then \[\mathrm{d}\Gamma_{s}=\Gamma_{s}a_{s}\mathrm{d}s+\Gamma_{s}\langle c _{s},\mathrm{d}W_{s}\rangle_{\Xi} \tag{18}\] \[\mathrm{d}\Gamma_{s}^{-1}=\Gamma_{s}^{-1}(-a_{s}+\|c_{s}\|_{\Xi}^ {2})\mathrm{d}s-\Gamma_{s}^{-1}\langle c_{s},\mathrm{d}W_{s}\rangle_{\Xi}. \tag{19}\] By [11, Proposition 6.18], there exists an adapted process \(V\in L^{2}([t,T]\times\Omega;\Xi)\) such that \[\Gamma_{T}\eta+\int_{t}^{T}\Gamma_{s}b_{s}\mathrm{d}s=\mathbb{E}\left[\Gamma_{ T}\eta+\int_{t}^{T}\Gamma_{s}b_{s}\mathrm{d}s\right]+\int_{t}^{T}\langle V_{s}, \mathrm{d}W_{s}\rangle_{\Xi}. \tag{20}\] **Proposition 3.1**.: _The solution of the linear BSDE (16) is given by_ \[\begin{cases}Y_{s}=\Gamma_{s}^{-1}\mathbb{E}\left[\Gamma_{T}\eta+\int_{s}^{T} \Gamma_{r}b_{r}\mathrm{d}r\Big{|}\mathcal{F}_{s}\right]\\ Z_{s}=\Gamma_{s}^{-1}V_{s}-c_{s}Y_{s}.\end{cases} \tag{21}\] Proof.: Using (20), we derive from (21) that \[Y_{s} =\Gamma_{s}^{-1}\left(\mathbb{E}\left[\Gamma_{T}\eta+\int_{t}^{T} \Gamma_{r}b_{r}\mathrm{d}r\right]-\int_{t}^{s}\Gamma_{r}b_{r}\mathrm{d}r+\int_ {t}^{s}\langle V_{r},\mathrm{d}W_{r}\rangle_{\Xi}\right) \tag{22}\] \[=\Gamma_{s}^{-1}\left(\mathbb{E}\left[\Gamma_{T}\eta+\int_{t}^{T} \Gamma_{r}b_{r}\mathrm{d}r\right]-\int_{t}^{s}\Gamma_{r}b_{r}\mathrm{d}r+\int _{t}^{s}\langle\Gamma_{r}Z_{r}+\Gamma_{r}c_{r}Y_{r},\mathrm{d}W_{r}\rangle_{ \Xi}\right). 
\tag{23}\] Applying Ito's formula yields \[\mathrm{d}Y_{s} =\left[\Gamma_{s}^{-1}\left(-a_{s}\mathrm{d}s+\|c_{s}\|_{\Xi}^{2 }\mathrm{d}s\right)-\Gamma_{s}^{-1}\langle c_{s},\mathrm{d}W_{s}\rangle_{\Xi} \right]\Gamma_{s}Y_{s} \tag{24}\] \[\qquad+\Gamma_{s}^{-1}[\langle\Gamma_{s}Y_{s}c_{s}+\Gamma_{s}Z_{ s},\mathrm{d}W_{s}\rangle_{\Xi}-\Gamma_{s}b_{s}\mathrm{d}s]-\Gamma_{s}^{-1} \langle c_{s},c_{s}\Gamma_{s}Y_{s}+\Gamma_{s}Z_{s}\rangle_{\Xi}\mathrm{d}s\] (25) \[=-[a_{s}Y_{s}+b_{s}+\langle c_{s},Z_{s}\rangle_{\Xi}]\mathrm{d}s+ \langle Z_{s},\mathrm{d}W_{s}\rangle_{\Xi}. \tag{26}\] Since \(Y_{T}=\eta\), this proves that \((Y,Z)\) as given in (21) solves the BSDE (16). ### Comparison for BSDEs In the remainder of this section, let \(F:\Omega\times[t,T]\times\mathbb{R}\times\Xi\to\mathbb{R}\) be \(\mathcal{P}\times\mathcal{B}([t,T]\times\mathbb{R}\times\Xi)/\mathcal{B}( \mathbb{R})\)-measurable, where \(\mathcal{P}\) denotes the predictable \(\sigma\)-field on \(\Omega\times[s,T]\) and \(\mathcal{B}(\Lambda)\) denotes the Borel \(\sigma\)-field of any topological space \(\Lambda\). Furthermore, assume that there exists a constant \(C>0\) such that \[|F(s,y,z)-F(s,y^{\prime},z^{\prime})|\leq C(|y-y^{\prime}|+\|z-z^{\prime}\|_{ \Xi}) \tag{27}\] \(\mathbb{P}\)-almost surely and for every \(s\in[t,T]\), \(y,y^{\prime}\in\mathbb{R}\) and \(z,z^{\prime}\in\Xi\). Finally, assume that \[\mathbb{E}\left[\int_{t}^{T}\lvert F(s,0,0)\rvert^{2}\mathrm{d}s\right]<\infty. \tag{28}\] We consider the BSDE \[\begin{cases}\mathrm{d}Y_{s}=F(s,Y_{s},Z_{s})\mathrm{d}s+\langle Z_{s}, \mathrm{d}W_{s}\rangle_{\Xi},\quad s\in[t,T]\\ Y_{T}=\eta\in L^{2}(\Omega),\end{cases} \tag{29}\] where \(\eta\) is \(\mathcal{F}_{T}\)-measurable. **Definition 3.2**.: A pair \((Y,Z)\) of adapted processes \(Y\in L^{2}(\Omega;C([t,T]))\) and \(Z\in L^{2}([t,T]\times\Omega;\Xi)\) is a supersolution of the BSDE (29) if for every \(t\leq s<r\leq T\) it holds \[\begin{cases}Y_{r}\leq Y_{s}+\int_{s}^{r}F(s^{\prime},Y_{s^{\prime}},Z_{s^{ \prime}})\mathrm{d}s^{\prime}+\int_{s}^{r}\langle Z_{s^{\prime}},\mathrm{d}W_{ s^{\prime}}\rangle_{\Xi}\\ Y_{T}=\eta.\end{cases} \tag{30}\] A pair \((Y,Z)\) of adapted processes \(Y\in L^{2}(\Omega;C([t,T]))\) and \(Z\in L^{2}([t,T]\times\Omega;\Xi)\) is a subsolution of the BSDE (29) if for every \(t\leq s<r\leq T\) it holds \[\begin{cases}Y_{r}\geq Y_{s}+\int_{s}^{r}F(s^{\prime},Y_{s^{\prime}},Z_{s^{ \prime}})\mathrm{d}s^{\prime}+\int_{s}^{r}\langle Z_{s^{\prime}},\mathrm{d}W_{ s^{\prime}}\rangle_{\Xi}\\ Y_{T}=\eta.\end{cases} \tag{31}\] Note that this definition is equivalent with the following definition. **Definition 3.3**.: A triple \((Y,Z,I)\) of adapted processes \(Y\in L^{2}(\Omega;C([t,T]))\), \(Z\in L^{2}([t,T]\times\Omega;\Xi)\) and \(I\in L^{2}(\Omega;C([t,T]))\) is a supersolution of the BSDE (29) if \(I(t)=0\)\(\mathbb{P}\)-almost surely, \(I\) is increasing, and for every \(s\in[t,T]\) it holds \[Y_{s}=\eta-\int_{s}^{T}F(r,Y_{r},Z_{r})\mathrm{d}r+I_{T}-I_{s}-\int_{s}^{T} \langle Z_{r},\mathrm{d}W_{r}\rangle_{\Xi}. \tag{32}\] A triple \((Y,Z,D)\) of adapted processes \(Y\in L^{2}(\Omega;C([t,T]))\), \(Z\in L^{2}([t,T]\times\Omega;\Xi)\) and \(D\in L^{2}(\Omega;C([t,T]))\) is a subsolution of the BSDE (29) if \(D(t)=0\)\(\mathbb{P}\)-almost surely, \(D\) is decreasing, and for every \(s\in[t,T]\) it holds \[Y_{s}=\eta-\int_{s}^{T}F(r,Y_{r},Z_{r})\mathrm{d}r+D_{T}-D_{s}-\int_{s}^{T} \langle Z_{r},\mathrm{d}W_{r}\rangle_{\Xi}. 
\tag{33}\] **Remark 3.4**.: The fact that (30) and (31) follow from (32) and (33), respectively, is immediate. For the opposite direction, consider \[I_{s}:=Y_{t}+\int_{t}^{s}F(r,Y_{r},Z_{r})\mathrm{d}r+\int_{t}^{s}\langle Z_{r },\mathrm{d}W_{r}\rangle_{\Xi}-Y_{s}. \tag{34}\] In the case of linear BSDEs, we have the following comparison result. **Proposition 3.5**.: _Let \((Y,Z)\) be a supersolution of the linear BSDE (16). Then it holds_ \[Y_{s}\geq\Gamma_{s}^{-1}\mathbb{E}\left[\Gamma_{T}\eta+\int_{s}^{T}\Gamma_{r} b_{r}\mathrm{d}r\bigg{|}\mathcal{F}_{s}\right], \tag{35}\] _i.e., a supersolution to the linear BSDE (16) dominates the solution to the linear BSDE._ Proof.: Let \((Y,Z,I)\) be a supersolution of the linear BSDE (16). Then an application of Ito's formula yields that \[\Gamma_{s}Y_{s}+\int_{t}^{s}\Gamma_{r}b_{r}\mathrm{d}r \tag{36}\] is a local supermartingale. From this fact, one easily deduces (35). Next, let us prove a comparison result for sub- and supersolutions. **Proposition 3.6**.: _Let \((F^{i},\eta^{i})\), \(i=1,2\), satisfy the same assumptions as \((F,\eta)\) above, and let \((Y^{1},Z^{1})\) and \((Y^{2},Z^{2})\) be a sub- and supersolution of the BSDE (29) associated with \((F^{1},\eta^{1})\) and \((F^{2},\eta^{2})\), respectively. Assume_ * \(\eta^{2}\geq\eta^{1}\)__ * \(F^{2}(s,Y_{s}^{2},Z_{s}^{2})\leq F^{1}(s,Y_{s}^{2},Z_{s}^{2})\ \mathrm{d}s\otimes\mathbb{P}\)_-almost surely._ _Then for every \(s\in[t,T]\), it holds \(Y_{s}^{2}\geq Y_{s}^{1}\)\(\mathbb{P}\)-almost surely. Moreover, if there exists an \(s\in[t,T]\) such that_ \[\eta^{2}-\eta^{1}+\int_{s}^{T}F^{2}(r,Y_{r}^{2},Z_{r}^{2})-F^{1}(r,Y_{r}^{2},Z _{r}^{2})\mathrm{d}r>0 \tag{37}\] \(\mathbb{P}\)_-almost surely, then \(Y_{s}^{2}>Y_{s}^{1}\)\(\mathbb{P}\)-almost surely._ Proof.: Note that \[Y_{r}^{2}-Y_{r}^{1}\leq Y_{s}^{2}-Y_{s}^{1}-\int_{s}^{r}a_{s^{\prime}}(Y_{s^{ \prime}}^{2}-Y_{s^{\prime}}^{1})+b_{s^{\prime}}+\langle c_{s^{\prime}},Z_{s^{ \prime}}^{2}-Z_{s^{\prime}}^{1}\rangle_{\Xi}\mathrm{d}s^{\prime}+\int_{s}^{r} \langle Z_{s^{\prime}}^{2}-Z_{s^{\prime}}^{1},\mathrm{d}W_{s^{\prime}}\rangle_{ \Xi}, \tag{38}\] where \[a_{s^{\prime}}=\frac{1}{Y_{s^{\prime}}^{2}-Y_{s^{\prime}}^{1}}(F^{1}(s^{\prime },Y_{s^{\prime}}^{1},Z_{s}^{1})-F^{1}(s^{\prime},Y_{s^{\prime}}^{2},Z_{s^{ \prime}}^{1}))\mathbf{1}_{\{Y_{s^{\prime}}^{2}\neq Y_{s^{\prime}}^{1}\}} \tag{39}\] \[b_{s^{\prime}}=F^{1}(s^{\prime},Y^{2}_{g^{\prime}},Z^{2}_{s^{\prime}})-F^{2}(s^{ \prime},Y^{2}_{g^{\prime}},Z^{2}_{s^{\prime}}) \tag{40}\] and \[c_{s^{\prime}}=\frac{Z^{2}_{g^{\prime}}-Z^{1}_{s^{\prime}}}{\|Z^{2}_{s^{\prime} }-Z^{1}_{s^{\prime}}\|^{2}_{\Xi}}(F^{1}(s^{\prime},Y^{2}_{g^{\prime}},Z^{1}_{s ^{\prime}})-F^{1}(s^{\prime},Y^{2}_{g^{\prime}},Z^{2}_{s^{\prime}}))\mathbf{1}_ {\{Z^{2}_{s^{\prime}}\neq Z^{1}_{s^{\prime}}\}}. \tag{41}\] Therefore, we obtain from Proposition 3.5 and assumptions \((i)\) and \((ii)\) \[Y^{2}_{s}-Y^{1}_{s}\geq\Gamma^{-1}_{s}\mathbb{E}\left[\Gamma_{T}(\eta^{2}- \eta^{1})+\int_{s}^{T}\Gamma_{r}b_{r}\mathrm{d}r\bigg{|}\mathcal{F}_{s}\right] \geq 0, \tag{42}\] where \(\Gamma\) is given by equation (17). The strict inequality follows from (37). ### A Piori Estimates for BSDEs **Lemma 3.7**.: _Let \((Y,Z)\) be the solution of the BSDE (29). Then it holds_ \[\mathbb{E}\left[\sup_{s\in[t,T]}|Y_{s}|^{2}+\int_{t}^{T}\|Z_{s}\|^{2}_{\Xi} \mathrm{d}s\right]\leq C\mathbb{E}\left[|\eta|^{2}+\left(\int_{t}^{T}|F(s,0,0 )|\mathrm{d}s\right)^{2}\right]. \tag{43}\] The proof of this result can be found in [15, Proposition 4.3]. 
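For orientation, estimates of this type are usually obtained by applying Ito's formula to \(|Y_{s}|^{2}\) (we only sketch the standard line of argument here; for the complete proof see [15, Proposition 4.3]). Together with \(Y_{T}=\eta\), Ito's formula gives \[|Y_{s}|^{2}+\int_{s}^{T}\|Z_{r}\|_{\Xi}^{2}\mathrm{d}r=|\eta|^{2}-2\int_{s}^{T}Y_{r}F(r,Y_{r},Z_{r})\mathrm{d}r-2\int_{s}^{T}Y_{r}\langle Z_{r},\mathrm{d}W_{r}\rangle_{\Xi}.\] By the Lipschitz condition (27) and Young's inequality, \(2|Y_{r}F(r,Y_{r},Z_{r})|\leq C|Y_{r}|^{2}+|F(r,0,0)|^{2}+\frac{1}{2}\|Z_{r}\|_{\Xi}^{2}\) for a constant \(C\) depending only on the Lipschitz constant. Taking expectations, absorbing the \(\frac{1}{2}\|Z_{r}\|_{\Xi}^{2}\) term into the left-hand side and applying Gronwall's lemma controls \(\sup_{s}\mathbb{E}[|Y_{s}|^{2}]\) and \(\mathbb{E}\int_{t}^{T}\|Z_{r}\|_{\Xi}^{2}\mathrm{d}r\); the expectation of the supremum is then recovered via the Burkholder-Davis-Gundy inequality, which leads to a bound of the form stated in Lemma 3.7.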
**Lemma 3.8**.: _Let \((F^{i},\eta^{i})\), \(i=1,2\), satisfy the same assumptions as \((F,\eta)\) above and let \((Y^{1},Z^{1})\) and \((Y^{2},Z^{2})\) be the solution of the BSDE (29) associated with \((F^{1},\eta^{1})\) and \((F^{2},\eta^{2})\), respectively. Then it holds_ \[\begin{split}&\mathbb{E}\left[\sup_{s\in[t,T]}\big{|}Y^{1}_{s}-Y^{2}_ {s}\big{|}^{2}+\int_{t}^{T}\big{\|}Z^{1}_{s}-Z^{2}_{s}\big{\|}^{2}_{\Xi} \,\mathrm{d}s\right]\\ &\leq C\mathbb{E}\left[\big{|}\eta^{1}-\eta^{2}\big{|}^{2}+\left( \int_{t}^{T}|F^{1}(r,Y^{1}_{r},Z^{1}_{r})-F^{2}(r,Y^{1}_{r},Z^{1}_{r})| \mathrm{d}r\right)^{2}\right].\end{split} \tag{44}\] Proof.: Note that for all \(s\in[t,T]\) it holds \[Y^{1}_{s}-Y^{2}_{s} =\eta^{1}-\eta^{2}-\int_{s}^{T}F^{1}(r,Y^{1}_{r},Z^{1}_{r})-F^{2} (r,Y^{2}_{r},Z^{2}_{r})\mathrm{d}r-\int_{s}^{T}\langle Z^{1}_{r}-Z^{2}_{r}, \mathrm{d}W_{r}\rangle_{\Xi} \tag{45}\] \[=\eta^{1}-\eta^{2}-\int_{s}^{T}F^{1}(r,Y^{1}_{r},Z^{1}_{r})-F^{2} (r,Y^{1}_{r},Z^{1}_{r})\mathrm{d}r\] \[\qquad-\int_{s}^{T}a_{r}(Y^{1}_{r}-Y^{2}_{r})+b_{r}(Z^{1}_{r}-Z^{ 2}_{r})\mathrm{d}r-\int_{s}^{T}\langle Z^{1}_{r}-Z^{2}_{r},\mathrm{d}W_{r} \rangle_{\Xi},\] where \[a_{r}:=\frac{F^{2}(r,Y^{1}_{r},Z^{1}_{r})-F^{2}(r,Y^{2}_{r},Z^{1}_{r})}{Y^{1}_ {r}-Y^{2}_{r}}\mathbf{1}_{\{Y^{1}_{r}\neq Y^{2}_{r}\}} \tag{46}\] and \[b_{r}:=\frac{F^{2}(r,Y^{2}_{r},Z^{1}_{r})-F^{2}(r,Y^{2}_{r},Z^{2}_{r})}{Z^{1}_ {r}-Z^{2}_{r}}\mathbf{1}_{\{Z^{1}_{r}\neq Z^{2}_{r}\}}. \tag{47}\] Now, Lemma 3.7 yields the result. ## 4 Feynman-Kac Formulae for \(B\)-Continuous Viscosity Solutions In this section, we prove that the stochastic representation formula (6) yields a \(B\)-continuous viscosity solution of the PDE (2). Under additional assumptions, we establish uniqueness. ### \(B\)-Continuous Viscosity Solutions First, we recall the notion of \(B\)-continuous viscosity solutions. Let us begin by introducing the class of test functions we are using, see [11, Definition 3.32]. **Definition 4.1**.: A function \(\psi\) is a test function if \(\psi=\varphi+h(t,\|x\|_{H})\), where: * \(\varphi\in C^{1,2}((0,T)\times H)\) is locally bounded, and is such that \(\varphi\) is \(B\)-lower semicontinuous, and \(\varphi_{t}\), \(A^{*}D\varphi\), \(D\varphi\), \(D^{2}\varphi\) are uniformly continuous on \((0,T)\times H\). * \(h\in C^{1,2}((0,T)\times\mathbb{R})\) and is such that for every \(t\in(0,T)\), \(h(t,\cdot)\) is even and \(h(t,\cdot)\) is non-decreasing on \([0,+\infty)\). Now, we define the notion of \(B\)-continuous viscosity solution, see [11, Definition 3.35]. **Definition 4.2**.: A locally bounded and upper semicontinuous function \(u\) on \((0,T]\times H\) which is \(B\)-upper semicontinuous on \((0,T)\times H\) is a viscosity subsolution of (2) if the following holds: whenever \(u-\psi\) has a local maximum at a point \((t,x)\in(0,T)\times H\) for a test function \(\psi\) as in Definition 4.1 then \[\psi_{t}(t,x)+\langle x,A^{*}D\varphi(t,x)\rangle_{H}+\langle b(t,x),D\psi(t,x)\rangle_{H}\\ +\frac{1}{2}\text{tr}(\sigma^{*}(t,x)D^{2}\psi(t,x)\sigma(t,x))- f(t,x,u(t,x),\sigma^{*}(t,x)D\psi(t,x))\geq 0. 
\tag{48}\] A locally bounded and lower semicontinuous function \(u\) on \((0,T]\times H\) which is \(B\)-lower semicontinuous on \((0,T)\times H\) is a viscosity supersolution of (2) if the following holds: whenever \(u+\psi\) has a local minimum at a point \((t,x)\in(0,T)\times H\) for a test function \(\psi\) as in Definition 4.1 then \[-\psi_{t}(t,x)-\langle x,A^{*}D\varphi(t,x)\rangle_{H}-\langle b(t,x),D\psi(t,x)\rangle_{H}\\ -\frac{1}{2}\text{tr}(\sigma^{*}(t,x)D^{2}\psi(t,x)\sigma(t,x))- f(t,x,u(t,x),-\sigma^{*}(t,x)D\psi(t,x))\leq 0. \tag{49}\] A viscosity solution of (2) is a function which is both a viscosity subsolution and a viscosity supersolution of (2). ### Existence and Stochastic Representation First, we recall the following a priori estimates for the forward SPDE (1), see [11, Lemma 3.23] and [15, Proposition 3.3], respectively. **Lemma 4.3**.: _Let Assumptions 2.1 and 2.2 be satisfied. Then, there exists a constant \(C>0\) independent of \(t\in[0,T]\) such that_ \[\mathbb{E}\left[\|X_{T}^{t,x}-X_{T}^{t,x^{\prime}}\|_{H_{-1}}^{2}+\int_{t}^{ T}\|X_{s}^{t,x}-X_{s}^{t,x^{\prime}}\|_{H}^{2}\mathrm{d}s\right]\leq C\|x-x^{ \prime}\|_{H_{-1}}^{2} \tag{50}\] _for all \(x,x^{\prime}\in H\). Furthermore, for every \(t\in[0,T)\), there exists a constant \(C>0\) such that_ \[\mathbb{E}\left[\|X_{T}^{t,x}-X_{T}^{t,x^{\prime}}\|_{H}^{2}\right]\leq\frac{ C}{T-t}\|x-x^{\prime}\|_{H_{-1}}^{2} \tag{51}\] _for all \(x,x^{\prime}\in H\)._ **Lemma 4.4**.: _Let Assumptions 2.1 and 2.2 be satisfied. Then, for every \(x\in H\), it holds_ \[\mathbb{E}\left[\sup_{s\in[t^{\prime},T]}\|X_{s}^{t^{\prime},x}-X_{s}^{t,x}\| _{H}^{2}\right]\to 0 \tag{52}\] _as \(t^{\prime}\downarrow t\)._ **Remark 4.5**.: In [15, Proposition 3.3], the authors actually prove a stronger result than this. However, we only need the stated result. Now, let us state and prove our main result. **Theorem 4.6**.: _Let \((Y,Z)\) be the solution of the BSDE (4), and define \(u(t,x):=Y_{t}^{t,x}\). Then \(u\) is a \(B\)-continuous viscosity solution of the PDE (2) in the sense of Definition 4.2._ Proof.: In the first step, we are going to prove the \(B\)-continuity of \(u\), and in the second step, we are going to prove the viscosity property. _Step 1_: Let \((t,x)\in(0,T)\times H\). Let \(t_{n}\to t\), \(x_{n}\rightharpoonup x\) and \(Bx_{n}\to Bx\) and fix \(N\in\mathbb{N}\) such that \(|t_{n}-t|<(T-t)/2\) for all \(n\geq N\). We have \[|u(t_{n},x_{n})-u(t,x)|=\left|Y_{t_{n}}^{t_{n},x_{n}}-Y_{t}^{t,x}\right|\leq \left|Y_{t_{n}}^{t_{n},x_{n}}-Y_{t_{n}}^{t_{n},x}\right|+\left|Y_{t_{n}}^{t_{n },x}-Y_{t}^{t,x}\right|. 
\tag{53}\] For the first term, choosing \[\begin{cases}F^{1}(s,y,z):=f(s,X_{s}^{t_{n},x_{n}},y,z)\\ \eta^{1}:=g(X_{T}^{t_{n},x_{n}})\end{cases} \tag{54}\] and \[\begin{cases}F^{2}(s,y,z):=f(s,X_{s}^{t_{n},x},y,z)\\ \eta^{2}:=g(X_{T}^{t_{n},x})\end{cases} \tag{55}\] and applying Lemma 3.8 yields \[\begin{split}&\left|Y_{t_{n}}^{t_{n},x_{n}}-Y_{t_{n}}^{t_{n},x} \right|^{2}\\ &\leq C\mathbb{E}\Bigg{[}\left|g(X_{T}^{t_{n},x_{n}})-g(X_{T}^{t_{n},x})\right|^{2}\\ &\qquad\qquad+\left(\int_{t_{n}}^{T}\lvert f(s,X_{s}^{t_{n},x_{n} },Y_{s}^{t_{n},x_{n}},Z_{s}^{t_{n},x_{n}})-f(s,X_{s}^{t_{n},x},Y_{s}^{t_{n},x _{n}},Z_{s}^{t_{n},x_{n}})\rvert\mathrm{d}s\right)^{2}\Bigg{]}.\end{split} \tag{56}\] Using Assumption 2.3, we obtain \[\left|Y_{t_{n}}^{t_{n},x_{n}}-Y_{t_{n}}^{t_{n},x}\right|^{2}\leq C\mathbb{E} \left[\|X_{T}^{t_{n},x_{n}}-X_{T}^{t_{n},x}\|_{H}^{2}+\int_{t_{n}}^{T}\lVert X _{s}^{t_{n},x_{n}}-X_{s}^{t_{n},x}\rVert_{H}^{2}\mathrm{d}s\right]. \tag{57}\] Applying Lemma 4.3 yields \[\left|Y_{t_{n}}^{t_{n},x_{n}}-Y_{t_{n}}^{t_{n},x}\right|^{2}\leq C\lVert x_{n} -x\rVert_{H_{-1}}^{2} \tag{58}\] for some constant that is uniform in \(n\geq N\). Now let us turn to the second term in (53). Assume without loss of generality that \(t_{n}\downarrow t\). Then we have \[\begin{split}&\left|Y_{t_{n}}^{t_{n},x}-Y_{t}^{t,x}\right|^{2} \\ &\leq C\mathbb{E}\Bigg{[}\left|g(X_{T}^{t_{n},x})-g(X_{T}^{t,x}) \right|^{2}+\int_{t_{n}}^{T}\lvert f(s,X_{s}^{t_{n},x},Y_{s}^{t_{n},x},Z_{s}^ {t_{n},x})-f(s,X_{s}^{t,x},Y_{s}^{t,x},Z_{s}^{t,x})\rvert^{2}\mathrm{d}s\\ &\quad+\left|\int_{t_{n}}^{T}\langle Z_{s}^{t_{n},x}-Z_{s}^{t,x},\mathrm{d}W_{s}\rangle_{\Xi}\right|^{2}+\int_{t}^{t_{n}}\lvert f(s,X_{s}^{t,x },Y_{s}^{t,x},Z_{s}^{t,x})\rvert^{2}\mathrm{d}s+\left|\int_{t}^{t_{n}}\langle Z _{s}^{t,x},\mathrm{d}W_{s}\rangle_{\Xi}\right|^{2}\Bigg{]}.\end{split} \tag{59}\] The fourth and the fifth term tend to zero as \(t_{n}\downarrow t\) due to the continuity of the integral. For the first term, we have \[\left|g(X_{T}^{t_{n},x})-g(X_{T}^{t_{x},x})\right|^{2}\leq C\|X_{T}^{t_{n},x}-X_{ T}^{t_{x},x}\|_{H}^{2}. \tag{60}\] For the second term we have by Assumption (C1) \[\int_{t_{n}}^{T}\lvert f(s,X_{s}^{t_{n},x},Y_{s}^{t_{n},x},Z_{s}^ {t_{n},x})-f(s,X_{s}^{t,x},Y_{s}^{t,x},Z_{s}^{t,x})\rvert^{2}\mathrm{d}s \tag{61}\] \[\leq C\int_{t_{n}}^{T}\lVert X_{s}^{t_{n},x}-X_{s}^{t,x}\rVert_{H} ^{2}+\lvert Y_{s}^{t_{n},x}-Y_{s}^{t,x}\rvert^{2}+\lVert Z_{s}^{t_{n},x}-Z_{s}^ {t,x}\rVert_{\Xi}^{2}\mathrm{d}s.\] Therefore, applying Lemma 3.8 and using (60), we obtain \[\mathbb{E}\left[\int_{t_{n}}^{T}\lvert f(s,X_{s}^{t_{n},x},Y_{s} ^{t_{n},x},Z_{s}^{t_{n},x})-f(s,X_{s}^{t,x},Y_{s}^{t,x},Z_{s}^{t,x})\rvert^{2 }\mathrm{d}s\right] \tag{62}\] \[\leq C\mathbb{E}\left[\sup_{s\in[t_{n},T]}\lVert X_{s}^{t_{n},x}- X_{s}^{t,x}\rVert_{H}^{2}\right].\] For the third term, we have by Lemma 3.8 \[\mathbb{E}\left[\left|\int_{t_{n}}^{T}\langle Z_{s}^{t_{n},x}-Z_{ s}^{t,x},\mathrm{d}W_{s}\rangle\underline{\varepsilon}\right|^{2}\right] =\mathbb{E}\left[\int_{t_{n}}^{T}\lVert Z_{s}^{t_{n},x}-Z_{s}^{t,x}\rVert_{\Xi}^{2}\mathrm{d}s\right] \tag{63}\] \[\leq C\mathbb{E}\left[\sup_{s\in[t_{n},T]}\lVert X_{s}^{t_{n},x} -X_{s}^{t,x}\rVert_{H}^{2}\right].\] Applying Lemma 4.4 yields that the right-hand side of (59) tends to zero as \(t_{n}\downarrow t\). Together with (58), this concludes the proof of the \(B\)-continuity. _Step 2_: First, we show that \(u\) is a viscosity subsolution. 
Let \(\psi=\varphi+h\) be a test function as in Definition 4.1 such that \[0=(u-\psi)(t,x)=\max_{\begin{subarray}{c}s\in[t,T]\\ x^{\prime}\in H\end{subarray}}(u-\psi)(s,x^{\prime}) \tag{64}\] We can assume without loss of generaly that the maximum is strict, see [11, Lemma 3.37]. Assume, for sake of contradiction, that \[\psi_{t}(t,x)+\langle x,A^{*}D\varphi(t,x)\rangle_{H}+\langle b(t, x),D\psi(t,x)\rangle_{H}\\ +\text{tr}(\sigma^{*}(t,x)D^{2}\psi(t,x)\sigma(t,x))-f(t,x,u(t,x),\sigma^{*}D\psi(t,x))<0. \tag{65}\] Fix \(\varepsilon>0\) such that for all \(t\leq s\leq t+\varepsilon\) and \(\lVert x^{\prime}-x\rVert_{H}\leq\varepsilon\), it holds \[\psi_{t}(s,x^{\prime})+\langle x^{\prime},A^{*}D\varphi(s,x^{ \prime})\rangle_{H}+\langle b(s,x^{\prime}),D\psi(s,x^{\prime})\rangle_{H}\\ +\text{tr}(\sigma^{*}(s,x^{\prime})D^{2}\psi(s,x^{\prime})\sigma( s,x^{\prime}))-f(s,x^{\prime},u(s,x^{\prime}),\sigma^{*}D\psi(s,x^{\prime}))<0. \tag{66}\] Now, we define the stopping time \[\tau:=\inf\{s\geq t\lVert X_{s}^{t,x}-x\rVert_{H}\geq\varepsilon\}\wedge(t+ \varepsilon). \tag{67}\] Let \((Y^{t,x},Z^{t,x})\) be the solution of the BSDE (4), and define for \(t\leq s\leq t+\varepsilon\) \[(Y_{s}^{1},Z_{s}^{1})=(Y_{s/\tau}^{t,x},\mathbf{1}_{[0,\tau]}(s)Z_{s}^{t,x}). \tag{68}\] Then we have \[u(t+h,X_{t+h}^{t,x})=Y_{t+h}^{t+h,X_{t+h}^{t,x}}=Y_{t+h}^{t,x} \tag{69}\] due to the uniqueness of the solution of the SPDE (1) and the uniqueness of the solution of the BSDE (4). Therefore, for every \(r\in[t,t+\varepsilon]\) \[Y^{t,x}_{r\wedge\tau}=u(r\wedge\tau,X^{t,x}_{r\wedge\tau}), \tag{70}\] which shows that \((Y^{1},Z^{1})\) solves the BSDE \[\begin{cases}\mathrm{d}Y^{1}_{s}=\mathbf{1}_{[0,\tau]}(s)f(s,X^{t,x}_{s},u(s, X^{t,x}_{s}),Z^{1}_{s})\mathrm{d}s+\langle Z^{1}_{s},\mathrm{d}W_{s}\rangle_{ \Xi}\\ Y^{1}_{\tau}=u(\tau,X^{t,x}_{\tau}).\end{cases} \tag{71}\] On the other hand, we obtain from Ito's formula for test functions (see [11, Proposition 1.166]) that for every \(r,s\in[t,t+\varepsilon]\), \(r\geq s\), \[\psi(r\wedge\tau,X^{t,x}_{r\wedge\tau}) \tag{72}\] \[\leq\psi(s\wedge\tau,X^{t,x}_{s\wedge\tau})+\int_{s}^{r}\mathbf{1 }_{[0,\tau]}(s^{\prime})\left(\psi_{t}(s^{\prime},X^{t,x}_{s^{\prime}})+ \langle X^{t,x}_{s^{\prime}},A^{*}D\varphi(s^{\prime},X^{t,x}_{s^{\prime}}) \rangle_{H}\right)\mathrm{d}s^{\prime}\] \[\quad+\int_{s}^{r}\mathbf{1}_{[0,\tau]}(s^{\prime})\langle b(s^{ \prime},X^{t,x}_{s^{\prime}}),D\psi(s^{\prime},X^{t,x}_{s^{\prime}})\rangle_ {H}\mathrm{d}s^{\prime}\] \[\quad+\frac{1}{2}\int_{s}^{r}\mathbf{1}_{[0,\tau]}(s^{\prime}) \text{tr}\left(\sigma^{*}(s^{\prime},X^{t,x}_{s^{\prime}})D^{2}\psi(s^{\prime },X^{t,x}_{s^{\prime}})\sigma(s^{\prime},X^{t,x}_{s^{\prime}})\right)\mathrm{d }s^{\prime}\] \[\quad+\int_{s}^{r}\langle\mathbf{1}_{[0,\tau]}(s^{\prime}) \sigma^{*}(s^{\prime},X^{t,x}_{s^{\prime}})D\psi(s^{\prime},X^{t,x}_{s^{ \prime}}),\mathrm{d}W_{s^{\prime}}\rangle_{\Xi}.\] Therefore, \[(Y^{2}_{s},Z^{2}_{s}):=(\psi(s\wedge\tau,X^{t,x}_{s\wedge\tau}),\mathbf{1}_{[ 0,\tau]}(s)\sigma^{*}D\psi(s,X^{t,x}_{s})) \tag{73}\] is a supersolution of the BSDE \[\begin{cases}\mathrm{d}Y_{s}=\mathbf{1}_{[0,\tau]}(s)\Big{[}\psi_{t}(s,X^{t, x}_{s})+\langle X^{t,x}_{s},A^{*}D\varphi(s,X^{t,x}_{s})\rangle_{H}+\langle b (s,X^{t,x}_{s}),D\psi(s,X^{t,x}_{s})\rangle_{H}\\ \hskip 142.26378pt+\frac{1}{2}\text{tr}(\sigma^{*}(s,X^{t,x}_{s})D^{2}\psi(s, X^{t,x}_{s})\sigma(s,X^{t,x}_{s}))\Big{]}\mathrm{d}s+\langle Z_{s},\mathrm{d}W_{s} \rangle_{\Xi}\\ Y_{\tau}=\psi(\tau,X^{t,x}_{\tau}).\end{cases} \tag{74}\] Due to 
inequality (66) and the definition of \(\tau\), we can apply Proposition 3.6 to obtain \[\psi(t,x)=Y^{2}_{t}>Y^{1}_{t}=Y^{t,x}_{t}=u(t,x) \tag{75}\] which contradicts (64) and therefore concludes the proof that \(u\) is a \(B\)-continuous viscosity subsolution. The proof that \(u\) is a \(B\)-continuous viscosity supersolution is similar. ### Uniqueness In order to prove uniqueness, we rely on a comparison theorem for the PDE (2) which is proved using analytic methods. For this result to hold, we have to impose additional assumptions on the coefficients of the PDE. **Assumption 4.7**.: 1. _Let_ \(\sigma\) _be uniformly continuous on bounded subsets of_ \((0,T)\times H\)_._ 2. _For_ \(y^{\prime}\geq y\)_, let_ \[f(s,x,y^{\prime},z)-f(s,x,y,z)\geq 0\] (76) _for all_ \((s,x,z)\in(0,T)\times H\times\Xi\)_._ **Theorem 4.8**.: _Let Assumptions 2.1, 2.2, 2.3, 4.7 be satisfied. Then \(u(t,x):=Y^{t,x}_{t}\) is the unique \(B\)-continuous viscosity solutions satisfying_ \[\lim_{t\to T}\lvert u(t,x)-g(S(T-t)x)\rvert=0 \tag{77}\] uniformly on bounded subsets of \(H\) and_ \[|u(t,x)|\leq C_{1}\exp\left(C_{2}\left(\ln\left(1+\|x\|_{H}\right)\right)^{2}\right) \tag{78}\] _for some constants \(C_{1},C_{2}\geq 0\) and all \((t,x)\in(0,T)\times H\)._ Proof.: Once we know that \(u\) satisfies (77) and (78), the result follows from [11, Theorem 3.54]. Let us first prove (77). Note that \[\begin{split}&|u(t,x)-g(S(T-t)x)|^{2}\\ &=\mathbb{E}\left[\left|Y_{t}^{t,x}-g(S(T-t)x)\right|^{2}\right] \\ &=\mathbb{E}\left[\left|g(X_{T}^{t,x})-\int_{t}^{T}f(s,X_{s}^{t,x },Y_{s}^{t,x},Z_{s}^{t,x})\mathrm{d}s-\int_{t}^{T}\langle Z_{s}^{t,x}, \mathrm{d}W_{s}\rangle\underline{\pi}-g(S(T-t)x)\right|^{2}\right]\\ &\leq\mathbb{E}\left[\int_{t}^{T}\lvert f(s,X_{s}^{t,x},Y_{s}^{t, x},Z_{s}^{t,x})\rvert^{2}\mathrm{d}s+\left|\int_{t}^{T}\langle Z_{s}^{t,x}, \mathrm{d}W_{s}\rangle\underline{\pi}\right|^{2}\right]\\ &\quad+\mathbb{E}\left[\lvert g(X_{T}^{t,x})-g(S(T-t)x)\rvert^{ 2}\right].\end{split} \tag{79}\] For the first term, using Ito's isometry and Assumption 2.3, we have \[\begin{split}&\mathbb{E}\left[\int_{t}^{T}\lvert f(s,X_{s}^{t,x },Y_{s}^{t,x},Z_{s}^{t,x})\rvert^{2}\mathrm{d}s+\left|\int_{t}^{T}\langle Z_{s }^{t,x},\mathrm{d}W_{s}\rangle\underline{\pi}\right|^{2}\right]\\ &\leq C\int_{t}^{T}\lvert f(s,0,0,0)\rvert^{2}+\mathbb{E}\left[ \lVert X_{s}^{t,x}\rVert_{H}+\lvert Y_{s}^{t,x}\rvert^{2}+\lVert Z_{s}^{t,x} \rVert\underline{\pi}\right]\mathrm{d}s.\end{split} \tag{80}\] Applying Lemma 3.7, we obtain \[\begin{split}&\mathbb{E}\left[\int_{t}^{T}\lvert f(s,X_{s}^{t,x },Y_{s}^{t,x},Z_{s}^{t,x})\rvert^{2}\mathrm{d}s+\left|\int_{t}^{T}\langle Z_{ s}^{t,x},\mathrm{d}W_{s}\rangle\underline{\pi}\right|^{2}\right]\\ &\leq C\int_{t}^{T}\lvert f(s,0,0,0)\rvert^{2}+1+\mathbb{E}\left[ \lVert X_{s}^{t,x}\rVert_{H}^{2}\right]\mathrm{d}s.\end{split} \tag{81}\] For the last term in (79), using Assumption (C2), we have \[\begin{split}&\mathbb{E}\left[\lvert g(X_{T}^{t,x})-g(S(T-t)x) \rvert^{2}\right]\\ &\leq C\mathbb{E}\left[\lVert X_{T}^{t,x}-S(T-t)x\rVert_{H}^{2} \right]\\ &=C\mathbb{E}\left[\left\lVert\int_{t}^{T}S(T-s)b(s,X_{s}^{t,x}) \mathrm{d}s+\int_{t}^{T}S(T-s)\sigma(s,X_{s}^{t,x})\mathrm{d}W_{s}\right\rVert _{H}^{2}\right]\\ &\leq C\mathbb{E}\left[\int_{t}^{T}\lVert S(T-s)b(s,X_{s}^{t,x}) \rVert_{H}^{2}\mathrm{d}s+\left\lVert\int_{t}^{T}S(T-s)\sigma(s,X_{s}^{t,x}) \mathrm{d}W_{s}\right\rVert_{H}^{2}\right].\end{split} \tag{82}\] Using again Ito's isometry and the linear growth assumption on \(b\) and \(\sigma\), we obtain 
\[\mathbb{E}\left[\lvert g(X_{T}^{t,x})-g(S(T-t)x)\rvert^{2}\right]\leq C\int_ {t}^{T}1+\mathbb{E}\left[\lVert X_{s}^{t,x}\rVert_{H}^{2}\right]\mathrm{d}s. \tag{83}\] Therefore, together with (81), we have \[\lvert u(t,x)-g(S(T-t)x)\rvert^{2}\leq C\int_{t}^{T}\lvert f(s,0,0,0)\rvert^{ 2}+1+\mathbb{E}\left[\lVert X_{s}^{t,x}\rVert_{H}^{2}\right]\mathrm{d}s. \tag{84}\] Due to Assumption (C1) and well-known a priori estimates for the solution of the SPDE (1) (see e.g. [9, Theorem 7.2]), the right-hand side converges to zero as \(t\to T\) uniformly on bounded subsets of \(H\). The growth condition (78) follows from Lemma 3.7 and the same a priori estimates for the solution of the SPDE (1). ## Acknowledgments The author would like to thank Andrzej Swiech for his constructive criticism of the manuscript. During the preparation of this work, the author was supported by a fellowship of the German Academic Exchange Service (DAAD).
2302.10311
Understanding the effect of varying amounts of replay per step
Model-based reinforcement learning uses models to plan, where the predictions and policies of an agent can be improved by using more computation without additional data from the environment, thereby improving sample efficiency. However, learning accurate estimates of the model is hard. Subsequently, the natural question is whether we can get similar benefits as planning with model-free methods. Experience replay is an essential component of many model-free algorithms enabling sample-efficient learning and stability by providing a mechanism to store past experiences for further reuse in the gradient computational process. Prior works have established connections between models and experience replay by planning with the latter. This involves increasing the number of times a mini-batch is sampled and used for updates at each step (amount of replay per step). We attempt to exploit this connection by doing a systematic study on the effect of varying amounts of replay per step in a well-known model-free algorithm: Deep Q-Network (DQN) in the Mountain Car environment. We empirically show that increasing replay improves DQN's sample efficiency, reduces the variation in its performance, and makes it more robust to change in hyperparameters. Altogether, this takes a step toward a better algorithm for deployment.
Animesh Kumar Paul, Videh Raj Nema
2023-02-20T20:54:11Z
http://arxiv.org/abs/2302.10311v1
# Understanding the effect of varying amounts of replay per step ###### Abstract Model-based reinforcement learning uses models to plan, where the predictions and policies of an agent can be improved by using more computation without additional data from the environment, thereby improving sample efficiency. However, learning accurate estimates of the model is hard. Subsequently, the natural question is whether we can get similar benefits as planning with model-free methods. Experience replay is an essential component of many model-free algorithms enabling sample-efficient learning and stability by providing a mechanism to store past experiences for further reuse in the gradient computational process. Prior works have established connections between models and experience replay by planning with the latter. This involves increasing the number of times a mini-batch is sampled and used for updates at each step (amount of replay per step). We attempt to exploit this connection by doing a systematic study on the effect of varying amounts of replay per step in a well-known model-free algorithm: Deep Q-Network (DQN) in the Mountain Car environment. We empirically show that increasing replay improves DQN's sample efficiency, reduces the variation in its performance, and makes it more robust to change in hyperparameters. Altogether, this takes a step toward a better algorithm for deployment. Machine Learning, Reinforcement Learning by van Hasselt et al. (van Hasselt et al., 2019), where they make connections between parametric models and ER. Model-based algorithms use models to _plan_, i.e. using more computation to improve the predictions and policies without consuming additional data in the form of new interactions with the environment. They argue that the experience stored in the ER buffer of a model-free algorithm (like DQN) can be similarly used to plan. They provide empirical evidence in Atari where Rainbow DQN (a variant of DQN (Hessel et al., 2018)) with experience planning (i.e. increased \(\tau\)) achieves better sample efficiency and faster learning than a model-based algorithm. However, they did not do a systematic study investigating sample efficiency with different values of \(\tau\) and their effect on the algorithm's sensitivity to other hyperparameters. We do this in our work. Varying \(\tau\) can have interesting effects on DQN and its other hyperparameters. We investigate this hypothesis empirically in the Mountain Car environment (Moore, 1990). Our objectives are to investigate whether (1) increasing \(\tau\) helps DQN learn faster and achieve better performance, (2) different \(\tau\) have different effects on sample efficiency, and (3) increasing \(\tau\) makes DQN less sensitive to other hyperparameters, thereby making it easier to choose their best values. ## 2 Background A Markov Decision Process (MDP) is defined by the tuple (\(\mathcal{S},\mathcal{A},R,P,\gamma\)), where \(\mathcal{S}\) and \(\mathcal{A}\) denote the state and action space respectively. \(R:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}\) denotes the scalar reward function, \(P:\mathcal{S}\times\mathcal{S}\times\mathcal{A}\rightarrow[0,1]\) is the state transition probability, where \(P(s^{\prime}|s,a)\) denotes probability of transitioning to state \(s^{\prime}\) from state \(s\) by taking action \(a\). \(\gamma\in[0,1)\) denotes the discount factor. 
The goal of the agent is to find a policy \(\pi:\mathcal{A}\times\mathcal{S}\rightarrow[0,1]\) (\(\pi(a|s)\) is the probability of taking action \(a\) in state \(s\)) that maximizes the expected return \[Q^{\pi}(s,a)\doteq\mathbb{E}_{\pi}\left[r_{1}+\gamma r_{2}+\ldots|S_{0}=s,A_{0 }=a\right],\] where \(r_{k}\) is the random variable of reward at time step \(k\). Q-learning is an algorithm to achieve this by directly learning the optimal action-value function \(Q^{*}(s,a)\doteq\max_{\pi}Q^{\pi}(s,a)\) and deriving the policy from that (Watkins, 1989). The learning rule for Q-learning is given by \[Q_{t+1}(s,a)=Q_{t}(s,a)+\alpha\left(Z_{t}^{Q}-Q_{t}(s,a)\right),\\ Z_{t}^{Q}=r+\gamma\max_{a^{\prime}}Q_{t}(s^{\prime},a^{\prime}), \tag{1}\] where \((s,a,r,s^{\prime})\) denotes a transition from state \(s\) by taking action \(a\) to state \(s^{\prime}\) with reward \(r\). \(Z_{t}^{Q}\) and \(\alpha\) above denote the target and scalar learning rate respectively. Intuitively, Q-learning moves the Q-function estimates toward the target. Note that the subscript \(t\) above denotes the time step at which an update is made, which can be different from the time step at which the transition is collected. Q-learning is an off-policy algorithm, meaning it learns about the greedy policy but collects data using a different policy. To encourage exploration, one way to collect data is according to an \(\epsilon\)-greedy policy that chooses a random action with probability \(\epsilon\) and the greedy action with \(1-\epsilon\). The learning rule in (1) updates the value of each \((s,a)\) pair without affecting the values of other pairs. Hence there is no generalization of value from one pair to another even if the pairs are similar. This makes it unrealistic to directly use (1) for large state-action spaces. To address this issue, we need to turn to function approximation where we learn a parameterized Q-function with a fixed number of parameters and use it for all \((s,a)\) pairs. ### Deep Q-Network Deep Q-Network (DQN) is a deep learning version of Q-learning (Mnih et al., 2015). Here the Q-function is a neural network parameterized by \(\theta\), where \(Q(s,a;\theta_{t})\) is obtained by passing the state \(s\) into a network with parameters \(\theta_{t}\) and one output for each action \(a\). The DQN stochastic gradient descent update to \(\theta_{t}\) is given by \[\theta_{t+1}=\theta_{t}+\alpha\left(Z_{t}^{\text{DQN}}-Q(s,a; \theta_{t})\right)\nabla_{\theta}Q(s,a;\theta_{t}),\\ Z_{t}^{\text{DQN}}=r+\max_{a^{\prime}}Q(s^{\prime},a^{\prime}; \theta^{-}). \tag{2}\] One difference from Q-learning here is that the target uses \(\theta^{-}\), which are the parameters of the target Q-network. The target Q-network is a lagging copy of the online Q-network, which is refreshed after every \(C\) steps, i.e., \(\theta^{-}=\theta_{t}\) at time step \(t\), and then kept constant for the next \(C\) steps. Using a target network induces stability when learning with neural networks by creating a delay between the time target is computed and the time when parameters are updated. Another important component and difference of DQN from standard Q-learning is experience replay, which we discuss in more detail in the following section. ### Experience Replay Experience Replay (ER) was introduced by (Lin, 1992). It is a constant-size buffer with capacity \(\Omega\) and comprising the agent's experiences (transition tuples) at interaction time steps, i.e. \(e_{k}=(s_{k},a_{k},r_{k+1},s_{k+1})\) at time step \(k\). 
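As a rough sketch (not the authors' code), a uniform-sampling replay buffer of the kind just described can be implemented as follows; the class and method names are our own, and we additionally store a `done` flag to mark episode boundaries.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity buffer of transitions e_k = (s_k, a_k, r_{k+1}, s_{k+1}, done)."""

    def __init__(self, capacity):
        # A deque with maxlen drops the oldest transition when the buffer is full.
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        """Uniformly sample a mini-batch of stored transitions."""
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)
```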
Note that a single transition tuple is sufficient to make updates according to (2). However, it is generally more effective to (randomly) sample a mini-batch of transitions (batch size \(B\)) from the buffer and use it to update Q-network parameters. Using an ER buffer provides multiple benefits (Mnih et al., 2015). First, it improves sample efficiency by using a par ticular transition (sample) for multiple updates. Second and more importantly, it makes training more stable by randomizing the samples, thereby breaking correlations between consecutive samples. A typical implementation of the replay buffer involves storing transitions at each time step and making room for new transitions by removing the old ones. This fixes the amount of time each transition spends in the buffer and helps to discard transitions from a very old policy, that might not be relevant to make the current update. The simplest strategy to sample transitions is sampling uniformly, i.e. each transition has an equal probability of being sampled. However, this does not efficiently use transitions that might be the most effective for training. Other sophisticated approaches like Prioritized ER address this issue by sampling important transitions more frequently (Schaul et al., 2016). While prioritization may provide a better performance, in this work we adhere to focusing on uniform random sampling. We do so to observe the sole effect of increasing the replay frequency \(\tau\) in a simpler setting while keeping other components of the algorithm intact. ## 3 Experimental Design In this section, we describe our environment setup, hyperparameter choices, and other experimental details in order to understand and reproduce our results. ### Environment setup We evaluate DQN in the Mountain Car environment (Moore, 1990). Our experiments are based on OpenAI Gym's implementation of Mountain Car with slight modifications (Brockman et al., 2016). The environment is formulated as an MDP having a two-dimensional continuous state space: position \(\in[-1.2,0.6]\) and velocity \(\in[-0.07,0.07]\). Note that this is the state of the agent and not the environment. The action space consists of three discrete actions: accelerate left \((0)\), do not accelerate \((1)\), and accelerate right \((2)\). The goal of the agent (car) is to reach the top of the hill as soon as possible. However, it does not have enough power to accelerate up the hill and hence should accelerate backward to generate enough momentum to climb up. A reward of \(-1\) per step is given to encourage the agent to finish the task fast. The default AIGym implementation terminates an episode after \(200\) steps. We modify this to \(2000\) steps to avoid misleading results due to aggressive episode cutoffs (Patterson et al., 2020). At the same time, we do not set it to infinity to avoid the agent getting stuck. ### Experimental Setup Our objectives are to (1) assess the effect of increasing \(\tau\) on the performance and sample efficiency of DQN in Mountain Car, and (2) understand the variety of behaviors produced by DQN with different \(\tau\) values for different hyperparameter settings. For our experiments, we borrow terminology from (Patterson et al., 2020) where an _agent_ refers to "a single entity learning and adapting over time", an _algorithm_ refers to "a process producing a set of agents by specifying initial conditions and learning rules", and _hyperparameters_ refer to "a scalar set of parameters affecting the agents produced by the algorithm". 
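Returning briefly to the environment setup described earlier in this section, one way to apply the 2000-step episode cutoff with OpenAI Gym is sketched below; the exact mechanism (and the Gym API) varies between versions, so this snippet is only illustrative and not necessarily what the authors used.

```python
import gym
from gym.wrappers import TimeLimit

# Strip the default 200-step limit and impose a 2000-step cutoff instead.
env = TimeLimit(gym.make("MountainCar-v0").unwrapped, max_episode_steps=2000)
```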
We fix the total number of environment interactions (steps) for each agent instead of episodes. Doing so ensures a fair evaluation as each agent receives the same amount of learning experience. We found \(N=250,000\) steps good to evaluate if an agent reaches good performance and stably maintains it for some time. We use a discount factor \(\gamma=0.99\) for all our experiments. However, we use the undiscounted return as our performance metric since discounting is a part of the agent's internal mechanics and not the environment. Using the undiscounted return for Mountain car indicates how quickly the task is solved. Further, we measure online performance, i.e. how the agent performs while it is learning. Note that it needs to balance exploration and exploitation in such evaluation (Patterson et al., 2020). To measure performance at a particular step of learning, we use the undiscounted return for the episode containing that step. We do 30 runs for each \(\tau\), and each run corresponds to \(250,000\) steps but differs in terms of the random seed (Q-network initialization and initial state of the agent). To fairly compare run \(i\) of two agents with different \(\tau\), we need to ensure that their seeds for run \(i\) are the same. We do so in the following way: * For the Q-network initialization, we randomly generate an array of seeds of length equal to the total number of runs and use the same array for each \(\tau\). * For the initial state of the agent, the AIGym environment gets reinitialized every time a new episode starts inside a run. Hence we generate a large array of seeds to ensure that the same seed (initial state) is used for episode \(j\) inside run \(i\) for agents with different \(\tau\). Additionally, we use Xavier initialization to initialize the Q-network parameters (Glorot and Bengio, 2010). To achieve our first objective, we follow a simple strategy to set the hyperparameters for our experiments. We borrow most hyperparameter values of DQN in Mountain Car from an existing codebase1 and set the remaining ones using random search. We use these hyperparameters for all values of \(\tau\). The resulting setting of hyperparameters might not be the best but is good enough to ensure a nearly-steady improvement in performance. We argue that doing this is appropriate since we want to assess how much better DQN can be made just by increasing \(\tau\) for a system that might not be exhaustively tuned for the best hyperparameters. Our main entity of interest is \(\tau\) and we assess the change in performance upon increasing \(\tau\) while keeping other hyperparameters fixed. We evaluate on \(\tau=[1,2,4,8,16,32]\), where \(\tau=1\) corresponds to vanilla DQN. The maximum number of transitions that can be stored in the replay buffer (capacity \(\Omega\)) is set to \(4000\) and the number of transitions sampled per update (batch size \(B\)) is set to \(32\). Further, at the beginning of each run, the initial policy collects and stores transitions for the first \(1024\) steps (replay start size) without making any updates to the Q-network. We set the replay start size to be much bigger than the batch size to ensure better randomization while sampling that breaks correlations between samples in early learning. If the replay start size is smaller, there is a higher probability to sample the most correlated recent transitions, which can result in a bad initial start and never recovering thereafter. 
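Putting the pieces above together, a skeleton of the per-step training loop might look as follows. This is a sketch under stated assumptions rather than the released implementation: it assumes a classic Gym-style environment interface (`reset()` returns the state, `step(a)` returns `(s', r, done, info)`), reuses the buffer and update sketches shown earlier, and the default argument values simply mirror the settings described in this section and the next paragraph.

```python
import copy
import random
import torch

def to_tensors(transitions):
    # Stack a list of (s, a, r, s', done) tuples into batched tensors.
    s, a, r, s2, d = zip(*transitions)
    return (torch.as_tensor(s, dtype=torch.float32),
            torch.as_tensor(a, dtype=torch.int64),
            torch.as_tensor(r, dtype=torch.float32),
            torch.as_tensor(s2, dtype=torch.float32),
            torch.as_tensor(d, dtype=torch.float32))

def train(env, q_net, replay, update_fn, num_actions=3, total_steps=250_000,
          tau=4, batch_size=32, replay_start=1024, target_refresh=128,
          eps=1.0, eps_final=0.1, eps_decay=0.999):
    """Online training with tau mini-batch updates per environment step.
    Assumes a classic Gym-style env: reset() -> state, step(a) -> (s', r, done, info)."""
    target_net = copy.deepcopy(q_net)
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3, betas=(0.9, 0.999))
    state = env.reset()
    for step in range(total_steps):
        # Epsilon-greedy behavior policy with multiplicative decay per step.
        eps = max(eps_final, eps * eps_decay)
        if random.random() < eps:
            action = random.randrange(num_actions)
        else:
            with torch.no_grad():
                action = int(q_net(torch.as_tensor(state, dtype=torch.float32)).argmax())
        next_state, reward, done, _ = env.step(action)
        replay.add(state, action, reward, next_state, float(done))
        state = env.reset() if done else next_state
        # Warm-up: collect `replay_start` transitions before making any update.
        if len(replay) < replay_start:
            continue
        # The knob studied here: tau gradient updates per environment step.
        for _ in range(tau):
            update_fn(q_net, target_net, optimizer, to_tensors(replay.sample(batch_size)))
        # Refresh the lagging target network every C steps (counted in env steps here).
        if step % target_refresh == 0:
            target_net.load_state_dict(q_net.state_dict())
```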
To represent the Q-network, we use a neural network with two hidden layers of size \(32\) each and with the ReLU activation function. We refresh the parameters of the target Q-network every \(C=128\) steps and use a mean-squared loss to measure the difference between the DQN target and Q value. To optimize this loss function, we use the Adam optimizer with learning rate \(\alpha=0.001\), gradient momentum \(\beta_{1}=0.9\), and squared gradient momentum \(\beta_{2}=0.999\). Finally, we use an \(\epsilon\)-greedy behavioral policy for exploration with \(\epsilon_{initial}=1\) at the start of a run, decayed to \(\epsilon_{final}=0.1\) with a decay rate \(\epsilon_{decay}=0.999\) and fixed thereafter. This roughly corresponds to annealing \(\epsilon\) from \(1\) to \(0.1\) over \(2300\) steps. For our second objective, we do hyperparameter sensitivity analysis. Sensitivity plots help us understand changes in the behavior of algorithms and suggest sensitivity to hyperparameters that is essential for deployment (Patterson et al., 2020). We pick \(4\) hyperparameters: learning rate, batch size, replay capacity, and the target network refresh rate. We chose these hyperparameters as we found them to be amongst the most important ones affecting DQN's performance. The range of tested values is specified in the next section. ## 4 Evaluation and Results As stated before, we care about online performance, which is measured at each step and equals the undiscounted return for the episode containing that step (Patterson et al., 2020). If an episode finishes with an undiscounted return of \(G\), then every step of that episode has the same performance value, \(G\). We refer to this as the performance measure. To aggregate performance for a single run, we simply sum the performance measures at each step. To get a scalar aggregate performance value over multiple runs for each replay frequency, we sum the aggregate performance for each run and divide it by the total number of steps times the total number of runs. Additionally, to get the mean performance curve, we average each step's performance measure over all the runs and do this for all steps. Note that we do not apply any kind of smoothing for the mean performance curve. For instance, it is possible to sum the performance measures till a particular step, divide the sum by the step index, and use that for plotting the mean curve (like a running average). In such a case, the curve becomes very smooth. However, we do not do this as it hides potentially important variations. Since individual runs can be different from each other depending on the seed initialization, we use the following techniques to measure the variability in performance (Patterson et al., 2020) along with the mean performance: * Confidence Interval: We use confidence intervals to measure the uncertainty in our estimated mean online performance. For computing it, we use the Student t-distribution, which requires the assumption of having an approximately Gaussian performance distribution. To validate this assumption in our case, we visualize the distribution over performance. We use a \(95\%\) confidence interval (\(\alpha=0.05\)) to report our uncertainty in our mean estimate. * Tolerance Interval: We use tolerance intervals to capture the variation in the performance from a limited number of samples. We use a (\(\alpha=0.05,\beta=0.9\)) tolerance interval to examine the performance variability between multiple runs. 
The \((\alpha,\beta)\) values suggest that the interval contains at least a \(0.9\) fraction of the total runs with a confidence of \(95\%\). Now we provide the empirical results2. First, we visualize the distribution of performance for DQN.
Figure 1: Performance distribution \(\mathbb{P}(M)\) with \(\tau=32\) for \(30\) runs.
Fig. 1 shows the approximate distribution for \(30\) runs with replay frequency \(\tau=32\). This distribution is obtained by computing the aggregate performance for each run, dividing it by the total interactions, and plotting the frequency distribution. Hence a sample of this distribution corresponds to a run. Note that the resulting distribution is approximately Gaussian, which allows using the Student-t confidence intervals3. The x-axis spans the range between \(-472.57\) and \(-183.51\). This is because the aggregate performance for each run (sample) among the \(30\) runs was between these two values. We do not show the performance distribution for other \(\tau\) values because they follow similar behavior, with the difference that the distributions for larger \(\tau\) values have a greater mean and lower variance estimate than those for smaller \(\tau\) values. Footnote 3: We use \(\mathrm{t}=2.045\) from the Student-t table for 30 runs. ### Confidence Intervals The mean performances and confidence intervals for \(\tau=(1,8,32)\) with \(30\) runs are shown in Fig. 2 (a, b, c), respectively. The interval for \(\tau=1\) is wider than that for \(\tau=8\), which in turn is wider than that for \(\tau=32\) (when we say \(\tau=i\), we mean DQN with \(\tau=i\)). This shows that with a higher replay frequency, we are more certain in our estimate of the mean. The three plots also show that \(\tau=32\) learns faster than \(\tau=8\), which learns faster than \(\tau=1\), i.e., uses fewer samples. Note that agents with all replay frequencies eventually achieve good performance. However, a higher \(\tau\) results in better sample efficiency. This confirms that using more computation per step with fixed data (from the replay buffer) results in faster learning for DQN in Mountain Car. Lastly, we make a few observations about stability with different \(\tau\) values. From Fig. 2 (a, b, c), we can observe that the mean performance is roughly stable for \(\tau=(8,32)\) after about \(50,000\) learning steps (the same holds true for \(\tau=(4,16)\), but those plots are not included for brevity). For \(\tau=(1,2)\), the mean starts to stabilize after \(200,000\) and \(150,000\) steps, respectively. This shows that with a higher \(\tau\), not only does the agent reach good performance faster, but it also maintains it on average. The mean curves are still a little noisy because we do not apply any smoothing.
Figure 2: DQN performance in terms of confidence and tolerance intervals with different replay frequencies \(\tau\): (a, b, c) show the confidence intervals with the mean performance for \(30\) runs and \(\tau=(1,8,32)\), respectively. The curves (d, e, f) show the tolerance intervals with the mean performance for \(30\) runs and \(\tau=(1,8,32)\), respectively. The curves (g, h, i) show the tolerance intervals with the median agent's performance for \(29\) runs and \(\tau=(1,8,32)\), respectively. The y-axis denotes performance. No smoothing has been applied to the curves.
### Tolerance Intervals Tolerance intervals summarize the range of an algorithm's performance, irrespective of the underlying performance distribution, while taking into account the uncertainty due to a limited number of samples.
To compute tolerance intervals, we use the method described in (Patterson et al., 2020). Fig. 2 (d-i) depict the tolerance intervals around the mean performance and around the median agent's performance. Fig. 2 (d, e, f) show the interval around the mean performance for \(\tau=(1,8,32)\), respectively. Note that the interval is much wider for \(\tau=1\) than for \(\tau=8\). Further, the interval for \(\tau=32\) is tighter than for \(\tau=8\). This shows that with a higher \(\tau\), the variation in algorithm performance is low. The tolerance intervals also show the bottom percentile of runs which indicates that the worst-case performance of a higher \(\tau\) (\(\tau=8,32\)) is better than a lower \(\tau\) (\(\tau=1\)). Fig. 2 (g, h, i) show the interval around the median agent's performance. The learning curve for the median agent is obtained by arranging the aggregate performances for the first \(29\) runs4 in increasing order, finding the median, and plotting the curve for the corresponding run index. Note that the learning curve for a higher \(\tau\) even for an individual run (median here) is not very noisy and the performance increases and stays between \(-100\) and \(-200\) most of the time except for a few occasional drops (verified empirically). Footnote 4: Having an even number of runs (\(30\)) requires averaging the middle two runs after arranging the aggregated runs in increasing order. However, the average is not representative of any single run and can hide the differences between the behavior of the individual runs. Hence we use \(29\) runs for the median. ### Replay Frequency Curve To get a bigger picture of DQN performance with increasing replay frequency, we plot the aggregate performance against replay frequencies in fig. 3. The y-values denote the aggregate (online) performance across all \(30\) runs each with \(250,000\) steps, which is computed using the method described in the second paragraph of section 4. The error bars are computed using Student-t confidence intervals and depict the uncertainty in the mean estimates. The curve indicates how well DQN with different replay frequencies performs, given a fixed number of interactions with the environment. Hence greater y-values denote better sample efficiency. Note that the curve does not depict the final policy learned after training. The performance of the final policy is better than the average aggregate performance during training shown in the curve. As shown in fig. 3, vanilla DQN (\(\tau=1\)) performs the worst. When the replay frequency is increased to \(\tau=2\) and \(\tau=4\), the increase in mean performance estimate is large and the uncertainty reduces. However, after \(\tau=4\), the change is not very large. Interestingly, the mean and uncertainty estimates degrade slightly when moving from \(\tau=8\) to \(\tau=16\) but improve from \(\tau=16\) to \(\tau=32\). From \(\tau=4\) onward, the confidence varies but is still more than \(\tau=(1,2)\). Even though aggregation hides the internal behavior of individual runs, fig. 3 helps find the suitable replay frequency for a given scenario. For instance, if we have computational constraints, we would prefer using \(\tau=4\) because it provides a good enough performance without a large increase in computation per step. However, if we care more about performance, we might trade \(8\) times more computation per step for an improvement in performance. 
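For completeness, here is a small sketch of how the per-run aggregate performance and the Student-t confidence interval described above can be computed; the function name and the array layout are ours, and the data would come from logging the online performance measure at every step of every run.

```python
import numpy as np
from scipy import stats

def mean_and_confidence_interval(step_returns, alpha=0.05):
    """step_returns: array of shape (num_runs, num_steps); entry [i, t] is the
    undiscounted return of the episode containing step t in run i."""
    per_run = step_returns.mean(axis=1)       # per-run aggregate (sum / num_steps)
    n = per_run.shape[0]
    mean = per_run.mean()
    sem = per_run.std(ddof=1) / np.sqrt(n)    # standard error of the mean
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)  # ~2.045 for 30 runs
    half_width = t_crit * sem
    return mean, (mean - half_width, mean + half_width)
```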
### Hyperparameter Sensitivity Analysis In this section, we assess the variation in algorithm behavior when interpolating across different hyperparameters. For doing so, we use two-dimensional sensitivity curves where only one hyperparameter value is varied while keeping the others fixed (Patterson et al., 2020). Sensitivity to a hyperparameter is assessed by how much the aggregate performance varies with a change in the hyperparameter value. Fig. 4 shows the curves for four DQN hyperparameters, each for \(\tau=(1,4)\). We choose \(\tau=4\) to compare against vanilla DQN (\(\tau=1\)). Our choice of \(\tau=4\) is justified by fig. 3. \(\tau=4\) provides a middle ground between good performance and the computation spent per step. Moreover, the uncertainty in the mean estimate is the lowest for \(\tau=4\). One objective of the sensitivity experiments is to see if the better sample efficiency of higher \(\tau\) values is only for the above specific hyperparameter setting or does it apply to a wide range of hyperparameter values. Knowing this is essential, especially for algorithm deployment in scenarios where hyperparameter tuning can be expensive. Another objective is to find the appropriate values of hyperparameters. Figure 3: Comparison of different replay frequencies in terms of aggregate performance measure and error bar (computed using Student t-distribution) over \(250,000\) interactions and \(30\) runs for DQN. The x-axis is on the log scale (base \(2\)). As done for fig 3, we fix the total number of samples for each \(\tau\) (30 runs each with \(250,000\) interactions) and assess the aggregate performance with confidence estimates. #### 4.4.1 Learning rate Fig. 4 (a) shows the curve for the learning rate \(\alpha\). We experiment with four values of \(\alpha:(0.0001,0.001,0.01,0.1)\). Our previous experiments used \(\alpha=0.001\). For \(\alpha=0.0001\), the learning is slow for both \(\tau\) values. However, \(\tau=4\) learns much faster than \(\tau=1\). For \(\alpha=(0.001,0.01)\), the mean performance estimate for \(\tau=4\) is higher than \(\tau=1\). Moreover, the uncertainty estimates are low with \(\tau=4\) with the lowest for \(\alpha=0.001\). For \(\alpha=0.1\), both \(\tau\) values result in worse performance. This is because a large \(\alpha\) takes overly aggressive gradient steps resulting in the agents learning nothing. Finally, if we look at the first three values of \(\alpha\), \(\tau=4\) is less sensitive (around the peak value at \(\alpha=0.001\)) to change in \(\alpha\) than \(\tau=1\), thereby making it relatively easier to choose an appropriate value of \(\alpha\) for deployment. #### 4.4.2 Batch size Fig. 4 (b) shows the curve for batch size \(B\). We experiment with eight values of \(B:(8,16,32,64,128,256,512,1024)\). Our previous experiments used \(B=32\). Note that \(1024\) is the maximum value of \(B\) that we can get without changing any other hyperparameter. This is because we fixed the replay start size to \(1024\) and hence choosing \(B\) greater than that would require changing the start size. However, we do not increase the replay start size to observe the sole effect of changing \(B\) and to remain data efficient (van Hasselt et al., 2019). The curve for \(\tau=1\) increases continuously with the peak mean performance at \(B=1024\). However, the uncertainty estimates of \(\tau=1\) are very high for all values of \(B\), thereby making it difficult to choose the appropriate value of \(B\). 
On the other hand, \(\tau=4\) is relatively less sensitive to a change in \(B\) and has better mean performance with low uncertainty, which lets us choose \(B=32\) that has the best mean performance and lowest uncertainty. One can argue that choosing a large value of \(B\) with \(\tau=1\) may provide benefits similar to increasing \(\tau\). However, this is not true because for a large \(B\), the worst value (lower end of the confidence interval) of the mean estimate for \(\tau=1\) is considerably lower than the worst value for \(\tau=4\).
Figure 4: DQN hyperparameter sensitivity curves for \(\tau=(1,4)\) with 30 runs: (a), (b), (c), and (d) show the sensitivity curve for learning rate, batch size, replay capacity, and target network refresh rate respectively. The error bars are computed using the Student-t distribution. The x-axis for all plots is on the log scale (base 2).
Moreover, choosing a large \(B\) may not always be feasible due to memory constraints as it requires loading more samples at each step (Stember and Shalu, 2021). Increasing \(B\) also requires more computation per step. We argue that it is instead wiser to spend more computation on increasing \(\tau\) while keeping \(B\) small. This is clear from the curve for \(\tau=4\) where \(B=(32,64)\) are the best performing values. Along similar lines, it is interesting to compare \(\tau=1,B=32\) with \(\tau=4,B=8\). Both use \(32\) samples to make parameter updates with the difference in the way they update. The first one uses all \(32\) randomly sampled transitions at once, while the second randomly samples \(8\) transitions and uses them to make an update, doing this sequentially four times at each step. It is clearly visible that the worst mean performance for the second is much better than the best mean performance for the first. This traces back to connections of replay with planning where putting a loop around the model-based update results in better performance without consuming additional data (van Hasselt et al., 2019). #### 4.4.3 Replay capacity Fig. 4 (c) shows the curve for the replay capacity \(\Omega\). We experiment with eight different values of \(\Omega:(0.5,1,4,10,20,100,200,250)\times 10^{3}\). Our previous experiments used \(\Omega=4000\). When \(\Omega\) is too small, the life of a transition in the buffer reduces as it gets discarded to make room for new transitions. Hence the replay buffer contains more transitions from a recent policy. This can have a negative effect as there is less chance of breaking correlations between the transitions used for updates. When \(\Omega\) is too large, the updates can use transitions from an old policy which can be distributed in parts of the state space irrelevant to solving the task. In the extreme case, \(\Omega\) can be logically equal to the total number of interactions with the environment, in which case it will not forget any experience (\(250,000\) in our experiments). The curve shows that the performance is worst when \(\Omega\) is relatively small. For \(\tau=4\), \(\Omega=(500,1000)\) result in inferior performance with high uncertainty in the mean estimates. However, for \(\Omega=4000\) and greater, the performance is less sensitive to a change in \(\Omega\) with relatively low uncertainty. \(\tau=4\) with \(\Omega=10,000\) gives the highest mean performance with the lowest uncertainty in the estimate.
Note that the mean performance and uncertainty estimates of \(\tau=4\) are better than \(\tau=1\) for all \(\Omega\) values, with the difference clearly visible for \(\Omega\) less than \(20,000\). For \(\tau=1\), the mean performance increases till \(\Omega=20,000\), after which it degrades slightly. However, high uncertainty in the mean estimates makes it difficult to pick an appropriate value of \(\Omega\) for \(\tau=1\). It is interesting to note that the nature of the replay capacity sensitivity curve is quite different for \(\tau=1\) and \(\tau=4\). The same holds for the batch size sensitivity curve from the previous subsection. \(\Omega,B,\) and \(\tau\) all are hyperparameters of experience replay and they may be interacting with each other in a non-trivial manner. Our two-dimensional sensitivity curves indicate a change in performance with variation in a single hyperparameter but do not capture interactions between multiple hyperparameters. It will be interesting to work on interacting hyperparameters in the future to get a deeper insight into the effects of increasing \(\tau\). #### 4.4.4 Target network refresh rate Fig. 4 (d) shows the curve for the target network refresh rate \(C\). We experiment with six values of \(C:(8,32,128,256,512,1024)\). Our previous experiments used \(C=128\). The frequency with which the lagging target network is refreshed affects the stability and performance of DQN. When \(C\) is too small, the delay between the time when the target is computed and the time when parameters are updated decreases, thereby causing oscillations or divergence of policy. However, when \(C\) is too large, the target is computed using a very old policy that may be very different from the current policy, which may cause inconsistency in the parameter update. Our results agree with this: both \(\tau=(1,4)\) seem sensitive to a change in \(C\) with a worse performance when \(C\) is too small or too large. It is interesting that a small \(C\) has a _milder_ effect on performance than a large \(C\) for \(\tau=4\). On the other hand, both extremes of \(C\) result in similar performance drops for \(\tau=1\). This hints that \(\tau=4\) is able to handle non-stationary targets better than \(\tau=1\). This might be because \(\tau=4\) reuses the data more to improve the Q-network's approximation of the value function faster than \(\tau=1\). Subsequently, refreshing the target network more frequently (small \(C\)) causes the target to become more accurate faster. When updating the parameters more frequently (\(\tau=4\)) with this better target, the drop in performance due to target oscillations is lesser. The highest mean performance occurs at \(C=128\) for \(\tau=4\) and at \(C=256\) for \(\tau=1\). However, the best value of \(C\) is not clear for \(\tau=1\) because of the high uncertainty in the mean estimate. The uncertainty is high for all values of \(C\) for \(\tau=1\) with the highest when \(C\) is small. For \(\tau=4\), it is easy to pick the appropriate value of \(C\): \(128\) is the best-performing value with relatively low uncertainty. ## 5 Conclusions In this work, we investigated how varying the replay frequency affects DQN's performance, sample efficiency, and sensitivity to hyperparameters in the Mountain Car environment. To validate our hypothesis, we experimented with different replay frequencies, measured the variability in performance, and tested with different hyperparameter values. 
The empirical results suggest that (1) increasing \(\tau\) results in better sample efficiency than vanilla DQN (\(\tau=1\)); (2) DQN with higher \(\tau\) values generally gives better mean performance with tighter confidence and tolerance intervals; and (3) higher \(\tau\) makes DQN less sensitive to other hyperparameters, thereby easing the task of hyperparameter selection. ## Software The code of our experiments is available at [https://github.com/animeshkumarpaul/IncreasingReplay](https://github.com/animeshkumarpaul/IncreasingReplay). ## Acknowledgements We took the initial codebase from Dongmin's repository. Thanks to Andy for sharing the code for tolerance intervals.
2301.09310
SaLoBa: Maximizing Data Locality and Workload Balance for Fast Sequence Alignment on GPUs
Sequence alignment forms an important backbone in many sequencing applications. A commonly used strategy for sequence alignment is an approximate string matching with a two-dimensional dynamic programming approach. Although some prior work has been conducted on GPU acceleration of a sequence alignment, we identify several shortcomings that limit exploiting the full computational capability of modern GPUs. This paper presents SaLoBa, a GPU-accelerated sequence alignment library focused on seed extension. Based on the analysis of previous work with real-world sequencing data, we propose techniques to exploit the data locality and improve workload balancing. The experimental results reveal that SaLoBa significantly improves the seed extension kernel compared to state-of-the-art GPU-based methods.
Seongyeon Park, Hajin Kim, Tanveer Ahmad, Nauman Ahmed, Zaid Al-Ars, H. Peter Hofstee, Youngsok Kim, Jinho Lee
2023-01-23T08:14:40Z
http://arxiv.org/abs/2301.09310v1
# SALoBa: Maximizing Data Locality and Workload Balance for Fast Sequence Alignment on GPUs ###### Abstract Sequence alignment forms an important backbone in many sequencing applications. A commonly used strategy for sequence alignment is an approximate string matching with a two-dimensional dynamic programming approach. Although some prior work has been conducted on GPU acceleration of a sequence alignment, we identify several shortcomings that limit exploiting the full computational capability of modern GPUs. This paper presents SALoBa, a GPU-accelerated sequence alignment library focused on seed extension. Based on the analysis of previous work with real-world sequencing data, we propose techniques to exploit the data locality and improve workload balancing. The experimental results reveal that SALoBa significantly improves the seed extension kernel compared to state-of-the-art GPU-based methods. Genome sequencing, Sequence alignment, Smith-Waterman, GPU acceleration ## I Introduction With the fast advances in next-generation sequencing (NGS) techniques, the monetary cost for DNA sequencing has been declining at a rate that is outpacing Moore's law [1]. However, on the other side of the coin, this rapid throughput in sequencing means that data processing has become a more severe bottleneck. For example, performing read mapping of the human genome on an Intel Xeon processor now takes more than 20\(\times\) the sequencing time [24]. _Read mapping_, a process in sequence alignment, maps a piece of query DNA (generated from sequencing) to matching locations of a reference DNA. However, the reference and query DNAs do not exactly match due to sequencing errors and/or mutations. Therefore, the problem falls into the category of approximate string matching. One popular strategy to solve this problem is _seed-and-extend_, where the seeding phase locates a few exact matches, and the extension phase performs approximate matching based on dynamic programming (DP) such as the Smith-Waterman algorithm [57]. Unfortunately, the Smith-Waterman algorithm has quadratic complexity with the input length, which continues to increase as NGS techniques evolve. In this work, we focus on the extension, which consumes a significant portion of the execution time for read mapping [12]. Many proposals have been made for dedicated hardware accelerators with ASICs [22, 23, 58] or FPGAs [10, 25, 26, 28, 62] to cope with the computational bottleneck problem. However, such accelerators are yet to be widely used in practice. The ASICs are expensive to produce and would have difficulty appearing in the market unless mass production was guaranteed. The FPGAs are less prone to this issue because many FPGA accelerator cards are already available on the market. However, FPGA accelerator cards are not yet the dominant option because they are expensive and hard to program. Under such circumstances, GPU-based accelerations can be viable options because they are cheaper, easier to find on the market, and require less effort to develop software. Therefore, many GPU-based libraries have been developed for bioinformatics [3, 9, 48]. With the fast growth in the computational capabilities of GPUs [17], GPUs will likely continue to be the primary option for accelerating sequence alignments. However, by analyzing the state-of-the-art GPU-accelerated seed extension, we identified several missing opportunities for performance improvement that have become critical, especially with long string queries. 
Considering that the lengths of sequence reads are rapidly growing with third-generation sequencers [55], the performance gap between the ideal case and the existing software will widen. Specifically, we found that existing GPU kernels 1) inefficiently utilize memory and 2) suffer from load imbalance. In this paper, we present SALoBa (**S**equence **A**lignment with Data **L**ocality and **W**orkload **B**alance), which addresses the mentioned problems to achieve superior performance. Taking lessons from several ASIC/FPGA accelerators [10, 23, 62], SALoBa puts together a kernel that makes better use of the computational capability of modern GPUs. SALoBa utilizes intra-query parallelism by allocating multiple CUDA threads to a query-reference pair. Even though this option has been studied by some prior art [13, 35], those approaches suffer from inefficient memory access patterns and resource underutilization, being outperformed by the current state-of-the-art, which only uses inter-query parallelism. SALoBa addresses those issues by introducing two novel techniques: lazy spilling to global memory and subwarp scheduling. In _lazy spilling_, the data are first accumulated to the CUDA shared memory and later spilled to the global memory in a coalesced manner. By utilizing the double-buffered shared memory region in a rotating manner, the redundant global memory access is greatly reduced without much overhead to the shared memory. In addition, _subwarp scheduling_ divides a warp into multiple subwarps to mitigate the underutilization problem. While this slightly increases the workload imbalance, the gain from better resource utilization often dominates the overhead from workload imbalance. According to the experimental results, SALoBa performs significantly faster than the state-of-the-art GPU aligner in the seed extension kernel over several sequence lengths. When tested on real-world datasets with various lengths, we obtained superior results due to better workload balancing. Our contributions can be summarized as follows: * We identify several unexploited opportunities from the current libraries for performance improvements. * We propose SALoBa, which provides significant speedup compared to the current state-of-the-art GPU-based alignment libraries. * We perform an extensive amount of evaluation, including synthetic and real-world data from a popular read alignment software to demonstrate the efficiency of SALoBa. ## II Background ### _Seed Extension_ The heart of the sequence alignment problem is approximate string matching. Because there could be multiple possible paths to take every time a mismatch occurs, its complexity quickly increases along with the length of the input pair. The seed-and-extend strategy is an approach to perform alignment efficiently. Based on the observation that a good alignment usually contains many matches, the strategy first finds multiple exact matches, called _seeds_. Based on these, approximate matching scores are calculated by _extending_ in both directions from the found seeds. Seed extension is often performed using a DP algorithm that fills a two-dimensional DP table with the complexity of \(O(N^{2})\). Popular algorithms are Smith-Waterman [57] and Needleman-Wunsch [51]. The score of each cell is calculated based on predetermined scores for the type of differences (i.e., insert, delete, and mismatch).
With some adjustments to account for frequent long gaps, the affine gap function calculates the score of a DP cell \(H(i,j)\) as follows: \[H(i,j)=\max\begin{cases}0\\ E(i,j)\\ F(i,j)\\ H(i-1,j-1)+S(i,j)\end{cases}, \tag{1}\] \[E(i,j)=\max\begin{cases}H(i,j-1)-\alpha\\ E(i,j-1)-\beta\end{cases}, \tag{2}\] \[F(i,j)=\max\begin{cases}H(i-1,j)-\alpha\\ F(i-1,j)-\beta\end{cases}, \tag{3}\] where \(S(i,j)\) is a score function that returns a positive value when the reference at \(i\) and the query at \(j\) match and returns a negative value otherwise. In addition, \(E\) and \(F\) are auxiliary variables that keep track of continuing gaps. Last, \(\alpha\) and \(\beta\) account for different gap penalties for new gaps and continued gaps, respectively. Fig. 1 presents an example DP table, where \(Q\) represents a query string, and \(R\) represents a reference string. To calculate a cell, the values from the three adjacent cells at the top, left, and top left are needed, represented by green arrows. The trace of the highest score in the DP table represents the best match found by the algorithm, denoted with red arrows and numbers. ### _Baseline GPU-based Seed Extension_ There have been several attempts to accelerate seed extension using GPUs. In sequence read data, each base is represented by a character data type of eight bits. However, only five bases exist within the sequences: A, C, G, T/U, and N, where T is used for DNA and U is used for RNA. The N denotes unknown bases. Having five bases indicates that at least three bits are needed to represent each. Because three-bit representations are inefficient to deal with in modern architectures, a four-bit packed representation is often used [3, 9] and some work utilizes an eight-bit representation [35]. Existing methods for GPU-based seed extension can be roughly categorized by the parallelism they utilize: _intra-query parallelism_ and _inter-query parallelism_. Intra-query parallelism refers to processing multiple cells in a DP table concurrently. As shown in Fig. 1, parallelism exists in an anti-diagonal form in the DP table. For example, after the cell (0,0) has been processed, cells (0,1) and (1,0) can be processed in parallel. Afterwards, cells (0,2), (1,1), and (2,0) can be processed in parallel. Many ASIC/FPGA accelerators successfully take advantage of this, commonly using one-dimensional systolic array structures [10, 23, 62]. Some GPU-based approaches utilize this type of parallelism [13, 35]. However, they often fail to achieve sufficient speedup due to resource under-utilization and inefficient memory access [13]. On the other hand, inter-query parallelism refers to simply processing multiple query-reference pairs in parallel. Because the latter makes it easier to optimize resource utilization, many successful libraries adopt this approach [3, 9, 39, 45].
Fig. 1: An example seed extension algorithm.
Among the libraries using inter-query parallelism, GASAL2 [9] is known to show the current state-of-the-art performance. In this section, we describe the details of its strategy. The GPU registers are 32-bit wide; thus, eight bases from each query and reference are fetched in a single step. Therefore, it is natural to process an 8x8 block of cells at once, corresponding to a single-word reference and a single-word query. After processing the block of 8x8 cells, the thread advances to the right. For this, a column of the eight rightmost cells must be stored for dependency. The dependency is fulfilled by keeping the values of these DP cells within the registers.
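For reference, a plain (single-threaded) Python implementation of the recurrence in Eqs. (1)-(3) is sketched below. It only illustrates the algorithm that the GPU kernels accelerate; the scoring values are placeholders, and nothing here reflects the data layout of SALoBa or GASAL2.

```python
def smith_waterman_affine(ref, query, match=1, mismatch=-4, alpha=6, beta=1):
    """CPU reference of the affine-gap recurrence in Eqs. (1)-(3);
    returns the best local alignment score."""
    n, m = len(ref), len(query)
    NEG = float("-inf")
    # H, E, F indexed over reference position i and query position j.
    H = [[0.0] * (m + 1) for _ in range(n + 1)]
    E = [[NEG] * (m + 1) for _ in range(n + 1)]
    F = [[NEG] * (m + 1) for _ in range(n + 1)]
    best = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if ref[i - 1] == query[j - 1] else mismatch
            E[i][j] = max(H[i][j - 1] - alpha, E[i][j - 1] - beta)  # continuing gap along the query
            F[i][j] = max(H[i - 1][j] - alpha, F[i - 1][j] - beta)  # continuing gap along the reference
            H[i][j] = max(0.0, E[i][j], F[i][j], H[i - 1][j - 1] + s)
            best = max(best, H[i][j])
    return best

print(smith_waterman_affine("ACGTACGT", "ACGACGT"))  # best local score
```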
When a thread reaches the rightmost end of the table, it moves to the bottom part of the DP table. Then, the cells from the top must be accessed for dependency, and the thread stores the bottom eight cells of each 8x8 block in the global memory. Thus, considering the matrix size of \(N\times N\), the amount of global memory access becomes \(N\times N/4\) for reading and writing dependent cell values. ## III Motivation - Diagnosis In this section, we diagnose the state-of-the-art aligner GASAL2 [9] and discuss potential opportunities for further performance improvement. First, we reveal a significant load imbalance between individual queries. Second, we identify that the use of global memory is not coalesced, which requires a significant amount of redundant memory access. ### _Load Imbalance_ As discussed in Section II-B, the main parallelization scheme of GASAL2 is inter-query parallelism. In the strategy of letting each CUDA thread handle a single query, the main advantage is that there is no complicated inter-thread synchronization or costly communication. However, one drawback of this approach is that it ignores the variance in the workload for the seed extension. The seeding step provides the query and reference sizes as input to the seed extension. Because of this, the lengths vary by individual inputs, and a significant imbalance occurs. Fig. 2 plots the distribution of the query and reference sequences from two types of reads (see Section V for detailed settings) from the popular alignment software BWA-MEM [37]. As illustrated, both distributions range from zero to several hundred or thousand and are not well clustered, implying a substantial amount of warp divergence within GPUs. As depicted in the figure, the difference of length between the shortest and longest strings can be up to \(10\times\) for both the query and reference string. Provided that the computational complexity is proportional to the multiple of the two string lengths, the workload imbalance could be very large in practice. ### _Memory Inefficiency_ With four-bit sequence packing, it is rational to process \(8\times 8\) DP cells at a time. When moving to the next cell,1 the current cell content can be captured within the registers. However, the dependency structure of the seed extension also requires long-term storage of the cell information to process the next row of \(8\times 8\) cells. This intermediate data well exceed the size of the register file and are usually stored in the global memory (i.e., DRAM). Footnote 1: Without loss of generality, we assume that cells are processed left to right and top to bottom. However, this scheme incurs two inefficiencies. First, the intermediate data need not remain in the global memory at the end of the kernel and are considered overhead. Therefore, reducing this overhead contributes to better performance. Second, the minimum access size of the GPU's global memory is 128 B (or 32 B from Volta [5] architecture, as identified by [32]), whereas the individual cell data size is only 4 B. If not captured by the L2 cache, this will incur a number of redundant access instances for the same intermediate data, leading to an inefficient kernel. \begin{table} \begin{tabular}{l c} \hline \hline Data & Quantity \\ \hline Necessary & \(2N\) \\ Stored & \(2N+N^{2}/4\) \\ Accessed (Until Pascal [4]) & \(128N+16N^{2}\) \\ Accessed (After Volta [5]) & \(32N+4N^{2}\) \\ \hline \hline \end{tabular} \end{table} TABLE I: Amount of Data Stored and Accessed for the Existing GPU Aligner Fig. 
2: Distribution of the sequences as input to the seed extension in BWA-MEM [37]. TABLE I lists the data volume, input data necessary for the extension (Necessary), data stored for intermediate data (Stored), and data accessed due to the access granularity (Accessed). The table reveals that the inefficiency is multifold even with modern architectures, which provides another opportunity for improvements. In this work, we demonstrate that SALoBa can remove much of this access, leading to superior performance. ## IV SALoBa Design ### _Intra-query Parallelism_ Because each cell in the DP table has dependencies from the top, left, and top-left cells, it is known that intra-query parallelism exists in an anti-diagonal form, as depicted in Fig. 3. Adopting the idea, we use multiple CUDA threads to concurrently handle anti-diagonal elements. A slight difference is that instead of individual threads computing a single cell at a time, the cells are assigned to threads in \(8\times 8\)_blocks_ because of the 4-bit packing on the reference and query sequences. In the first version of the library, we decided that 32 threads at most--a warp--should collaborate in processing a query. Because intra-warp synchronization is relatively cheap or free (for GPUs before introduction of independent thread scheduling [5]), this becomes an attractive design choice. While using more than 32 threads is theoretically possible, it would require threadblock level synchronizations (i.e., __syncthreads()) that cause non-negligible overhead. For communication between the threads, we used the CUDA shared memory. Using shared memory as a communication channel and reusing it every iteration, only a fixed amount of shared memory is needed, and all access to the shared memory is conflict-free (see Section IV-B). Fig. 3 (a) and (b) illustrate the procedure of SALoBa. The largest units of computation are called _chunk_s, which are horizontal partitions of the DP table. The chunks have the height of 32 blocks and the width of the entire query sequence. As displayed in the figure, a thread processes a block in a _strip_ (i.e., a row of blocks) of the chunk per step. In the first step, the first thread starts the processing by fetching a 32-bit word from each of the query and reference sequences, which is enough to process an \(8\times\)8 _block_ of cells. The number of ready-to-process cells increases by one per step in the upcoming steps. Therefore, the number of threads participating in the computation increases until the 32nd step, forming a 31-step long _prologue_. In the main loop stage,2 all threads in the warp simultaneously process blocks in an anti-diagonal manner, advancing one block to the right per step. This process continues for \(Q-31\) steps, where \(Q\) is the number of the blocks in the query sequence (i.e., \([query\_length/8]\)). When the first thread reaches the end of the row, the _epilogue_ starts, and the number of participating threads decreases by one per step until the last thread reaches the end of the row. At the end of the epilogue, the total number of steps taken for processing the entire chunk is \(Q+31\). Footnote 2: This pattern is also called the ‘kernel,’ but we decided to call it the main loop stage because it would be confusing with the term GPU kernel. Each time the threads advance to the next step, they access the dependency data from the cells on the top, left, and top-left (Eq. 3). Fig. 3 (c) demonstrates how this is done. 
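The step-by-step schedule just described (prologue, main loop, epilogue) can be modeled on the CPU with a few lines of Python. This is only a schematic of which thread works on which block column at each step, not the CUDA kernel itself, and the function name is ours.

```python
def warp_schedule(num_threads, q_blocks):
    """For each step, list the (thread, block_column) pairs that are active
    when `num_threads` threads sweep one chunk of `q_blocks` block columns,
    each thread starting one step after the one above it."""
    total_steps = q_blocks + num_threads - 1
    for step in range(total_steps):
        active = [(t, step - t) for t in range(num_threads)
                  if 0 <= step - t < q_blocks]
        yield step, active

# With 4 threads and 6 block columns: steps 0-2 form the prologue,
# steps 3-5 keep all threads busy, and steps 6-8 form the epilogue.
for step, active in warp_schedule(4, 6):
    print(step, active)
```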
When a block is calculated, the data from the eight cells to the left are what the current thread computed in the previous step. Therefore, the data can be stored in the register. The data from the eight cells at the top were computed by the adjacent thread in the previous step. At the end of processing a block, each thread writes the bottom 8x1 cells to the shared memory. In the next step, each thread reads the data in the next position of the shared memory so that it can receive the 8x1 cells from the block above. Last, one cell at the top left has a dependency on the current block, which is what the thread received from the shared memory in the previous step. Fortunately, it can also be passed using the register. Thus, the number of cells stored in the register becomes nine instead of eight. This scheme not only reduces the warp divergence but also improves memory access. In the previous technique, all bottom cells in each strip must be stored and read back to/from the global memory. In contrast, with intra-query parallelism, only the bottom cells of each chunk (not of the strip) are stored and read to/from the global memory. For a 32-thread warp, this reduces the amount of intermediate data access to 1/32.
Fig. 3: Intra-query parallelism. (a) shows the parallelization strategy using a warp, (b) illustrates the terms used in this figure, (c) demonstrates how the dependency between cells is handled, and (d) depicts the use of the anti-diagonal structure.
### _Lazy Spill to Global Memory Access_ For processing a chunk, the data from the bottom-most cells must be stored for processing the top-most cells of the next chunk. Usually, this easily exceeds the shared memory capacity and must be stored in the global memory. In a naive scheme, as in Fig. 4 (left), the last thread (\(t31\)) stores the bottom-most cells of a block in the global memory, and the first thread (\(t0\)) reads them from the global memory as the next chunk is processed. However, as diagnosed in Section III, this results in an abundant amount of non-coalesced access, becoming a source of inefficiency. To address this problem, we used _lazy spilling_, implemented using double-buffered shared memory for each warp, to reduce the amount of global memory access. Fig. 4 (right) indicates the shared memory location where the threads write at each step. For each warp, a shared memory region is allocated with the size of \(2\cdot dim(block)\cdot\#threads\). As explained in Section IV-A, the first thread (\(t0\)) starts writing at the first location of the shared memory, which is passed to the second thread before the next step. In each step, the threads read data from the shared memory, process a block, and overwrite the bottom cells of the block to the shared memory. Then, all threads shift one block location for the next step. After the prologue, the threads leave a data trail, written by the last (\(t31\)) thread. Luckily, these are the data to be stored in the global memory. When the first thread (\(t0\)) reaches the end of the buffer at step 64, the shared memory has a consecutive region of data at the left side which is large enough to be processed by a warp. The threads stop processing and gather together to spill the data to the global memory in a coalesced write. Afterward, the threads wrap around in the shared memory, and the trail of block data starts forming on the right half of the shared memory, which is later spilled to the global memory.
The spilled data from the previous chunk must be read to process the next chunk so that the first thread can use the data to calculate the cell values. This process is also performed in the same way using the double-buffered shared memory in the opposite direction of the writes. ### _Subwarp Scheduling: Trade-off Between Parallelism and Workload Balancing_ The proposed strategy--a warp per query--can eliminate the workload imbalance between threads in a warp because they work together. However, as demonstrated in Section IV-A, a problem exists in the prologue and epilogue. The average thread utilization over these phases is 50% (because it has a triangular form), and their size is proportional to the warp size. The DP table must be significantly larger than the number of threads in a warp to amortize this overhead. However, according to Fig. 2, the size of the table dimension is only around several hundred, which is only a few times larger than the size of 32, and sometimes even smaller. Fig. 4: Lazy spill optimization. (Left): Naive global memory access. (Right): Proposed optimized access. Each small rectangle represents 8x1 cells stored for dependency. A colored rectangle labeled ‘\(\tttt\)’ denotes that it is being accessed by thread# and a gray rectangle labeled ‘block #’ represent that it is from block id # and is ready to be spilled to the global memory. We chose to split the warp into multiple _subwarps_ and let them process a single query, as in Fig. 5, similar to the idea of VWC [27] used in graph processing. Its effect is essentially a trade-off between workload balancing and resource utilization. When many subwarps are used (i.e., the subwarp size is smaller), the sizes of the prologue and epilogue become smaller; therefore, the average utilization increases. However, multiple subwarps make the scheme suffer more often from warp divergence due to workload imbalance. In contrast, when fewer subwarps are used (i.e., the subwarp size is larger), it is less likely to suffer from imbalance at the cost of lower utilization from increased portions of the prologue and epilogue. Empirically, we found that using two or four subwarps yields the best results in our setting (see Section V). One potential drawback is that using subwarps complicates the memory access optimization. If the same double-buffered technique is used, the number of threads accessing the consecutive memory address becomes less than 32. Subwarps larger than eight threads do not incur a considerable problem for architectures later than NVIDIA Volta [32]. For older architectures, the problem can be solved by allocating \(N+32\) slots of shared memory instead of \(2N\). By making the active region rotate around and making all threads in the entire warp write to 32 slots of data to the global memory together, a full coalescing can be achieved at the expense of a slightly higher shared memory capacity requirement. ## V Evaluation ### _Experimental Setup_ We evaluated SALoBa on two platforms. First, as an 'affordable' system, we used a GTX1650 GPU card based on the Turing [6] architecture. The server has a six-core Intel i5-9600K CPU with 16 GB of RAM. We also conducted experiments on a 'high-end' system with an RTX3090 GPU card based on the Ampere [17] architecture. The server has a single socket, 12-core AMD EPYC 7272 CPU and 128 GB of RAM. Both machines run on Ubuntu 18.04 with CUDA version 11.2 and NVIDIA driver 460.27.04. 
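Before turning to the evaluation, the utilization side of the subwarp trade-off described in Section IV-C can be captured with a back-of-the-envelope model; the sketch below ignores the warp divergence caused by unequal query lengths, which is exactly the cost that works against smaller subwarps.

```python
def wavefront_utilization(q_blocks, width):
    """Fraction of thread-steps doing useful work when `width` threads sweep
    a chunk of `q_blocks` block columns (prologue/epilogue included)."""
    return q_blocks / (q_blocks + width - 1)

# A 256 bp query packed 8 bases per word gives Q = 32 block columns.
q = 32
for width in (32, 16, 8):  # full warp vs. subwarps of 16 and 8 threads
    print(width, round(wavefront_utilization(q, width), 3))
# Smaller subwarps shrink the prologue/epilogue and raise utilization,
# at the cost of more divergence when query lengths differ.
```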
For comparison, we used the seed extension kernels from the following libraries as the baseline listed in TABLE II. **SOAP3-dp [50]** is an early algorithm for GPU-assisted short read alignment, which utilizes inter-query parallelism. **CUSHAW** is a family of GPU-assisted short read alignment algorithms. While most of its members [43, 44, 48] use GPU only for seeding phase, **CUSHAW2-GPU**[45] accelerates the seed extension part with a GPU using a similar strategy to SOAP3-dp. Better performance is obtained by compacting the global memory storage format and using CUDA texture memory. **NVBIO [3]** is a library of reusable components designed by NVIDIA to accelerate bioinformatics applications using CUDA. **SW# [35]** is an algorithm that utilizes intra-query parallelism. It splits the DP table into multiple anti-diagonal partitions and processes one partition per each kernel launch. Because it launches the kernel multiple times per query, it targets very long sequences. **ADEPT [13]** is the most recent work among algorithms that use intra-query parallelism. It uses shuffle instruction along with binary masking to make use of the anti-diagonal parallelism. Lastly, **GASAL2 [9]** is the state-of-the-art library. It achieves superior performance by further executing the sequence packing on GPU devices. \begin{table} \begin{tabular}{l l l l} \hline \hline Kernel & Parallelism & Bitwidth & Mapping \\ \hline SOAP3-dp [39] & inter-query & 2 bits & one-to-one \\ CUSHAW2-GPU [45] & inter-query & 2 bits & one-to-many \\ NVBIO [3] & inter-query & 2,4,8 bits & one-to-many \\ GASALE [9] & inter-query & 4 bits & one-to-one \\ SW\# [35] & INTRA-query & 8 bits & one-to-many \\ ADEPT [13] & INTRA-query & 8 bits & one-to-one \\ \hline \hline \end{tabular} *(All kernels have been modified to have at least four-bit GPU-assisted packing and to support one-to-one mapping.) \end{table} TABLE II: Baseline Kernels Under Comparison Fig. 5: Subwarp scheduling. For a fair comparison, we have put our best efforts into optimizing the existing methods under the same environment. Most importantly, we assume on-GPU sequence packing for all methods. GASAL2 is known to achieve state-of-the-art performance using a custom sequence packing kernel executed on GPUs. While the strategy is successful, the packing is orthogonal to the seed extension itself and the same method can be applied to all other kernels. We have modified SOAP3-dp, CUSHAW2-GPU, and ADEPT kernels to support five possible types of literals by unifying the kernels to work on four-bit packing. We left SW# to use its original eight-bit packing, because modifying it to support packing would complicate the memory access behavior. In addition, we have modified some kernels (CUSHAW2-GPU, SW#, and NVBIO) that only have one-to-many alignment modes to support one-to-one alignment. ### _Kernel Performance Measurements_ In this subsection, we compare the performance of the seed extension kernels without load imbalances using input reads of equal lengths. To measure the performance under various input lengths, we used an in-house sequence read simulator similar to Wgsim [7] to generate synthetic reads for each length in the range of 64 to 4096. Each kernel processed 5,000 reads per call 200 times, and these results were averaged to output the results in Fig. 6. The execution times were measured with the \(cudaEventElapsedTime()\) API. Fig. 
6 (a) and (b) present the performance comparison of the methods on the GTX1650 card, and (c) and (d) present that on the RTX3090 card. For lengths equal to or longer than 128 bp, SALoBa outperforms all other methods. For a very short length of 64 bp, the execution time of NVBIO is slightly shorter than SALoBa (0.42 ms vs 0.51 ms in GTX 1650 and 0.21 ms vs 0.24 ms in RTX 3090). This is reasonable because with intra-query parallelism, the effect of resource under-utilization from the prologue and epilogue is relatively significant. However, the break-even point is found at 128 bp, where the reduced global memory access of SALoBa becomes dominant. The methods other than SALoBa that utilize intra-query parallelism (SW# and ADEPT) perform poorly compared to the others. SW# is especially slow because it divides a single DP table into multiple kernel calls where one kernel call processes an anti-diagonal group, resulting in very low resource utilization. A rather surprising observation is that GASAL2 does not always show superior performance over the other baselines. While GASAL2 reports multi-fold speedup compared to its previous work, its speedup is mainly from the on-GPU packing, not the extension strategy. Because we combined CUSHAW2-GPU with the on-GPU packing module taken from GASAL2, it shows comparable results on GTX1650 for all lengths and slightly better performance on RTX3090 for long sequence lengths. The best speedup obtained at short lengths by SALoBa is observed at 512 bp. The speedup is 27.7% on GTX1650 and 43.6% on RTX3090 against GASAL2. At longer sequence lengths, some baseline kernels fail to run due to structural limitations (ADEPT) or bounded device memory (NVBIO and SOAP3-dp). The performance trend becomes more consistent for longer lengths because the portion of the overhead of global memory access in execution time diminishes. Another cause for this trend would be the decrease of the relative size of the prologue/epilogue compared to the sequence length. For inputs equal to or longer than 1,024 bp, the speedup of SALoBa against GASAL2 is consistently around 30% on GTX 1650 and 50% on RTX 3090. Against CUSHAW2-GPU, the speedup is around 40% on GTX1650 and 20% on RTX 3090.
Fig. 6: Performance comparison between extension kernels. The y axes of (a) and (c) are in log scale for better visibility.
### _Ablation Study_ Fig. 7 breaks down the contributions of the three techniques composing SALoBa. The speedups are normalized against GASAL2, which performs reasonably well along all sequence lengths we target. The most effective technique for shorter lengths (\(\leq\)1024) is subwarp scheduling, which is expected because at shorter inputs, the portions of the prologue and the epilogue are very large. Therefore, switching to intra-query parallelism on its own yields performance degradation. Using subwarp scheduling directly reduces the overhead and provides a substantial speedup. A seemingly large speedup is observed at 64 bp for both GTX1650 and RTX3090, which is counter-intuitive because there is less intra-query parallelism that SALoBa can exploit. However, further analyzing Fig. 6 (a) and (c) reveals that it is the inefficiency of GASAL2, not a speedup of SALoBa. GASAL2 has a relatively large memory initialization cost at the beginning, and using a small sequence length fails to amortize it. When compared to NVBIO instead of GASAL2 at 64 bp, the result correctly reflects the intuition that inter-query parallelism is better suited for very short queries.
As the input becomes longer, the gain from using subwarps becomes marginal, and the effects of the other two techniques become significant. Intra-query parallelism and lazy spilling both contribute to less global memory access. The former reduces the amount of data stored in the global memory, whereas the latter reduces redundant access by coalescing access better. Therefore, these become the main driver for performance gains in longer input ranges. Interestingly, in RTX3090, intra-query parallelism has more effect, whereas lazy spilling is more effective for GTX1650. We believe this is explainable by the fact that RTX3090 has a higher computation/memory bandwidth ratio. RTX3090 has a peak performance of 35.58 TFlops, and its memory bandwidth is 936.2 GB/s using GDDR6X. On the other hand, GTX has a peak performance of 2.98 TFlops with 128.1 GB/s memory bandwidth. The ratio is 38.91 Flops/B in RTX3090 and 23.82 Flops/B in GTX1650. This means that RTX3090 is generally more bottlenecked at its memory. The major drawback of intra-query parallelism is the low CUDA thread utilization. However, as RTX3090 is more bottlenecked in memory bandwidth, it is likely that there are more computational resources to process the data even when some are being idle. This partially offsets the utilization problem in RTX3090. ### _Real-world Data Experiments_ To test SALoBa under workload imbalance with the realistic distribution of workload sizes, we generate seeds from BWA-MEM [37] using real-world datasets. It takes a whole reference genome sequence and many sequence reads to generate seeds that are composed of multiple pairs that can be processed by extension kernels. For the reference genome sequence, we used the latest release of the human genome assembly project, Build 38 patch release 13 (GRCh38.p13) [2] that has 3.1G base pairs (bp). For the input sequence reads, we used two datasets downloaded from the sequence read archive [36]. The **dataset A** (SRR835433) is from a 2nd generation sequencer Illumina MiSeq and represents short reads where each sequence has a length of 250 bp. The dataset comprises 8.3M sequences which are randomly read genome parts from a human. Each sequence is read twice, resulting in the total number of base pairs to be 250\(\times\)8.3M\(\times\)2=4.1 Gbp. The **dataset B** (SRP091981) is from a 3rd generation sequencer PacBio RS and represents long reads. It also contains randomly read genome parts from a human where there are 82K sequences with variable lengths that averages around 2,000 bp to a total of 182.4 Mbp. The distributions of the seeds processed by BWA-MEM [37] are presented in Fig. 2. The results are plotted in Fig. 8. Fig. 8 (a) shows the performance for short read dataset A. The best speedup of SALoBa over the baseline GASAL2 is 32.5% for GTX1650 and 20.2% for RTX3090. SOAP3-dp could not complete the workload on GTX1650 as some of the inputs exceeded the length it could process. The speedup values observed from SALoBa are slightly larger than that of Fig. 6 (a) and (c) due to the fact that SALoBa suffers less from workload imbalance. While the performance of SOAP3-dp and NVBIO are inferior to GASAL2, CUSHAW2-GPU exhibits some speedup over GASAL2 (with the help of GASAL2's on-GPU packing kernel). However, its speedup is smaller than that of SALoBa. 
ADEPT achieves a similar speedup compared to SALoBa only on RTX3090, but the speedup comes from placing all the intermediate values in the shared memory (no global memory access), which fundamentally limits ADEPT up to 1024 bp. An interesting difference between GTX1650 and RTX3090 is that the optimal subwarp size was 16 for GTX1650 and eight for RTX3090 (see Fig. 8 (c)). This outcome is due to Fig. 7: Ablation study for (a) GTX1650 and (b) RTX3090. the different effectiveness of each technique. According to Fig. 7, the benefit of applying subwarp scheduling for shorter sequences was 2.26\(\times\) for GTX1650 and 2.85\(\times\) for RTX3090 in geomean. In GTX1650, the benefit of using subwarps is partially offset by the overhead from workload imbalance, shifting the optimal configuration to 16 threads per subwarp. Fig. 8 (b) reveals the performance for dataset B with longer reads. On this dataset, SOAP3-dp, ADEPT, and NVBIO fail to run due to their input length limitations. The speedup of SALoBa greatly improves compared to that of Fig. 6 (b) and (d) because the amount of workload imbalance increases which works in favor of SALoBa. The best speedup is around 2.1\(\times\) for both GTX1650 and RTX3090. Subwarps with 16 threads per subwarps gained the best performance, which is the sweet spot between exploiting intra-query parallelism and reducing the workload imbalance. ## VI Related Work ### _GPU-based Smith-Waterman Algorithm Implementations_ Accelerating Smith-Waterman algorithm [57] with GPUs has been studied for many years. Some earlier work based on OpenGL library [40, 41] addressed issues for mapping the algorithm on graphics pipeline. A popular CUDA implementation can be found from the CUDASW++ family [42, 47, 49]. They suggest utilizing both the intra- and inter-query parallelism based on a pre-determined sequence length threshold [42]. Similar approaches can be found from GSWABE [46] and gpu-pailalign [14]. In addition, focused on all-to-all patterns for protein DB alignments, query profiling optimization [47] and CPU-GPU hybrid parallelism [49] have been suggested. However, such optimizations are often too specific to protein alignments, and cannot be easily generalized to other domains such as DNAs. In such regard, the CUDAIign family [20, 21, 52, 54] focus on a general Smith-Waterman algorithm using intra-query parallelism. CUDAIign [54] splits the DP table into multiple blocks and distributes them to threadblocks. Then, the communication between the threadblocks are done using a dedicated region in the global memory called the 'horizontal bus' and'vertical bus'. The algorithm is later extended to support linear space algorithm to reduce the memory size [52]. Its recent versions support multi-GPU alignment with execution time prediction [21] and traceback [20]. When the DP table is too large, approaches such as MultiBP [18], MASA [19], and SW# [35] suggest block pruning [53] that removes some blocks that can never achieve the optimal score. However, these algorithms are optimized for huge sequences that sometimes do not even fit into a single GPU card [33]. For DNA alignment softwares with seed-and-extend strategies, the input sequence length for the extension ranges around several hundreds even with the long DNA reads. Because of this, libraries that rely on inter-query parallelism [3, 9] often perform much better for DNA alignments as demonstrated in [13]. 
On the other hand, SALoBa outperforms the previous approaches on DNA alignment scenarios by adopting several careful optimizations. ### _GPU-accelerated Sequence Alignment Softwares_ Some approaches design an end-to-end DNA alignment software that are friendly to GPU architectures. SARUMAN [15] and GPU-RMAP [11] are some early methods that implement hashtable lookups on GPUs to perform short read mappings. With the increased use of the Burrows-Wheeler transform (BWT) [38] based on suffix-trees, BarraCUDA [34], SOAP3 [39] and CUSHAW1 [48] were introduced with GPU implementations of BWT. SOAP3-dp [50], CUSHAW2-GPU [45], and LOGAN [60] are later methods that adopt seed-and-extend strategy, and Arioc [59] further expands the strategy to multiple GPUs. However, these methods often sacrifice alignment quality for the sake of throughput. For example, SOAP3 and LOGAN only supports limited number of mismatches, and CUSHAW2-GPU packs the sequences to two bits by converting the fifth 'N' bases to random bases. Although the approaches above show good throughput with only a small amount of quality degradation, the industry has grown to value quality over speed. As high-quality algorithms such as BWA-MEM [37] became a de facto standard, the usability of the GPU-aware alignment softwares were limited. Some approaches [13, 29, 30, 31] tackle this problem and design a seed extension kernel general enough to be used for BWA-MEM using intra-query performance. However, later approaches based on inter-query parallelism outperformed these kernels, which is the strategy adopted by the current state-of-the-art methods such as NVBIO [3] or GASAL2 [9]. Finally, it is worth mentioning cuBLASTP [61], a GPU accelerated protein search algorithm that uses a variant of seed-and-extend strategy. However, cuBLASTP only accelerates the Fig. 8: Performance of SALoBa on a real-world data. The speedup is normalized to that of GASAL2. seeding part using GPU, and leaves the extension part to CPU for utilizing CPU-GPU hybrid parallelism. ### _FPGA/ASIC Acceleration_ As interest in the genomics pipeline has increased, more researchers have considered designing dedicated accelerators using FPGAs or ASICs. [62] provided a implementation of the Smith-Waterman algorithm on an Altera FPGA, followed by a few other studies using banded algorithms [16, 26] or flexible systolic arrays [10, 28]. Moreover, DRAGEN [56] is an FPGA-based platform from Illumina that implements a full end-to-end analysis into an FPGA. Darwin [58] is an ASIC accelerator that speeds up the whole genome sequencing through the co-design of both the software and hardware, powered by gapped filtering. GenAx [22] is an automata-based accelerator for both seeding and extension. Although these approaches provide significant speedup, FPGAs or ASICs are much harder to design and it requires a long time to reach the market. Compared to them, GPU-based alignment software could be a quick solution easily adopted by any system that has GPUs attached. ## VII Discussion ### _CUDA Shuffle Instructions_ The CUDA shuffle instruction is an alternative to shared memory by allowing a direct register-to-register data exchange for inter-thread communication. However, using shuffle-based communication did not add any additional speedup on top of SALoBa. This outcome aligns with the CUDA specification that the throughput of the shuffle instructions is almost the same as the nonconflicting shared memory reads [8]. One potential gain from shuffle instructions is reducing shared memory usage. 
However, the portion of data cached in the shared memory was negligible in the current scenario and did not provide noticeable speedup. ### _Banded Algorithms_ Many seed extension methods use the banded algorithm to reduce the computational burden. Taking advantage of the fact that the best matching path of the DP table is usually near the diagonal, processing cells within some predetermined width (the _band_) often yields solutions of sufficient quality. However, most GPU-based extension libraries do not exploit this. One problem is that the band sizes for each query are often different, which worsens load balancing. However, we envision that banded algorithms would have more benefits and could be adopted for better performance with longer reads. ### _Multiple GPUs_ It is often beneficial to use multiple GPUs, especially when equipped on a single machine. We believe that extending this work by splitting the queries into equal numbers and assigning them to multiple GPUs would be straightforward. One possible drawback of such a strategy would be the load imbalance between the GPUs. However, the penalty would be small compared to the thread-level imbalance problem. If this becomes significant, it could be solved by dynamic assignment or preprocessing with approximate sorting. ## VIII Conclusion We proposed SALoBa, a new GPU-based seed extension library, exhibiting state-of-the-art performance over the existing work. The experiments reveal that the software performs well on modern devices, and the performance gain increases with recent GPUs. We believe this software will be useful in fields where sequence alignment times are critical for diagnosing fatal diseases. ## Acknowledgements This work has been supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2022R1C1C1008131, 2022R1C1C1011307) and Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (2020-0-01361, Artificial Intelligence Graduate School Program (Yonsei University), 2021-0-00853, Developing Software Platform for Programming of PIM).
2308.13657
Transcendence of Sturmian Numbers over an Algebraic Base
We consider numbers of the form $S_\beta(\boldsymbol{u}):=\sum_{n=0}^\infty \frac{u_n}{\beta^n}$ for $\boldsymbol{u}=\langle u_n \rangle_{n=0}^\infty$ a Sturmian sequence over a binary alphabet and $\beta$ an algebraic number with $|\beta|>1$. We show that every such number is transcendental. More generally, for a given base~$\beta$ and given irrational number~$\theta$ we characterise the $\overline{\mathbb{Q}}$-linear independence of sets of the form $\left\{ 1, S_\beta(\boldsymbol{u}^{(1)}),\ldots,S_\beta(\boldsymbol{u}^{(k)}) \right\}$, where $\boldsymbol{u}^{(1)},\ldots,\boldsymbol{u}^{(k)}$ are Sturmian sequences having slope $\theta$. We give an application of our main result to the theory of dynamical systems, showing that for a contracted rotation on the unit circle with algebraic slope, its limit set is either finite or consists exclusively of transcendental elements other than its endpoints $0$ and $1$. This confirms a conjecture of Bugeaud, Kim, Laurent, and Nogueira.
Florian Luca, Joel Ouaknine, James Worrell
2023-08-25T20:12:00Z
http://arxiv.org/abs/2308.13657v1
# Transcendence of Sturmian Numbers over an Algebraic Base ###### Abstract We consider numbers of the form \(S_{\beta}(\boldsymbol{u}):=\sum_{n=0}^{\infty}\frac{u_{n}}{\beta^{n}}\) for \(\boldsymbol{u}=\langle u_{n}\rangle_{n=0}^{\infty}\) a Sturmian sequence over a binary alphabet and \(\beta\) an algebraic number with \(|\beta|>1\). We show that every such number is transcendental. More generally, for a given base \(\beta\) and given irrational number \(\theta\) we characterise the \(\overline{\mathbb{Q}}\)-linear independence of sets of the form \(\left\{1,S_{\beta}(\boldsymbol{u}^{(1)}),\ldots,S_{\beta}(\boldsymbol{u}^{(k) })\right\}\), where \(\boldsymbol{u}^{(1)},\ldots,\boldsymbol{u}^{(k)}\) are Sturmian sequences having slope \(\theta\). We give an application of our main result to the theory of dynamical systems, showing that for a contracted rotation on the unit circle with algebraic slope, its limit set is either finite or consists exclusively of transcendental elements other than its endpoints \(0\) and \(1\). This confirms a conjecture of Bugeaud, Kim, Laurent, and Nogueira [3]. ## 1 Introduction A famous conjecture of Hartmanis and Stearns asserts that a real number \(\alpha\) whose sequence of digits can be produced by a linear-time Turing machine (in the sense that for all \(n\), given input \(n\) in unary the machine outputs the first \(n\) digits of \(\alpha\) in time \(O(n)\)) is either rational or transcendental. This conjecture remains open and is considered to be very difficult. A weaker version--proposed by Cobham and eventually proved by Adamczewski, Bugeaud, and Luca [2]--asserts the transcendence of an irrational automatic real number. The underlying intuition is that the sequence of digits of an irrational algebraic number cannot be too simple. Indeed, the main technical result of [2] is that over an integer base every number whose sequence of digits has linear subword complexity is either rational or transcendental. Cobham's conjecture is an immediate corollary, given that automatic sequences have linear subword complexity. In this paper we prove a transcendence result for numbers whose digit sequences are Sturmian words (sometimes called mechanical words). Such words have minimal subword complexity among non-ultimately periodic words and have a natural characterisation in terms of dynamical systems as codings of rotations on the unit circle. The novelty of this work is that we handle expansions over an arbitrary algebraic base rather than just an integer base. Here we are motivated by applications to control theory and dynamical systems. An infinite sequence \(\boldsymbol{u}=u_{0}u_{1}u_{2}\ldots\) over a binary alphabet is said to be _Sturmian_ if the number \(p(n)\) of different length-\(n\) factors in \(\boldsymbol{u}\) satisfies \(p(n)=n+1\) for all \(n\in\mathbb{N}\), see [11]. Coven and Hedlund [4] show that an infinite word such that \(p(n)\leq n\) for some \(n\) is necessarily ultimately periodic. Thus Sturmian words have minimal subword complexity among non-ultimately periodic words over a binary alphabet \(\{0,1\}\). The letters in a Sturmian word have a limiting frequency--the limit frequency of the letter \(1\) is called the _slope_ of the word. Related to this, Sturmian words have a natural characterisation in terms of dynamical systems, namely as codings of the orbits of irrational rotations on \(\mathbb{R}/\mathbb{Z}\). Perhaps the best known example of a Sturmian word is the _Fibonacci word_. 
This is defined as the limit \(\boldsymbol{f}_{\infty}\) of the sequence \((\boldsymbol{f}_{n})_{n=0}^{\infty}\) of finite strings over the binary alphabet \(\{0,1\}\), defined by the recurrence \(\boldsymbol{f}_{0}:=0\), \(\boldsymbol{f}_{1}:=01\), and \(\boldsymbol{f}_{n}=\boldsymbol{f}_{n-1}\boldsymbol{f}_{n-2}\) for all \(n\geq 2\). The limit is well defined since \(\boldsymbol{f}_{n}\) is a prefix of \(\boldsymbol{f}_{n+1}\) for all \(n\in\mathbb{N}\). The Fibonacci word has slope \(1/\phi\), where \(\phi=\frac{1+\sqrt{5}}{2}\) is the golden ratio. It so happens that the Fibonacci word is morphic, although it is not automatic. Let \(\boldsymbol{u}\) be a Sturmian word over a finite alphabet \(\Sigma\subseteq\overline{\mathbb{Q}}\) and let \(\beta\in\overline{\mathbb{Q}}\) be such that \(|\beta|>1\). Then we call \(S_{\beta}(\boldsymbol{u}):=\sum_{n=0}^{\infty}\frac{u_{n}}{\beta^{n}}\) a _Sturmian number_ with _sequence of digits_ \(\boldsymbol{u}\) and _base_ \(\beta\).1 Ferenczi and Mauduit [5] proved the transcendence of every number \(S_{\beta}(\boldsymbol{u})\) over an integer base \(\beta>1\). Their proof combined combinatorial properties of Sturmian sequences with a \(p\)-adic version of the Thue-Siegel-Roth Theorem, due to Ridout. This result was strengthened by Bugeaud _et al._ [3] to show \(\overline{\mathbb{Q}}\)-linear independence of sets of the form \(\big\{1,S_{\beta}(\boldsymbol{u}^{(1)}),S_{\beta}(\boldsymbol{u}^{(2)})\big\}\), where \(\boldsymbol{u}^{(1)},\boldsymbol{u}^{(2)}\) are Sturmian words having the same slope and \(\beta>1\) is an integer. In the case of an algebraic base \(\beta\), Laurent and Nogueira [12] observe that if \(\boldsymbol{u}\) is a characteristic Sturmian word (cf. Section 3), then the transcendence of \(S_{\beta}(\boldsymbol{u})\) follows from a result of Loxton and Van der Poorten [8, Theorem 7] concerning transcendence of Hecke-Mahler series. Footnote 1: Our notion of Sturmian number is more permissive than that of Morse and Hedlund [10], who restricted to the case of an integer base \(b>1\) and digit sequence \(\boldsymbol{u}\) over alphabet \(\{0,\ldots,b-1\}\). In this paper we give a common generalisation of the above three results. For every algebraic base \(\beta\) and irrational slope \(\theta\) we give necessary and sufficient conditions for \(\overline{\mathbb{Q}}\)-linear independence of a set of Sturmian numbers \(\big\{1,S_{\beta}(\boldsymbol{u}^{(1)}),\ldots,S_{\beta}(\boldsymbol{u}^{(k)})\big\}\), where \(\boldsymbol{u}^{(1)},\ldots,\boldsymbol{u}^{(k)}\) are Sturmian sequences of slope \(\theta\). Our characterisation relies on a new combinatorial criterion on a sequence \(\boldsymbol{u}\) that ensures transcendence of \(S_{\beta}(\boldsymbol{u})\) for \(\beta\) an algebraic base. Similar to [3], the Subspace Theorem plays a major role in our argument. In [7] we give a more elaborate and powerful transcendence criterion that allows proving \(\overline{\mathbb{Q}}\)-linear independence results about Sturmian numbers (again with a common slope) over different algebraic bases. For a sequence \(\boldsymbol{u}\) with linear subword complexity (i.e., such that \(\liminf_{n}\frac{p(n)}{n}<\infty\)), it is shown in [1] that \(S_{\beta}(\boldsymbol{u})\) is transcendental under the condition that \(\beta\) is a Pisot number (i.e., a real algebraic integer greater than one all of whose Galois conjugates have absolute value less than one). 
Compared to the main result of this paper, the class of sequences considered by [1] is more general (requiring merely linear subword complexity rather than the stronger condition of being Sturmian), but the condition on the base is more restrictive (being a Pisot number rather than merely an algebraic number of absolute value strictly greater than one). In Section 5 we give an application of our main result to the theory of dynamical systems. We consider the set \(C\) of limit points of a contracted rotation \(f\) on the unit interval, where \(f\) is assumed to have an algebraic contraction factor. The set \(C\) is finite if \(f\) has a periodic orbit and is otherwise a Cantor set, that is, it is homeomorphic to the Cantor ternary set (equivalently, it is compact, nowhere dense, and has no isolated points). In the latter case we show that all elements of \(C\) except its endpoints \(0\) and \(1\) are transcendental. Our result confirms a conjecture of Bugeaud, Kim, Laurent, and Nogueira, who proved a special case of this result in [3]. We remark that it is a longstanding open question whether the actual Cantor ternary set contains any algebraic elements other than \(0\) or \(1\). ## 2 Preliminaries Let \(K\) be a number field of degree \(d\) and let \(M(K)\) be the set of _places_ of \(K\). We divide \(M(K)\) into the collection of _infinite places_, which are determined either by an embedding of \(K\) in \(\mathbb{R}\) or a complex-conjugate pair of embeddings of \(K\) in \(\mathbb{C}\), and the set of _finite places_, which are determined by prime ideals in the ring \(\mathcal{O}_{K}\) of integers of \(K\). For \(x\in K\) and \(v\in M(K)\), define the absolute value \(|x|_{v}\) as follows: \(|x|_{v}:=|\sigma(x)|^{1/d}\) in case \(v\) corresponds to a real embedding \(\sigma:K\to\mathbb{R}\); \(|x|_{v}:=|\sigma(x)|^{2/d}\) in case \(v\) corresponds to a complex-conjugate pair of embeddings \(\sigma,\overline{\sigma}:K\to\mathbb{C}\); finally, \(|x|_{v}:=N(\mathfrak{p})^{-\mathrm{ord}_{\mathfrak{p}}(x)/d}\) if \(v\) corresponds to a prime ideal \(\mathfrak{p}\) in \(\mathcal{O}\) and \(\mathrm{ord}_{\mathfrak{p}}(x)\) is the order of \(\mathfrak{p}\) as a divisor of the ideal \(x\mathcal{O}\). With the above definitions we have the _product formula_: \(\prod_{v\in M(K)}|x|_{v}=1\) for all \(x\in K^{*}\). Given a set of places \(S\subseteq M(K)\), the ring \(\mathcal{O}_{S}\) of _\(S\)-integers_ is the subring comprising all \(x\in K\) such \(|x|_{v}\leq 1\) for all finite places \(v\in S\). For \(m\geq 2\) the _absolute Weil height_ of \(\mathbf{x}=(x_{1},\ldots,x_{m})\in K^{m}\) is defined to be \[H(\mathbf{x}):=\prod_{v\in M(K)}\max(|x_{1}|_{v},\ldots,|x_{m}|_{v})\,.\] This definition is independent of the choice of field \(K\) containing \(x_{1},\ldots,x_{m}\). Note the restriction \(m\geq 2\) in the above definition. For \(x\in K\) we define its height \(H(x)\) to be \(H(1,x)\). For a non-zero polynomial \(f=\sum_{i=0}^{s}a_{i}X^{i}\in K[X]\), where \(s\geq 1\), we define its height \(H(f)\) to be the height of its coefficient vector \((a_{0},\ldots,a_{s})\). The following classical result of Schlickewei will be instrumental in our approach. **Theorem 1** (Subspace Theorem).: _Let \(S\subseteq M(K)\) be a finite set of places, containing all infinite places and let \(m\geq 2\). For every \(v\in S\) let \(L_{1,v},\ldots,L_{m,v}\) be linearly independent linear forms in \(m\) variables with algebraic coefficients. 
Then for any \(\varepsilon>0\) the solutions \(\mathbf{x}\in\mathcal{O}_{S}^{m}\) of the inequality_ \[\prod_{v\in S}\prod_{i=1}^{m}|L_{i,v}(\mathbf{x})|_{v}\leq H(\mathbf{x})^{-\varepsilon}\] _are contained in finitely many proper subspaces of \(K^{m}\)._ We will also need the following more elementary proposition. **Proposition 2**.: _[_6_, Proposition 2.3]_ _Let \(f\in K[X]\) be a polynomial with at most \(k+1\) terms. Assume that \(f\) can be written as the sum of two polynomials \(g\) and \(h\), where every monomial of \(g\) has degree at most \(d_{0}\) and every monomial of \(h\) has degree at least \(d_{1}\). Let \(\beta\) be a root of \(f\) that is not a root of unity. If \(d_{1}-d_{0}>\frac{\log(k\,H(f))}{\log H(\beta)}\) then \(\beta\) is a common root of \(g\) and \(h\)._ ## 3 Stuttering Sequences Let \(A\subseteq\overline{\mathbb{Q}}\) be a finite alphabet. An infinite sequence \(\mathbf{u}=u_{0}u_{1}u_{2}\ldots\in A^{\omega}\) is said to be _stuttering_ if for all \(w>0\) there exist sequences \(\langle r_{n}\rangle_{n=0}^{\infty}\) and \(\langle s_{n}\rangle_{n=0}^{\infty}\) of positive integers and \(d\geq 2\) such that: * \(\langle r_{n}\rangle_{n=0}^{\infty}\) is unbounded and \(s_{n}\geq wr_{n}\) for all \(n\in\mathbb{N}\); * for all \(n\in\mathbb{N}\) there exist integers \(0\leq i_{1}(n)<\ldots<i_{d}(n)\leq s_{n}\) such that the strings \(u_{0}\ldots u_{s_{n}}\) and \(u_{r_{n}}\ldots u_{r_{n}+s_{n}}\) differ at the set of indices \(\bigcup_{j=1}^{d}\{i_{j}(n),i_{j}(n)+1\}\); * we have \(i_{d}(n)-i_{1}(n)=\omega(\log r_{n})\) and, writing \(i_{0}(n):=0\) and \(i_{d+1}(n):=s_{n}\) for all \(n\), we have \(i_{j+1}(n)-i_{j}(n)=\omega(1)\) for all \(j\in\{0,1,\ldots,d\}\); * for all \(n\in\mathbb{N}\) and \(j\in\{1,2\ldots,d\}\) we have \(u_{i_{j}(n)}+u_{i_{j}(n)+1}=u_{i_{j}(n)+r_{n}}+u_{i_{j}(n)+r_{n}+1}\). The notion of a stuttering sequence is reminiscent of the transcendence conditions of [1, 3, 5] in that it concerns periodicity in an infinite word. Roughly speaking, a sequence \(\mathbf{u}\) is stuttering if for all \(w>0\) there are arbitrarily long prefixes of \(\mathbf{u}\) that, modulo a fixed number of mismatches, comprise \(w\) repetitions of some finite word. The fact that the number \(w\) of repetitions is arbitrary is key to our being able to prove transcendence results over an arbitrary algebraic base \(\beta\). In compensation, our condition allows repetitions with a certain number of discrepancies. This should be contrasted with the notion of stammering sequence in [1, Section 4], where there is no allowance for such discrepancies and in which the quantity corresponding to \(w\) is fixed. **Example 3**.: _To illustrate the notion of stuttering sequence, we recall the example of the Fibonacci word. That this sequence is stuttering is a consequence of Theorem 4. Here in fact the sequence of shifts \(\langle r_{n}\rangle_{n=0}^{\infty}\) witnessing that the Fibonacci word is stuttering is the Fibonacci sequence \(\langle 1,1,2,3,5,\dots\rangle\). Below we align the Fibonacci word \(\mathbf{f}_{\infty}\) with its shift \(\mathbf{f}_{\infty}^{(5)}\) by \(r_{5}=5\), underlining the mismatches which arise in consecutive pairs that satisfy Condition S4._ \[\mathbf{f}_{\infty} := 010010\underline{10}0100101001\underline{10}0100101\underline{ 0}0100101001\dots\] \[\mathbf{f}_{\infty}^{(5)} := 010010\underline{01}01001010010\underline{01}0100100101001 01\dots\] In what follows, we use the following representation of Sturmian words. 
Write \(I:=[0,1)\) for the unit interval and given \(x\in\mathbb{R}\) denote the integer part of \(x\) by \(\lfloor x\rfloor\) and the fractional part of \(x\) by \(\{x\}:=x-\lfloor x\rfloor\in I\). Let \(0<\theta<1\) be an irrational number and define the _rotation map_\(T=T_{\theta}:I\to I\) by \(T(y)=\{y+\theta\}\). Given \(x\in I\), the \(\theta\)_-coding of \(x\)_ is the infinite sequence \(\mathbf{u}=u_{1}u_{2}u_{3}\dots\) defined by \(u_{n}:=1\) if \(T^{n}(x)\in[0,\theta)\) and \(u_{n}:=0\) otherwise. As shown by Morse and Hedlund, \(\mathbf{u}\) is a Sturmian word and, up to changing at most two letters, all Sturmian words over a binary alphabet arise as codings of the above type for some choice of \(\theta\) and \(x\). In particular, for the purposes of establishing our transcendence results we may work exclusively with codings as defined above. The number \(\theta\) is equal to the slope of the Sturmian word, as defined in Section 1. The \(\theta\)-coding of \(0\) is in particular called the _characteristic Sturmian word of slope \(\theta\)_. The main result of this section is as follows: **Theorem 4**.: _Let \(\theta\in(0,1)\) be irrational. Given a positive integer \(k\), let \(c_{0},\dots,c_{k}\in\mathbb{C}\) and \(x_{1},\dots,x_{k}\in I\). Suppose that \(x_{i}-x_{j}\not\in\mathbb{Z}\theta+\mathbb{Z}\) for all \(i\neq j\). Writing \(\langle u_{n}^{(i)\infty}\rangle_{n=0}^{\infty}\) for the \(\theta\)-coding of \(x_{i}\), for \(i=1,\dots,k\), define \(u_{n}:=c_{0}+\sum_{i=1}^{k}c_{i}u_{n}^{(i)}\) for all \(n\in\mathbb{N}\). Then \(\mathbf{u}=\langle u_{n}\rangle_{n=0}^{\infty}\) is stuttering._ Proof.: We start by recalling some basic facts about the continued-fractions. Write \([a_{0},a_{1},a_{2},a_{3},\dots]\) for the simple continued-fraction expansion of \(\theta\). Given \(n\in\mathbb{N}\), we write \(\frac{p_{n}}{q_{n}}:=[a_{0},a_{1},\dots,a_{n}]\) for the \(n\)-th convergent. Then \(\langle q_{n}\rangle_{n=0}^{\infty}\) is a strictly increasing sequence of positive integers such that \(\|q_{n}\theta\|=|q_{n}\theta-p_{n}|\), where \(\|\alpha\|\) denotes the distance of a given number \(\alpha\in\mathbb{R}\) to the nearest integer. We moreover have that \(q_{n}\theta-p_{n}\) and \(q_{n+1}\theta-p_{n+1}\) have opposite signs for all \(n\). Finally we have the _law of best approximation_: \(q\in\mathbb{N}\) occurs as one of the \(q_{n}\) just in case \(\|q\theta\|<\|q^{\prime}\theta\|\) for all \(q^{\prime}\) with \(0<q^{\prime}<q\). To establish that \(\mathbf{u}\) is stuttering, given \(w>0\) we define \(\langle r_{n}\rangle_{n=0}^{\infty}\) to be the subsequence of \(\langle q_{n}\rangle_{n=0}^{\infty}\) comprising all terms \(q_{n}\) such that \(\|q_{n}\theta\|=q_{n}\theta-p_{n}>0\). Note that we either have \(r_{n}=q_{2n}\) for all \(n\) or \(r_{n}=q_{2n+1}\) for all \(n\), so \(\langle r_{n}\rangle_{n=0}^{\infty}\) is an infinite sequence that diverges to infinity. Next, write \(d=(k+1)w\) and for all \(n\in\mathbb{N}\) define \(s_{n}\) be the greatest number such that the words \(u_{0}\dots u_{s_{n}}\) and \(u_{r_{n}}\cdots u_{r_{n}+s_{n}}\) have Hamming distance at most \(2d\). Since \(\mathbf{u}\) is not ultimately periodic, \(s_{n}\) is thereby well-defined. **Condition S2**.: Denote the set of positions at which \(u_{0}\dots u_{s_{n}}\) and \(u_{r_{n}}\dots u_{s_{n}+r_{n}}\) differ by \[\Delta_{n}:=\left\{m\in\{0,\dots,s_{n}\}:u_{m}\neq u_{m+r_{n}}\right\}. 
\tag{1}\] We claim that for \(n\) sufficiently large, \(m\in\Delta_{n}\) if and only if there exists \(\ell\in\{1,\dots,k\}\) such that one of the following two conditions holds: * \(T^{m}(x_{\ell})\in[1-\|r_{n}\theta\|,1)\), * \(T^{m}(x_{\ell})\in[\theta-\|r_{n}\theta\|,\theta)\). We claim furthermore that for all \(m\) there is most \(\ell\) such that one of above conditions holds. Assuming the claim, since \(T^{m}(x_{\ell})\in[1-\|r_{n}\theta\|,1)\) if and only if \(T^{m+1}(x_{\ell})\in[\theta-\|r_{n}\theta\|,\theta)\), it follows that the elements of \(\Delta_{n}\) come in consecutive pairs, i.e., we can write \[\Delta_{n}=\bigcup_{j=1}^{d}\{i_{j}(n),i_{j}(n)+1\}\,,\] where \(i_{1}(n)<\dots<i_{d}(n)\) are the elements \(m\in\Delta_{n}\) that satisfy Condition (i) above for some \(\ell\). It remains to prove the claim. To this end note that for a fixed \(\ell\in\{1,\ldots,k\}\) we have \(u_{m}^{(\ell)}\neq u_{m+r_{n}}^{(\ell)}\) iff exactly one of \(T^{m}(x_{\ell})\) and \(T^{m+r_{n}}(x_{\ell})\) lies in the interval \([0,\theta)\) iff either Condition (i) or Condition (ii) holds. Moreover, since \(x_{\ell}-x_{\ell^{\prime}}\neq\theta\pmod{1}\) for \(\ell\neq\ell^{\prime}\), we see that for \(n\) sufficiently large there is at most one \(\ell\in\{1,\ldots,k\}\) such that one of these two conditions holds. Equivalently, for all \(m\) there is at most one \(\ell\) such that \(u_{m}^{(\ell)}\neq u_{m+r_{n}}^{(\ell)}\). We deduce that \(u_{m}\neq u_{m+r_{n}}\) if and only if \(u_{m}^{(\ell)}\neq u_{m+r_{n}}^{(\ell)}\) for some \(\ell\in\{1,\ldots,k\}\). This concludes the proof of the claim. **Condition S1.** Our objective is to show that \(s_{n}\geq wr_{n}\) for all \(n\in\mathbb{N}\). We have already established that there are \(d=(k+1)w\) distinct \(m\in\Delta_{n}\) that satisfy Condition (i), above, for some \(\ell\in\{1,\ldots,k\}\). Thus there exists \(\ell_{0}\in\{1,\ldots,k\}\) and \(\Delta_{n}^{\prime}\subseteq\Delta_{n}\) such that \(|\Delta_{n}^{\prime}|\geq w\) and all \(m\in\Delta_{n}^{\prime}\) satisfy Condition (i) for \(\ell=\ell_{0}\). In this case we have \(\|(m_{1}-m_{2})\theta\|<\|r_{n}\theta\|\) for all \(m_{1},m_{2}\in\Delta_{n}^{\prime}\). By the law of best approximation it follows that every two distinct elements of \(\Delta_{n}^{\prime}\) have difference strictly greater than \(r_{n}\). But this contradicts \(|\Delta_{n}^{\prime}|=w\) given that \(\Delta_{n}^{\prime}\subseteq\{0,1,\ldots,wr_{n}\}\). **Condition S3.** By definition of \(i_{1}(n),\ldots,i_{d}(n)\), for all \(j\in\{1,\ldots,d\}\) there exists \(\ell_{j}(n)\in\{1,\ldots,k\}\) with \(T^{i_{j}(n)}(x_{\ell_{j}(n)})\in[1-\|r_{n}\theta\|,1)\). Now, for all \(n\in\mathbb{N}\) and \(1\leq j_{1}<j_{2}\leq d\) we have \[\|(i_{j_{2}}(n)-i_{j_{1}}(n))\theta+x_{\ell_{j_{2}}(n)}-x_{\ell_{j_{1}}(n)}\| \leq\|r_{n}\theta\|\,. \tag{2}\] We claim that the left-hand side of (2) is non-zero. Indeed, the claim holds if \(\ell_{j_{2}}(n)=\ell_{j_{1}}(n)\) because \(\theta\) is irrational, while the claim also holds in case \(\ell_{j_{2}}(n)\neq\ell_{j_{1}}(n)\) since in this case we have \(x_{\ell_{j_{2}}(n)}-x_{\ell_{j_{1}}(n)}\not\in\mathbb{Z}\theta+\mathbb{Z}\) by assumption. Since moreover the right-hand side of (2) tends to zero as \(n\) tends to infinity, we have that \(i_{j_{2}}(n)-i_{j_{1}}(n)=\omega(1)\). On the other hand, if \(\ell_{j_{2}}(n)=\ell_{j_{1}}(n)\) then we even have \(i_{j_{2}}(n)-i_{j_{1}}(n)\geq r_{n}=\omega(\log r_{n})\) by the law of best approximation. 
Hence we certainly have \(i_{d}(n)-i_{1}(n)=\omega(\log r_{n})\). Finally, defining \(i_{0}(n):=0\) we have \(i_{1}(n)-i_{0}(n)=\omega(1)\) by the requirement that \(T^{i_{1}(n)}(x_{\ell_{1}(n)})\in[1-\|r_{n}\theta\|,1)\) and the fact that \(\|r_{n}\theta\|\) converges to \(0\). Setting \(i_{d+1}(n):=s_{n}\) for all \(n\), we also have \(i_{d+1}(n)-i_{d}(n)=\omega(1)\) by the maximality condition in the definition of \(s_{n}\). **Condition S4.** Consider \(m\in\Delta_{n}\) satisfying Condition (i) above, i.e., such that \(T^{m}(x_{\ell})\in[1-\|r_{n}\theta\|,1)\) for some \(\ell\in\{1,\ldots,k\}\). Then we have \[u_{m}^{(\ell)}=0,\,u_{m+1}^{(\ell)}=1\quad\text{and}\quad u_{m+r_{n}}^{(\ell) }=1,\,u_{m+r_{n}+1}^{(\ell)}=0\,.\] Moreover for all \(\ell^{\prime}\neq\ell\) and \(n\) sufficiently large we have \[u_{m}^{(\ell^{\prime})}=u_{m+r_{n}}^{(\ell^{\prime})}\quad\text{and}\quad u_{m +1}^{(\ell^{\prime})}=u_{m+r_{n}+1}^{(\ell^{\prime})}\,.\] We conclude that \(u_{m}+u_{m+1}=u_{m+r_{m}}+u_{m+r_{n}+1}\), establishing Condition S4. ## 4 A Transcendence Result **Theorem 5**.: _Let \(A\) be a finite set of algebraic numbers and suppose that \(\boldsymbol{u}\in A^{\omega}\) is a stuttering sequence. Then for any algebraic number \(\beta\) with \(|\beta|>1\), the sum \(\alpha:=\sum_{n=0}^{\infty}\frac{\mathfrak{u}_{n}}{\beta^{n}}\) is transcendental._ Proof.: Suppose for a contradiction that \(\alpha\) is algebraic. By scaling we can assume without loss of generality that \(A\) consists solely of algebraic integers. Let \(K=\mathbb{Q}(\beta)\) be the field generated over \(\mathbb{Q}\) by \(\beta\) and write \(S\subseteq M(K)\) for the set comprising all infinite places of \(K\) and all finite places of \(K\) corresponding to prime-ideal divisors of the ideal \(\beta\mathcal{O}_{K}\). Applying the stuttering condition (for a value of \(w\) to be determined later), we obtain \(d\geq 2\) such that for all \(n\in\mathbb{N}\) there are positive integers \(r_{n},s_{n},i_{1}(n),\ldots,i_{d}(n)\) satisfying conditions S1-S4. By condition S2, for all \(n\) if we define \[c_{j}(n):=(u_{i_{j}(n)+r_{n}}-u_{i_{j}(n)})+(u_{i_{j}(n)+r_{n}+1}-u_{i_{j}(n)+1}) \beta^{-1},\quad j\in\{1,2,\ldots,d\}\] and \(\alpha_{n}:=\sum_{j=0}^{r_{n}}u_{j}\beta^{r_{n}-j}\) then we have \[\left|\beta^{r_{n}}\alpha-\alpha-\alpha_{n}-c_{1}(n)\beta^{-i_{1}(n)}-\cdots-c_ {d}(n)\beta^{-i_{d}(n)}\right|<|\beta|^{-s_{n}}\,, \tag{3}\] Note that \(c_{1}(n),\ldots,c_{d}(n)\) are non-zero by Condition S4. By passing to a subsequence we can furthermore assume without loss of generality that \(c_{1}=c_{1}(n),\ldots,c_{d}=c_{d}(n)\) are constant, independent of \(n\). To set up the application of the Subspace Theorem, define a family of linear forms \(L_{i,v}\), for \(1\leq i\leq 3+d\) and \(v\in S\), by \[L_{i,v}(x_{1},\ldots,x_{3+d}) := x_{i}\text{ for all }(i,v)\neq(3,v_{0})\text{, and}\] \[L_{3,v_{0}}(x_{1},\ldots,x_{3+d}) := \alpha x_{1}-\alpha x_{2}-x_{3}-\sum_{j=1}^{d}c_{j}x_{3+j}\,.\] Write \(\boldsymbol{b}_{n}:=\left(\beta^{r_{n}},1,\alpha_{n},\beta^{-i_{1}(n)},\ldots,\beta^{-i_{d}(n)}\right)\) and let \(M\geq 2\) be an upper bound of the set of real numbers \[\{|\gamma|_{v}:\gamma\in\{\beta\}\cup A,\,v\in S\}\;.\] Then for all \(v\neq v_{0}\) we have \[|L_{3,v}(\boldsymbol{b}_{n})|_{v}=|\alpha_{n}|_{v}\leq\sum_{j=0}^{r_{n}}M^{j+ 1}\leq M^{r_{n}+2}\,,\] while \(|L_{3,v_{0}}(\boldsymbol{b}_{n})|_{v_{0}}\leq|\beta|^{-s_{n}/\deg(\beta)}\) by (3). 
Furthermore, for \(i\neq 3\), by the product formula we have \(\prod_{v\in S}|L_{i,v}(\boldsymbol{b}_{n})|_{v}=1\). Altogether we have \[\prod_{v\in S}\prod_{i=1}^{d+3}|L_{i,v}(\boldsymbol{b}_{n})|_{v}\leq M^{(r_{n} +2)|S|}\cdot|\beta|^{-s_{n}/\deg(\beta)}\,. \tag{4}\] Since \(s_{n}\geq wr_{n}\) we have that for \(w\) sufficiently large the right-hand side of (4) is less than \(|\beta|^{-s_{n}/2\deg(\beta)}\). On the other hand there exists a constant \(c\) such that the height of \(\boldsymbol{b}_{n}\) satisfies the bound \(H(\boldsymbol{b}_{n})\leq|\beta|^{\varepsilon_{n}}\) for all \(n\). Thus there exists \(\varepsilon>0\) such that the right-hand side of (4) is at most \(H(\boldsymbol{b}_{n})^{-\varepsilon}\) for all \(n\). Since \(\boldsymbol{b}_{n}\) is a vector of \(S\)-units we can apply the Subspace Theorem to obtain a non-zero linear form \(L(x_{1},\ldots,x_{3+d})\) with coefficients in \(K\) such that \(L(\boldsymbol{b}_{n})=0\) for infinitely many \(n\in\mathbb{N}\). Denote by \(\operatorname{vars}(L)\subseteq\{x_{1},\ldots,x_{3+d}\}\) the set of variables that appear in \(L\) with non-zero coefficient. We claim that \(x_{3}\in\operatorname{vars}(L)\). Indeed, suppose for a contradiction that \(x_{3}\not\in\operatorname{vars}(L)\). Then for all \(n\), \(L(\boldsymbol{b}_{n})\) is a fixed linear combination of the numbers \(\beta^{r_{n}},1,\beta^{-i_{1}(n)},\ldots,\beta^{-i_{d}(n)}\). By Item S3 the gaps between successive exponents in these powers of \(\beta\) tend to infinity with \(n\) and hence a fixed linear combination of such powers cannot vanish for arbitrarily large \(n\). We have that \(L(\boldsymbol{b}_{n})\) is a linear combination of a most \(r_{n}+d+1\) powers of \(\beta\), whose respective exponents lie in the set \(\{0,1,\ldots,r_{n}\}\cup\{-i_{1}(n),\ldots,-i_{d}(n)\}\). From Item S3 there exists \(j_{0}\in\{1,\ldots,d-1\}\) such that \(i_{j_{0}+1}(n)-i_{j_{0}}(n)=\omega(\log r_{n})\). By Proposition 2 the condition \(L(\boldsymbol{b}_{n})=0\) entails, for \(n\) sufficiently large, that \(\operatorname{vars}(L)\) is contained either in \(\{x_{1},\ldots,x_{j_{0}+3}\}\) or in \(\{x_{j_{0}+4},\ldots,x_{d}\}\). Since we know that \(x_{3}\in\operatorname{vars}(L)\) the former inclusion applies. We have established that \(x_{3}\in\operatorname{vars}(L)\subseteq\{x_{1},\ldots,x_{j_{0}+3}\}\). Thus by a suitable linear combination of the forms \(L_{3,v_{0}}\) and \(L\), so as to eliminate the variable \(x_{3}\), we obtain a non-zero linear form \(L^{\prime}(x_{1},\ldots,x_{3+d})\) with algebraic coefficients that does not mention \(x_{3}\) and such that \(|L^{\prime}(\boldsymbol{b}_{n})|<|\beta|^{-s_{n}}\) for infinitely many \(n\). Note that \(L^{\prime}(\boldsymbol{b}_{n})\) is a fixed linear combination of at most \(d+2\) powers of \(\beta\), with respective exponents in the set \(\{r_{n},0,-i_{1}(n),\ldots,-i_{d}(n)\}\). Moreover by Item S3 the gaps between consecutive elements of this set tend to infinity with \(n\). It follows that \(|L^{\prime}(\boldsymbol{b}_{n})|\gg|\beta|^{-i_{d}(n)}\). But since \(s_{n}-i_{d}(n)=\omega(1)\), this contradicts \(|L^{\prime}(\boldsymbol{b}_{n})|<|\beta|^{-s_{n}}\). We have the following immediate corollary of Theorem 4 and Theorem 5. **Theorem 6**.: _Let \(\beta\) be an algebraic number with \(|\beta|>1\). Let \(0<\theta<1\) be irrational and let \(x_{1},\ldots,x_{k}\in I\) be such that \(x_{i}-x_{j}\not\in\mathbb{Z}\theta+\mathbb{Z}\) for \(i\neq j\). 
For \(i=1,\ldots,k\), define \(\alpha_{i}:=\sum_{n=0}^{\infty}\frac{u_{n}^{(i)}}{\beta^{n}}\), where \(\langle u_{n}^{(i)}\rangle_{n=0}^{\infty}\) is the \(\theta\)-coding of \(x_{i}\). Then the set \(\{1,\alpha_{1},\ldots,\alpha_{k}\}\) is linearly independent over the field \(\overline{\mathbb{Q}}\) of algebraic numbers._ ## 5 Application to Limit Sets of Contracted Rotations Let \(0<\lambda,\delta<1\) be real numbers such that \(\lambda+\delta>1\). We call the map \(f=f_{\lambda,\delta}:I\to I\) given by \(f(x):=\{\lambda x+\delta\}\) a _contracted rotation_ with slope \(\lambda\) and _offset_\(\delta\). Associated with \(f\) we have the map \(F=F_{\lambda,\delta}:\mathbb{R}\to\mathbb{R}\), given by \(F(x)=\lambda\{x\}+\delta+\lfloor x\rfloor\). We call \(F\) a _lifting_ of \(f\): it is characterised by the properties that \(F(x+1)=F(x)+1\) and \(\{F(x)\}=f(\{x\})\) for all \(x\in\mathbb{R}\). The _rotation number_\(\theta=\theta_{\lambda,\delta}\) of \(f\) is defined by \[\theta:=\lim_{n\to\infty}\frac{F^{n}(x_{0})}{n}\,,\] where the limit exists and is independent of the initial point \(x_{0}\in\mathbb{R}\). If the rotation number \(\theta\) is irrational then the restriction of \(f\) to the _limit set_\(\bigcap_{n\geq 0}f^{n}(I)\) is topologically conjugated to the rotation map \(T=T_{\theta}:I\to I\) with \(T(y)=\{y+\theta\}\). The closure of the limit set is a Cantor set \(C=C_{\lambda,\delta}\), that is, \(C\) is compact, nowhere dense, and has no isolated points. On the other hand, if \(\theta\) is rational then the limit set \(C\) is the unique periodic orbit of \(f\). For each choice of slope \(0<\lambda<1\) and irrational rotation number \(0<\theta<1\), there exists a unique offset \(\delta\) such that \(\delta+\lambda>1\) and the map \(f\) has rotation number \(\theta\). It is known that such \(\delta\) must be transcendental if \(\lambda\) is algebraic [12]. The main result of this section is as follows: **Theorem 7**.: _Let \(0<\lambda,\theta<1\) be such that \(\lambda\) is algebraic and \(\theta\) is irrational. Let \(\delta\) be the unique offset such that the contracted rotation \(f_{\lambda,\delta}\) has rotation number \(\theta\). Then every element of the limit set \(C_{\lambda,\delta}\) other than \(0\) and \(1\) is transcendental._ A special case of Theorem 7, in which \(\lambda\) is assumed to be the reciprocal of an integer, was proven in [3, Theorem 1.2]. In their discussion of the latter result the authors conjecture the truth of Theorem 7, i.e., the more general case in which \(\lambda\) may be algebraic. As noted in [3], while \(C_{\lambda,\delta}\) is homeomorphic to the Cantor ternary set, it is a longstanding open problem, formulated by Mahler [9], whether the Cantor ternary set contains irrational algebraic elements. Proof of Theorem 7.: For a real number \(0<x<1\) define \[\xi_{x} := \sum_{n\geq 1}\left(\lceil x+(n+1)\theta\rceil-\lceil x+n\theta \rceil\right)\lambda^{n}\] \[\xi^{\prime}_{x} := \sum_{n\geq 1}\left(\lfloor x+(n+1)\theta\rfloor-\lfloor x+n \theta\rfloor\right)\lambda^{n}\,.\] Note that for all \(x\) the binary sequence \(\langle\,\lceil x+(n+1)\theta\rceil-\lceil x+n\theta\rceil:n\in\mathbb{N}\,\rangle\) is the coding of \(-x-\theta\) by \(1-\theta\) (as defined in Section 3) and hence is Sturmian of slope \(1-\theta\). Similarly, the binary sequence \(\langle\,\left[x+(n+1)\theta\right]-\left[x+n\theta\right]:n\in\mathbb{N}\,\rangle\) is the coding of \(x+\theta\) by \(\theta\) and hence is Sturmian of slope \(\theta\). 
Thus for all \(x\), both \(\xi_{x}\) and \(\xi_{x}^{\prime}\) are Sturmian numbers. It is shown in [3, Lemma 4.2]2 that for every element of \(y\in C_{\lambda,\delta}\setminus\{0,1\}\), either there exists \(z\in\mathbb{Z}\) and \(0<x<1\) with \(x\not\in\mathbb{Z}\theta+\mathbb{Z}\) such that Footnote 2: The proof of the lemma is stated for \(\beta\) an integer but carries over without change for \(\beta\) algebraic. \[y=z+\xi_{0}-\xi_{-x}\] or else there exists a strictly positive integer \(m\) and \(\gamma\in\mathbb{Q}(\beta)\) such that \[y=\gamma+\left(1-\beta^{-m}\right)\xi_{0}^{\prime}\,.\] In either case, transcendence of \(y\) follows from Theorem 6. **Acknowledgements.** The authors would like to thank Pavol Kebis and Andrew Scoones for helpful feedback and corrections.
2310.09445
Observational Constraints on Sunyaev-Zeldovich Effect Halos Around High-z Quasars
We present continuum observations from the Atacama Large Millimeter/submillimeter Array (ALMA) of 10 high-redshift ($2.2 \le z \le 2.7$) ultraluminous quasars (QSOs) and constrain the presence of hot, ionized, circum-galactic gas in a stacking analysis. We measure a Compton-y parameter profile with a peak value of $(1.7 \pm 1.1) \times 10^{-6}$ at a radius of $\sim50$ kpc. We compare our stacked observations to active galactic nucleus (AGN) feedback wind models and generalized Navarro-Frenk-White (gNFW) pressure profile models to constrain the wind luminosity and halo mass of the stacked QSOs. Our observations constrain the observed stack's halo mass to $<1\times 10^{13}M_{\odot}$ and the stack's feedback wind power $<1\times 10^{12}L_{\odot}$, which is $<1$% of the bolometric luminosity of the quasar.
Kyle Massingill, Brian Mason, Mark Lacy, Bjorn H. C. Emonts, Ilsang Yoon, Jianrui Li, Craig Sarazin
2023-10-13T23:37:52Z
http://arxiv.org/abs/2310.09445v1
# Observational Constraints on Sunyaev-Zeldovich Effect Halos Around High-z Quasars ###### Abstract We present continuum observations from the Atacama Large Millimeter/submillimeter Array (ALMA) of 10 high-redshift (\(2.2\leq z\leq 2.7\)) ultraluminous quasars (QSOs) and constrain the presence of hot, ionized, circum-galactic gas in a stacking analysis. We measure a Compton-y parameter profile with a peak value of \((1.7\pm 1.1)\times 10^{-6}\) at a radius of \(\sim 50\) kpc. We compare our stacked observations to active galactic nucleus (AGN) feedback wind models and generalized Navarro-Frenk-White (gNFW) pressure profile models to constrain the wind luminosity and halo mass of the stacked QSOs. Our observations constrain the observed stack's halo mass to \(<1\times 10^{13}M_{\odot}\) and the stack's feedback wind power \(<1\times 10^{12}L_{\odot}\), which is \(<1\%\) of the bolometric luminosity of the quasar. ## 1 Introduction The lack of super massive galaxies at low redshift (Drory and Alvarez, 2008; Treu et al., 2005) and the "down-sizing" (Cowie et al., 1996) of star-forming galaxies below \(z\sim 2\) indicates some process regulating the growth of galaxies on long time scales. Active galactic nucleus (AGN) feedback has been postulated as the mechanism regulating galaxy formation (Scannapieco and Oh, 2004; Bower et al., 2006) and remains the best explanation for the observed galaxy sizes. An AGN has enough energy to heat gas in the circumgalactic medium (CGM) and even possibly expel gas out of the galaxy completely (Silk and Rees, 1998; Bower et al., 2006). It is not yet well understood what role different feedback mechanisms play in stifling galaxy growth (Ostriker et al., 2010; Fabian, 2012). The relative importance of feedback mechanisms, such as jets, winds, and radiation, is still being studied. In order to better understand AGN feedback, observations must be made to constrain the energy of the the different feedback mechanisms. Quasar winds or outflows have been detected directly in X-ray for two low-z cases (Greene et al., 2014; Harrison et al., 2014). X-ray detections of outflows from high-\(z\) systems where AGN are most powerful are more difficult, as the faint, diffuse X-ray emission from the winds is hard to detect against the strong point source emission from the quasar (e.g. Kar Chowdhury et al., 2022). Instead, detections can be obtained using the Sunyaev-Zel'dovich effect (SZE) (Sunyaev and Zeldovich, 1972) around QSOs. The thermal SZE (tSZE) is the process where cosmic microwave background (CMB) emission is distorted by traveling through hot gas, inducing an inverse Compton scattering. Bulk motion of electrons in the gas also produce a kinetic SZE (kSZE). In the case of AGN feedback, most studies focus on detecting the tSZE from the superposition of multiple wind events in the CGM of the quasar, when the tSZE is dominant (Scannapieco et al., 2008). By stacking large numbers of quasars in tSZE maps from mm-wave telescopes a number of studies have claimed to detect a significant tSZE signal from quasar winds (e.g. Chatterjee et al., 2010; Ruan et al., 2015; Crichton et al., 2016; Verdier et al., 2016; Hall et al., 2019; Meinke et al., 2021). 
With the large beams of the mm-wave telescopes used and uncertainties regarding the contamination of the SZ signal by dust emission from the quasar host galaxies, however, it can be hard to tell if the detections correspond simply to a combination of dust emission and the tSZE from virialized gas in the massive dark matter haloes in which the quasars reside, or whether there is an extra component of the tSZE due to quasar feedback. tSZE from hot intracluster medium has been detected in the Spiderweb radio galaxy (Mascolo et al., 2023). Typical AGN exist in relatively rich environments, even at high redshift. Quasar-quasar clustering analyses (e.g. White et al., 2012; Timlin et al., 2018; Eftekharzadeh et al., 2019), lensing of the Cosmic Microwave Background by quasar hosts (Geach et al., 2019), the kinematics of the warm ionized gas from quasar absorption lines (Lau et al., 2018) and Ly-\(\alpha\) haloes (Fossati et al., 2021) all suggest that quasars at \(z\sim 1-4\) lie in dark matter haloes of masses \(\approx 2-10\times 10^{12}h^{-1}M_{\odot}\), consistent with galaxy groups in the local Universe. Most estimates for radio-quiet quasars are towards the lower end of that range, whereas radio-loud AGN cluster more strongly, consistent with halo masses at the upper end of that range (Miley and Breuck, 2008; Retana-Montenegro and Rottgering, 2017). These relatively high mass haloes led Cen and Safarzadeh (2015) and Soergel et al. (2017) to argue that the tSZE signal from stacking studies is dominated by that from the virialized haloes of the quasars. By using a halo occupation distribution model for quasar clustering Chowdhury and Chatterjee (2017) suggested that the signal from feedback in the stacking studies made prior to 2017 is significantly higher than that expected from the virialized gas in the quasar halo only at high redshifts (\(z>2.5\)). Some of the ambiguities of stacking studies can be overcome using high resolution images from mm-wave interferometers, such as ALMA. These studies enable the subtraction of emission by the quasar host and any companion or foreground/background galaxies. The morphology of the SZE signal can also be compared to, for example, that of emission line gas to determine whether the signal is associated with an outflow, or more generally with the halo in which the quasar resides. However, these observations are challenging, requiring \(\lower 2.15pt\hbox{$\;\buildrel>\over{\sim}\;$}10\) hours of integration time on ALMA to detect a signal around even the most luminous quasars. It is also not possible to separate the tSZE from any kSZE contribution without further very deep observations at other frequencies. To date, there is only one example of an SZE detection around a single QSO, the hyper-luminous (bolometric luminosity \(\sim 10^{15}L_{\odot}\)) quasar HE 0515-4414 (Lacy et al., 2019). We therefore decided to try a different approach to constrain the tSZE signal from a sample of more typically-luminous quasars, also by using ALMA, but stacking the signal from a moderate number (\(\approx 10\)) of ultraluminous quasars (with bolometric luminosities \(\sim 10^{14}L_{\odot}\)). These quasars were selected to have extended Ly\(\alpha\) nebulae around them (Cai et al., 2019), which allow a rough estimate of the typical halo mass in the sample to be made based on the gas dynamics (uncertain due to radiative transfer effects and possible non-gravitational motions in the Ly\(\alpha\)-emitting gas). 
Using ALMA allows us to take advantage of the capability to subtract emission from discrete sources, both in the field and associated with the quasar, whilst still building up enough signal-to-noise to make a detection or place a meaningful limit on the tSZE signal from feedback. In this paper we present observations of ten ultraluminous QSOs at \(2.2\leq z\leq 2.7\) observed by the ALMA 12m array. We constrain the Compton parameter of the observed population. We compare our measured Compton parameters to generalized Navarro-Frenk-White (gNFW) (Arnaud et al., 2010) profiles and spherical feedback models. ## 2 Observations Our ALMA QSO targets were observed by project 2019.1.01251.S. This was a commensal program between our continuum study of the SZE and emission-line observations of CO(4-3) and [CI] as part of the SUPERCOLD-CGM survey (Li et al., 2023), both of which benefited from sensitive low-surface-brightness observations with ALMA Band 4. Ten pointings were used to observe ten QSOs with the ALMA 12m array in Band 4 (\(\sim 145\) GHz), with an angular resolution of \(\sim 2^{\prime\prime}\). Targets were observed by the 12m array for \(1-2\) hours in order to achieve a root-mean-square (RMS) noise of \(\approx 11\,\mu\)Jy in the continuum. Observations of these sources were also made with the ALMA 7m array but at a much higher noise level, so we do not utilize them in our analysis here. See Table 1 for a full list of targets and the signal-to-noise ratio achieved in the observations. Observations were made with four 2 GHz spectral windows (SPWs), two containing line emission (from CO and CI) and two containing only continuum. ### Data Reduction and Calibration The standard pipeline calibrations of the ALMA data (Hunter et al., 2023) were utilized for the QSO observations. Data reduction scripts supplied with the ALMA archival data were used together with the Common Astronomy Software Application (CASA; The CASA Team et al., 2022) version 5.6.1-8, which includes the ALMA pipeline (Masters et al., 2020) version r42866. We reduced the data by first splitting off the calibrated on-source data from the full data set and then flagging spectral lines in each SPW. For the line flagging, an image cube with a channel width of \(\sim 8\) MHz (base bands have a full bandwidth of 1875 MHz) was produced for each base band. Channels with localized emission \(4\sigma\) or more above the continuum of the channel were then flagged. Flagged channels were ignored in subsequent cleaning/imaging. All four SPWs were then combined to make a single, aggregate continuum image. Imaging and cleaning were done using the CASA task "tclean". We imaged an area of 60\(\times\)60 arcseconds centered on each quasar using a pixel scale of 0.4\({}^{\prime\prime}\)/pixel. We started the imaging process by first cleaning the ALMA 12m images at full resolution, using a robustness parameter of 0.5. For each of the ten targets, the sources (the QSO and any nearby serendipitous sources) were masked manually; we then cleaned the masked region down to the threshold of the overall image RMS. All subsequent analysis was performed on the clean residual images, which have effectively had the central QSO (and any strong, positive emission) subtracted. We applied Gaussian tapers to the (u,v)-data during imaging to improve surface-brightness measurements of extended signals. We made tapered images for each source at \(\sim 6^{\prime\prime}\) and \(\sim 3^{\prime\prime}\) resolutions. 
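To make the imaging step concrete, a tclean call of the kind described above might look roughly as follows. This is a sketch only: the measurement-set name, mask file, and threshold value are placeholders rather than the exact parameters of our reduction.

```python
# Run inside a CASA session; tclean is a built-in task there.
tclean(vis='qso_field.ms',                 # placeholder calibrated measurement set
       imagename='qso_field_cont_taper6',
       specmode='mfs',                     # aggregate continuum from the line-free channels
       imsize=[150, 150],                  # 60" x 60" field at 0.4" per pixel
       cell='0.4arcsec',
       weighting='briggs', robust=0.5,
       uvtaper=['6arcsec'],                # Gaussian (u,v) taper; '3arcsec' for the other set of images
       mask='qso_field_sources.mask',      # manual mask around the QSO and serendipitous sources
       niter=10000,
       threshold='0.011mJy')               # clean the masked region down to roughly the image RMS
```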
The object Q1228+3128 (previously identified as a radio loud QSO in Li et al., 2021) was detected with very high (\(\sim 307\sigma\)) signal-to-noise ratio (SNR) in the ALMA continuum observations such that the image was dynamic range limited and benefited from self-calibration. To self-calibrate we use the central source to correct for changes in phase. We solved for phase in intervals of scan length following ALMA NAASC recommended self-calibration procedure (Brogan et al., 2018). However, even with self-calibration we determined the observation to not have enough dynamic range to detect the \(\mu\)Jy level signal of the tSZE. Q1228+3128 was therefore excluded from the analyses. Our second highest QSO SNR detection in the continuum observations was Q1416+2649 with a much lower significance of \(\sim 21\sigma\) (we do not utilize self-calibration for this Q1416+264). Continuum emission from the central QSO was clearly detected in 9 of the 10 targets, and weakly (\(\sim 3\sigma\)) detected in the other. ### Stacked SZE Signal We stacked the source subtracted ALMA images by first measuring the median absolute deviation (MAD) of each residual image and applying weighting to each residual image based on the MAD. The masks from the cleaning stage (see section 2.1 Data Reduction and Calibration) were applied to the residuals and masked regions were not included in the stack. This was done to ensure any leftover source emission was not added to the final stacked image. All the residuals were then stacked by summation in units of Jy/beam. We produced a \(\sim 6^{{}^{\prime\prime}}\) and \(\sim 3^{{}^{\prime\prime}}\) resolution stack. We interpret the stacked continuum flux observations as SZE signal by converting to Compton parameter (\(y\)). Observed decrement in the continuum flux will correspond to a positive Compton parame \begin{table} \begin{tabular}{l l c c c c c} \hline \hline Target & Position & SNR & Redshift & log\({}_{10}\)(\(L_{\rm bol}\)/\(L_{\odot}\)) & RMS \\ & & (\(\sigma\)) & (\(z\)) & & (Jy/beam) \\ \hline Q0050+0051 & ICRS 00:50:21.2200, 00:51:35.000 & 14 & 2.22 & 13.79 & 2.14e-5 \\ \hline Q0052+0140 & ICRS 00:52:33.6700, 01:40:40.800 & 11 & 2.30 & 13.99 & 2.00e-5 \\ \hline Q0101+0201 & ICRS 01:01:16.5400, 02:01:57.400 & 13 & 2.46 & 13.91 & 1.91e-5 \\ \hline Q1227+2848 & ICRS 12:27:27.4800, 28:48:47.900 & 7 & 2.26 & 13.76 & 1.64e-5 \\ \hline Q1228+3128 & ICRS 12:28:24.9700, 31:28:37.700 & 307 & 2.22 & 14.55 & 2.80e-5 \\ \hline Q1230+3320 & ICRS 12:30:35.4700, 33:20:00.500 & 13 & 2.32 & 13.66 & 1.99e-5 \\ \hline Q1416+2649 & ICRS 14:16:17.3800, 26:49:06.200 & 21 & 2.29 & 13.51 & 1.86e-5 \\ \hline Q2121+0052 & ICRS 21:21:59.0400, 00:52:24.100 & 3 & 2.37 & 13.69 & 2.17e-5 \\ \hline Q2123\(-\)0050 & ICRS 21:23:29.4600, \(-\)00:50:52.900 & 7 & 2.65 & 14.52 & 2.21e-5 \\ \hline Q0107+0314 & ICRS 01:07:36.9000, 03:14:59.200 & 9 & 2.28 & 13.76 & 2.18e-5 \\ \hline \end{tabular} \end{table} Table 1: Quasars observed in this study, their position and the signal to noise ratio (SNR) achieved in the \(6^{{}^{\prime\prime}}\) tapered continuum ALMA 12m observations. Here SNR is defined as the peak brightness of the QSO divided by overall RMS of the image. We have estimated tapered RMS from the median of the absolute deviations from the median (MAD) of the source subtracted images. We rescaled the MAD as \(RMS=k\times MAD\), where \(k\approx 1.4826\), so as to provide the same results as the RMS for a Gaussian distribution. \(L_{bol}\) is the bolometric luminosity of the Quasar. 
Figure 1: Stacked QSO environments expressed as a Compton-y map. A \(6^{{}^{\prime\prime}}\) Gaussian taper has been applied to the ALMA 12m observations. Regions with source flux have been masked. Bins of differing radius have been made, with the width of each bin being about \(\sim 6.7^{{}^{\prime\prime}}\) wide. ter. Our stacking analysis will not be sensitive to the non-spherically symmetric kSZE. For stacked QSO environments we use a pure thermal SZE model to determine the Compton parameters. We therefore assume \(v_{r}/c<<(k_{B}T_{e})/(m_{e}c^{2})f_{1}(x)\). Following Sazonov and Sunyaev 1998, the change in intensity of the CMB due to SZE is given by: \[\Delta I(x)=\frac{2k_{B}T_{CMB}}{\lambda^{2}}\frac{x^{2}e^{x}}{(e^{x}-1)^{2}} \frac{k_{B}T_{e}}{m_{e}c^{2}}f_{1}(x), \tag{1}\] where \(x\equiv h\nu/k_{B}T_{CMB}\), and \(f_{1}(x)=x\coth(x/2)-4\). \(\tau\) is the Thompson scattering optical depth along the line of sight and \(f_{1}(x)\) is the frequency dependence of thermal SZE. We therefore describe the spectral distortion along a line of sight as the Compton parameter (Sunyaev and Zeldovich, 1972): \[y=\frac{\tau k_{B}T_{e}}{m_{e}c^{2}}. \tag{2}\] We convert observed flux into Compton parameter for the purposes of comparing observations to theoretical models. Fig. 1 shows the Compton parameter of the stacked QSO environments tapered to an effective beam of \(\sim 6^{\prime\prime}\). A region of high \(y\) can be seen in the east side of the image, at a radius of \(\sim 5^{\prime\prime}\) from the center, peaking at \(y=2.4e-5\) or \(\sim 2.6\sigma\). Regions of null pixels exist (prominently seen as white pixels at the center of Fig. 1) due to masking as described earlier in this section. ## 3 Analysis ### gNFW Models In order to compare our observations to theoretical models of different halo masses, we use a generalized Navarro-Frenk-White (gNFW) pressure profile: \[\mathbb{P}(x)=\frac{P_{0}}{(c_{500}x)^{\gamma}[1+(c_{500}x)^{\alpha}]^{(\beta -\gamma)/\alpha}} \tag{3}\] as described in Arnaud et al. 2010 and Nagai et al. 2007. \(x\equiv r/r_{s}\) and \(r_{s}=r_{500}/c_{500}\), where \(r_{500}\) is the radius containing matter at 500 times the ambient density. We treat the halo as a cube divided into cells; the cube is \(300^{3}\) cells. Each is cell is \(0.4^{{}^{\prime\prime}}\) per side and the total volume of the cube, within which we are integrating the line-of-sight pressure, is \(\sim 1\times 10^{10}\) kpc\({}^{3}\)at our average redshift of \(z=2.34\). We then calculate the pressure of each cell: \[P(r)=P_{500}\left[\frac{M_{500}}{3\times 10^{14}h_{70}^{-1}M_{\odot}}\right]^{ \alpha p+\alpha_{P}^{\prime}(x)}\mathbb{P}(x) \tag{4}\] with parameters: \[[P_{0},c_{500},\gamma,\alpha,\beta]=\] \[\left[8.403h_{70}^{-3/2},1.177,0.3081,1.0510,5.4905\right] \tag{5}\] where \(P_{500}\) and \(M_{500}\) are the corresponding pressure and mass respectively. To determine \(y\) we integrate the pressure through the cube. We generate halo simulation images (see section 3.3) for halo masses of \(1\times 10^{12}M_{\odot}\), \(3\times 10^{12}M_{\odot}\), \(1\times 10^{13}M_{\odot}\), and \(3\times 10^{13}M_{\odot}\). For the nominal mass case of \(3\times 10^{12}M_{\odot}\), \(y\sim 10^{-6}\) out to a radius of \(\sim 100\) kpc (or \(\approx 12^{{}^{\prime\prime}}\) at \(z=2.34\)) from the center of the profile. ### Feedback Models The gNFW models do not include the effects of feedback. 
We therefore constructed a set of simple spherical wind models of AGN feedback using the prescription of Rowe and Silk (2011) as adapted for AGN winds by Lacy et al. (2019). These give the radius of the bubble, \(R\): \[R=\beta\left(\frac{L_{W}T^{3}}{\Omega_{b}\rho_{c}(1+z)^{3}(1+b_{Q}\delta)} \right)^{1/5} \tag{6}\] where \(\beta=0.8828\), \(L_{W}\) is the wind power, \(T\) is the age of the outflow, \(\Omega_{b}\) is the fraction of the critical density of the Universe \(\rho_{c}\) in baryons, \(b_{Q}\approx 13\) is the cosmological bias factor for quasars and \(\delta=180\) is the assumed overdensity corresponding to collapsed structures. The pressure inside the bubble is approximately constant and is: \[P_{\rm bub}=\frac{2}{5}\frac{3}{4\pi}\frac{L_{W}T}{R^{3}} \tag{7}\] and the peak Compton-\(y\) signal (through the center of the bubble): \[y_{\rm max}=1.08\frac{P_{e}\sigma_{T}}{m_{e}c^{2}}2R \tag{8}\] where the electron pressure, \(P_{e}=P_{\rm bub}/1.92\), \(\sigma_{T}\) is the Thompson cross-section and \(m_{e}\) the electron mass. We constructed 2-D models of the SZE signal to then run through the simulator, as described below. ### Simulating Observations The gNFW pressure profiles and wind models are simulations of sky images. In order to compare them to our observations in the image plane, we had to add the effects of the ALMA beam to our simulated images. We do this by utilizing the ALMA simulator, a facility within CASA (The CASA Team et al., 2022). The ALMA simulator takes a sky image and a given set of observational conditions and produces the expected ALMA visibilities. This is done by convolving the telescope configuration with the theoretical sky image. We used simulator task "simpredict" from CASA version 5.6.1-8. For each of calculated models, we produce a set of theoretical visibilities based on the QSO observational conditions. These simulated visibities are then imaged using the same parameters as the QSOs (see Fig. 2 for an example). These imaged models now have the same primary beam and side lobe effects as the observations. ### Radial Profile Analysis To compare the gNFW models to our observations, the Compton parameter of the stack was analyzed as a radial profile in bins slightly wider then the width of the tapered beam. We utilized the lower resolution stack for this analysis, as the pressure profiles are relatively extended ( \(>10^{{}^{\prime\prime}}\)) sources. The beam of the smoothed ALMA image is \(\sim 6^{{}^{\prime\prime}}\) while we used \(\sim 6.7^{{}^{\prime\prime}}\) wide bins. The central pixels of the image (out to \(r\sim 3^{{}^{\prime\prime}}\)) were not utilized in the radial profile analysis as that region is mostly masked out. We measured the mean Compton parameter and RMS in each bin of the stack. Figure 3 shows our radial profile analysis comparing the stacked QSO images to gNFW profiles of different halo masses. The feedback models were prepared using the same procedure as the gNFW models (see section 3.3). We compared the feedback models to the higher resolution stack (\(\sim 3^{{}^{\prime\prime}}\) beam) in order to be sensitive to wind bubbles with radii on order of a few arcseconds predicted by recent simulations (Chakraborty et al., 2023). We analyzed this stack in Compton-\(y\) as a radial profile using bins with a width of \(\sim 5.3^{{}^{\prime\prime}}\). As on the lower resolution image, the central pixels of the image (out to \(r\sim 3^{{}^{\prime\prime}}\)) were not utilized in the radial profile analysis. 
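The annular binning used for both stacks can be summarized by a short sketch. This is a minimal NumPy illustration, assuming the stacked Compton-\(y\) map is a 2-D array with masked pixels set to NaN; the pixel scale, bin width, and inner exclusion radius are the values quoted above, and the uncertainty on each bin mean is then taken as RMS\(/\sqrt{N_{\rm beams}-1}\) as in the figure captions.

```python
import numpy as np

def radial_profile(y_map, pix_scale=0.4, bin_width=6.7, r_min=3.0, r_max=30.0):
    """Mean Compton-y and scatter in annuli around the centre of the stacked map.

    y_map     : 2-D stacked Compton-y image, masked pixels set to np.nan
    pix_scale : arcsec per pixel
    bin_width : annulus width in arcsec (about one tapered beam)
    r_min     : inner radius excluded because the centre is mostly masked (arcsec)
    """
    ny, nx = y_map.shape
    yy, xx = np.indices((ny, nx))
    r = np.hypot(xx - nx / 2, yy - ny / 2) * pix_scale      # radius of each pixel in arcsec

    edges = np.arange(r_min, r_max + bin_width, bin_width)
    centres = 0.5 * (edges[:-1] + edges[1:])
    means, scatter = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        vals = y_map[(r >= lo) & (r < hi)]
        vals = vals[np.isfinite(vals)]                      # drop masked pixels
        means.append(vals.mean() if vals.size else np.nan)
        scatter.append(vals.std() if vals.size else np.nan)
    return centres, np.array(means), np.array(scatter)
```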
We compared this profile to the feedback models at a variety of wind powers (\(L_{W}\)) and two outflow ages (\(T\)). The chosen outflow ages are based on the estimated age of a previously detected feedback wind bubble in Lacy et al. 2019 and are of the order of the Salpeter timescale for quasar growth (e.g. Shen, 2013). Figures 4 and 5 show our radial profile analysis. ## 4 Results and Discussion ### Halo Mass Signal from a gNFW halo (CMB decrement) is strongest at the center of the profile, which would not be detectable by these observations as they are centered on continuum bright QSOs. On the image plane we instead try to fit the regions around the QSO to the profile the same distance from the center. In our stacked data we do see a Compton signal in the inner radial bin (see \begin{table} \begin{tabular}{l l l} \hline \hline Model Halo Mass & \(\chi^{2}\) & p-value \\ \hline \(1\times 10^{12}M_{\odot}\) & 4.03 & 0.4 \\ \hline \(3\times 10^{12}M_{\odot}\) & 4.38 & 0.36 \\ \hline \(1\times 10^{13}M_{\odot}\) & 8.02 & 0.09 \\ \hline \(3\times 10^{13}M_{\odot}\) & 31.43 & \(2.51\times 10^{-6}\) \\ \hline \end{tabular} \end{table} Table 2: Results of image plane analysis, comparing stacked observations of QSO environments to gNFW halo profiles at four different halo masses as a goodness of fit \(\chi^{2}\) and associated statistical p-value with four degrees of freedom. All models use a concentration of 1. Figure 3: Radial profile analysis comparing the stacked QSO images to gNFW profiles of four different halo masses. Both the real and theoretical observations contain primary beam effects from the telescope configuration. Uncertainty in the bin mean is \(\sigma=RMS/\sqrt{beams-1}\), where “beams” is the number of ALMA beams in the bin. Figure 2: This is an example of a feedback model (\(T=1\times 10^{8}\)years and \(L_{W}=1\times 10^{12}L_{\odot}\)). LEFT: The wind model calculated as described in section 3.2. RIGHT: The wind model convolved with the ALMA beam using the simulator described in section 3.3. In the right hand image there are primary beam and sidelobe effects such as the negative Compton-y ring at a radius of \(\sim 0.16^{{}^{\prime}}\). figure 3), but the error is large due to the small radius; this bin only contains \(\sim 10\) beams. The other notable features of the gNFW halo when observed through the ALMA simulator are the dark sidelobes. Larger mass halo models will have brighter (in Compton-\(y\)) centered decrement and darker sidelobes than lower mass models. Our observations lack dark sidelobes that would indicate a higher mass halo. We quantify the relationship between the stacked QSO observations and the simulated gNFW models by calculating the \(\chi^{2}\) and p-value (where "p" is the probability that the difference between model and observations are due to chance) for each halo mass model compared to the stacked observations. \(\chi^{2}\) is calculated from 4 bins in the case of halo mass. Looking at Table 2 we can see that as we go to higher halo mass models, it becomes statistically less likely that they are consistent with our observations. The gNFW model with halo mass \(1\times 10^{13}M_{\odot}\) has a p-value 0.09. We can therefore constrain our observed stack of QSOs to have a halo mass \(<1\times 10^{13}M_{\odot}\) with 90% confidence. ### Feedback We have modeled feedback as a spherical wind bubble that will cause a positive Compton-\(y\) signal within the radius of the bubble. 
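As a numerical illustration of Eqs. (6)-(8), the sketch below evaluates the bubble radius, internal pressure, and peak Compton-\(y\) for one point of the model grid. It assumes astropy constants and the Planck18 cosmology for \(\rho_{c}\) and \(\Omega_{b}\) (assumptions of this sketch, not necessarily the exact values used to build the model images), and it keeps only the cosmological factors shown explicitly in the equations.

```python
import numpy as np
from astropy import units as u, constants as const
from astropy.cosmology import Planck18 as cosmo    # assumed cosmology for this sketch

def wind_bubble(L_W, T, z=2.34, b_Q=13.0, delta=180.0, beta=0.8828):
    """Bubble radius and peak Compton-y for a spherical wind, following Eqs. (6)-(8)."""
    rho_c = cosmo.critical_density0                 # present-day critical density
    Omega_b = cosmo.Ob0                             # baryon fraction of the critical density
    # Eq. (6): radius of the wind-blown bubble
    R = beta * (L_W * T**3 /
                (Omega_b * rho_c * (1 + z)**3 * (1 + b_Q * delta)))**(1 / 5)
    R = R.to(u.kpc)
    # Eq. (7): (approximately constant) pressure inside the bubble
    P_bub = (2 / 5) * (3 / (4 * np.pi)) * L_W * T / R**3
    # Eq. (8): peak Compton-y through the bubble centre, with P_e = P_bub / 1.92
    y_max = (1.08 * (P_bub / 1.92) * const.sigma_T / (const.m_e * const.c**2) * 2 * R).decompose()
    return R, y_max

# one grid point from the tables below: L_W = 1e12 L_sun, T = 1e8 yr
R, y_max = wind_bubble(1e12 * u.L_sun, 1e8 * u.yr)
```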
Sidelobe effects from the ALMA observatory are accounted for by the simulator and cause negative Compton-\(y\) signal around the feedback wind bubble. On the image plane we fit the regions around the QSO to the feedback models at the same distance from image center. We quantify the relationship between the stacked QSO observations and the simulated feedback \begin{table} \begin{tabular}{l l l} \hline \hline \multicolumn{3}{c}{\(T=1\times 10^{8}\)years} \\ \hline \(L_{W}\) & \(\chi^{2}\) & p-value \\ \hline \(1\times 10^{11}L_{\odot}\) & 1.55 & 0.17 \\ \hline \(3\times 10^{11}L_{\odot}\) & 1.88 & 0.09 \\ \hline \(1\times 10^{12}L_{\odot}\) & 3.32 & 0.01 \\ \hline \(3\times 10^{12}L_{\odot}\) & 7.78 & \(2.51\times 10^{-7}\) \\ \hline \(1\times 10^{13}L_{\odot}\) & 13.68 & \(2.18\times 10^{-13}\) \\ \hline \end{tabular} \end{table} Table 4: Results of image plane analysis, comparing stacked observations of QSO environments to outflow age \(1\times 10^{8}\) years feedback models at five different wind powers as a goodness of fit \(\chi^{2}\) and associated statistical p-value with five degrees of freedom. Figure 4: Radial profile analysis comparing the stacked QSO images to feedback models with an outflow age of \(T=3\times 10^{7}\) years. Both the real and theoretical observations contain primary beam effects from the telescope configuration. Uncertainty in the bin mean is \(\sigma=RMS/\sqrt{\rm beams-1}\), where “beams” is the number of ALMA synthesized beams in the bin. Figure 5: Radial profile analysis comparing the stacked QSO images to feedback models with an outflow age of \(T=1\times 10^{8}\) years. Both the real and theoretical observations contain primary beam effects from the telescope configuration. Uncertainty in the bin mean is \(\sigma=RMS/\sqrt{\rm beams-1}\), where “beams” is the number of ALMA synthesized beams in the bin. \begin{table} \begin{tabular}{l l l} \hline \hline \multicolumn{3}{c}{\(T=3\times 10^{7}\)years} \\ \hline \(L_{W}\) & \(\chi^{2}\) & p-value \\ \hline \(1\times 10^{11}L_{\odot}\) & 1.49 & 0.19 \\ \hline \(3\times 10^{11}L_{\odot}\) & 1.6 & 0.16 \\ \hline \(1\times 10^{12}L_{\odot}\) & 2.08 & 0.06 \\ \hline \(3\times 10^{12}L_{\odot}\) & 4.93 & \(1.6\times 10^{-4}\) \\ \hline \(1\times 10^{13}L_{\odot}\) & 30.56 & \(<1\times 10^{-16}\) \\ \hline \end{tabular} \end{table} Table 3: Results of image plane analysis, comparing stacked observations of QSO environments to outflow age \(3\times 10^{7}\) years feedback models at five different wind powers as a goodness of fit \(\chi^{2}\) and associated statistical p-value with five degrees of freedom. models by calculating the \(\chi^{2}\) and p-value for each wind power and outflow age model compared to the stacked observations. Looking at Tables 3 and 4 we see that in both outflow age cases, as we go to higher power models, it becomes statistically less likely that they are consistent with our observations. At a wind power of \(1\times 10^{12}L_{\odot}\) and an outflow age \(1\times 10^{8}\) years the p-value is 0.01 We can therefore constrain our observed stack of QSOs to have a feedback wind power \(<1\times 10^{12}L_{\odot}\), or \(\lower 2.15pt\hbox{$\;\buildrel<\over{\sim}\;$}1\%\) of the bolometric luminosity of the quasar. 
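The goodness-of-fit values in Tables 2-4 follow from a standard \(\chi^{2}\) comparison of the binned radial profile with each beam-convolved model. A minimal sketch, assuming the binned measurements, their uncertainties, and the corresponding model values are available as arrays, is:

```python
import numpy as np
from scipy import stats

def chi2_pvalue(y_obs, y_err, y_model):
    """Goodness of fit of a beam-convolved model profile to the stacked radial profile."""
    chi2 = np.sum(((y_obs - y_model) / y_err) ** 2)
    dof = len(y_obs)                       # 4 bins for the gNFW fits, 5 for the feedback fits
    p_value = stats.chi2.sf(chi2, dof)     # probability of a larger chi^2 arising by chance
    return chi2, p_value
```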
## 5 Conclusions We use the constraints on the tSZE from our stack of quasar observations to show that (1) if the quasars are in quiescent halos of virialized gas, the halo mass is \(<1\times 10^{13}M_{\odot}\) (consistent with estimates of \(\sim 3\times 10^{12}M_{\odot}\) from clustering analyses), and (2) if feedback from thermal winds extends to spatial scales of \(\sim 100\) kpc, these winds carry \(<1\%\) of the bolometric luminosity of the quasars. This finding is consistent with the only direct detection of the SZE around a single QSO (Lacy et al., 2019), which found the hyperluminous quasar HE 0515-4414 to have a wind luminosity of \(\sim 0.01\%\) of its bolometric luminosity. The \(\chi^{2}\) of our best-fitting models is similar to that of a zero-signal model; further fitting of halo and feedback models is therefore not merited by the data available in this survey. We do not see evidence of a strongly peaked central decrement that would be indicative of a gNFW profile or a feedback wind bubble. We note that the simulations of Chakraborty et al. (2023) show that strong jet-mode feedback is effective at suppressing the SZE from both the halo and thermal winds, so jet feedback may be occurring in these objects. Only one of the quasars (Q1228+3128, which was not included in our stacking analysis) is currently radio loud; however, there could be intermittent jet activity (e.g. Nyland et al., 2020), and it has been shown that quasars that are not radio loud can still have radio-mode feedback that affects the QSO environment on scales of \(\sim\)10-100 kpc (Villar-Martin et al., 2021). ## 6 Acknowledgments This paper makes use of the following ALMA data: 2019.1.01251.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), MOST and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. We thank the North American ALMA Science Center faculty and staff for their support. We thank the Arizona State University SZ science group, including Seth Cohen, Phil Mauskopf, Sean Bryan, Jenna Moore, Emily Lunde, Jeremey Meinke, Skylar Grayson and Evan Scannapieco, for their advice and contributions.
2303.11332
Deep Learning for Video-based Person Re-Identification: A Survey
Video-based person re-identification (video re-ID) has lately fascinated growing attention due to its broad practical applications in various areas, such as surveillance, smart city, and public safety. Nevertheless, video re-ID is quite difficult and is an ongoing stage due to numerous uncertain challenges such as viewpoint, occlusion, pose variation, and uncertain video sequence, etc. In the last couple of years, deep learning on video re-ID has continuously achieved surprising results on public datasets, with various approaches being developed to handle diverse problems in video re-ID. Compared to image-based re-ID, video re-ID is much more challenging and complex. To encourage future research and challenges, this first comprehensive paper introduces a review of up-to-date advancements in deep learning approaches for video re-ID. It broadly covers three important aspects, including brief video re-ID methods with their limitations, major milestones with technical challenges, and architectural design. It offers comparative performance analysis on various available datasets, guidance to improve video re-ID with valuable thoughts, and exciting research directions.
Khawar Islam
2023-03-21T05:50:53Z
http://arxiv.org/abs/2303.11332v2
# Deep Learning for Video-based Person Re-Identification: A Survey ###### Abstract Video-based person re-identification (video re-ID) has lately fascinated growing attention due to its broad practical applications in various areas, such as surveillance, smart city, and public safety. Nevertheless, video re-ID is quite difficult and is an ongoing stage due to numerous uncertain challenges such as viewpoint, occlusion, pose variation, and uncertain video sequence, etc. In the last couple of years, deep learning on video re-ID has continuously achieved surprising results on public datasets, with various approaches being developed to handle diverse problems in video re-ID. Compared to image-based re-ID, video re-ID is much more challenging and complex. To encourage future research and challenges, this first comprehensive paper introduces a review of up-to-date advancements in deep learning approaches for video re-ID. It broadly covers three important aspects, including brief video re-ID methods with their limitations, major milestones with technical challenges, and architectural design. It offers comparative performance analysis on various available datasets, guidance to improve video re-ID with valuable thoughts, and exciting research directions. + Footnote †: journal: 1 Footnote 1: Corresponding author: Khawar Islam ## 1 Introduction With the endless efforts of computer vision and deep learning researchers, deep learning has accomplished exceptional success in person re-ID. In a few years, deep learning shows remarkable results in video re-ID and gives new birth to surveillance systems. With the rapid improvement in multimedia technology, video re-ID has gained much more attention in academia and the industrial sector over the last ten years Zheng et al. (2016); Nambiar et al. (2019). The dominant reason for video re-ID popularity is to provide a wide range of services for public safety such as tracking each person with a unique ID, preventing crimes, behavior analysis, forensic investigation, etc. Almasawa et al. (2019). In intelligent video surveillance applications, video re-ID is defined as recognizing an individual person through various non-overlapping cameras from the huge number of gallery images Chen et al. (2020). It is one of the intriguing computer vision problems that are present among inter-camera variance challenges such as background clutter, occlusion, viewpoint, illumination changes, human pose variation and etc. Video re-ID is an extended way of image-based person re-ID. Rather than comparing image pairs, pairs of video sequences are provided to the re-ID algorithm. The essential and important task of the video re-ID algorithm is to obtain temporal features from video sequences. Compare with image-based information, videos naturally comprise more information and evidence than individual images. Lately, numerous methods have been developed for video re-ID Zhou et al. (2017); Zhang et al. (2017). Most existing approaches emphasize extracting spatial and temporal features present in a video and then applying the re-ID algorithm to obtained features. In general, taking a video from different surveillance cameras like CCTV from different outside places. Then, detect persons in a video sequence and create a bounding box on it. Due to the high volume of data, it is difficult to draw manually bounding boxes and annotate each person's image for training. Different studies Ren et al. (2015); Li et al. (2017); Lan et al. (2018) trained detectors to detect persons in a video sequence. 
Next, training a new re-ID model on highly noisy data based on previously annotated data. At last, giving query (probe) person image to re-ID model to find query person in a large set of candidate gallery Ye et al. (2021). The main role of video re-ID is to extract spatiotemporal features from video sequences. Some previous studies directly utilized person re-ID methods for images with some extension and applied for video. These approaches extract spatiotemporal information from each image independently by utilizing a recurrent neural network, feature aggregation function, and different pooling operations to obtain a frame-level information (e.g. appearance) representation. These above-mentioned techniques view different video frames with equal importance when needed frame-level features. However, these approaches extract abstract-level global features from the human body, while ignoring several local visual cues from a body such as a gait, hairs and etc. Person re-ID in videos taken by multiple non-overlapping cameras which is a more practical implementation than images and achieves growing research trends Wang et al. (2014); Zhou et al. (2017). In practical terms, videos captured from surveillance cameras with the involvement of pedestrians are the actual videos for person re-ID because these videos contain useful abundant information and spatial temporal features of a pedestrian that includes different human poses with diverse view angles. Nevertheless, recognizing discriminative portions of pedestrians against noisy data and extracting their features is an intriguing vision problem that is complicated for matching persons. Several video re-ID methods McLaughlin et al. (2017); Zhang et al. (2017) utilize CNN and RNN networks to extract spatio-temporal features from images and employ a pooling strategy to aggregate them. However, following these procedures, the task of matching persons becomes more sensitive when there are some noisy samples in data due to cluttered background or occlusion. While comparing two images of a person, each frame contributes equally to the matching task. For instance, if two persons are occupied with the same occluded object, the same appearance on occluded objects gives a false positive result in person re-ID. ### Contribution of this survey paper Most of the researchers focused on and surveyed traditional re-ID methods. Several survey papers covered conventional \begin{table} \begin{tabular}{p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt} p{113.8pt}} \hline \hline **Survey** & **Focus** & **Major Contribution** & **Video re-ID** & **Publication** \\ \hline Mazzon et al. (2012) & Crowd & \multicolumn{2}{c}{Drawbacks of existing approaches} & \multirow{2}{*}{Partial} & \multirow{2}{*}{PRL} \\ \cline{1-1} Satta (2013) & Appearance Descriptors & & & & \\ \cline{1-1} Gala and Shah (2014) & Open-Closed & \multicolumn{2}{c}{Highlighted public datasets with current evaluation} & \multirow{2}{*}{Partial} & \multirow{2}{*}{IVC} \\ \cline{1-1} Zheng et al. (2016b) & Image \& Video & & & \\ \cline{1-1} \cline{3-5} Lavi et al. (2018) & re-ID & \multicolumn{2}{c}{Survey on deep neural networks techniques} & \multirow{2}{*}{X} & \multirow{2}{*}{arXiv} \\ \cline{1-1} \cline{3-5} Wang et al. (2018a) & & & & \\ \cline{1-1} \cline{3-5} Wu et al. (2019) & \multicolumn{2}{c}{re-ID} & \multicolumn{2}{c}{Traditional methods and architectural perspectives} & \multirow{2}{*}{X} & \multirow{2}{*}{CAAI-TIT} \\ \cline{1-1} \cline{3-5} Wu et al. 
(2019) & \multicolumn{2}{c}{Image} & \multicolumn{2}{c}{Survey of SOTA methods with feature designing} & \multirow{2}{*}{P} & \multirow{2}{*}{Neur-computing} \\ \cline{1-1} \cline{3-5} Masson et al. (2019) & \multicolumn{2}{c}{Image} & \multicolumn{2}{c}{Exensively covered pruning methods, strategies} & \multirow{2}{*}{X} & \multirow{2}{*}{JIVP} \\ \cline{1-1} \cline{3-5} Nambiar et al. (2019) & \multicolumn{2}{c}{Gait} & \multicolumn{2}{c}{Covered bio-metric details, pose analysis} & \multirow{2}{*}{X} & \multirow{2}{*}{ACM-CS} \\ \cline{1-1} \cline{3-5} Leng et al. (2019) & \multicolumn{2}{c}{Open-world} & \multicolumn{2}{c}{Generalized open-world re-ID} & \multirow{2}{*}{P} & \multirow{2}{*}{IEEE-TCSVT} \\ \cline{1-1} \cline{3-5} Wang et al. (2019b) & \multicolumn{2}{c}{Heterogeneous} & \multicolumn{2}{c}{Focused on heterogeneous re-ID} & \multirow{2}{*}{Partial} & \multirow{2}{*}{IJCAI} \\ \cline{1-1} \cline{3-5} Wang et al. (2020) & \multicolumn{2}{c}{Image} & \multicolumn{2}{c}{Extensive review of previous Re-ID methods} & \multirow{2}{*}{X} & \multirow{2}{*}{IEEE-Access} \\ \cline{1-1} \cline{3-5} Ye et al. (2021) & \multicolumn{2}{c}{Image\&Video} & \multicolumn{2}{c}{Discussed closed-world and open-world re-ID} & \multirow{2}{*}{Partial} & \multirow{2}{*}{IEEE-PAMI} \\ \cline{1-1} \cline{3-5} Xiangtan et al. (2021) & \multicolumn{2}{c}{Text \& Image} & \multicolumn{2}{c}{Extensively reviewed person search methods} & \multirow{2}{*}{X} & \multirow{2}{*}{IJCAI} \\ \cline{1-1} \cline{3-5} Lin et al. (2021) & \multicolumn{2}{c}{Image \& Video} & \multicolumn{2}{c}{Discussion about dataset and evaluation} & \multirow{2}{*}{Partial} & \multirow{2}{*}{arXiV} \\ \cline{1-1} \cline{3-5} **Ours** & \multicolumn{2}{c}{**Video**} & \multicolumn{2}{c}{**Briefly discuss video re-ID methods**} & \multirow{2}{*}{**Fully**} & \multirow{2}{*}{**CVIU**} \\ \cline{1-1} \cline{3-5} & \multicolumn{2}{c}{**Discuss unique architectures, loss functions**} & & & \\ \cline{1-1} \cline{3-5} & \multicolumn{2}{c}{**Performance analysis of current methods**} & & & \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison between existing survey papers and our survey paper. Our survey paper mainly focuses on video re-ID. techniques including feature learning and distance learning, and some of them broadly covered deep learning techniques for re-ID. As far as our deep analysis, there is no survey paper discussing the recent video re-ID methods, novel loss functions, architectural designs, and approaches for video re-ID perspective. In this paper, we discuss comprehensive recent methods published in top-tier conferences and journals. In a nutshell, the contributions discussed in this survey paper are summarized as follows: 1. To the best of our knowledge, this is the very first review paper to extensively cover deep learning methods for video re-ID instead of all types of person re-ID compared with recent existing surveys Ye et al. (2021); Wu et al. (2019); Almasawa et al. (2019). 2. We comprehensively cover deep learning techniques for video re-ID from multiple aspects, including global appearance methods, local part alignment methods, attention methods, graph methods, and transformer methods. 3. This survey paper broadly covers architectural designs, novel loss functions, existing work, and the rapid progress of deep learning for video re-ID. Thus, it gives the readers to overlook the entire video re-ID work. 4. Extensive comparison of top-ranking results on the benchmark datasets is performed. 
The development of video Re-ID and the challenges affecting video Re-ID systems are discussed, and a brief review and future discussion are given. ### Review of existing survey papers Our recent survey offers a comprehensive and in-depth re-view of video re-ID and is distinct from previous and existing surveys and studies, as we broadly include the particular area of intelligent video surveillance and its practical usage. A detailed literature of previous survey articles aiming at person re-ID and its deep learning-based approaches is present, and some of them focused on open-world and close-world re-ID. Still, as far as we know, there is no previous paper that deeply focuses on video-based person re-ID from a practical point of view. We classify the previous and existing work of literature on person re-ID into five major categories named re-ID in the crowd, DL for re-ID, appearance descriptor for re-ID, opened the world, and closed re-ID. The comprehensive application scenario, relevant contributions, and special considered factors about past survey papers are described in Table 1. The influential work on person re-ID applications is mentioned in Zheng et al. (2016); Almasawa et al. (2019). Mazzon et al. (2012) presented a state-of-the-art (SOTA) re-ID framework for crowd applications and implementation of the practical framework in crowded scenes where people's movement captured from body appearance and comprehensively covered the discussion about person re-ID applications in terms of appearance features (color, texture, shape), association (distance, learning, optim) and calibration (color, spatial-temp). Similarly, Riccardo Satta (2013), provided a comprehensive overview of appearance descriptors and challenging issues i.e., illumination changes, partial occlusion, changes in color response and pose and viewpoint variations. Furthermore, they covered global and local features with some "other cues". Furthermore, the author in Gala and Shah (2014) broadly discussed person re-ID problems, especially challenges related to the system level and component level. The authors discussed possible re-ID scenarios with comprehensively covered public datasets, evaluation metrics, and current work in re-ID. One of the major and remarkable surveys Zheng et al. (2016) focused on each aspect of re-ID and connected with instance retrieval and image classification. A brief milestone and technical perspectives of image/video-based re-ID with hand-crafted methods were discussed. Traditional methods for person re-ID Wang et al. (2018) were highlighted with further extended deep learning approaches such as CNN, RNN, and GAN to achieve person re-ID task and covered advantages and disadvantages. Similarly, the author in Lavi et al. (2018) briefly discussed person re-ID from a video surveillance perspective and covered specific novel re-ID loss functions. The authors provided detailed re-ID approaches and divided them into i.e., recognized, verified deep model, distance learning metrics, feature learning, video-based person re-ID models, and data augmentation models. Further, they also conducted some experiments on the base model with several people re-ID methods Wu et al. (2019). In Masson et al. (2019), a detailed analysis of pruning techniques for compressing re-ID models is presented. To strengthen person the work, the authors further experimented with different pruning techniques and applied them to the deep siamese neural network. 
Their finding shows that pruning methods substantially decrease the number of parameters without decreasing accuracy. Different from previous surveys, a gait-based Nambiar et al. (2019) person re-ID has been discussed and highlighted various biometric regions in the human body i.e., hard biometrics Figure 1: Timeline of the top-performing methods for video re-ID task. including face identity, fingerprint, DNA, eye retina, and palmprint. Similarly, soft biometrics are related to body measurement, eye color, gait, and hair/beard/mustache. Particularly, the authors in Almasawa et al. (2019) briefly discussed traditional and deep learning-based popular architectures and categories into image and video re-ID. Additionally, they compared to rank 1 results with SOTA methods and highlighted important challenges with future direction. Mostly person re-ID systems designed for closed-world settings, in Leng et al. (2019), the authors focused on open-world settings and discussed new trends of person re-ID in that area. They analyzed inconsistencies between open and closed-world applications and briefly discussed data-driven methods. Several specific surveys Mazzon et al. (2012); Nambiar et al. (2019); Wang et al. (2019) presented a depth literature review of some particular areas like heterogeneous re-ID Wang et al. (2019), the author studied the concept of Hetero re-ID. They provided a comprehensive literature review in infrared images, low-resolution images, text, and sketches. Afterward, the authors analyzed various datasets with evaluation metrics by giving future insights and providing new challenges areas in Hetero re-ID. Recently, the author Ye et al. (2021) conducted an extensive literature review of deep learning-based re-ID. Instead of focusing on an overview, they briefly covered limitations and advantages. The new AGW baseline is designed with a novel evaluation metric (mINP) for single and cross-modality re-ID tasks. However, the above survey papers of all presented covered person re-ID surveys do not focus on recent methods of VID re-ID and their solutions for intelligent video surveillance and practical applications. Precisely, we cover recent novel loss functions designed for video re-ID, architectural design, brief technical aspects of significant papers, and broadly discuss performance analysis with the most frequent datasets used for video-based re-ID. Several popular methods are illustrated in Fig. 1 ## 2 Video re-ID Methods This section discusses the feature representation learning approaches for video re-ID. We divide it into five main categories: a) Global Appearance Methods (subsection 2.1) b) Local Part Alignments Methods (subsection 2.2) c) Attention Methods (subsection 2.3) d) Graphs Methods (subsection 2.4) and e) Transformers Methods (subsection 2.5). ### Global Appearance Methods This class of methods extracts a single feature vector from a person's image without any supplementary information. Since person re-ID is originally applied for person retrieval problems Zhang et al. (2020), learning global feature is often ignored in previous studies when incorporating existing DL approaches into the video re-ID domain. As a pioneering work, Niall et al. (2016) introduces the first **Recurrent Deep Neural Network (RDNN)** architecture based on pooling and re-currency mechanism to combine all time-step data into a single feature vector. 
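The basic recipe behind this family of recurrent/pooling-based global-appearance methods — per-frame CNN features, optional recurrent modelling over the tracklet, and temporal pooling into a single clip descriptor — can be sketched as below. This is an illustrative PyTorch reduction; the backbone, feature dimensions, and average pooling are assumptions for the sketch and do not reproduce any specific published architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class GlobalAppearanceEncoder(nn.Module):
    """Per-frame CNN features aggregated into a single clip-level descriptor."""

    def __init__(self, feat_dim=512, hidden_dim=512):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(backbone.children())[:-1])  # keep everything up to global pooling
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)

    def forward(self, clip):                        # clip: (B, T, 3, H, W) tracklet of T frames
        b, t = clip.shape[:2]
        frames = clip.flatten(0, 1)                 # (B*T, 3, H, W)
        feats = self.cnn(frames).flatten(1)         # (B*T, feat_dim) frame-level appearance features
        feats = feats.view(b, t, -1)                # (B, T, feat_dim)
        hidden, _ = self.rnn(feats)                 # recurrent temporal modelling over the tracklet
        return hidden.mean(dim=1)                   # temporal average pooling -> (B, hidden_dim)
```

Clip-level descriptors of this kind are then compared with a distance metric (and trained with softmax/triplet losses) to match query and gallery tracklets.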
To compare different temporal modeling methods, Gao and Nevatia (2018) comprehensively study 3D ConvNET, RNN, temporal pooling, and temporal attention by fixed baseline architecture trained with triplet and softmax cross-entropy losses. Fu et al. (2019) address large-scale video re-ID problem by introducing their **Spatial Temporal Attention (STA)** method. Rather than extracting direct frame-level clues by using average pooling, a 2D ST map is used to measure clip-level feature representation without any additional clues. Generally, features extracted from a single frame contain a lot of noise, illumination, occlusion, and different postures. This results in the loss of discriminative information (e.g., appearance and motion). **Refining Recurrent Unit (RRU)**Liu et al. (2019) recovers the missing parts with the help of motion context and appearance from the previous frame. Another popular solution is to explicitly handle alignment problem corruption using occluded regions. Li et al. (2018) employs a unique diversity regularization expression formulated on Hellinger distance to verify the SA models which do not find similar body parts. Zhao et al. (2019) propose an attribute-based technique for feature re-weighting frame and disentanglement. Single frame features are divided into different categories of sub-features, and each category defines a specific semantic attribute. A two-stream network Song et al. (2019) that jointly handle detailed and holistic features utilize an attention approach to extract feature at the global level. Another network captures local features from the video and enhances the discriminative ST features by combining these two features. Different from Zhang et al. (2020), a **Global-guided Reciprocal Learning (GRL)** framework Liu et al. (2021) extracts fine-grained information in an image sequence. Based on local and global features, **Global-guided Correlation Estimation (GCE)** module generates feature correlation maps, locating low and high correlation regions to identify similar persons. Further, to handle multiple memory units and enhance temporal features, **Temporal Reciprocal Learning (TRL)** is constructed to gather specific clues. Li et al. (2021) improve the global appearance by jointly investigating global and local region alignments by considering inter-frame relations. ### Local Part Alignments Methods These methods extract local part/region that effectively prevents misalignment with other frames in a tracklet. Considering the persistent body structure with the combination of inconsistent body parts in terms of appearance, they are new to each other. The goal is to distinguish personal images based on visual similarity. To preserve structural relationship details, the **Structural Relationship Learning (SRL)**Bao et al. (2019) is proposed to extract structural relations in a refined and efficient way. SRL helps convolutional features to make the relation useful between regions and GCN. GCN allows learning the feature representations of the hidden layers which encode node features and local structural information of graph. Another popular solution is **Spatial-Temporal Completion network (STCnet)**Hou et al. (2019), a method explicitly handles partial occlusion by recovering the occluded part appearance. **Region-based Quality Estimation Network (RQEN)**Song et al. 
(2018) designs an end-to-end training technique with gradient and learns the partial quality of each person image and aggregates complementary partial details of video frames in a sequence. Different from previous methods, they utilize erasing techniques to penalize regularized terms during network training to prevent over-fitting. Hou et al. (2020) capture complementary affinities from video frames using an erasing strategy during training and testing. Based on the activated parts of previous frames, this approach erases the regions of each frame which ensures the frame concentrate on a new human part. To extract fine-grained cues, **Multi-Granularity Reference aided Attentive Feature Aggregation (MG-RAFA)** is proposed in Zhang et al. (2020) to jointly handle Spatio-temporal features. Semantic hierarchy is considered for each node/position from a global point of view. For the position of each feature, local affinities are utilized with reference to feature nodes which provide the global structural and appearance information to support different weights to local features. Li et al. (2021) considers a holistic feature for visual similarity of video frames while focusing on the quality that allows the recovery of misaligned parts. ### Attention Methods These methods usually ignore dissimilar pixels in training and prediction, employing similar pixels to make computational-friendly networks. Song et al. (2018) introduce a mask-guided network, where binary masks are used to coexist with corresponding person images to decrease background clutter. Similar to the prior work, Subramaniam et al. (2019), CO-Segmentation approaches have shown remarkable improvements in video re-ID over different baselines by integrating a **CoSegmentation-based Attention (COSAM)**Subramaniam et al. (2021) block among different layers in CNN networks. These CO-segmentation methods are able to extract unique features between person images and use them for channel and spatial-wise attention. In a different work in video re-ID, Chen et al. (2019) learn spatial-temporal features and calculate an attention score map to specify the quality of different components of a person. In real-world applications, the motion patterns of humans are the dominant part of re-ID. The Flow Guided-Attention network Kiran et al. (2021) is designed to fuse images and sequence of optical flow using CNN feature extractor which allows encoding of temporal data among spatial appearance information. The Flow Guided-Attention depends on joint SA between optical flow and features to take out unique features among them. Additionally, an approach to aggregate features is proposed for longer input streams for improved representation of video sequences. Several studies focus on multi-grained and multi-attention approaches to concentrate on important parts of the human body. Hu et al. (2020) introduce **Concentrated Multi-grained Multi-Attention Network (CMMANet)**, multi-attention blocks are proposed to obtain multi-grained details by processing intermediate multi-scale features. Moreover, multiple-attention sub-modules in multi-attention blocks can automatically discover multiple discriminative regions in the frame sequence. Relevant to multi-branch networks, Hou et al. (2021) propose an innovative and computational-friendly video re-ID network that differs from the existing frameworks. 
**Bilateral Complementary Network (BiCnet)** preserves spatial features from the original image and down-sampling approach to broaden receptive fields and **Temporal Kernel Selection (TKS)** module captures the temporal relationship of videos. Different from previous studies, Chen et al. (2020) introduces an end-to-end 3D framework to capture salient features of pedestrians in spatial-temporal domains. In this framework, salient 3D bins are selected with the help of two-stream networks and an RNN model to extract motion and appearance information. ### Graph Methods After the remarkable success of the CNN model Krizhevsky et al. (2012) in image understanding and reconstruction, academic and industrial researchers have focused on developing convolutional approaches for graph data. Recently, researchers combine re-ID methods with graph models and explore Video re-ID Yan et al. (2016). Cheng et al. (2018) develop a training network that jointly handles conventional triplet and contrastive losses through a joint laplacian form that can take complete benefit of ranking data and relationships between training samples. In Shen et al. (2018), a novel unsupervised algorithm is formulated, which maps the ranking mechanism in the person re-ID method. Then, the formulation procedure is extended to be able to utilize ranking results from multiple algorithms. Only matching scores produced by different algorithms can lead to consensus results. The key role of the person re-ID task is to efficiently calculate the visual resemblance among person images. Still, ongoing person re-ID methods usually calculate the similarity of different image pairs (investigated) and candidate lists separately whereas neglecting the association knowledge among various query-candidate pairs. To solve the above problems, **Similarity-Guided Graph Neural Network (SGGNN)**Chen et al. (2018) propose to generate a graph to illustrate the pairwise associations between query and candidate pairs (nodes) and utilize these associations to provide up-to-date query candidate correlation features extracted from the image in an end-to-end manner. Most re-ID approaches emphasize local features for similarity matching. Chen et al. (2018) combine multiple person images to estimate the association between local relation and global relation in their **Conditional Random Field (CRF)**. The benefit of this model is to learn local resemblance metrics from image pairs whereas considering the dependencies of all images in a collection, shaping group similarities. Yan et al. (2019) put more effort into person re-ID and employ context details. They first develop a contextual module called the instance expansion part, which emphasizes on relative attention part to find and purify beneficial context detail in the scene. One of the innovative works Wu et al. (2020) for video re-ID is graph-based adaptive representation. Existing studies ignore part-based features, which contain temporal and spatial information. This approach allows the association between contextual information and relevant regional features such as feature affinity and poses alignment connection, to propose an adaptive structure-aware contagiousness graph. Liu et al. (2021) present **Correlation and **Topology Learning (CTL)** method which generates robust and distinguish features. It captures features at multi-granularity levels and overcomes posing appearance problems. 
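The common core of these graph-based methods — propagating information between part- or frame-level feature nodes over a learned affinity graph — can be reduced to a single message-passing layer, as in the purely illustrative sketch below (a generic attention-style propagation step, not a re-implementation of SGGNN, the CRF model, or CTL).

```python
import torch
import torch.nn as nn

class PartGraphLayer(nn.Module):
    """One round of message passing over part/frame feature nodes of a tracklet."""

    def __init__(self, dim=256):
        super().__init__()
        self.msg = nn.Linear(dim, dim)

    def forward(self, nodes):                                  # nodes: (B, N, dim), N part/frame nodes
        # soft adjacency from pairwise feature affinities (scaled dot product)
        adj = torch.softmax(nodes @ nodes.transpose(1, 2) / nodes.shape[-1] ** 0.5, dim=-1)
        # aggregate neighbour messages and update each node with a residual connection
        return torch.relu(nodes + adj @ self.msg(nodes))
```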
Lately, hyper GNNs have attracted a lot of attention and achieved dominant results in various computer vision research fields such as person re-ID Shen et al. (2018), action recognition Wang and Gupta (2018) and image recognition Chen et al. (2019). These hypergraph algorithms develop pairwise relationships on the basis of object interest. In general, a hypergraph is a graph in which edges independently work and can join any considerable number of vertices. The illustration of hypergraph as shown in Fig. 2 (b) Conversely, as represented in Fig. 2 (a) where an edge exactly links with two vertices in a simple graph. In MG hypergraph, as represented in Fig. 2 (d) hypergraphs with distinct spatio granularities are built utilizing numerous stages of features like body part throughout the video frames. In every hypergraph stage, novel temporal granularities are taken by hyperedges which connect a type of nodes in a graph such as body part features around separate temporal scales. The first **Multi-Granular Hypergraph (MGH)**Yan et al. (2020) hypergraph and innovative mutual information loss function are proposed to overcome the image retrieval problem. The MGH approach clearly supports multi-granular ST information from the frame sequence. Then, they propose an attention procedure to combine features presenting at the node level to obtain better discriminative graph representations. Remarkably, the proposed approach achieves 90% rank-1 accuracy which is amongst the highest accuracy on the MARS dataset. Label estimation in graph matching is closely related to person re-ID problems in unsupervised learning. Ye et al. (2019) present an unsupervised **Dynamic Graph Matching (DGM)** video re-ID approach to predict labels. This technique iterates the update process by utilizing a discriminative metric and correspondingly updated labels. ### Transformer Methods Recently, transformer shows a great interest in the computer vision field, and self-attention-based methods are proposed to solve visual problems. Inspired by recent development, Zhang et al. (2021) put forward the first step, propose first **SpatioTemporal transformer (STT)** and synthesize pre-training data strategy to reduce over-fitting for video re-ID task. In their network, the global module enables supplement to utilize the relation among patches from frames. To extract comprehensive features from videos, Liu et al. (2021) further explore transformers and introduce **Trigeminal Transformers (TMT)** with robust novel feature extractor that jointly transform raw videos into S, T, and ST domains. To capture fine-grained features and aggregate in multi-view features, a self-view transformer is proposed to enhance single-view features and a cross-view transformer is used to combine multiple features. A **Duplex SpatioTemporal Filtering Network (DSFN)**Zheng et al. (2021) architecture is designed to extract static and dynamic data from frame sequences for video re-ID. To enhance the capability of kernels, sparse-orthogonal constraints are developed to broaden the difference in temporal features. To collaborate with a group of kernels, they add additional channels to assist and extract ST clues from distinct features. A Hybrid **Dense Interaction Learning (DenseIL)** framework is presented in He et al. (2021) which utilizes both CNN and Attention mechanism for video re-ID. 
DenseIL consists of a CNN-based encoder which is responsible to extract efficient discriminative spatial features and a DI-based decoder densely modeling the ST inherent interaction among frames. ## 3 Novel Architectures Different from existing architectures, Jiang et al. (2021) propose a novel design to handle misalignment problems in video re-ID. **Self-Separated network (SSN)** provides an effective approach to deal with temporal and spatial variation of a person's body parts. SSN derives a two-round classification approach, leading to better training in pixel-wise and aggregated features. The improved **Coarse-to-Fine Axial Attention Network (CF-AAN)**Liu et al. (2021) is designed with the help of Link and re-Detect block which can align noisy tracklist on the image level. This module not only decreases computational costs but also achieves promising results. Various video re-ID methods are still suffering from pose changes and personal misalignment problems. To handle misalignment, Zhang et al. (2021) propose the **Reference-Aided Part-Aligned (RAPA)** that focuses on different parts of the body and disentangles the discriminative features. **Reference Feature Learning (RFL)** pose-based module is provided to capture uniform standards for alignment. Aligning the body parts in intra-video, relations, and attention-based **Part Feature Disentangling (PFD)** blocks are designed to locate and match body parts through frames. Most video re-ID methods focus on the important region of the image, therefore, these methods can easily lose out on fine Figure 2: **Illustration of simple graph each line connected with two vertices. In a hypergraph, each edge is connected with more than two vertices. In a multi-granularity graph, each node models specific spatial granularity, and each hypergraph is connected with multiple nodes.** grained hints in image sequences. Different from previous studies, the novel GRL Liu et al. (2021) framework is introduced along with reciprocal learning and correlation estimation. The GCE module creates the feature maps of local and global features that helps to locate the low regions and high regions to identify a similar person. Then, a novel TRL approach is introduced to improve the high-correlation semantic information. Gu et al. (2020) propose **Appearance Preserving 3D Convolution (AP3D)** and **Appearance-Preserving Module (APM)**, which align neighborhood feature maps in pixel-level. 3D ConvNets model temporal information on the basis of preserving the quality of visual appearance. It may be easier to aggregate AP3D with current 3DConNet by substituting prior 3D-Conv filters to AP3Ds. In video re-ID, personal attributes and visual appearance are key to matching identities, and both features significantly contribute to the tracking of pedestrians. Novel TALNet Liu et al. (2020) is proposed to focus on attribute-temporal learning by constructing a branch network with the help of SA and temporal-semantic context. ## 4 Loss Functions Loss function plays a major and crucial role in discriminating the learned features. In general, the softmax loss separates the learned features rather than discriminates. The main goal of designing a person re-ID loss function is to enhance representation with an efficiency loss. We highlight several of the most influential loss functions for video re-ID. ### Attention and CL Loss Pathak et al. 
(2020) introduce CL centers online soft mining loss which utilizes center vectors from center loss as class label vector representations to crop out those frames that contain higher noise because it contains high variance compared to the original classifier weights. Additionally, they penalize the model by giving maximum attention scores to those frames that have randomly deleted patches. Those random erased frames are labeled as 1 otherwise 0 and N is the number of total frames. \[\text{AL}=\frac{1}{N}\sum_{i=1}^{N}\text{label}(i)*\text{ Attention }_{score}(i) \tag{1}\] ### Weighted Triple-Sequence Loss (WTSL) Jiang et al. (2020) explicitly encode frame-based image level information into video level features that can decrease the effect of outlier frame. Intra-class distance in WTSL makes similar videos closer and inter-class distance pushes dissimilar videos further apart. \[\text{L}_{WTSL}=\sum_{i=1}^{N}\left[\left\|\text{I}\text{F}_{a}^{i}-\text{F}_ {p}^{i}\right\|_{2}^{2}-\left\|\text{F}_{a}^{i}-\text{F}_{n}^{i}\right\|_{2}^ {2}+\alpha\right]_{2} \tag{2}\] where \(\alpha\) represents margin, N is the number of triple-sequences and P represents person ID. The \(\text{F}_{a}\) is a closer feature to its own class centroid and far away from other class centroids. ### Symbolic Triplet Loss (STL) Aruna Kumar et al. (2020) propose STL which utilizes the Wasserstein metric to overcome the representation problem which allows obtaining the distance between feature vectors that are symbolic. \[D_{w}\left(\psi_{i},\psi_{j}\right)=\sum_{m=1}^{M}\sum_{r=1}^{T}{\psi_{im}}^{- 1}(t)-{\psi_{jm}}^{-1}(t) \tag{3}\] where \(\psi_{i}\) and \(\psi_{j}\) denote the distributions of multi-dimensional feature vectors at the \(i^{th}\) and \(\text{j}^{th}\). \(\psi_{i}{}^{-1}(t)\) is the quantile function and M is the feature of each video. ### Weighted Contrastive Loss (WCL) Wang et al. (2019) construct WCL by the combination of traditional contrastive loss. The purpose of this loss function is to allocate an appropriate weight for every proper image pair. \[L_{WCL}(N)=\frac{1}{2}\frac{\sum_{(x,x_{i})\in N}w_{ij}^{-1}\max\left(0, \alpha-d_{ij}\right)^{2}}{\sum_{(x,x_{i})\in N}w_{ij}^{-1}} \tag{4}\] \begin{table} \begin{tabular}{l c c c c c c} \hline **Reference and Venue** & **Method** & **Extractor** & **L. Function** & **LR** & **Optimizer** & **Epochs** \\ \hline Wang et al. (2014) ECCV & DVR & HOG3D & Hinge & — & — & — \\ Karanam et al. (2015) CVPR & SRID & Schmid, Gabor filters & — & — & — & — \\ Liu et al. (2015) ICCV & STFV3D & Fisher Vector & — & — & — & — \\ Wu et al. (2016) ARXIV & Deep RCN & — & — & — & — & — \\ \multirow{2}{*}{You et al. (2016) CVPR} & \multirow{2}{*}{TDL} & HOG3D, Color & & & \\ & & Histograms, LBP & & & \\ Chen et al. (2016) IEEE-SRL & OFEI & LBP & — & — & — & — \\ Chen et al. (2016) ECCV & RFA-Net & LBP, HSV, Lab & Softmax & 0.001 to & — & 400 \\ Niall et al. (2016) CVPR & CNN and RNN & Cross Entropy & — & 0.001 & SGD & 500 \\ Zhou et al. (2017) CVPR & JS-TRNN & TAM and SRM & Triplet & — & — & — \\ Liu et al. (2017) CVPR & QAN & — & Softmax and Triplet & — & — & — \\ Xu et al. (2017) ICCV & ASTPN & — & CE and Hinge & 0.001 & SGD & 700 \\ Chung et al. (2017) ICCV & 2SCNN & CNN and RNN & Softmax & 0.001 & SGD & 1000 \\ Gao et al. (2021) ACM\_MM & CMA & CNN+RNN & Softmax & 0.001 & SGD & 800 \\ \hline \end{tabular} \end{table} Table 2: Training configuration of novel architectures. 
In Table 2, LR denotes the learning rate and L. Function denotes the loss function. \[L_{WCL}(P,N)=(1-\lambda)L_{WCL}(P)+\lambda L_{WCL}(N) \tag{5}\] where the hyperparameter \(\lambda\) controls the contribution of the positive and negative sets to the final value of the contrastive loss.

### Triplet Loss

Chen et al. (2019) design a triplet loss to preserve the ranking relationship among the videos in pedestrian triplets. In the triplet loss, the distance between feature pairs belonging to the same class is decreased, while the distance between feature pairs of different classes is increased. \[\begin{split}& L_{tri}=\sum_{i,j,k\in\Omega}\left[d_{g}(i,j)-d_{g}(i,k)+m_{g}\right]_{+}\\ &+\sum_{i,j,k\in\Omega}\lambda\left[d_{l}(i,j)-d_{l}(i,k)+m_{l}\right]_{+}\end{split} \tag{6}\] where \(m_{g}\) and \(m_{l}\) are margin thresholds that restrict the distance gap between positive and negative samples and \([x]_{+}\) is the max function \(\max(0,x)\).

### Regressive Pairwise Loss (RPL)

Liu et al. (2018) develop RPL to improve pairwise similarity learning by gathering all positive pairs in one single subspace. It imposes a soft margin between positive sets and provides a harder constraint than the general triplet loss. \[\begin{split} L_{p}\left(x_{i},x_{j},y\right)=y\cdot\max\left\{d\left(x_{i},x_{j}\right)-\log(\alpha),0\right\}\\ +(1-y)\cdot\max\left\{\alpha-d\left(x_{i},x_{j}\right),0\right\}\end{split} \tag{7}\] where \(y\) is a label denoting whether \(x_{i}\) and \(x_{j}\) are the same person; it equals 1 if the two samples share the same identity and 0 otherwise. When \(y=0\), RPL pushes the samples away from each other beyond the margin \(\alpha\). When \(y=1\), RPL pulls the samples together to within a distance of no more than \(\log(\alpha)\).
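To make the pairwise formulation concrete, the snippet below gives a minimal NumPy sketch of the RPL of Eq. (7) for a batch of feature pairs; the Euclidean distance, the batch layout, the function name, and the default margin are illustrative assumptions rather than the exact training setup of Liu et al. (2018).

```python
import numpy as np

def regressive_pairwise_loss(x_i, x_j, y, alpha=2.0):
    """Hedged sketch of the RPL in Eq. (7).

    x_i, x_j: (B, D) arrays of paired features; y: (B,) array of pair labels
    (1 = same identity, 0 = different identity). Names and defaults are ours.
    """
    d = np.linalg.norm(x_i - x_j, axis=1)                 # pairwise distances
    pos_term = y * np.maximum(d - np.log(alpha), 0.0)     # pull same-ID pairs within log(alpha)
    neg_term = (1.0 - y) * np.maximum(alpha - d, 0.0)     # push different-ID pairs beyond alpha
    return (pos_term + neg_term).mean()

# toy usage with random features and labels
rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(16, 128)), rng.normal(size=(16, 128))
labels = rng.integers(0, 2, size=16)
print(regressive_pairwise_loss(f1, f2, labels))
```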
## 5 Datasets and Metrics

We first describe the statistics of the benchmark datasets that are frequently used for evaluating video re-ID methods. Secondly, we broadly review the performance of previously leading methods in chronological order. Lastly, we analyze the results based on several major factors for video re-ID.

### Training and Testing Datasets

Since video re-ID is a real-world problem that is close to a video surveillance scenario, various demanding datasets have been constructed for it during the past years: MARS Zheng et al. (2016), DukeMTMC-VID Wu et al. (2018), and iLIDS-VID Wang et al. (2014). These three datasets are commonly used for training and evaluation because of their large numbers of tracklets and pedestrian identities.

#### 5.1.1 MARS

The dataset is constructed from six synchronized CCTV cameras. It comprises \(1,261\) pedestrians with a wide variety of images (poor image quality, poses, colors, and illuminations), each pedestrian being captured by at least two cameras. Matching pedestrian images is extremely difficult because the dataset also contains \(3,248\) distractor tracklets, which make it closer to a real-world setting.

#### 5.1.2 DukeMTMC-VID

It is a subset of the DukeMTMC dataset and consists of high-resolution images from 8 cameras. It is one of the large-scale datasets in which pedestrian images are cropped using manually drawn bounding boxes. Overall, it comprises 702 identities, \(16,522\) training images, \(17,661\) gallery images, and \(2,228\) probe images.

#### 5.1.3 iLIDS-VID

It is one of the most challenging datasets and contains 300 pedestrians captured by two CCTV cameras in a public space. Because the footage is recorded in public, it exhibits lighting and viewpoint changes, high inter-person similarity, background clutter, and occlusions. It consists of 600 image sequences of 300 distinct individuals. Each pedestrian image sequence has a length ranging from 23 to 192 frames, with an average of 73 frames.

### Evaluation Protocol

There are two standard evaluation protocols for video re-ID methods: mAP and CMC. CMC measures the probability that a correct match appears among the top-K entries of a retrieval list. mAP measures the average retrieval accuracy when multiple ground-truth matches exist in the gallery.

## 6 Analysis and Future Direction

We broadly review the top-performing methods from a video re-ID perspective, focusing mostly on work published from 2018 onward. Specifically, we include STAN Li et al. (2018), Snippet Chen et al. (2018), STA Fu et al. (2019), ADFD Zhao et al. (2019), VRSTC Hou et al. (2019), GLTR Li et al. (2019), COSAM Subramaniam et al. (2019), STE-NVAN Liu et al. (2019), MG-RAF Zhang et al. (2020), MGH Yan et al. (2020), STGCN Yang et al. (2020), TCLNet Hou et al. (2020), AP3D Gu et al. (2020), AFA Chen et al. (2020), PSTA Wang et al. (2021), DenseIL He et al. (2021), STMN Eom et al. (2021), STRF Aich et al. (2021), SANet Bai et al. (2021), DPRAM Yang et al. (2021), HMN Wang et al. (2021), GRL Liu et al. (2021), and TMT Liu et al. (2021). We summarize the video re-ID results on the three widely used benchmark datasets. Table 3 highlights the methods, their backbones, and their mAP and rank-1 (R-1) results. Firstly, with the recent development of self-attention-based methods, several video re-ID methods have obtained higher mAP and top-1 accuracy (Liu et al. (2021): 91.2%) on the widely used MARS dataset. In particular, DenseIL He et al. (2021) achieves the highest mAP of 87.0%, but its rank-1 accuracy of 90.8% is slightly lower than that of TMT Liu et al. (2021) on the MARS dataset. The advantage of DenseIL He et al. (2021) is that it simultaneously uses a CNN and an attention-based architecture to efficiently encode spatial information into discriminative features. These methods focus on long-range relationships and specific part-level information in the input. Various popular methods learn weights and spatial-temporal features separately Hou et al. (2019, 2020). Another observation, made in Zhang et al. (2021), is that pedestrian cues are captured and aggregated spatio-temporally while discrepancies such as background areas, viewpoint changes, and occlusions are ignored. However, in a real-world scenario, visual data come with diverse modalities such as recording information, camera ID, etc. Most studies focus on visual similarity by matching probe images against gallery images and thus neglect textual information, which is a limitation. Proposing a new method that extracts visual and textual information at the same time would be helpful in real-world environments and would also provide more accurate results. Secondly, annotating new datasets with accurate labels across different CCTV cameras is an expensive and laborious task. In many cases, annotated data are mislabeled due to factors such as limited person visibility, background clutter, and image noise. Several researchers focus on unsupervised methods Ye et al. (2018, 2019) and active learning approaches Wang et al. (2018) to alleviate the annotation problem. Still, the accuracy of unsupervised video re-ID methods degrades significantly compared to supervised video re-ID methods. In the future, video re-ID methods that facilitate clustering and pseudo-label assignment could be introduced to improve existing unsupervised methods; a minimal sketch of such a clustering step is given below.
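As a concrete illustration of the clustering-and-assignment idea mentioned above, the snippet below sketches a minimal pseudo-labeling step for unsupervised video re-ID: tracklet-level features are clustered with DBSCAN and the resulting cluster indices serve as pseudo identities for the next training round. The feature dimensionality, the DBSCAN parameters, and the function name are illustrative assumptions, not the procedure of any specific paper cited here.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import normalize

def assign_pseudo_labels(tracklet_features, eps=0.5, min_samples=4):
    """Hedged sketch: cluster tracklet features and return pseudo identities.

    tracklet_features: (N, D) array, one aggregated feature per tracklet.
    Returns an (N,) array of pseudo labels; -1 marks un-clustered outliers
    that would typically be discarded from the next training round.
    """
    feats = normalize(tracklet_features)          # unit-norm features
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="euclidean").fit_predict(feats)
    return labels

# toy usage: 10 synthetic "identities" with 20 tracklets each, 256-dim features
rng = np.random.default_rng(1)
centers = rng.normal(size=(10, 256))
feats = np.vstack([c + 0.05 * rng.normal(size=(20, 256)) for c in centers])
pseudo = assign_pseudo_labels(feats)
print("clusters found:", len(set(pseudo) - {-1}), " outliers:", int(np.sum(pseudo == -1)))
```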
Further, designing a dedicated data augmentation policy within a re-ID search space could readily increase the overall performance of all re-ID methods. Finally, accuracy on the three challenging datasets is approaching saturation, with performance gaps of less than 1%, e.g., between PSTA Wang et al. (2021) and DenseIL He et al. (2021) on the DukeMTMC-VID dataset. As a result, it is still difficult to single out a clearly superior method. On iLIDS-VID, the rank-1 performance of PSTA Wang et al. (2021) is 91.5% and that of TMT Liu et al. (2021) is 91.3%. However, most video re-ID architectures are complex in terms of the number of parameters required to learn invariant feature representations on combined datasets. Meanwhile, re-ID methods use metric learning techniques such as the Euclidean distance to calculate feature similarity, which is time-consuming, slows down retrieval, and is impractical in real-world applications. How to design new strategies that replace such metric learning schemes still needs more research. Thus, further exploration of video re-ID approaches remains an interesting area for future research.

## 7 Conclusion

This paper presents a comprehensive review of global appearance, local part alignment, graph learning, attention, and transformer models in video re-ID. We provide specific loss functions with their mathematical formulations to help new researchers use them instead of relying only on straightforward common loss functions for video re-ID. Finally, we highlight the most widely and frequently used datasets for evaluating video re-ID techniques, analyze the performance of different methods, and provide future research directions.
2307.05868
Photon-induced droplet-like bound states in one-dimensional qubit array
We consider an array of $N_e$ non-interacting qubits or emitters that are coupled to a one-dimensional cavity array with tunneling energy $J$ and non-linearity of strength $U$. The number of cavities is assumed to be larger than the number of qubits. Working in the two-excitation manifold, we focus on the bandgap regime where the energy of two excited qubits is off-resonant with the two-photon bound state band. A two-step adiabatic elimination of the photonic degrees of freedom gives rise to a one-dimensional spin Hamiltonian with effective interactions; specifically, the Hamiltonian features constrained single-qubit hopping and pair hopping interactions not only between nearest neighbors but also between next-to-nearest and next-to-next-to-nearest spins. For a regularly arranged qubit array, we identify parameter combinations for which the system supports novel droplet-like bound states whose characteristics depend critically on the pair hopping. The droplet-like states can be probed dynamically. The bound states identified in our work for off-resonance conditions are distinct from localized hybridized states that emerge for on-resonance conditions.
J. Talukdar, D. Blume
2023-07-12T01:46:55Z
http://arxiv.org/abs/2307.05868v1
# Photon-induced droplet-like bound states in one-dimensional qubit array ###### Abstract We consider an array of \(N_{e}\) non-interacting qubits or emitters that are coupled to a one-dimensional cavity array with tunneling energy \(J\) and non-linearity of strength \(U\). The number of cavities is assumed to be larger than the number of qubits. Working in the two-excitation manifold, we focus on the bandgap regime where the energy of two excited qubits is off-resonant with the two-photon bound state band. A two-step adiabatic elimination of the photonic degrees of freedom gives rise to a one-dimensional spin Hamiltonian with effective interactions; specifically, the Hamiltonian features constrained single-qubit hopping and pair hopping interactions not only between nearest neighbors but also between next-to-nearest and next-to-next-to-nearest spins. For a regularly arranged qubit array, we identify parameter combinations for which the system supports novel droplet-like bound states whose characteristics depend critically on the pair hopping. The droplet-like states can be probed dynamically. The bound states identified in our work for off-resonance conditions are distinct from localized hybridized states that emerge for on-resonance conditions. ## I Introduction Qubits or, more generally, few-level emitters coupled to a cavity array provide a platform with which to investigate fundamental aspects of matter-light interactions. Topics of interest include the generation of photon-mediated entanglement between non-interacting separated qubits [1; 2; 3; 4], of ultrastrong matter-light interactions [5; 6; 7; 8; 9; 10; 11], of broad matter-light hybrid bound states [12; 13; 14; 15; 16; 17; 18], and of effective photon-photon interactions [19; 20; 21; 22]. Photonic baths have been realized using nanophotonic wave guides [23; 24; 25; 26; 27; 28; 29; 30], superconducting resonators [31; 32; 33; 34], and plasmonic waveguides [35]. Qubit realizations include Rydberg atoms [36; 37], quantum dots [38], and transmon qubits [39; 40; 41; 42; 43]. It was recently shown that the addition of a Kerr-like non-linearity to the tight-binding Hamiltonian, which accounts for the tunnel-coupling of the single-mode cavities, leads to intriguing and qualitatively novel phenomena if the energy of two excited qubits is tuned to be in resonance with the two-photon bound state band that exists due to the Kerr-like non-linearity [45; 46; 47]. For two qubits initialized in their excited state, e.g., the non-trivial mode structure of the bath, i.e., the cavity array with non-linearity, was shown to support emission dynamics that ranges from exponential decay to fractional populations to Rabi oscillations [46; 47]. For many qubits, supercorrelated radiance was predicted [45]. This work instead investigates the off-resonant or band-gap regime [48] within the framework of Schrodinger quantum mechanics. To reduce the high-dimensional Hilbert space to a physically intuitive and numerically more tractable model, effective constrained single-qubit and pair hopping interactions are derived through a two-step procedure that adiabatically eliminates single- and two-photon processes. The resulting effective one-dimensional spin Hamiltonian, which lives in the two-excitation manifold (i.e., two flipped spins), is shown to capture the key features of the full Hamiltonian. 
The effective constrained single- and two-qubit hopping interactions, which are derived under the assumption that the coupling strength \(g\) between an emitter and a cavity is small compared to the tunneling energy \(J\), are directly proportional to \(g^{2}\) and \(g^{4}\), respectively. Even though the scaling of the effective interactions with \(g\) suggests that the single-qubit hopping dominates over the two-qubit hopping, we identify a parameter regime where the latter, which depends on the non-linearity \(U\), impacts the eigenstate characteristics appreciably. Specifically, the pair hopping interaction favors localization of excited qubits in or near the middle of the qubit array, giving rise to a new class of droplet-like bound states. These bound states are distinct from two-string bound states that exist, e.g., in the XXX spin Hamiltonian that is solvable via the Bethe ansatz [49; 50]. Unlike Hamiltonian that are tractable via the Bethe ansatz, our emergent one-dimensional spin model features non-negligible nearest-neighbor, next-to-nearest-neighbor, and next-to-next-to-nearest-neighbor interactions. It is shown that the radiation dynamics, if initiated from an initial state that contains two qubit excitations but no photons, depends strongly on how the two qubit excitations are distributed among all possible two-qubit excitation eigenkets. A fully symmetric initial state is shown to induce oscillatory dynamics between the droplet-like ground state and a scattering state. Dependence of the dynamics on the initial state is, of course, a well known phenomenon that has, e.g., been exploited in the study of phase transitions and critical points as well as in sensing applications. The remainder of this article is organized as follows. Section II introduces the system Hamiltonian and the reduction of the Hilbert space to the qubit degrees of freedom. Section III shows that the effective qubit Hamiltonian supports a new class of liquid-like or droplet-like bound states. Section IV illustrates that these droplet-like states can be probed dynamically. Last, a summary and outlook are provided in Sec. V. ## II Derivation of effective qubit Hamiltonian Section II.1 introduces the total Hamiltonian \(\hat{H}\) of the matter-light hybrid system. Focusing on the band gap regime of the photonic lattice, Sec. II.2 derives the effective spin Hamiltonian \(\hat{H}_{\text{spin}}\). ### Total Hamiltonian \(\hat{H}\) The total Hamiltonian \(\hat{H}\) reads \[\hat{H}=\hat{H}_{\text{qubit}}+\hat{H}_{\text{bath}}+\hat{H}_{\text{qubit-bath}}, \tag{1}\] where \(\hat{H}_{\text{qubit}}\) is the Hamiltonian of the uncoupled qubits, \(\hat{H}_{\text{bath}}\) the bath Hamiltonian, and \(\hat{H}_{\text{qubit-bath}}\) the qubit-bath coupling Hamiltonian. The qubit system consists of \(N_{e}\) qubits with a transition energy of \(\hbar\omega_{e}\) between the ground state \(\ket{g}_{j}\) and the excited state \(\ket{e}_{j}\) of the \(j\)th qubit (see purple ovals and rectangular box in top-left corner in Fig. 1). We are interested in the regime where the qubits form a regularly arranged finite lattice (\(N_{e}\) finite and much greater than 1). The qubit Hamiltonian \(\hat{H}_{\text{qubit}}\) is given by \[\hat{H}_{\text{qubit}}=\frac{\hbar\omega_{e}}{2}\sum_{j=1}^{N_{e}}(\hat{ \sigma}_{j}^{z}+\hat{I}_{j}), \tag{2}\] where \(\hat{\sigma}_{j}^{z}=\ket{e}_{j}\bra{e}-\ket{g}_{j}\bra{g}\) and \(\hat{I}_{j}^{z}=\ket{e}_{j}\bra{e}+\ket{g}_{j}\bra{g}\). 
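As a minimal illustration of Eq. (2), the sketch below builds \(\hat{H}_{\text{qubit}}\) for a small array from Kronecker products; the basis ordering (with \(\ket{e}\) first), the function name, and the units (\(\hbar=\omega_{e}=1\)) are our own conventions. Its eigenvalues simply count the number of excited qubits in units of \(\hbar\omega_{e}\), consistent with the excitation-number structure used throughout.

```python
import numpy as np
from functools import reduce

def h_qubit(Ne, omega_e=1.0, hbar=1.0):
    """Hedged sketch of Eq. (2): H_qubit for Ne two-level emitters,
    built from Kronecker products with |e> = (1,0)^T and |g> = (0,1)^T."""
    sz = np.diag([1.0, -1.0])          # sigma^z = |e><e| - |g><g|
    id2 = np.eye(2)
    def embed(op, j):                  # act with `op` on qubit j only
        return reduce(np.kron, [op if k == j else id2 for k in range(Ne)])
    return sum(0.5 * hbar * omega_e * (embed(sz, j) + embed(id2, j))
               for j in range(Ne))

H = h_qubit(4)
# eigenvalues are 0, 1, 2, 3, 4 times hbar*omega_e (number of excited qubits)
print(sorted(set(np.round(np.linalg.eigvalsh(H), 10))))
```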
The bath Hamiltonian \(\hat{H}_{\text{bath}}\) is a one-dimensional tight-binding Hamiltonian with non-linearity \(U\), \[\hat{H}_{\text{bath}}= \hbar\omega_{e}\sum_{n=1}^{N}\hat{a}_{n}^{\dagger}\hat{a}_{n}-J \sum_{n=1}^{N}\left(\hat{a}_{n}^{\dagger}\hat{a}_{n+1}+\hat{a}_{n+1}^{\dagger} \hat{a}_{n}\right) \tag{3}\] \[+\frac{U}{2}\sum_{n=1}^{N}\hat{a}_{n}^{\dagger}\hat{a}_{n}^{ \dagger}\hat{a}_{n}\hat{a}_{n},\] where \(\hat{a}_{n}^{\dagger}\) and \(\hat{a}_{n}\), respectively, create and destroy a photon at the \(n\)th cavity (blue box in Fig. 1). In our calculations, the number of cavities \(N\) is chosen such that the results are independent of \(N\); we find that \(N=501\) is sufficiently large for the \(N_{e}\) considered. In Eq. (3), \(\hbar\omega_{c}\) is the single-mode photon energy, \(J\) (\(J>0\)) denotes the tunneling energy of the tunnel coupled cavities, and \(U\) is the non-linear onsite interaction. The Kerr-like non-linearity in Eq. (3) corresponds to effectively repulsively interacting photon pairs (\(U>0\)) or effectively attractively interacting photon pairs (\(U<0\)). In our work, we consider a negative \(U\), which gives rise to two-photon bound states \(\psi_{K,b}\) with center-of-mass wave vector \(K\) and energy \(E_{K,b}\), in addition to the two-photon scattering continuum (blue and dark green regions in Fig. 2) [51; 52; 53; 54]. The black line in Fig. 1 shows a sketch of a two-photon bound state wave function \(\psi_{K,b}\) that extends over several lattice sites. Accounting for all allowed center-of-mass wave vectors \(K\), the two-photon bound states give rise to an energy band (green and dark green regions in Fig. 2). For large values of the onsite interaction strength \(|U|\) (\(|U|/J>4\)), the two-photon bound state band does not overlap with the two-photon scattering continuum. For \(|U|/J=1\), as considered in this paper, the upper part of the two-photon bound state band overlaps with the lower part of the two-photon scattering continuum (the overlap region is shown in dark green in Fig. 2). The difference between the energy \(2\hbar\omega_{e}\) of the two-qubit excited state and the \(K=0\) two-photon bound state energy \(E_{0,b}\), which coincides with the bottom of the two-photon bound state band, defines the detuning \(\delta\), \[\delta=2\hbar\omega_{e}-E_{0,b}. \tag{4}\] The band gap regime, which is the focus of the present work, is characterized by negative detunings \(\delta\). The qubits are coupled to the photons through the system-bath or qubit-bath Hamiltonian \(\hat{H}_{\text{qubit-bath}}\), \[\hat{H}_{\text{qubit-bath}}=g\sum_{j=1}^{N_{e}}\left(\hat{a}_{n_{j}}\hat{ \sigma}_{j}^{+}+\hat{a}_{n_{j}}^{\dagger}\hat{\sigma}_{j}^{-}\right), \tag{5}\] Figure 1: Schematic of the set up. The \(j\)th qubit is coupled with strength \(g\) to the \(n_{j}\)th cavity of a one-dimensional cavity array (blue boxes) with lattice spacing \(a\). The cavities are tunnel-coupled to nearest neighbors with strength \(J\) (blue lines between two neighboring blue boxes). For more than one photonic excitation, there exists an onsite interaction \(U\) between photons. As a result of the onsite interaction, the cavity array (bath) supports two-photon bound states. One of these is shown by the black line above the cavity array. The distance between two neighboring qubits is denoted by \(x\). In the schematic, \(x\) is equal to \(a\); values of \(x/a=0\) and 2 are also discussed in this work. 
The top-left rectangular box illustrates a qubit, i.e., a two-level system with a transition energy \(\hbar\omega_{e}\) between the ground state \(\ket{g}\) and the excited state \(\ket{e}\). where \(\hat{\sigma}_{j}^{+}\) is the raising operator (\(\hat{\sigma}_{j}^{+}=\ket{e}_{j}\bra{g}\)) and \(\hat{\sigma}_{j}^{-}\) the lowering operator (\(\hat{\sigma}_{j}^{-}=\ket{g}_{j}\bra{e}\)) of the \(j\)th qubit. The label \(n_{j}\) can take any value between 1 and \(N\). In this work, the qubits are assumed to be arranged in a regular pattern with spacing \(x\), where \(x/a=n_{j}-n_{j-1}\). Related works considered regularly placed impurity qubits coupled to an atomic array [55; 56]. Our figures concentrate on \(x/a=1\). For reference, a larger qubit spacing \(x/a=2\) as well as the case where the qubits are all coupled to the same cavity (\(x/a=0\)) are discussed in the text. Since the counter rotating terms are excluded in Eq. (5), our treatment is restricted to the weak coupling regime, i.e., \(g\ll J\). The requirement that single- and two-photon processes are off-resonant [\(|(\hbar\omega_{c}-2J)-\hbar\omega_{e}|>g\) and \(|\delta|>g\)] can, for negative \(\delta\) as considered in this work, be combined into one equation, namely \[|U|>4J\sqrt{\Big{(}1+\frac{g}{4J}\Big{)}^{2}-1}. \tag{6}\] For fixed \(U/J\), Eq. (6) puts an upper limit on \(g/J\). Conversely, for fixed \(g/J\), Eq. (6) puts a lower limit on \(|U|/J\). The total Hamiltonian conserves the number of total excitations (sum of qubit and photonic excitations) [12; 13; 14; 15]. As a consequence, the Hilbert spaces with 0, 1, 2,... total excitations are decoupled. This work focuses on the two-excitation manifold. ### Effective spin Hamiltonian \(\hat{H}_{\text{spin}}\) As mentioned above, we focus on negative detunings such that the energy of two excited qubits is in resonance with the band gap. We find that the band gap physics in the two-excitation manifold is well described by the spin Hamiltonian \(\hat{H}_{\text{spin}}\), which is derived by adiabatically eliminating the photon degrees of freedom in a two-step process (see Appendix A for details). We emphasize that the approach taken here is distinct from the master equation approach pursued in Ref. [45]. The first step is, in spirit, identical to prior work [45; 46; 47]. Neglecting the two-photon scattering continuum and adiabatically eliminating the single-photon states, effective constrained single-qubit hopping interactions of strength \(W_{jl}\) (see \(\hat{H}_{\text{single}}\) below), effective interactions between states with two and no qubit excitations [\(F_{K,b}\) in Eq. (A12)], and effective interactions between two two-photon bound states with wave vector \(K\) and \(K^{\prime}\) [\(G_{K,K^{\prime}}\) in Eq. (A13)] arise. While the latter two interactions were discussed in Refs. [45; 46; 47], the effective qubit hopping interaction was not. The reason is that Refs. [45; 46; 47] focused on \(N_{e}=2\) (\(\hat{H}_{\text{single}}\) vanishes for \(N_{e}=2\)). The hopping Hamiltonian \(\hat{H}_{\text{single}}\) reads \[\hat{H}_{\text{single}}=\frac{1}{2}\sum_{i,j,l=1}^{N_{e}}\Big{(} W_{jl}\hat{\sigma}_{i}^{+}\hat{\sigma}_{j}^{+}\hat{\sigma}_{i}^{-}\hat{\sigma}_{ l}^{-}+\] \[W_{il}\hat{\sigma}_{i}^{+}\hat{\sigma}_{j}^{+}\hat{\sigma}_{l}^{ -}\hat{\sigma}_{j}^{-}\Big{)}. \tag{7}\] Since the triple sum includes terms where two or three of the indices are equal, the order of the operators in Eq. (7) is important. 
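To make the operator structure of Eq. (7) explicit, the following minimal NumPy sketch builds the matrix of \(\hat{H}_{\text{single}}\) in the two-excitation qubit basis by literally applying the raising and lowering operators to each basis ket. The function names and the toy choice of an exponentially decaying, symmetric \(W\) matrix are illustrative assumptions; the actual form of \(W_{jl}\) is specified below in Eqs. (10)-(12).

```python
import numpy as np
from itertools import combinations

def h_single_matrix(W):
    """Hedged sketch: matrix of H_single, Eq. (7), in the two-excitation
    qubit basis {sigma_i^+ sigma_j^+ |g,...,g>, i < j}. W is an (Ne, Ne)
    array of constrained-hopping strengths W_{jl}, taken here as input."""
    Ne = W.shape[0]
    basis = list(combinations(range(Ne), 2))          # excited pairs (i, j)
    index = {pair: k for k, pair in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))

    def apply_ops(state, ops):
        """Apply a sequence of (qubit, '+'/'-') operators in the given order;
        return the resulting set of excited qubits, or None if annihilated."""
        for q, kind in ops:
            if kind == '-':
                if q not in state:
                    return None
                state = state - {q}
            else:
                if q in state:
                    return None
                state = state | {q}
        return state

    for col, pair in enumerate(basis):
        start = frozenset(pair)
        for i in range(Ne):
            for j in range(Ne):
                for l in range(Ne):
                    # term 1: W_jl  s_i^+ s_j^+ s_i^- s_l^-   (rightmost operator acts first)
                    # term 2: W_il  s_i^+ s_j^+ s_l^- s_j^-
                    for ops, amp in (
                        ([(l, '-'), (i, '-'), (j, '+'), (i, '+')], 0.5 * W[j, l]),
                        ([(j, '-'), (l, '-'), (j, '+'), (i, '+')], 0.5 * W[i, l]),
                    ):
                        final = apply_ops(start, ops)
                        if final is not None:
                            H[index[tuple(sorted(final))], col] += amp
    return H

# toy usage: small array with a generic exponentially decaying, symmetric W
Ne, W0, L0 = 6, -0.01, 3.0
dist = np.abs(np.subtract.outer(np.arange(Ne), np.arange(Ne)))
H = h_single_matrix(W0 * np.exp(-dist / L0))
print(H.shape, "hermitian:", np.allclose(H, H.T))
```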
As discussed in more detail below, \(\hat{H}_{\text{single}}\) describes constrained single-qubit hopping or constrained flip-flop interactions. We find that the effective interactions \(G_{K,K^{\prime}}\) contribute negligibly to the band gap physics considered in this work; thus, they are set to zero. Calculations that treat the full Hamiltonian \(\hat{H}\) show that the photonic contribution to the eigenstates is smaller than 10% for the parameter combinations considered in this work. This motivates our second approximation, namely, the adiabatic elimination of the states \(\hat{B}_{K}^{\dagger}\ket{g,\cdots,g,\text{vac}}\), i.e., basis kets that describe a photon pair with wave vector \(K\), with the qubits in the ground state. Step two yields the effective spin Hamiltonian \(\hat{H}_{\text{spin}}\) (see Appendix A for details), \[\hat{H}_{\text{spin}}=\hat{H}_{\text{single}}+\hat{H}_{\text{pair}}, \tag{8}\] where \[\hat{H}_{\text{pair}}=\sum_{i=1}^{N_{e}-1}\sum_{j=i+1}^{N_{e}}\sum_{l=1}^{N_{e }-1}\sum_{h=l+1}^{N_{e}}Y_{ij,lh}\hat{\sigma}_{i}^{+}\hat{\sigma}_{j}^{+}\hat{ \sigma}_{l}^{-}\hat{\sigma}_{h}^{-}. \tag{9}\] The effective four-qubit (or two-qubit hopping) interactions \(Y_{ij,lh}\) emerge from the interactions \(F_{K,b}\) (see below). As might be expected naively, \(W_{jl}\) and \(Y_{ij,lh}\) are directly Figure 2: Schematic of the energy bands for the one- and two-excitation manifolds for fixed \(\omega_{c}\), \(\omega_{e}\), and \(U\). This work focuses on the band gap regime, i.e., negative detunings \(\delta\). Adiabatic elimination of the gray single-photon and green two-photon bound state bands introduces the effective interactions \(W\) and \(Y\) (see Fig. 3 for an illustration of these interactions), respectively, between qubit groups. The two-photon scattering continuum is far off-resonant and does not play a role. Explicit expressions for the energy bands can be found, e.g., in Ref. [53]. proportional to \(g^{2}\) and \(g^{4}\), respectively, since they emerge as a consequence of the first and second adiabatic elimination steps, respectively. The effective spin Hamiltonian \(\hat{H}_{\rm spin}\) is independent of the photonic degrees of freedom. The characteristics of the cavity array and the geometric arrangement of the qubits (i.e., the value of \(x\)) enter through the interaction strengths \(W_{jl}\) and \(Y_{ij,lh}\). We now highlight selected properties of the single- and two-qubit hopping interactions. Figure 3(a) illustrates the constrained single-qubit hopping interaction \(W_{jl}\hat{\sigma}_{i}^{+}\hat{\sigma}_{j}^{+}\hat{\sigma}_{i}^{-}\hat{\sigma}_ {l}^{-}\). The term "constrained" is used since the hopping of the excitation from qubit \(l\) to qubit \(j\) (\(\hat{\sigma}_{j}^{+}\hat{\sigma}_{l}^{-}\) piece) depends on the number of excitations at qubit \(i\) (\(\hat{\sigma}_{i}^{+}\hat{\sigma}_{i}^{-}\) piece; in this example, we assume \(i\neq j\) and \(i\neq l\)). If qubit \(i\) is excited, hopping from qubit \(l\) to qubit \(j\) occurs with strength \(W_{jl}\). If, in contrast, qubit \(i\) is not excited, hopping from qubit \(l\) to qubit \(j\) does not take place. We refer to the excited qubit \(i\) as a spectator. We emphasize that our treatment does not assume that the system is in the Markovian regime. After the first adiabatic elimination, basis kets with two excited qubits are coupled to each other via \(\hat{H}_{\rm single}\) if they contain a common excited qubit. The second adiabatic elimination leaves \(\hat{H}_{\rm single}\) unchanged. 
Thus, in the Hilbert space spanned by the \(N_{e}(N_{e}-1)/2\) two-excitation qubit states, \(\hat{H}_{\rm single}\) couples each basis ket that contains two excited qubits to \(2(N_{e}-2)\) other basis kets as well as to itself. While we refer to \(W(0)\) as onsite hopping interaction, it is also known as "self interaction" or "self energy" (see, e.g., Ref. [45]). The strength \(W_{jl}\), \[W_{jl}=W(0)\exp(-|n_{j}-n_{l}|a/L_{0}), \tag{10}\] of the constrained single-qubit hopping interaction falls off exponentially as a function of \(|n_{j}-n_{l}|a\), i.e., the difference between the cavities \(n_{j}\) and \(n_{l}\) that the qubits \(j\) and \(l\) are coupled to. The onsite hopping energy \(W(0)\) and length \(L_{0}\) read \[W(0)=-\frac{2J\left(\frac{g}{2J}\right)^{2}}{\sqrt{\left(\frac{\Delta}{2J} \right)^{2}-1}} \tag{11}\] and \[L_{0}=-\frac{a}{\ln\left(\frac{\Delta}{2J}-\sqrt{\left(\frac{\Delta}{2J} \right)^{2}-1}\right)}, \tag{12}\] respectively, where \[\Delta=\hbar\left(\omega_{c}-\omega_{e}\right)=\frac{1}{2}\left(-\delta+4J \sqrt{1+\left(\frac{U}{16J}\right)^{2}}\right). \tag{13}\] Figure 4 shows the onsite hopping energy \(W(0)\) and length \(L_{0}\) for fixed \(g/J\) and \(U/J\) as a function of the dimensionless detuning \(\delta/J\). It can be seen that \(W(0)/J\) is negative and that the magnitude of \(W(0)\) increases with decreasing \(|\delta/J|\). Larger \(|W(0)|\) (note, Fig. 4 shows \(W(0)\) as opposed to \(|W(0)|\)) are accompanied by larger \(L_{0}\). For the detuning considered in this work (\(|\delta/J|\ll 1\)), \(W_{jl}\) is--for \(x/a=1\)--appreciable not only for nearest neighbor hopping but also for next-to-nearest and next-to-next-to-nearest neighbor hopping. Next, we discuss the effective pair hopping interaction \(Y_{ij,lh}\). Figure 3(b) illustrates \(Y_{ij,lh}\hat{\sigma}_{i}^{+}\hat{\sigma}_{j}^{+}\hat{\sigma}_{l}^{-}\hat{ \sigma}_{h}^{-}\), which annihilates excitations at the \(l\)th and \(l\)th qubit and creates Figure 3: Schematic of constrained single-qubit hopping interaction \(W\) and pair hopping interaction \(Y\) entering into \(\hat{H}_{\rm spin}\). (a) The term \(W_{jl}\hat{\sigma}_{i}^{+}\hat{\sigma}_{j}^{+}\hat{\sigma}_{i}^{-}\hat{ \sigma}_{l}^{-}\) (here illustrated assuming \(i\neq j\neq l\)) describes the annihilation of an excitation at the \(l\)th qubit and the creation of an excitation at the \(j\)th qubit (solid blue arrow). This corresponds to the hopping of an excitation with strength \(W_{jl}\), with the excitation at the \(i\)th qubit acting as a “spectator”, i.e., the single-qubit hopping is only allowed if qubit \(i\) is excited. (b) The term \(Y_{ij,lh}\hat{\sigma}_{i}^{+}\hat{\sigma}_{j}^{+}\hat{\sigma}_{l}^{-}\hat{ \sigma}_{h}^{-}\) (here illustrated assuming \(i\neq j\neq l\neq h\)) describes the annihilation of excitations at qubits \(l\) and \(h\), and the creation of excitations at qubits \(i\) and \(j\) (solid purple arrows). This corresponds to the hopping of a pair of excitations with strength \(Y_{ij,lh}\). The open blue and open purple arrows show selected additional constrained hopping and pair hopping interactions, respectively. excitations at the \(j\)th and \(i\)th qubit. The effective pair hopping interaction \(Y_{ij,lh}\) is given by \[Y_{ij,lh}=-\frac{g^{4}}{NJ^{2}}\sum_{K}\frac{F_{K,b}(n_{i},n_{j})F_{K,b}^{*}(n_{ l},n_{h})}{\Delta_{K,b}}, \tag{14}\] where \(F_{K,b}\) is given in Eq. (10). As \(W_{jl}\), \(Y_{ij,lh}\) is negative. 
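For concreteness, the following sketch evaluates the constrained-hopping strengths of Eqs. (10)-(12) for a regular array; we write \(\Delta\) in the equivalent form \(\tfrac{1}{2}(-\delta+\sqrt{U^{2}+16J^{2}})\), which also appears later in Eq. (24). Variable names and units (\(\hbar=1\), energies in units of \(J\)) are our own. The resulting matrix can be fed into the \(\hat{H}_{\text{single}}\) construction sketched above; assembling \(\hat{H}_{\text{pair}}\) in addition requires the couplings \(Y_{ij,lh}\) of Eq. (14), which involve the \(F_{K,b}\) matrix elements derived in the appendix.

```python
import numpy as np

def effective_w_matrix(Ne, g, J, U, delta, x_over_a=1):
    """Hedged sketch of Eqs. (10)-(12): constrained single-qubit hopping
    strengths W_{jl} for a regularly spaced qubit array (spacing x = x_over_a * a)."""
    Delta = 0.5 * (-delta + np.sqrt(U**2 + 16.0 * J**2))                    # Eq. (13), cf. Eq. (24)
    W0 = -2.0 * J * (g / (2.0 * J))**2 / np.sqrt((Delta / (2.0 * J))**2 - 1.0)   # Eq. (11)
    L0_over_a = -1.0 / np.log(Delta / (2.0 * J)
                              - np.sqrt((Delta / (2.0 * J))**2 - 1.0))      # Eq. (12)
    n = x_over_a * np.arange(Ne)                   # cavity index n_j of each qubit
    dist = np.abs(np.subtract.outer(n, n))         # |n_j - n_l| in units of a
    return W0 * np.exp(-dist / L0_over_a)          # Eq. (10)

# parameters used in the text: Ne = 60, g/J = 1/50, U/J = -1, delta/J = -1/50, x/a = 1
W = effective_w_matrix(Ne=60, g=1 / 50, J=1.0, U=-1.0, delta=-1 / 50)
print("W(0)/J =", W[0, 0])
print("L0/a   =", -1.0 / np.log(W[0, 1] / W[0, 0]))   # for x/a = 1 this recovers L0/a
```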
The pair hopping interaction \(\hat{H}_{\rm pair}\) couples each excited qubit pair to all other excited qubit pairs. Figure 5 shows the interaction \(Y_{ij,lh}\) for \(x/a=1\) as functions of the pairs \((i,j)\) and \((l,h)\) for \(N_{e}=60\). Specifically, the indices that specify the states \(\hat{\sigma}_{i}^{+}\hat{\sigma}_{j}^{+}\ket{g,\cdots,g}\) are organized based on the separation between the excited qubits, i.e., \(\ket{j-i}\). In a qubit array with \(N_{e}\) qubits, there are \(N_{e}-1\) basis states with a separation of \(\ket{j-i}=1\), \(N_{e}-2\) basis states with a separation of \(\ket{j-i}=2\), and so on. The "lower left block" corresponds to \(\ket{j-i}=\ket{h-l}=1\) [the pairs \((i,j)\) and \((l,h)\) both take the values \((1,2)\), \((2,3)\), \(\cdots\), \((59,60)\)]. The "upper right block" corresponds to \(\ket{j-i}=\ket{h-l}=9\) [the pairs \((i,j)\) and \((l,h)\) both take the values \((1,10)\), \((2,11)\), \(\cdots\), \((51,60)\)]. Note that Fig. 5 only considers a subset of pairs, i.e., \(\ket{j-i}\leq 9\) and \(\ket{h-l}\leq 9\). Within each block, the interaction is most negative along the diagonal and falls off approximately Lorentzian as one moves away from the diagonal. Moreover, starting with the block in the lower left corner, the interactions on the diagonal are less negative as one moves to blocks characterized by larger separations. A key characteristic of the interaction \(Y_{ij,lh}\) is that it is--within each block--constant along the diagonal, along the off-diagonal, and so on. The fall-off of the interactions as one moves away from the diagonal within each block indicates that the \(Y_{ij,lh}\) interaction depends on the actual locations of the involved qubits in the spin chain. This implies that it is energetically more favorable for two excitations to be located in the middle of the chain than at the edge of the chain since the pair can hop to the left and to the right when located at the center and only to one side when located at the edge. This location dependence is critical for the formation of the droplet-like states discussed in the next section. ## III Stationary solutions Since we are working in the regime where \(g/J\ll 1\), it might be expected naively that the constrained single-qubit hopping term \(\hat{H}_{\rm single}\), which is directly proportional to \(g^{2}\), dominates over the pair hopping term \(\hat{H}_{\rm pair}\), which is directly proportional to \(g^{4}\). While this is, indeed, the case in an appreciable portion of the parameter space, we show that there exists a parameter window in which the pair hopping interaction qualitatively changes the system characteristics. It is noted that a fourth-order two-photon virtual process, which is proportional to \(g^{4}\), was observed experimentally in transmon qubits coupled to a photonic crystal [43]. Specifically, this section shows that the \(Y\)-term has a "pinning effect" that leads to the emergence of liquid- or droplet-like bound states. Droplet states are self-bound and incompressible, and their excitation spectrum can be divided into compressional and surface modes [44]. We will show that the states referred to as droplet-like in this work are incompressible (their size is not solely set by the extent of the emitter array but by the entirety of system parameters). Moreover, the ground state is accompanied by a sequence of excitations that resemble compressional modes. 
While our analysis is based on the approximate spin Hamiltonian \(\hat{H}_{\rm spin}\), we checked that this Hamiltonian captures the key features of the full system Hamiltonian \(\hat{H}\) qualitatively and in many cases even quantitatively correctly. The main advantage of using \(\hat{H}_{\rm spin}\) comes from the fact that it allows for a transparent interpretation of the results, in addition to being an interesting model in its own right. We start by setting \(\hat{H}_{\rm pair}=0\). We find it useful to compare \(\hat{H}_{\rm single}\) to the unconstrained one-qubit hopping Hamiltonian \(\hat{\tilde{H}}_{\rm single}\), where \(\hat{\tilde{H}}_{\rm single}=2\sum_{i,j=1}^{N_{e}}W_{ij}\hat{\sigma}_{i}^{+} \hat{\sigma}_{j}^{-}\). This Hamiltonian emerges (without the factor of 2) when one works in the single-excitation manifold and adiabatically eliminates the single-photon states [16; 17; 18]. \(\hat{H}_{\rm single}\) differs from \(\hat{\tilde{H}}_{\rm single}\) because of the presence of the "spectator", i.e., the constraint makes the Hamiltonian \(\hat{H}_{\rm single}\) considered in our work unique. To highlight the differences, red and blue circles in Fig. 6 show the eigenen Figure 5: Contour plot of the effective dimensionless interaction \(Y_{ij,lh}J^{3}/g^{4}\) for \(U/J=-1\), \(\delta/J=-1/50\), \(N_{e}=60\), and \(x/a=1\). The \(x\)- and \(y\)-axis are labeled by the index pairs \((i,j)\) and \((l,h)\); the plot includes all \((i,j)\) and \((l,h)\) pairs with \(\ket{j-i}\leq 9\) and \(\ket{h-l}\leq 9\). In each block, the separation (i.e., \(j-i\) and \(h-l\)) between the two qubit excitations is fixed while the “center-of-mass coordinates” [i.e., \((i+j)/2\) and \((l+h)/2\)] are changing. As an example, the blue rectangle corresponds to a block with \(\ket{j-i}=2\) and \(\ket{h-l}=3\). Values of \((x,\,y)=(100,\,150)\), e.g., correspond to \((i,j)=(41,43)\) and \((l,h)=(33,36)\). ergies of \(\hat{H}_{\rm single}\) and \(\hat{\hat{H}}_{\rm single}\), respectively, for \(N_{e}=60\), \(g/J=1/50\), \(U/J=-1\), \(\delta/J=-1/50\), and \(x/a=1\). The constraint introduces an upshift of the eigenenergies for all eigenstates. The upshift is larger for the more negative eigenenergies (measured relative to the bottom \(E_{0,b}\) of the two-photon bound state band) than the less negative eigenenergies. Interestingly, both \(\hat{H}_{\rm single}\) and \(\hat{\hat{H}}_{\rm single}\) support step like pattern, with each plateau containing close to \(N_{e}\) eigenstates for the energetically lowest lying states. For the higher excited states, the steps are less pronounced. Reference [15] referred to the energy band formed by the qubit dominated states as a metaband. While we observe, similarly to Ref. [15], that the width of the band decreases with increasing \(x\), it is important to point out that that work considered qubit-array physics in the single-excitation manifold on resonance (and not off-resonance as in our case) and for significantly stronger coupling strengths (\(g/J\) of order 1). For comparison, the black circles in Fig. 6 show the eigenenergies for \(\hat{H}_{\rm spin}\). It can be seen that \(\hat{H}_{\rm pair}\) appreciably impacts the 10 or so energetically lowest lying states and less so the higher-lying states. Importantly, the energies of the lowest few eigenstates of \(\hat{H}_{\rm spin}\) are pushed down due to the presence of the \(\hat{H}_{\rm pair}\) term. 
The downshift of the energies is associated with significant changes of the character of the eigenstates, i.e., a change from delocalized scattering states to localized bound states. We refer to this as "pinning" (see below for details). The energy spectrum shown in Fig. 6 is unique to a qubit spacing of \(x/a=1\). For larger spacings, but otherwise identical parameters, the hopping energies are smaller and the step-like pattern is washed out. Moreover, the most strongly bound states are less separated from the other states than for \(x/a=1\) (i.e., \(\hat{H}_{\rm pair}\) introduces a smaller downshift for the ground state for \(x/a=2\) than for \(x/a=1\)). For \(x=0\), there exist three degenerate energy levels: for the same \(N_{e}\), \(g/J\), \(U/J\), and \(\delta/J\) as considered in Fig. 6, the \(x/a=0\) spectrum for \(\hat{H}_{\rm spin}\) contains a single state with energy \(E-E_{0,b}=-0.2256J\), \(N_{e}-1\) states with energy \(E-E_{0,b}=-0.0629J\), and \(N_{e}(N_{e}-3)/2\) states with energy \(E-E_{0,b}=\delta=-0.02J\). To understand the influence of \(\hat{H}_{\rm pair}\) on the eigenspectrum, we use first-order non-degenerate perturbation theory. Treating \(\hat{H}_{\rm pair}\) as a perturbation, the first-order correction \(E_{n}^{(1)}\) to the eigenenergy \(E_{n}^{(0)}\) of the \(n\)th droplet-like eigenket \(\ket{\phi_{n}^{(0)}}\) of \(\hat{H}_{\rm single}\) is given by \[E_{n}^{(1)}=\bra{\phi_{n}^{(0)}}\hat{H}_{\rm pair}\ket{\phi_{n}^{(0)}}. \tag{15}\] Figure 7(a) considers the six energetically lowest-lying droplet-like states (\(n=1-6\)). These droplet-like states correspond to state numbers 1, 2, 3, 4, 7, and 10. Figure 7(a) shows that the perturbation energies (the open blue circles show \(E_{n}^{(0)}+E_{n}^{(1)}\)) lie below the zeroth-order energies \(E_{n}^{(0)}\) (green open triangles), i.e., the stronger binding of \(\hat{H}_{\rm spin}\) compared to \(\hat{H}_{\rm single}\) due to \(\hat{H}_{\rm pair}\) is captured qualitatively in first-order perturbation theory. Higher-order corrections, which account for the mixing of the unperturbed states \(\ket{\phi_{n}^{(0)}}\) play a larger role for the ground state (\(n=1\)) than for the excited droplet-like states (\(n=2-6\)). Figure 7(b) focuses on the energy of the lowest-lying droplet-like state and shows that energy as a function of the detuning. The detuning marked by an arrow is identical to the detuning used in Fig. 7(a). For large to moderate detunings, the results from the perturbation calculation (blue open circles) agree well with the exact diagonalization of \(\hat{H}_{\rm spin}\) (black solid circles). For relatively small detunings, however, deviations are visible. While the first-order perturbative energy improves upon the unperturbed energy, higher-order corrections play an increasingly more important role. To characterize the eigenstates \(\ket{\phi_{E}}\) of \(\hat{H}_{\rm spin}\), we expand them in terms of the basis kets \(\sigma_{i}^{+}\sigma_{j}^{+}\ket{g,\cdots,g}\), \[\ket{\phi_{E}}=\sum_{i=1}^{N_{e}-1}\sum_{j=i+1}^{N_{e}}c_{i,j}^{(E)}\sigma_{i} ^{+}\sigma_{j}^{+}\ket{g,\cdots,g}, \tag{16}\] and analyze the expansion coefficients \(c_{i,j}^{(E)}\) as well as the pair correlation function \(P_{\rm pair}(\alpha)\), which measures the likelihood that the two excitations are located at qubits that are separated by \(\alpha\). 
The corresponding operator is given by \[\hat{P}_{\rm pair}(\alpha)=\sum_{i=1}^{N_{e}-\alpha}\hat{\sigma}_{i}^{+}\hat{ \sigma}_{i+\alpha}^{+}\ket{g,\cdots,g}\bra{g,\cdots,g}\hat{\sigma}_{i}^{-}\hat {\sigma}_{i+\alpha}^{-}, \tag{17}\] where \(\alpha\) takes the values 1, 2, \(\cdots\), \(N_{e}-1\). For example, if \(\alpha=1\), the excitations are located at neighboring spins. In terms of the expansion coefficients, the pair correlation function for the eigenstate \(\ket{\phi_{E}}\) is given by \(P_{\rm pair}(\alpha)=\sum_{i=1}^{N_{e}-\alpha}\ket{c_{i,i+\alpha}^{(E)}}\). Figure 6: Eigenenergy, measured with respect to \(E_{0,b}\), for \(N_{e}=60\), \(g/J=1/50\), \(U/J=-1\), \(\delta/J=-1/50\), and \(x/a=1\) as a function of the state index. The black filled circles, red open squares, and blue open circles show the energy for \(\hat{H}_{\rm spin}\), \(\hat{H}_{\rm single}\), and \(\hat{\hat{H}}_{\rm single}\), respectively. Inset: Blow-up of the lower part of the energy spectrum. Figures 8(a) and 8(b) show \(P_{\rm pair}(\alpha)\) for the ground state for \(N_{e}=60\), \(g/J=1/50\), \(U/J=-1\), and \(x/a=1\) for two different detunings, namely \(\delta/J=-1/50\) and \(-3/20\). The blue dotted lines are obtained using \(\hat{H}_{\rm spin}\). The full Hamiltonian \(\hat{H}_{\rm full}\) (black solid lines) yields results that are quite similar to those for \(\hat{H}_{\rm spin}\), thus providing evidence that \(\hat{H}_{\rm spin}\) yields faithful results. For small \(|\delta/J|\) [Fig. 8(a)], the pair correlation function peaks at \(\alpha=1\) and is essentially zero for \(\alpha\gg 1\). This indicates that the two excited qubits want to stay together. The fall-off of \(P_{\rm pair}(\alpha)\) suggests that the ground state corresponds to a bound state. This interpretation is confirmed by calculations for larger arrays (larger \(N_{e}\)) with otherwise identical parameters. We find that \(P_{\rm pair}(\alpha)\) for the ground state remains essentially unchanged when \(N_{e}\) is increased, i.e., the size of the ground state is independent of \(N_{e}\), thereby justifying the classification as a self-bound state. For larger \(|\delta/J|\) [Fig. 8(b)], in contrast, the pair correlation function peaks at \(\alpha\approx 10\) for \(\hat{H}_{\rm full}\) and \(\hat{H}_{\rm spin}\). This indicates that the two excited qubits have a tendency to spread out over the entire array. This interpretation is supported by the fact that the fall-off of the pair correlation function moves to larger \(\alpha\) for larger \(N_{e}\) but otherwise identical parameters. Correspondingly, we classify the ground state considered in Fig. 8(b) as unbound. The inclusion of \(\hat{H}_{\rm pair}\) in the effective spin Hamiltonian \(\hat{H}_{\rm spin}\) (blue dotted line) is crucial. A comparison of the blue dotted line [\(P_{\rm pair}(\alpha)\) for \(\hat{H}_{\rm spin}\)] and red dashed line [\(P_{\rm pair}(\alpha)\) for \(\hat{H}_{\rm single}\)] reveals that \(\hat{H}_{\rm pair}\) has a pinning effect: it enhances, as already alluded to in Sec. II.2, the probability to find excitations located at qubits that are close to each other. The effect is very prominent in Fig. 8(a), where the red line is much broader than the blue line. If \(\hat{H}_{\rm pair}\) is neglected and \(N_{e}\) is increased, the red line in Fig. 8 does not maintain its size, as is the case for \(\hat{H}_{\rm spin}\), but increases. This unequivocally shows that \(\hat{H}_{\rm pair}\) is responsible for the emergence of self-bound states. 
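As a small illustration, the snippet below evaluates the pair correlation function of Eq. (17), \(P_{\text{pair}}(\alpha)=\sum_{i}|c^{(E)}_{i,i+\alpha}|^{2}\), directly from the expansion coefficients of Eq. (16). The data layout (a dictionary of coefficients) and the toy random vector are our own choices; in practice the coefficients would come from diagonalizing \(\hat{H}_{\text{spin}}\).

```python
import numpy as np
from itertools import combinations

def pair_correlation(coeffs, Ne):
    """Hedged sketch of Eq. (17): P_pair(alpha) = sum_i |c_{i,i+alpha}|^2.

    coeffs: dict mapping qubit pairs (i, j) with i < j to the expansion
    coefficients c_{i,j} of Eq. (16); qubits are labeled 1, ..., Ne.
    Returns P_pair for separations alpha = 1, ..., Ne - 1."""
    P = np.zeros(Ne)
    for (i, j), c in coeffs.items():
        P[j - i] += abs(c) ** 2
    return P[1:]

# toy usage: a normalized random "eigenvector" over all pairs, Ne = 10
Ne = 10
pairs = list(combinations(range(1, Ne + 1), 2))
vec = np.random.default_rng(2).normal(size=len(pairs))
vec /= np.linalg.norm(vec)
P = pair_correlation(dict(zip(pairs, vec)), Ne)
print("sum of P_pair over alpha:", P.sum())   # ~1 for a normalized state
```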
Figure 9 shows the real part of the coefficients \(c_{i,j}^{(n)}\) for the four energetically lowest lying droplet-like bound states (\(n=1-4\)) for \(N_{e}=60\), \(g/J=1/50\), \(U/J=-1\), \(\delta/J=-1/50\), and \(x/a=1\); the imaginary part is equal to zero. The droplet-like states shown in Fig. 9 correspond to the state numbers 1, 2, 3, and 4. Figure 9 employs relative and center-of-mass coordinates \(r\) and \(R\), respectively, of the two excited qubits, \(r=|j-i|\) and \(R=(i+j)/2\). The whi Figure 7: Energy of droplet-like states, measured with respect to \(E_{0,b}\), for \(N_{e}=60\), \(g/J=1/50\), \(U/J=-1\), and \(x/a=1\) as a function of (a) the excitation number \(n\) for \(\delta/J=-1/50\) and (b) the detuning \(\delta/J\) for \(n=1\). The black filled circles, green open triangles, red open squares, and blue open circles show the energies for \(\hat{H}_{\rm spin}\), \(\hat{H}_{\rm single}\), the variational wavefunction given in Eqs. (18)-(20), and for the perturbative calculation (\(\hat{H}_{\rm pair}\) is treated in first-order perturbation theory), respectively. In (a), the red and black symbols are nearly indistinguishable. In (b), all four calculations yield, on the scale shown, nearly indistinguishable energies except when \(|\delta/J|\) is extremely small. The arrow in (b) marks the detuning used in (a). Figure 8: Pair correlation function \(P_{\rm pair}(\alpha)\) for the ground state as a function of the separation \(\alpha\) between two excited qubits for \(N_{e}=60\), \(g/J=1/50\), \(U/J=-1\), \(x/a=1\), and (a) \(\delta/J=-1/50\) and (b) \(\delta/J=-3/20\). The black solid, blue dotted, and red dashed lines are for \(\hat{H}_{\rm full}\), \(\hat{H}_{\rm spin}\), and \(\hat{H}_{\rm single}\), respectively. The inset in (a) replots the blue dotted line and additionally shows the variational results by green open circles. \(r\geq 2R\) for \(R<N_{e}/2\) and \(r\geq 2(N_{e}-R)\) for \(R\geq N_{e}/2\) is unphysical as there is a constraint of \(i<j\) on the eigen-coefficients due to the bosonic character or, equivalently, the exchange symmetry of the excitations. The small white dots, which exist in the physical \(i<j\) portions in Fig. 9, result from the transformation from the \((i,j)\) spin indices to the \((R,r)\) coordinates. In Figs. 9(a)-9(d), the magnitude of the coefficients \(c_{i,j}^{(n)}\) decreases with increasing \(r\) for fixed \(R\). Along the \(R\) coordinate, the number of nodes increases from zero for the ground state [\(n=1\) in Fig. 9(a)] to three for the third excited droplet-like state [\(n=4\) in Fig. 9(d)]. The nodes are to a very good approximation parametrized by \(R_{\rm node}\approx\) constant, i.e., they are, on the scale of Fig. 9, independent of \(r\). In what follows, we use a variational ansatz to understand the length scale that governs the droplet-like states and the number of droplet-like states that are supported by a qubit array of size \(N_{e}\). Since Fig. 9 suggests that the expansion coefficients of the \(n\)th droplet-like eigenstate decouple when plotted as functions of the relative coordinate \(r\) and the center-of-mass coordinate \(R\), we introduce the product ansatz \[c_{r,R}^{(n)}=Q^{(n)}(R)q(r). 
\tag{18}\] Here, the function \(Q^{(n)}(R)\), \[Q^{(n)}(R)=\sqrt{\frac{2}{N_{e}}}\sin\left(\frac{n\pi}{N_{e}}R\right), \tag{19}\] corresponds to the \(n\)th particle in the box wave function and the function \(q(r)\), \[q(r)=2\sqrt{\frac{L_{r}^{3}}{\pi a^{3}}}\left[\frac{1}{(r-1)^{2}+(\frac{L_{r}} {a})^{2}}\right], \tag{20}\] to an \(n\)-independent Lorentzian with characteristic length \(L_{r}\). The length \(L_{r}\) is treated as a variational parameter. By construction, the variational states with different \(n\) are orthogonal. Figure 7(a) compares the variational energies (red open squares) of the six droplet-like states that are supported by the qubit array for \(N_{e}=60\), \(g/J=1/50\), \(U/J=-1\), \(\delta/J=-1/50\), and \(x/a=1\) with those obtained by diagonalizing \(\hat{H}_{\rm spin}\) (black solid circles). We see that the variational energies agree extremely well with the exact eigenenergies of \(\hat{H}_{\rm spin}\). In Fig. 7(b), the energy of the ground droplet-like state is shown as a function of \(\delta/J\) for the same \(N_{e}\), \(g/J\), \(U/J\), and \(x/a\) as used in Fig. 7(a). For large to moderate, in magnitude, detunings, the energies from the variational calculation (red open squares) agree well with the exact eigenenergies of \(\hat{H}_{\rm spin}\) (black solid circles). For small detunings, small deviations are visible. The variational calculation not only predicts the eigenenergy accurately but also the corresponding eigenstates. As an example, the green open circles in the inset of Fig. 8(a) show the pair correlation function obtained by the variational treatment; it agrees well with the results obtained for \(\hat{H}_{\rm spin}\) (blue dotted line). The number of droplet-like states supported by the qubit array is approximately equal to \(aN_{e}/L_{r}\). Intuitively, this can be understood as follows. The system develops additional nodes along the \(R\) direction till the spacing between the nodes is comparable to the size of the droplet-like state along the \(r\) direction. For \(g/J=1/50\), \(U/J=-1\), \(\delta/J=-1/50\), and \(x/a=1\), the variational ground state energy is minimized for \(L_{r}\approx 10a\). The qubit array with \(N_{e}=60\) supports six droplet-like states, in agreement with the estimate \(aN_{e}/L_{r}\approx 6\). As the qubit array spacing \(x\) is changed from \(a\) to \(2a\), the number of droplet-like bound states decreases from six to four. For \(x=3a\), droplet-like bound states are no longer supported. Similar results are found for other parameter combinations. We note that \(\hat{H}_{\rm spin}\) also supports more highly excited modes, which have nodes along the \(r\)-coordinate. The variational treatment of these energetically higher-lying droplet-like states is beyond the scope of this work. ## IV Dynamics This section discusses the dynamics for negative \(\delta\) (band-gap regime) for two different initial states in the two-excitation manifold, namely the partially symmetric state \(|{\rm PS}\rangle\), \[|{\rm PS}\rangle=\frac{1}{\sqrt{N_{e}-1}}\sum_{i=1}^{N_{e}-1}\sigma_{i}^{+} \sigma_{i+1}^{+}\left|g,\cdots,g\right\rangle, \tag{21}\] and the fully symmetric state \(|{\rm FS}\rangle\), \[|{\rm FS}\rangle=\frac{\sqrt{2}}{\sqrt{N_{e}(N_{e}-1)}}\sum_{i=1}^{N_{e}-1} \sum_{j=i+1}^{N_{e}}\sigma_{i}^{+}\sigma_{j}^{+}\left|g,\cdots,g\right\rangle. 
\tag{22}\] Figure 9: Contour plots of the expansion coefficients \(c_{ij}^{(n)}\) as functions of \(R\) and \(r\) for \(N_{e}=60\), \(g/J=1/50\), \(U/J=-1\), \(\delta/J=-1/50\), and \(x/a=1\). The coefficients are obtained by diagonalizing the effective Hamiltonian \(\hat{H}_{\rm spin}\). (a), (b), (c), and (d) are for \(n\)=1 (droplet-like ground state), 2 (droplet-like first excited state), 3 (droplet-like second excited state), and 4 (droplet-like third excited state), respectively. The fully symmetric state is a superposition of all basis kets (all basis kets contribute with an expansion coefficient \(\frac{\sqrt{2}}{\sqrt{N_{e}(N_{e}-1)}}\)). The partially symmetric state, in contrast, only considers basis kets for which the excited qubits are nearest neighbors. Figures 10(a) and 10(b) show the decomposition of the states \(|\)PS\(\rangle\) and \(|\)FS\(\rangle\), respectively, into the energy eigenstates \(|\phi_{E}\rangle\) of \(\hat{H}_{\rm spin}\) for \(N_{e}=60\), \(g/J=1/50\), \(U/J=-1\), \(\delta/J=-1/50\), and \(x/a=1\). The state \(|\)PS\(\rangle\) has finite overlap with a large number of eigenstates from all over the eigenspectrum. The ground state contributes about 10% and the other states 3% or less. For the state \(|\)FS\(\rangle\) [Fig. 10(b)], in contrast, there are two energy eigenstates that dominate and together contribute 89% [52.6%, red square in Fig. 10(b), and 36.4%, blue triangle in Fig. 10(b)]. The lowest eigenstate, which contributes 52.6%, has droplet-like character while the excited eigenstate, which contributes 36.4%, has scattering characteristics. Since the fully symmetric initial state is dominated by two eigenstates, the dynamics is expected to feature Rabi-like two-state oscillation dynamics. The dynamics for the partially symmetric state, in contrast, is expected to display features of dephasing, at least over certain time scales, due to the superposition of many energy eigenstates. Figure 11 shows the time dependence of the probability that two excitations belong to nearest neighbor qubits, i.e., qubits that are separated by \(\alpha=1\) (black solid line), to qubits that are separated by \(\alpha=6\) (red dashed line), and to qubits that are separated by \(\alpha=21\) (blue dotted line). These observables are for the same parameters as those used in Fig. 10. The time evolution of \(P_{\rm pair}(\alpha,t)\) for the initial states \(|\)PS\(\rangle\) [Fig. 11(a)] and \(|\)FS\(\rangle\) [Fig. 11(b)] is--as already anticipated based on the initial state decomposition--distinct. In Fig. 11(a), \(P_{\rm pair}(\alpha,t)\) for \(\alpha=1\) decays with damped oscillations. The damping or decay are attributed to the fact that a large number of eigenstates contribute to the initial state with comparable weight, giving rise to dephasing. In Fig. 11(b), \(P_{\rm pair}(\alpha,t)\) oscillates with nearly undamped amplitude for all \(\alpha\) considered. The slight "distortions" of the oscillations are caused by dephasing effects of the eigenstates that contribute to the fully symmetric initial state with a small weight, i.e., less than 5%. The oscillation period of \(t\approx 2000\hbar/J\) corresponds to an energy of \(0.0031J\). This energy agrees with the difference in energies of the two eigenstates that have the largest overlap with the initial state \(|\)FS\(\rangle\) [red square and blue triangle in Fig. 10(b)]. 
Figure 12 shows the spin-spin correlation function \(P_{\rm corr}(i,j,t)\), \[P_{\rm corr}(i,j,t)=\langle\psi(t)|\,\hat{\sigma}_{i}^{+}\hat{ \sigma}_{j}^{+}\,|g,\cdots,g\rangle\times\] \[\langle g,\cdots,g|\,\hat{\sigma}_{i}^{-}\hat{\sigma}_{j}^{-}\,| \psi(t)\rangle \tag{23}\] at eight different times ranging from zero in Fig. 12(a) Figure 10: Square of the absolute value of the projection of the initial state \(|\psi(0)\rangle\) onto the energy eigenstates \(|\phi_{E}\rangle\) of \(\hat{H}_{\rm spin}\) as a function of the eigenenergy \(E\), measured relative to the bottom \(E_{0,b}\) of the two-photon bound state band, for \(N_{e}=60\), \(g/J=1/50\), \(U/J=-1\), \(\delta/J=-1/50\), and \(x/a=1\). (a) The initial state is \(|\psi(0)\rangle=|\)PS\(\rangle\). (b) The initial state is \(|\psi(0)\rangle=|\)FS\(\rangle\). The red square and blue triangle correspond to the two largest values of \(|\,\langle\phi_{E}|\)FS\(\rangle\,|^{2}\). to \(Jt/\hbar=7500\) in Fig. 12(h) for \(N_{e}=60\), \(g/J=1/50\), \(U/J=-1\), \(\delta/J=-1/50\), and \(x/a=1\). The initial state is \(|\)FS\(\rangle\). The plots in the left column are for \(Jt/\hbar=0\), 2220, 4440, and 6540. Comparison with Fig. 11(b) shows that \(P_{\rm pair}(\alpha=1,t)\) takes on a local minimum at these times. The plots in the right column of Fig. 12, in contrast, are such that \(P_{\rm pair}(\alpha=1,t)\) takes on a local maximum. We can see that \(P_{\rm corr}(i,j,t)\) is mostly concentrated around the middle of the diagonal in the right column while it is much more spread out in the left column. The observation that the spin-spin correlations alternate between being more localized and being more spread out can be readily explained by the fact that the initial state \(|\)FS\(\rangle\) is dominated by contributions from the ground droplet-like state and a delocalized scattering state [red square and blue triangle Fig. 10(b)]. This suggests that the droplet-like ground state can be probed by initializing the qubit array in the fully symmetric state \(|\)FS\(\rangle\). The calculations presented consider the ideal case scenario, where the excited state qubit has an infinite lifetime, the photon loss from the cavities is ignored, and imperfections--such as, e.g., a finite spread \(\Delta J\) of the tunneling energies \(J\) and a finite spread \(\Delta\omega_{c}\) of the cavity frequencies \(\omega_{c}\) that exist to a varying degree in experiment--are neglected. To observe the oscillations displayed in Fig. 11(b), the time scales associated with spontaneous qubit decay, photon losses, and "dephasing" due to the spread of system parameters must be larger than about \(10^{4}\hbar/J\). In the following discussion, we assume that the excited state lifetime of the qubit is longer than the time scale for photon losses. A finite "bare" photon lifetime of \(\hbar\kappa^{-1}\) leads to a characteristic decay time \((\Gamma_{c})^{-1}\) that scales, for \(|\delta_{0}|\ll 2J\), as \(\Gamma_{c}=p_{\rm ph}\kappa/\hbar\), where \(p_{\rm ph}\approx g^{2}J^{-1/2}\delta_{0}^{-3/2}/4\) and \(\delta_{0}\) denotes the detuning in the single-excitation manifold, \(\delta_{0}=(\hbar\omega_{c}-2J)-\hbar\omega_{e}\)[15; 45]. Physically, the multiplicative factor \(p_{\rm ph}\) can be understood as arising from the admixture of the photonic degrees of freedom to the hybridized bound state in the single-excitation manifold. 
Rewriting \(\delta_{0}\) in terms of the detuning \(\delta\) in the two-excitation manifold, we find \[p_{\rm ph}=\frac{g^{2}}{4\sqrt{J}}\left[\frac{1}{2}\left(-\delta+\sqrt{U^{2}+1 6J^{2}}\right)-2J\right]^{-3/2}. \tag{24}\] For \(\delta/J=-1/50\) and \(-3/20\), as used in this paper, \(p_{\rm ph}\) is equal to \(5\times 10^{-3}\) and \(2\times 10^{-3}\), respectively. To observe multiple oscillations, \((\Gamma_{c})^{-1}\) must be much larger than \(10^{4}\hbar/J\); the equal sign holds for \(\kappa/J=2\times 10^{-2}\) and \(5\times 10^{-2}\), respectively. Superconducting circuit experiments have realized an eight cavity system with \(U/h=-255\) MHz, \(J/h=5-20\) MHz, and \(\kappa/h=5\) kHz [34]. This translates to \(\kappa/J=2.5\times 10^{-4}\) to \(10^{-3}\), i.e., experiments are already operating in a regime where the photon lifetime is sufficiently long to observe the predicted phenomena. For fixed spreads \(\Delta J\) and \(\Delta\omega_{c}\), one may attempt to increase \(\delta\) such that the spreads become, if measured as a multiple of the detuning, smaller. Since a larger \(\delta\) corresponds to a smaller photon contribution \(p_{\rm ph}\) and hence a longer time scale for the photon losses, there is some room to optimize the parameters for a specific experimental set-up. While challenging, we conclude that the theory predictions put forward in this paper can be tested in state-of-the-art experiments. ## V Conclusion This paper discussed the time-independent and time-dependent behaviors of a qubit array coupled to a nonlinear photonic waveguide. Our interest was in the regime where the two-qubit transition energy lies in the band gap below the two-photon bound state band that is supported by the one-dimensional waveguide. We focused our attention on the two-excitation manifold. Even though the qubits are not interacting with each other, effective interactions--mediated by the waveguide--are introduced between qubits as a result of a two-step adiabatic elimination process. The resulting effective spin Hamiltonian, which was shown to accurately reproduce the key characteristics of the full Hamiltonian, features constrained single-qubit hopping and pair hopping interactions. The emergence of the latter critically depends on the presence of the Kerr-like non-linearity \(U\). The effective spin Hamiltonian was shown to support a new class of droplet-like bound states that arise due to the pair hopping interaction. These droplet-like states extend over many qubit lattice sites and can be probed dynamically. For the fully symmetric initial state, the populations were found to oscillate back and forth between a droplet-like bound state and a delocalized scattering state. While most of our discussion focused on \(N_{e}=60\), \(g/J=1/50\), \(U/J=-1\), and \(\delta/J=-1/50\), we emphasize that the characteristics discussed in this paper are also observed for other parameter combinations. For fixed \(g/J\), \(\delta/J\), \(N_{e}\), and \(x/a\), we find that the number of droplet-like states supported by the qubit array decreases as \(U/J\) becomes more negative. As \(|U|/J\) increases, the two-photon bound state becomes more localized and hence the overall strength of the pair hopping interaction becomes less negative. Whether or not droplet-like bound states exist also depends on the qubit array spacing \(x\). If the separation between two neighboring qubits is increased, the number of droplet-like states supported by the array decreases. 
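The feasibility estimates given above are straightforward to reproduce numerically. The minimal Python sketch below evaluates \(p_{\rm ph}\) from Eq. (24) and the value of \(\kappa/J\) at which \((\Gamma_{c})^{-1}=10^{4}\hbar/J\); the function and variable names are ours, and the last line simply restates the circuit-QED numbers quoted from Ref. [34].

```python
import numpy as np

def p_ph(g, U, delta, J=1.0):
    # Photonic admixture factor of Eq. (24); all energies measured in units of J.
    gap = 0.5 * (-delta + np.sqrt(U**2 + 16.0 * J**2)) - 2.0 * J
    return g**2 / (4.0 * np.sqrt(J)) * gap**(-1.5)

J = 1.0
g = J / 50.0
U = -J
for delta in (-J / 50.0, -3.0 * J / 20.0):
    p = p_ph(g, U, delta, J)
    # kappa/J for which the loss time (Gamma_c)^(-1) = hbar / (p_ph * kappa) equals 1e4 hbar/J
    kappa_bound = 1.0 / (1.0e4 * p)
    print(f"delta/J = {delta:+.3f}: p_ph ~ {p:.1e}, kappa/J at the bound ~ {kappa_bound:.1e}")

# Circuit-QED parameters of Ref. [34]: kappa/h = 5 kHz, J/h = 5-20 MHz
print("experimental kappa/J range:", 5e3 / 20e6, "to", 5e3 / 5e6)
```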
The giant droplet-like bound states discovered in this work provide an intriguing example of utilizing structured baths to engineer effective spin-spin interactions that support quantum states with non-trivial correlations. The droplet-like states considered in this paper, which emerge in the two-excitation sub-space, are distinct from the two-excitation scattering states considered in Ref. [17] in the absence of the non-linearity \(U\). They are also distinct from hydrid qubit-photon states that emerge in the single-excitation manifold [45]. Possible extensions may focus on topological wave guides [57; 58], higher-dimensional baths, superlattice-type arrangements of the qubits, qubits with multiple transition frequencies [59], multi-level emitters, qubits with multiple point contacts [60], and higher-excitation manifolds [61]. In all these scenarios, it will be interesting to explore the interplay between constrained single-qubit and pair hopping interactions. ## VI Acknowledgement Support by the National Science Foundation through grant number PHY-2110158 is gratefully acknowledged. This work used the OU Supercomputing Center for Education and Research (OSCER) at the University of Oklahoma (OU). ## Appendix A Derivation of \(\hat{H}_{\text{spin}}\) Starting with the full Hamiltonian \(\hat{H}\), this appendix derives the effective spin Hamiltonian \(\hat{H}_{\text{spin}}\). The adiabatic elimination procedure discussed in this appendix is illustrated in Fig. 13. _Time-dependent wave packet_: Throughout, we assume that the two-photon scattering states can be neglected. This is justified since we are working in a regime where the two-qubit transition energy is far detuned from the two-photon scattering continuum. Under this approximation, the wavepacket \(|\psi(t)\rangle\) in the two-excitation manifold can be written as [45; 46; 47] \[|\psi(t)\rangle=\exp(-2\imath\omega_{e}t)\Bigg{[}\sum_{i=1}^{N_{e }-1}\sum_{j=i+1}^{N_{e}}d_{ij}(t)\sigma_{i}^{+}\sigma_{j}^{+}|g,\cdots,g,\text {vac}\rangle+\sum_{i=1}^{N_{e}}\sum_{k}c_{ik}(t)\sigma_{i}^{+}\hat{a}_{k}^{ \dagger}|g,\cdots,g,\text{vac}\rangle+\] \[\sum_{K}c_{K,b}(t)\hat{B}_{K}^{\dagger}|g,\cdots,g,\text{vac} \rangle\Bigg{]}, \tag{10}\] where \(d_{ij}(t)\), \(c_{ik}(t)\), and \(c_{K,b}(t)\) denote expansion coefficients. The operator \(\hat{B}_{K}^{\dagger}\) creates a two-photon bound state with momentum \(K\), \(|\psi_{K,b}\rangle=\hat{B}^{\dagger}|\text{vac}\rangle\). Inserting Eq. (A) into the time-dependent Schrodinger equation, we obtain a set of coupled differential equations \[\imath\hbar\dot{d}_{ij}(t)=\frac{g}{\sqrt{N}}\sum_{k}\left[\exp(\imath kan_{i} )c_{jk}(t)+\exp(\imath kan_{j})c_{ik}(t)\right], \tag{11}\] \[\imath\hbar\dot{c}_{ik}(t)=\Delta_{k}c_{ik}(t)+\frac{g}{\sqrt{N}}\sum_{j=1,j \neq i}^{N_{e}}\exp(-\imath kan_{j})d_{\widehat{i}\widehat{j}}(t)+\frac{g}{N} \sum_{K}M_{b}(k,n_{i},K)c_{K,b}(t), \tag{12}\] and \[\imath\hbar\dot{c}_{K,b}(t)=\Delta_{K,b}c_{K,b}(t)+\frac{g}{N}\sum_{i=1}^{N_{ e}}\sum_{k}[M_{b}(k,n_{i},K)]^{*}c_{ik}(t), \tag{13}\] where \(\tilde{i}=\min(i,j)\) and \(\tilde{j}=\max(i,j)\). The energy detunings \(\Delta_{k}\) and \(\Delta_{K,b}\) are given by \[\Delta_{k}=E_{k}-\hbar\omega_{e} \tag{10}\] and \[\Delta_{K,b}=E_{K,b}-2\hbar\omega_{e}, \tag{11}\] where \(E_{k}\) denotes the energy of a single photon with wave vector \(k\). 
The matrix elements \(M_{b}(k,n,K)\) are defined as [46; 47] \[M_{b}(k,n,K)=\sqrt{2}\times\] \[\sum_{m}\exp\left[m\left(k-\frac{K}{2}\right)a+m(K-k)a\right] \psi_{K,b}(m), \tag{12}\] where \(\psi_{K,b}(m)=\langle ma|\psi_{K,b}\rangle\) is the two-photon bound state wave function (\(ma\) denotes the relative distance between the two photons). Stationary and time-dependent solutions to the Schrodinger equation for \(\hat{H}_{\rm full}\) are obtained through exact diagonalization, excluding the basis kets that span the two-photon scattering continuum. To characterize the distribution of the excited qubits, we monitor the pair correlation function \(P_{\rm pair}(\alpha,t)\), \[P_{\rm pair}(\alpha,t)=\langle\psi(t)|\sum_{i=1}^{N_{e}-\alpha} \hat{\sigma}_{i}^{+}\hat{\sigma}_{i+\alpha}^{+}\left|g,\cdots,g,{\rm vac} \right\rangle\times\] \[\langle g,\cdots,g,{\rm vac}|\,\hat{\sigma}_{i}^{-}\hat{\sigma} _{i+\alpha}^{-}\left|\psi(t)\right\rangle, \tag{13}\] as well as the spin-spin correlation function \(P_{\rm corr}(i,j,t)\), \[P_{\rm corr}(i,j,t)=\langle\psi(t)|\,\hat{\sigma}_{i}^{+}\hat{ \sigma}_{j}^{+}\left|g,\cdots,g,{\rm vac}\right\rangle\times\] \[\langle g,\cdots,g,{\rm vac}|\,\hat{\sigma}_{i}^{-}\hat{\sigma} _{j}^{-}\left|\psi(t)\right\rangle. \tag{14}\] In what follows, we introduce several approximations that eliminate the photonic degrees of freedom from the problem and, in turn, introduce effective interactions between groups of qubits. Figure 13: Approximations made to obtain the effective spin Hamiltonian \(\hat{H}_{\rm spin}\) (right column) from the total Hamiltonian \(\hat{H}\) (left column). The schematic considers \(N_{e}=4\) as an example and shows only a subset of the basis kets. The red and black horizontal lines show a subset of basis kets. The blue, pink, purple, green, and dark red lines represent interactions. As a result of the adiabatic elimination of the single-photon states \(|eggg,k\rangle\), the interactions 1 and 2 give rise to the interaction \(W\) between the states \(|eggg,{\rm vac}\rangle\) and \(|eggg,{\rm vac}\rangle\) (solid pink line), the interactions 1 and 3 give rise to the interaction \(F\) between the states \(|eggg,{\rm vac}\rangle\) and \(|gggg,K_{1}\rangle\) (solid purple line), the interactions 2 and 3 give rise to the interaction \(F\) between the states \(|eggg,{\rm vac}\rangle\) and \(|gggg,K_{1}\rangle\) (dotted purple line), the interactions 1 and 4 give rise to the interaction \(F\) between the states \(|eggg,{\rm vac}\rangle\) and \(|gggg,K_{2}\rangle\) (dashed purple line), the interactions 2 and 4 give rise to the interaction \(F\) between the states \(|eggg,{\rm vac}\rangle\) and \(|gggg,K_{2}\rangle\) (dash-dotted purple line), and the interactions 3 and 4 give rise to the interaction \(G\) between the states \(|gggg,K_{1}\rangle\) and \(|gggg,K_{2}\rangle\) (green line). As a result of setting \(G\) to zero and adiabatically eliminating the two-photon bound states \(|gggg,K\rangle\), the interactions \(F\) (e.g., solid and dotted lines, and dashed and dash-dotted lines) give rise to the interaction \(Y\) between the states \(|eeggg\rangle\) and \(|eggg\rangle\) (solid dark red line). The down shifts of the red basis states in the middle and right columns represent the energy shifts (sometimes called Stark shifts) that are due to the adiabatic eliminations. _Adiabatic elimination of the single-photon states:_ Assuming that the changes of \(c_{ik}(t)\) with time can be neglected, i.e., \(\dot{c}_{ik}(t)=0\) in Eq. 
(10), the single-photon states \(\hat{\sigma}_{j}^{+}\ket{g,\cdots,g,k}\) can be adiabatically eliminated. This approximation breaks down when \(g\) is too large or the single-qubit transition energy is too close to the single-photon band. The resulting differential equations read \[\imath\hbar\dot{d}_{ij}(t)=\sum_{l=1,l\neq j}^{N_{e}}W_{il}d_{\vec{l}\vec{j}}(t) +\sum_{l=1,l\neq i}^{N_{e}}W_{lj}d_{\vec{i}\vec{l}^{\prime}}(t)+\frac{g^{2}}{ \sqrt{N}J}\sum_{K}F_{K,b}(n_{i},n_{j})c_{K,b}(t) \tag{12}\] and \[\imath\hbar\dot{c}_{K,b}(t)=\Delta_{K,b}c_{K,b}(t)+\frac{g^{2}}{\sqrt{N}J}\sum _{i=1}^{N_{e}}\sum_{j=i+1}^{N_{e}}F_{K,b}^{*}(n_{i},n_{j})d_{ij}(t)+\frac{g^{2} }{NJ}\sum_{K^{\prime}}G_{KK^{\prime}}(\vec{n})c_{K^{\prime},b}(t), \tag{13}\] where \(\tilde{l}=\min(l,j)\), \(\tilde{j}=\max(l,j)\), \(\tilde{i}=\min(l,i)\), \(\tilde{l}^{\prime}=\max(l,i)\), and \(\vec{n}=(n_{1},n_{2},\cdots,n_{N_{e}})\). It can be seen that the adiabatic elimination of the single-photon states introduces three effective interactions, namely \(F_{K,b}\), \(G_{KK^{\prime}}\), and \(W_{jl}\). The effective interaction \(F_{K,b}(n_{i},n_{j})\) between the states \(\hat{\sigma}_{i}^{+}\hat{\sigma}_{j}^{+}\ket{g,\cdots,g,\text{vac}}\) and \(\hat{B}_{K}^{\dagger}\ket{g,\cdots,g,\text{vac}}\) is given by [45; 46; 47] \[F_{K,b}(n_{i},n_{j})=\] \[-\sum_{k}\frac{J}{N\Delta_{k}}\Big{(}\exp(-\imath kan_{i})[M_{b}( k,n_{j},K)]^{*}+\] \[\exp(-\imath kan_{j})[M_{b}(k,n_{i},K)]^{*}\Big{)}. \tag{14}\] The effective interaction \(G_{KK^{\prime}}(\vec{n})\) between the states \(\hat{B}_{K}^{\dagger}\ket{g,\cdots,g,\text{vac}}\) and \(\hat{B}_{K^{\prime}}^{\dagger}\ket{g,\cdots,g,\text{vac}}\) is given by [45; 46; 47] \[G_{KK^{\prime}}(\vec{n})=-\sum_{j=1}^{N_{e}}\sum_{k}\frac{J}{N \Delta_{k}}[M_{b}(k,n_{j},K)]^{*}\times\] \[M_{b}(k,n_{j},K^{\prime}). \tag{15}\] The interactions \(F_{K,b}\) and \(G_{KK^{\prime}}\) have been discussed extensively in the context of the two-qubit system (\(\hat{H}\) with \(N_{e}=2\)) [45; 46; 47]. The effective interaction \(W_{jl}\), in contrast, does not exist for \(N_{e}=2\); it critically depends on having more than two qubits coupled to the cavity array. The functional form of \(W_{jl}\) is given in Eq. (10) of the main text. Equations (12)-(13) correspond to the effective Hamiltonian \(\hat{H}_{\text{adia},0}\), \[\hat{H}_{\text{adia},0}=\hat{H}_{\text{single}}+\frac{g^{2}}{J \sqrt{N}}\sum_{i=1}^{N_{e}}\sum_{j=i+1}^{N_{e}}\sum_{K}\Big{[}F_{K,b}(n_{i},n_ {j})\hat{\sigma}_{i}^{+}\hat{\sigma}_{j}^{+}\hat{B}_{K}+F_{K,b}^{*}(n_{i},n_{j })\hat{\sigma}_{i}^{-}\hat{\sigma}_{j}^{-}\hat{B}_{K}^{\dagger}\Big{]}+\] \[\sum_{K}\Delta_{K}\hat{B}_{K}^{\dagger}\hat{B}_{K}+\frac{g^{2}}{ NJ}\sum_{K}\sum_{K^{\prime}}G_{KK^{\prime}}(\vec{n})\hat{B}_{K}^{\dagger}\hat{B}_{K^{ \prime}}, \tag{16}\] where \(\hat{H}_{\text{single}}\) is given in Eq. (7) of the main text. For \(N_{e}=2\), Refs. [46; 47] found that the effective interaction \(G_{KK^{\prime}}\) plays a non-negligible role only when the transition energy \(2\hbar\omega_{e}\) of two qubits is in or nearly in resonance with the bottom of the two-photon bound state band. Since \(G_{KK^{\prime}}\) plays, in general, a negligible role away from the bottom of the band, it is useful to define the effective Hamiltonian \(\hat{H}_{\text{adia},1}\) by setting \(G_{KK^{\prime}}\) in \(\hat{H}_{\text{adia},0}\) to zero. 
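The two elimination steps used here are instances of the standard projection construction \(\hat{H}_{\rm eff}\simeq\hat{H}_{PP}-\hat{H}_{PQ}\hat{H}_{QQ}^{-1}\hat{H}_{QP}\), valid when the energies in the retained subspace \(P\) are small compared to the detuning of the eliminated subspace \(Q\). The toy example below, with arbitrary made-up matrix elements that are not specific to the waveguide problem, only illustrates why this construction reproduces the low-energy spectrum when the coupling is weak compared to the detuning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: 3 nearly degenerate "qubit" states (P block) weakly coupled to
# 4 far-detuned "photon" states (Q block); all numbers are arbitrary.
n_p, n_q, Delta, g = 3, 4, 10.0, 0.3
H_pp = 0.1 * rng.standard_normal((n_p, n_p))
H_pp = 0.5 * (H_pp + H_pp.T)                      # symmetrize
H_qq = Delta * np.eye(n_q) + 0.1 * np.diag(rng.standard_normal(n_q))
H_pq = g * rng.standard_normal((n_p, n_q))

H_full = np.block([[H_pp, H_pq], [H_pq.T, H_qq]])

# "Adiabatic elimination" of the Q block: effective Hamiltonian acting on P only
H_eff = H_pp - H_pq @ np.linalg.inv(H_qq) @ H_pq.T

print("lowest eigenvalues (full):     ", np.sort(np.linalg.eigvalsh(H_full))[:n_p])
print("lowest eigenvalues (effective):", np.sort(np.linalg.eigvalsh(H_eff)))
```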
The effective Hamiltonians \(\hat{H}_{\text{adia},0}\) and \(\hat{H}_{\text{adia},1}\) live in the \(\Big{(}\frac{N_{e}(N_{e}-1)}{2}+N\Big{)}\)-dimensional Hilbert space that is spanned by the states \(\hat{\sigma}_{i}^{+}\hat{\sigma}_{j}^{+}\ket{g,\cdots,g,\text{vac}}\) and \(\hat{B}_{K}^{\dagger}\ket{g,\cdots,g,\text{vac}}\) with wave vector \(K\). _Adiabatic elimination of the two-photon bound states:_ For the band gap physics considered in this paper, the energy \(2\hbar\omega_{e}\) of two excited qubits is not in resonance with the two-photon bound state band. Consequently, we adiabatically eliminate the two-photon bound states, i.e., we set the left hand side of Eq. (13) to zero. Using this to eliminate \(c_{K,b}(t)\) from Eq. (26), the resulting set of coupled equations--setting \(G_{KK^{\prime}}=0\)--reads \[\imath\hbar\dot{d}_{ij}(t)=\sum_{l=1,l\neq j}^{N_{e}}W_{il}d_{\vec{l}\vec{j}}(t)+\sum_{l=1,l\neq i}^{N_{e}}W_{lj}d_{\vec{l}\vec{i}^{\prime}}(t)+\\ \sum_{l=1}^{N_{e}-1}\sum_{h=l+1}^{N_{e}}Y_{ij,lh}d_{lh}(t). \tag{27}\] Equation (27) corresponds to the effective spin Hamiltonian \(\hat{H}_{\rm spin}\) given in Eq. (8) of the main text, which lives in the \(N_{e}(N_{e}-1)/2\)-dimensional Hilbert space spanned by the qubit states \(\hat{\sigma}_{i}^{+}\hat{\sigma}_{j}^{+}\left|g,\cdots,g\right\rangle\).
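A schematic of how \(\hat{H}_{\rm spin}\) can be set up numerically in this \(N_{e}(N_{e}-1)/2\)-dimensional two-excitation basis is sketched below. The couplings W and Y used here are simple distance-dependent placeholders (they do not reproduce the actual expressions of the main text, and the simplified W ignores the position of the spectator excitation), so the printed numbers are illustrative only; the loop structure, however, mirrors Eq. (27).

```python
import numpy as np
from itertools import combinations

N_e = 8  # small qubit array for illustration

# Placeholder couplings that simply decay with separation (NOT the W and Y of the main text)
def W(a, b):
    return -0.02 * np.exp(-abs(a - b))

def Y(i, j, l, h):
    return -0.01 * np.exp(-(abs(i - l) + abs(j - h)))

basis = list(combinations(range(N_e), 2))   # two-excitation states |i, j> with i < j
dim = len(basis)                            # N_e (N_e - 1) / 2

H = np.zeros((dim, dim))
for n, (i, j) in enumerate(basis):
    for m, (l, h) in enumerate(basis):
        H[n, m] += Y(i, j, l, h)            # pair hopping term
        shared = {i, j} & {l, h}
        if len(shared) == 1:                # constrained single-excitation hopping
            (a,) = {i, j} - shared
            (b,) = {l, h} - shared
            H[n, m] += W(a, b)

energies, states = np.linalg.eigh(H)
ground = states[:, 0]

# Pair correlation P_pair(alpha) of the lowest eigenstate, cf. the definition above
P_pair = np.zeros(N_e)
for n, (i, j) in enumerate(basis):
    P_pair[j - i] += abs(ground[n]) ** 2
print("lowest eigenvalue (arb. units):", round(float(energies[0]), 4))
print("P_pair(alpha = 1 ... N_e-1):", np.round(P_pair[1:], 3))
```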
2310.06776
Influence of additional neutrons on the fusion cross-section beyond the N=8 shell
Fusion enhancement for neutron-rich isotopes of oxygen on carbon nuclei was probed. To measure the fusion cross-section a $^{20}$O beam accelerated to E$_{lab}$/A=2.7 MeV bombarded the active-target detector MuSIC@Indiana with a fill gas of CH$_4$. Examination of the average fusion cross-section over the interval 12 MeV $\leq$E$_{c.m.}$$\leq$ 17 MeV for $^{16-20}$O + $^{12}$C reveals that while even isotopes of oxygen exhibit essentially the same cross-section, the cross-section for odd isotopes can be either enhanced or suppressed relative to the even A members of the isotopic chain. Theoretical models fail to explain the observed experimental results.
S. Hudan, H. Desilets, Rohit Kumar, R. T. deSouza, C. Ciampi, A. Chbihi, K. W. Brown
2023-10-10T16:51:10Z
http://arxiv.org/abs/2310.06776v1
# Influence of additional neutrons on the fusion cross-section beyond the N=8 shell ###### Abstract Fusion enhancement for neutron-rich isotopes of oxygen on carbon nuclei was probed. To measure the fusion cross-section a \({}^{20}\)O beam accelerated to E\({}_{lab}\)/A=2.7 MeV bombarded the active-target detector MuSIC@Indiana with a fill gas of CH\({}_{4}\). Examination of the average fusion cross-section over the interval 12 MeV \(\leq\)E\({}_{c.m.}\)\(\leq\) 17 MeV for \({}^{16-20}\)O + \({}^{12}\)C reveals that while even isotopes of oxygen exhibit essentially the same cross-section, the cross-section for odd isotopes can be either enhanced or suppressed relative to the even A members of the isotopic chain. Theoretical models fail to explain the observed experimental results. pacs: 21.60.Jz, 26.60.Gj, 25.60.Pj, 25.70.Jj The nature of neutron-rich nucleonic matter, both its structure as well as the reactions it undergoes, is of fundamental interest to the fields of both nuclear physics and nuclear astrophysics. Recent discovery of neutron star mergers as a nucleosynthetic site for heavy elements [1] underscores the importance of characterizing the behavior of extremely asymmetric nuclear matter. For reactions involving neutron-rich nuclei the central question is: "Do neutron-rich nuclei exhibit different behavior from more stable nuclei?" For increasingly neutron-rich nuclei approaching the neutron drip-line, one can expect a change based upon the increased importance of coupling to continuum states [2; 3], emergence of new collective modes [4], and changes to pairing/pairing dynamics [5; 6]. These changes can result in an enhancement of the fusion cross-section [7] as well as breakup and transfer [8]. Systematic experimental investigation of these topics for neutron-rich nuclei can provide insight into how nuclear reaction properties evolve with isospin, a central topic of next generation radioactive beam facilities [9; 10; 11]. Reactions with light nuclei present a unique opportunity to examine the character of extremely neutron-rich matter close to the drip-line and explore the impact on fusion, breakup, and transfer. The systematic study of heavy-ion fusion at near-barrier energies for an isotopic chain is a powerful tool to investigate neutron-rich matter at low-density probing the neutron and proton density distributions and how they evolve as the two nuclei approach and overlap [12; 13; 14; 15; 16]. At high energies, measurement of the interaction cross-section revealed that density distributions were spatially extended resulting in the discovery of halo nuclei [17; 18]. Fusion of neutron-rich nuclei at low energies probes both the increase in the spatial extent of the ground-state density distribution, as well as changes in dynamics as the system fuses. The density distribution probed by the fusion cross-section is not just the one-body density distribution but includes the quantal structure effects for the fusing system. The same interplay of attractive and repulsive forces in the two-component nucleonic matter that governs the composition and spatial extent of the low-density tails also impacts the collision dynamics. Comparison of the average fusion cross-section measured in \({}^{12-15}\)C + \({}^{12}\)C [19; 20] with neutron-excess clearly indicates an increased cross-section for \({}^{15}\)C [21]. 
Neither static nor dynamical models are able to describe this unexpected increase in the average fusion cross-section for \({}^{15}\)C, motivating investigation of isotopic chains for other light nuclei. Fusion of neutron-rich oxygen nuclei provides a good opportunity to investigate this topic further and specifically explore the impact of additional neutrons beyond the N=8 shell. Using radioactive beam facilities one can now explore fusion of neutron-rich oxygen beams to A=22 - nearly to the last neutron-bound nucleus \({}^{24}\)O [9; 11]. Closure of the 1p\({}_{1/2}\) shell with N=8 means that all isotopes with A\(>\)16 have their valence neutrons in the _sd_ shell in their ground state. Prior measurements revealed that for \({}^{19}\)O + \({}^{12}\)C a significant increase in the above-barrier fusion cross-section was observed as compared to \({}^{18}\)O [14]. Well above the systematic increase expected, this increase was interpreted as an extended tail of the neutron density distribution as the system fuses [16]. Whether more neutron-rich isotopes of oxygen also manifest an increased cross-section depends on the binding of the valence neutrons and the polarizability of the density distribution. On general grounds, pairing of the valence neutrons at the saddle point can impact this result. The present experiment provides the first measurement of the total fusion excitation function for \({}^{20}\)O + \({}^{12}\)C and compares the average above-barrier cross-section with that of less neutron-rich isotopes. The experiment was conducted using the SPIRAL1 facility at the GANIL accelerator complex in Caen, France. A primary beam of \({}^{22}\)Ne at E/A = 80.3 MeV bombarded a graphite target to produce a beam of \({}^{20}\)O. After acceleration to an energy of E/A = 2.7 MeV by the CIME cyclotron, the \({}^{20}\)O beam was selected in B\(\rho\) by the ALPHA spectrometer and transported to the experimental setup. A schematic of the experimental setup is shown in Fig. 1a. The beam passed first through a thin \(\approx\)35 \(\mu\)g/cm\({}^{2}\) aluminized mylar foil oriented at an angle of 45 degrees to the beam direction. Electrons ejected from the foil by the beam were accelerated and transported by an electromagnetic field [22] onto the surface of a 2D position-sensitive microchannel plate (MCP) detector. The MCP detector provided a continuous measure of the beam position, size, and intensity during the experiment. Following the tilted-foil MCP the beam impinged on the active target detector MuSIC@Indiana at an intensity up to \(\approx\)3\(\times\)10\({}^{4}\) ions/s. The MuSIC approach, in which the detector gas in a transverse-field, Frisch-gridded ionization chamber serves as both target and detection medium, has a couple of intrinsic advantages over thin-target measurements. Use of a MuSIC detector provides a direct energy and angle-integrated measure of the fusion products and allows simultaneous measurement of multiple points on the excitation function [19]. In addition, MuSIC detectors are self-normalizing since the incident beam is detected by the same detector as the reaction products. These advantages make MuSIC detectors a particularly effective means for measuring fusion excitation functions when available beam intensities are low [23; 24]. In this experiment, MuSIC@Indiana [24], operated with a fill gas of CH\({}_{4}\) gas (99.99% purity), was utilized to measure the fusion cross-section. 
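To make the self-normalizing aspect concrete, the cross-section on a given anode follows from the usual thin-target relation \(\sigma=N_{\rm fusion}/(N_{\rm beam}\,n_{\rm C}\,\Delta x)\), with the carbon areal density per anode obtained from the gas pressure via the ideal-gas law. The sketch below uses assumed, purely illustrative numbers (pressure, temperature, counts) and is not the analysis code of Refs. [23; 24].

```python
import numpy as np

K_B = 1.380649e-23   # J/K
TORR = 133.322       # Pa per torr

def carbon_areal_density(pressure_torr, anode_length_cm, temperature_k=293.0):
    """Carbon atoms per cm^2 in one anode segment of CH4 fill gas (one C per molecule)."""
    n_per_cm3 = pressure_torr * TORR / (K_B * temperature_k) * 1e-6
    return n_per_cm3 * anode_length_cm

# Assumed illustrative values: 120 torr fill gas, 1.25 cm anode pitch,
# 1e8 identified beam ions and 450 fusion events tagged on one anode.
n_c = carbon_areal_density(120.0, 1.25)
sigma_cm2 = 450 / (1e8 * n_c)
print(f"carbon areal density per anode: {n_c:.2e} atoms/cm^2")
print(f"sigma ~ {sigma_cm2 / 1e-27:.0f} mb (illustrative numbers only)")
```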
The purity of the gas was independently assessed _in situ_ with a residual gas analyzer (MKS Microvision 2). During the experiment, in order to span the desired region of the excitation function, measurements were conducted at a sequence of pressures between 110 torr and 124 torr. The gas volume is separated from the high vacuum preceding it (\(\approx\) 7\(\times\)10\({}^{-7}\) torr) by a 25 mm diameter, 2.6 \(\mu\)m thick mylar foil. Passage of beam through the detector gas results in ionization characteristic of the incident ions as they traverse the detector. Segmentation of the anode into twenty strips along the beam direction, each 12.5 mm wide, yields a direct measure of this ionization trace. This segmentation of the anode plane means fusion events are associated with discrete locations (and therefore discrete energies) inside the detector. If fusion occurs at a particular location a considerably larger ionization signal is observed due to the larger specific ionization of the fusion product (evaporation residue). To clearly identify that fusion has occurred the evaporation residue is required to stop in the active volume which requires up to six anodes. Determination of the position at which the fusion occurs enables measurement of the fusion excitation function [23; 24]. To ensure a short collection time of the primary ionization produced by an incident ion, MuSIC@Indiana was operated at a reduced electric field of \(\sim\)0.7 kV/cm/atm in the active volume. This field results in an electron drift velocity of \(\sim\)10 cm/\(\mu\)s for CH\({}_{4}\)[25], in principle allowing operation at beam intensities as high as 1\(\times\)10\({}^{5}\) ions/s. Further details on the design, operation, and performance of MuSIC@Indiana have been previously published [24; 26]. For each ion, the first two anodes of MuSIC@Indiana were used to identify the ion using a \(\Delta\)E1-\(\Delta\)E2 measurement and ensure the absence of beam contaminants. Since up to six anodes were used to identify that fusion has occurred, for a fixed incident energy and gas pressure, thirteen points on the fusion excitation function were extracted. A key feature of MuSIC@Indiana is the ability to precisely insert a small silicon surface barrier detector (SBD) into the active volume from downstream, allowing accurate determination of the energy loss of different ions in the gas. The calibrated SBD was precisely inserted into the active gas volume under remote control with an accuracy of \(<\)0.1 mm and used to measure the beam energy at each anode. This measurement was performed for each gas pressure utilized. Calibration with different beams of ions [24] eliminates the sensitivity to energy loss calculations which have uncertainties as large as 15%. Figure 1: Panel a: Schematic of the experimental setup used to measure fusion of \({}^{20}\)O + \({}^{12}\)C. The tilted foil MCP detector, MuSIC@Indiana, the veto detector, and the downstream surface barrier silicon detector (SBD) are indicated. Panel b: Identification of the incident beam utilizing the energy deposit in the first two anodes of MuSIC@Indiana. Readout of MuSIC@Indiana by the data acquisition system (DAQ) was triggered using a fast signal from the MuSIC@Indiana cathodes. The large amplitude of the signal associated with the deposit of several tens of MeV into the active volume made triggering the detector simple. For the majority of the \({}^{20}\)O data, MuSIC@Indiana was operated in self-triggering mode. In this mode the data acquired is self-normalizing. 
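One simplified way to tag a fusion event in a recorded trace and to convert the anode index into a centre-of-mass energy is sketched below. The threshold criterion, the synthetic trace, and all numerical values are our own illustrative choices; the actual trace analysis of Refs. [23; 24] is more involved (it also uses the left-right anode segmentation and explicit rejection of capture and two-body events).

```python
import numpy as np

N_ANODES = 20
rng = np.random.default_rng(3)

# Illustrative stand-ins (not measured values): mean beam deposit per anode, its
# spread, and the beam energy at each anode midpoint from the SBD calibration.
beam_mean = np.linspace(2.2, 3.0, N_ANODES)                    # MeV deposited per anode
beam_std = 0.08 * beam_mean
e_beam_mid = 54.0 - np.cumsum(beam_mean) + beam_mean / 2.0     # MeV at anode midpoints

def fusion_anode(trace, n_sigma=5.0):
    """Index of the first anode whose deposit exceeds the beam band, else None."""
    above = trace > beam_mean + n_sigma * beam_std
    return int(np.argmax(above)) if above.any() else None

# Synthetic trace: beam-like up to anode 6, then a large evaporation-residue signal
trace = beam_mean + rng.normal(0.0, beam_std)
trace[7:13] = [9.0, 7.5, 5.0, 3.0, 1.5, 0.3]
trace[13:] = 0.0

idx = fusion_anode(trace)
if idx is not None:
    mass_ratio = 12.0 / (12.0 + 20.0)   # E_c.m. = E_lab * m_target / (m_target + m_projectile)
    print(f"fusion tagged in anode {idx}: E_lab ~ {e_beam_mid[idx]:.1f} MeV, "
          f"E_c.m. ~ {mass_ratio * e_beam_mid[idx]:.1f} MeV")
```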
For a small fraction of the data, in order to improve the data acquisition readout live-time (typically \(\sim\)70% for 1\(\times\)10\({}^{4}\) ions/s in self-triggering mode), a small veto detector was inserted into the beam path downstream of the active volume of MuSIC@Indiana but within the same gas volume. This axial-field ionization chamber consisted of three \(\sim\)35 \(\mu\)g/cm\({}^{2}\) aluminized mylar foils spaced by 6 mm with a central anode. It provided a fast veto signal for the DAQ from un-reacted beam that exited the active volume of MuSIC@Indiana. Use of the veto allowed operation of the MuSIC@Indiana at a rate of 3\(\times\)10\({}^{4}\) ions/s, three times faster than without the veto detector, with a live time of \(\approx\)70%. When the veto detector was employed, downscaled beam was acquired to provide normalization. The fusion cross-section was calculated using the gated \(\Delta\)E1-\(\Delta\)E2 spectrum and the downscale factor. Comparison of the fusion cross-section acquired with and without the veto detector confirmed that the measured cross-section was independent of its use. Presented in Fig. 1b is the \(\Delta\)E\({}_{A0}\)-\(\Delta\)E\({}_{A1}\) spectrum of MuSIC@Indiana used to identify an incoming beam particle. Clearly evident is a single peak for \({}^{20}\)O without discernible contaminant peaks. A locus of points with a slope of approximately unity corresponds to pileup events of two beam particles in MuSIC@Indiana. Near horizontal and vertical bands correspond to pileup events associated with scattering from the mylar window as the beam enters the active volume. All analyzed events are tagged by requiring the identification condition indicated by the gate shown in Fig. 1b. This selection provides a count of the cleanly identified \({}^{20}\)O ions incident on the detector. Presented in Fig. 2 are representative ionization traces measured in MuSIC@Indiana. The black line in all panels represents the average energy deposit of an \({}^{20}\)O ion as it traverses the detector. Error bars indicate the standard deviation of the energy deposit in an anode. Indicated in panel a) is the ionization trace for a fusion event occurring in anode 5. Fusion of the \({}^{20}\)O with \({}^{12}\)C produces a \({}^{32}\)Si with an excitation energy of \(\approx\)30-50 MeV. De-excitation of this compound nucleus via emission of neutrons, protons, and \(\alpha\) particles results in an evaporation residue which, due principally to its larger atomic number as compared to the beam, manifests a larger ionization signal. Prior to fusion the incident \({}^{20}\)O ion loses energy as it traverses the gas. Knowledge of the position at which the fusion occurs thus allows determination of the energy at which the incident ion fuses. Fusion events are assumed to occur in the middle of the anode in which they are detected. This simple assumption is reasonable as the beam does not stop in MuSIC@Indiana [26]. Accurate determination of the fusion cross-section relies on the isolation of fusion events by their characteristic traces from competing processes. In addition to the veto detector unreacted beam is rejected by the presence of an appreciable signal (E\({}_{max}\)\(>\) 1 MeV) in the most downstream anode of MuSIC@Indiana. Following beam rejection, the principal contributions, aside from fusion, are from proton capture events or two-body (elastic) scattering. A representative trace for proton capture from the mylar window or early in the CH\({}_{4}\) gas is depicted in Fig. 2b. 
Clearly evident is the increased ionization due to the increased atomic number from \({}^{20}\)O to \({}^{21}\)F. Portrayed in Fig. 2c is a representative trace of a two-body event. In this event the incident ion is scattered from a \({}^{12}\)C nucleus, retaining most of its energy. The back-scattered \({}^{12}\)C introduces a large energy deposit into anodes 6 and 7 while the forward-scattered \({}^{20}\)O provides beam-like ionization in the subsequent section of the detector. Two-body events are further distinguished by utilizing the left-right segmentation of the anodes [23; 24]. Figure 2: Representative traces from MuSIC@Indiana for different types of events. Indicated as the black line in all panels is the average trace for the \({}^{20}\)O beam. Error bars indicate the standard deviation of the energy distribution for the beam on each anode. a) fusion in anode 4 b) Proton capture on \({}^{1}\)H early in MuSIC@Indiana and c) a two-body event at anode 4. To validate the capability of the present setup to accurately measure the fusion cross-section, the excitation function for \({}^{18}\)O + \({}^{12}\)C was measured with \({}^{18}\)O ions incident at E/A = 3.3 MeV. The resulting excitation function is displayed in Fig. 3a. Relatively good agreement of the present measurement (closed symbols) with the previously measured data is observed, confirming the ability of the present experimental setup to accurately measure the fusion excitation function. When both vertical and horizontal error bars are considered, only two points, at E\({}_{c.m.}\)\(\sim\)13.1 and 14.4 MeV, deviate significantly from the previously published data. The deviation of these points is not presently understood. Shown in Fig. 3b is the first measurement of the fusion excitation function for \({}^{20}\)O + \({}^{12}\)C in the energy interval 10.6 MeV \(\leq\)E\({}_{c.m.}\)\(\leq\)17.5 MeV. As expected, the excitation function manifests a general decrease with decreasing energy, consistent with a barrier-controlled process. A sharp drop in cross-section is observed at E\({}_{c.m.}\)\(\approx\) 13 MeV which might signal a transition in the potential associated with individual _l_-waves [30; 31; 32] or the presence of a resonance. To examine the dependence of the fusion cross-section as a function of neutron excess we calculate the average fusion cross-section in the interval 12 MeV \(\leq\)E\({}_{c.m.}\)\(\leq\)17 MeV. The choice of this energy interval is dictated by the region for which the fusion excitation function is measured, with the high energy bound dictated by the \({}^{20}\)O data. For this reason, the energy interval used is slightly smaller than in prior work [16; 35]. In Fig. 4 the present experimental results are compared with the prior data for \({}^{16-19}\)O + \({}^{12}\)C [16]. Surprisingly, the \({}^{20}\)O data does not manifest the large increase previously observed for \({}^{19}\)O but exhibits a cross-section that is essentially the same as that of \({}^{18}\)O. The experimental data shown indicates that, based on the average fusion cross-section, different categories emerge. One category is the case of the even neutron number (spin-paired) nuclei \({}^{16,18,20}\)O which all exhibit essentially the same cross-section. The other category is that of the odd neutron number (spin-unpaired) nuclei \({}^{17}\)O and \({}^{19}\)O. In this latter category \({}^{17}\)O manifests a suppression of the fusion cross-section relative to the spin-paired nuclei while \({}^{19}\)O exhibits an enhancement. 
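The energy-averaged cross-section used for this comparison, \(<\sigma_{F}>=\int\sigma_{F}(E)\,dE/\Delta E\) over 12 MeV \(\leq\)E\({}_{c.m.}\)\(\leq\) 17 MeV, can be evaluated from a measured excitation function by simple interpolation. The sketch below illustrates the procedure with placeholder (not measured) data points.

```python
import numpy as np

def average_cross_section(e_points, sigma_points, e_lo=12.0, e_hi=17.0, n=1001):
    """Interval-averaged cross-section from tabulated (E_c.m., sigma) points."""
    grid = np.linspace(e_lo, e_hi, n)
    sigma = np.interp(grid, e_points, sigma_points)  # linear interpolation between points
    return sigma.mean()                              # mean on a dense uniform grid = interval average

# Placeholder excitation-function points (MeV, mb); NOT the measured data
e_points = np.array([11.0, 12.5, 13.5, 14.5, 15.5, 16.5, 17.5])
sigma_points = np.array([600.0, 750.0, 800.0, 900.0, 950.0, 1000.0, 1020.0])
print(f"<sigma_F> over 12-17 MeV ~ {average_cross_section(e_points, sigma_points):.0f} mb")
```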
Interpretation of the data is aided by comparison with predictions of three theoretical models. A simple approach to describe the fusion cross-section is the Sao Paulo model which involves a double-folding of the ground-state density distribution of the colliding nuclei [39]. The frozen density distributions were calculated within a relativistic mean field (RMF) approach using the FSUGOLD interaction [40; 41] and the resulting average fusion cross-section is presented as the solid line (RMF-SP). It provides a completely static reference for the change in the fusion cross-section with increasing neutron excess by only accounting for the increased size of the colliding nuclei. To address the impact of dynamics we also performed coupled channels calculations which consider fusion dynamics through coupling to excited states due to mutual Coulomb excitation of the colliding nuclei. Calculations with this coupled-channels approach have been relatively successful at describing the fusion of nuclei at and near stability [42]. Predictions of the coupled channels model, CC, are depicted as the dashed lines in both Fig. 3 and Fig. 4. As evident in Fig. 3, the CC calculations are in good agreement with the experimental data for both \({}^{18}\)O and \({}^{20}\)O. For the lowest energies measured E\(\leq\)13 MeV the CC calculations slightly overpredict the measured data. In the present CC calculations only the first excited state is included. Comparison of the cross-section with and without the first excited state led us to ignore the contribution of higher-lying states which contribute progressively less to the fusion cross-section. For both the RMF-SP and CC calculations the dependence of the average fusion on neutron excess is similar. To examine the role of dynamics within a different framework we also performed density constrained time-dependent Hartree-Fock (DC-TDHF) calculations. Depicted in Fig. 3 as the dotted line, one observes that the DC-TDHF cross-sections are systematically higher than the experimental data although the shape of the excitation function is reasonably reproduced. In contrast to the RMF-SP and CC models the DC-TDHF predicts a significantly larger average fusion cross-section indicative of the difference in the initial neutron and proton density distributions and to a lesser extent the dynamics in the DC-TDHF model [16]. With the reference provided by the RMF-SP and CC calculations we note that the cross-section for \({}^{16}\)O and \({}^{18}\)O is reasonably well described while the slight decrease experimentally observed from \({}^{18}\)O to \({}^{20}\)O is less than the increase predicted by the RMF-SP model, which reflects the change in the static size. The fusion cross-section for \({}^{17}\)O and \({}^{19}\)O with an unpaired neutron deviates significantly from the RMF-SP and CC model predictions. Figure 3: Panel a) Comparison of the excitation function for \({}^{18}\)O + \({}^{12}\)C measured in the present experiment (solid symbols) along with prior measurements (open symbols) [26; 27; 28; 29; 13; 26]. Panel b) Fusion excitation function for \({}^{20}\)O + \({}^{12}\)C. Figure 4: Dependence of the average cross-section on neutron excess. Error bars reflect both the statistical and systematic uncertainties in the measurements. Data are taken from: \({}^{16}\)O [27; 28; 33; 34]; \({}^{17}\)O [35; 36; 37; 38; 27]; \({}^{18}\)O [26; 27; 28; 13; 26]; and \({}^{19}\)O [16]. 
It is particularly noteworthy that this deviation is in opposite directions for the two nuclei with a suppression evident for \({}^{17}\)O and an enhancement observed for \({}^{19}\)O, even though the one neutron separation energies of 4143 keV and 3956 keV for \({}^{17}\)O and \({}^{19}\)O respectively are essentially the same. The DC-TDHF model, which includes dynamics, provides a significantly larger cross-section than the CC calculations. While it is in better agreement with the \({}^{19}\)O cross-section it significantly over-predicts both the neutron-paired cases of \({}^{16,18,20}\)O as well as unpaired neutron case of \({}^{17}\)O. These results for both the spin-paired and spin-unpaired cases suggest that for this isotopic chain the near-barrier fusion cross-sections cannot be understood within the framework of a simple barrier which systematically evolves. Given the relatively strong binding of the neutrons in \({}^{20}\)O it is unlikely that neutron breakup is on average responsible for this near constancy of the average cross-section for \({}^{20}\)O as compared to \({}^{18}\)O. Rather, it is likely that the barrier/barrier distributions for the different nuclei do not follow a simple systematic trend. If the barrier/barrier distributions change significantly from one nucleus to another, reflecting the influence of initial structure and pairing on fusion, then calculation of an integrated cross-section over a common interval could introduce the observed behavior. Direct extraction of barrier distributions, possibly through the double derivative approach [43], requires higher resolution data than is presently available, motivating future experiments. Use of an active target technique allowed an effective first measurement of the fusion excitation function for \({}^{20}\)O + \({}^{12}\)C at near-barrier energies. A large increase of the average fusion cross-section previously observed for \({}^{19}\)O is not observed for \({}^{20}\)O. Rather, over the interval 12 MeV \(\leq\)E\({}_{c.m.}\)\(\leq\)17 MeV, the average fusion cross-section, \(<\)\(\sigma_{F}\)\(>\)\({}^{17}_{12}\), for \({}^{20}\)O is slightly decreased as compared to \({}^{18}\)O. In marked contrast to the even neutron number (N) cases, nuclei with an odd number of neutrons exhibit significantly different behavior. The deviation of the odd-N as compared to even-N nuclei could reflect differences in pairing/pairing dynamics or the initial shell structure that persist through fusion. Both static and dynamic theoretical models are unable to describe the behavior of the average fusion cross-section with neutron-excess for the entire isotopic chain. Future high-resolution measurements of the fusion excitation function for this isotopic chain with both paired and unpaired neutrons are critical in understanding fusion for neutron-rich light nuclei and investigating the role of initial structure and pairing on fusion. ###### Acknowledgements. We acknowledge the high quality beam and experimental support provided by the technical and scientific staff at the Grand Accelerateur National d'Ions Lourds (GANIL) that made this experiment possible. In particular we appreciate the assistance of D. Allal, B. Jacquot, and D. Gruyer. Scientific discussions with N. Alahari are gratefully acknowledged. We are thankful for the high-quality services of the Mechanical Instrument Services and Electronic Instrument Services facilities at Indiana University. This work was supported by the U.S. 
Department of Energy Office of Science under Grant No. DE-FG02-88ER-40404 and Indiana University. The research leading to these results has received funding from the European Union's HORIZON EUROPE Program under grant agreement No. 101057511. R. deSouza gratefully acknowledges the support of the GANIL Visiting Scientist Program. Kyle Brown acknowledges support from Michigan State University.
2302.09654
The Half-Life of a Tweet
Twitter has started to share an impression_count variable as part of the available public metrics for every Tweet collected with Twitter's APIs. With the information about how often a particular Tweet has been shown to Twitter users at the time of data collection, we can learn important insights about the dissemination process of a Tweet by measuring its impression count repeatedly over time. With our preliminary analysis, we can show that on average the peak of impressions per second is 72 seconds after a Tweet was sent and that after 24 hours, no relevant number of impressions can be observed for ~95% of all Tweets. Finally, we estimate that the median half-life of a Tweet, i.e. the time it takes before half of all impressions are created, is about 80 minutes.
Juergen Pfeffer, Daniel Matter, Anahit Sargsyan
2023-02-19T18:48:15Z
http://arxiv.org/abs/2302.09654v2
# The Half-Life of a Tweet ###### Abstract Twitter has started to share an _impression_count_ variable as part of the available public metrics for every Tweet collected with Twitter's APIs. With the information about how often a particular Tweet has been shown to Twitter users at the time of data collection, we can learn important insights about the dissemination process of a Tweet by measuring its impression count repeatedly over time. With our preliminary1 analysis, we can show that on average the peak of impressions per second is 72 seconds after a Tweet was sent and that after 24 hours, no relevant number of impressions can be observed for \(\sim\)95\(\%\) of all Tweets. Finally, we estimate that the median half-life of a Tweet, i.e. the time it takes before half of all impressions are created, is about 80 minutes. Footnote 1: Data collection and analysis of this short paper are limited, due to the fact that this new feature was released just 10 days before the ICWSM 2023 submission deadline. ## Introduction The idea that information can lose its value over time has long been studied in library science and bibliometrics [1, 1]. A very important metric to assess this value loss is information _half-life_, which describes the time span in which half of the information value is lost. The information value of books can be measured with the number of times a particular book is borrowed from a library, and one way to characterize the value of a scientific article is the number of times an article is cited. Modeling these temporal observations allows us to model decay functions and estimate the time point of 50% under the curve. In the context of scientific literature, half-life periods are typically on the time level of years. When we turn to news articles, the half-life in terms of stories published in relation to a certain topic or event comes down to several days. With the advent of the 24-hour news cycle and the rise of social media, the information value of news has suffered an even faster decay [1]. On Twitter, presenting the number of likes and re-Tweets for every Tweet has been an integral part of the platform since its beginning and has been used in order to discuss a wide variety of scores for popularity and to approximate the reach and life span of a Tweet [1, 2]. So far, the actual number of how many people have seen a Tweet was only available for a user's own Tweets. Starting December 15, 2022, Twitter has been making the number of _views_--which is the name of the _impression_count_ in the platform's GUIs--visible for every Tweet via its web interface as well as via mobile APPs: "View counts show the total number of times a Tweet has been viewed. With view counts, you can easily see the reach..."2 On January 5, 2023, it was publicly announced3 that the impression count will now also be available via Twitter's API v2 for every Tweet as part of the public metric information. Footnote 3: [https://twitter.com/suhemparack/status/1611085481395224576](https://twitter.com/suhemparack/status/1611085481395224576) Questions and contributions.The availability of this feature in the API data has motivated our study. 
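For illustration, the impression count of a batch of Tweets can be read through the v2 tweet-lookup endpoint by requesting the public_metrics tweet field, as sketched below. The bearer token and Tweet IDs are placeholders, and the exact response fields follow Twitter's API documentation at the time of writing.

```python
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"            # placeholder
TWEET_IDS = ["1234567890123456789"]           # placeholder Tweet IDs

def fetch_impression_counts(tweet_ids):
    """Return {tweet_id: impression_count} via the Twitter API v2 tweet lookup."""
    response = requests.get(
        "https://api.twitter.com/2/tweets",
        headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
        params={"ids": ",".join(tweet_ids), "tweet.fields": "public_metrics"},
        timeout=30,
    )
    response.raise_for_status()
    return {tweet["id"]: tweet["public_metrics"].get("impression_count")
            for tweet in response.json().get("data", [])}

print(fetch_impression_counts(TWEET_IDS))
```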
We utilize the Academic API [2], which is free for research purposes and allows for full-archive searches on Twitter, We try to answer the following questions, which also enumerate the contributions of this article: _How can we observe the diffusion dynamics of a Tweet in terms of reach over time?_ We will illustrate how the Academic API can be used to repeatedly collect information about the same Tweets in order to create a time series dataset of impressions. _What are the properties of the short-term temporal impression distribution, i.e., how many impressions happen when and when is the peak during the early phase?_ We show how to use the time series dataset to interpolate an average diffusion curve of impressions on a second timescale. _Can we show evidence that the diffusion process of Tweets comes to a relatively early stop so that we have sufficient overall impression counts in order to identify the half-life time points?_ We can show that the impression expansion slows down dramatically or even comes to a complete stop for the vast majority of Tweets very quickly so that we can focus our analyses on the first 24 hours of a Tweet's life. _Finally, can we determine the average half-life of a Tweet?_ We will show that, by ignoring a small number of very successful and long-lasting Tweets, we are able to define the median half-life of a Tweet with 79.5 minutes. Related Work Information half-life of scientific literature.Information half-life, i.e., the time it takes until an entity of information has lost half of its value, has been studied for decades in the context of scientific articles [1] and books in libraries [1]. Historically, half-lives ranging from 3 to 12 years have been observed, with longer half-lives in theoretical sciences [13]. The processes of discovery of new knowledge and the accumulation of existing knowledge underlying the citation process result in the observed half-life phenomenon. Publication delays [1] and forgetting knowledge [22] account for some differences in half-life across disciplines. When the half-life of academic material is modeled mathematically, exponential decay functions are used to describe the dynamics [1, 10]. Half-life of news media.For news stories, the journalistic production cycles have information decay built into the system as a way to keep readership, viewership [12], and revenues [11]. For a specific event that is covered in the news, the half-life is measured as the time until half of the corresponding articles appear. While there are nuanced differences in half-life patterns of media coverage caused by various forms of online and offline external factors [13], the analysis of the dynamics of coverage in printed news outlets reveals a faster decay in light of the emergence of social media [1]. Information decay in social mediaThe success of posts on many social media platforms is dependent on shares or views. In the case of Twitter, there are two main approaches to quantify the popularity of a tweet: utilizing the number of retweets, and the audience size, i.e., the number of users who had the tweet in their feed. In the past, one way to calculate the audience size was to use the number of followers for each person who retweets a post. Despite the advantages of potential audience size and of approximation techniques for audience size [10], the number of retweets, likes and comments have been used in numerous studies to quantify and predict the reach [1] and the lifespan of a tweet [15, 16]. 
The studies range from analyzing the effect of multimedia on tweet popularity [17, 18], the success of personification of brands on Twitter [1] to using social media engagement to not only improve predictions of the traffic flow of the news articles, but also to estimate the shelf-life, a variation of half-life, of the articles [12]. With the new _impression_count_ variable in Twitter's API data, we are--for the first time--able to directly get information about the reach of a certain Tweet. ## Data With Twitter's Academic API v2 [13], we have collected 22,144 Tweets on January 6, 2023, as well as the number of views of these tweets in the following way. During the time 9:00-20:00 UTC, we randomly selected ten individual minutes and collected all Tweets (excluding re-Tweets) from the \(42^{nd}\) second of these minutes, as described in Pfeffer et al. (2023). For every second of data (on average 2,214 Tweets), we first started to collect the Tweets exactly 10 seconds after the expiration of the second of interest. After this collection process had finished, we immediately restarted it and collected the same set of Tweets again. We have repeated this collection effort 99 times for every observed second of Twitter data. Since every single API call is limited to a maximum of 500 Tweets, several calls (happening at different timestamps) are necessary for data collection. Consequently, we have stored the exact time of data collection for every API call. For the following analyses, we kept 21,685 Tweets that were available (i.e., not deleted or hidden) in all 99 collection attempts. The time series of the Tweet views were, on average, collected over 1,893 seconds (\(\sim\)\(31.5\) minutes). While this dataset is sufficient for the majority of the statistical analysis of this article, we do not expect Tweet half-lives to be under half an hour. Consequently, we also collected a second dataset to get a longer period of view data. We performed the following second data collection similar to the approach described above. However, this time we have collected impression counts of about 5,000 Tweets over the course of eleven hours for 1,000 times, as well as the view counts of these Tweets after 24 hours. ## Analyses Number of Views.The time series of the Tweet views were collected on average over 1,893 seconds (\(\sim\)\(31.5\) minutes) after the Tweets were sent. During this time, the Tweets accumulated, on average, 46.2 views (range 0-43,870). 15.6% of Tweets had zero views. Fig. 1 plots the \(log_{10}\)-distribution of the number of views. Due to the nature of the long-tailed distribution with a small number of Tweets with a very large number of views (about 0.5% having more than 1,000 views), the median of 7 will better represent the view distribution. Diffusion patterns.In Fig. 2, we have plotted the views-over-time curves for all 2,723 Tweets of our sample that received more than 50 impressions by the end of the data collection. Looking more closely at our data, it becomes obvious that we can observe two different diffusion dynamics. Figure 1: Distribution of number of views after \(\sim\)\(30\) Minutes The _sigmoid_ type (eq. 1) represents Tweets that reach their maximum potential for impressions very fast and quickly saturate. Without further analysis, we can assume that these Tweets remain within their local areas of the network and they receive few or no retweets. \[T_{a,b}^{sigmoid}(t)=\frac{1}{1+(b*exp(-a*x))}-\frac{1}{1+b} \tag{1}\] Fig. 
4 (which we will discuss later), appears to imply that new views are distributed according to \(\sim\)\(t^{-1}\), implying the cumulative view count to follow a _log_-curve, which can be described as: \[T_{a,b}^{log}(t)=b*\frac{log(a*x+1)}{log(a+1)} \tag{2}\] Fig. 3 is based on the same data as Fig. 2. Here, every time series was fit with both model types and drawn with the better fitting (as measured by the MSE every ten seconds) function. The curves are then colored red when the _sigmoid_-model (eq. 1) was used and blue for the _log_-model (eq. 2). The _sigmoid_-model performs better if we allow estimation over 1.0, which makes sense when imagining the future development of the curves. Identifying the diffusion type of a Tweet at an early stage can be helpful in predicting its future view count development. For our data, categorizing Tweets as _log_- or _sigmoid_-types improves the prediction of how many views they received after 24 hours significantly. _Log_-tweets receive, on average, 29% more views after 24 hours, with a significance of \(\alpha<1\%\). We also compared the view counts measured \(\sim\)20 minutes after a Tweet was sent with the numbers after 24 hours. Here, the median increase in view count is a factor of 3.75. Half-life of Tweets.Since the view counts from 20 minutes to 24 hours changed by a factor of 3.75, we cannot observe 50% of views within this dataset, and this data is not sufficient for empirically measuring the half-life of a Tweet. Consequently, we turned to the second dataset, which includes 1,000 data collections of about 5,000 Tweets over the course of 11 hours. Consistent with the previous data collection, 8.5% of Tweets had zero views after 24 hours. For the remaining Tweets, we have evaluated how long it took for every Tweet to reach 50% of the 24h view numbers. In less than 4% of Tweets, this was not possible, i.e., the Tweets reached the 50% level after the first eleven hours, confirming our previous observation that view counts reduce quickly over time for the vast majority of Tweets. Fig. 5 illustrates the distribution of half-lives in our second dataset. The right-skewed distribution has an arithmetic mean of \(131.6\) minutes (dashed line) and a median of 79.5 minutes (dotted line) with the following quantiles: \begin{tabular}{l r r r r r} Quantile & 10\% & 25\% & 50\% & 75\% & 90\% \\ Half-Life & 7.2 & 26.3 & 79.5 & 175.5 & 342.1 \\ \end{tabular} ## Outlook Future research questions.The most obvious future research questions are related to identifying the factors that drive view counts and half-life. At every data collection, we also get the number of re-tweets and likes at the moment of data collection. Mathematically modeling and studying the temporal interplay of these time series with the number of views is a topic for a separate paper. Other features that are available via the Twitter API are the number of followers of the tweet senders, the tweet content, and possibly connected images and websites, to name just a few. We are aware that there are Tweets that go viral days or even months after they were sent. We did not account for these dynamics. However, long-term phenomena could be studied with our approach of repeatedly collecting information about the same set of Tweets (e.g., once a day). Finally, studying human behavior with social media data always comes with challenges related to biases and data quality [21]. 
The addition of the impression count to the list of variables, which researchers can get from API calls, will open up great new research opportunities to study popular users and content as well as more nuanced diffusion processes. At the same time, research also has to focus on revealing technical details and possible artifacts of view counts and, more broadly, Twitter metrics. Secrets.One surprising observation of this study was that a significant proportion of Tweets do not get any views. Are these Tweets getting banned, but not deleted? This and many other questions are related to the fact that social media platforms, including Twitter, are secretive about their algorithms and data handling. Besides investigating platform dynamics to improve research quality, we also need to hold the platforms accountable whenever possible to increase transparency about data handling and algorithmic content filtering. ## Research Ethics and Reproducibility In this study, we used only publicly available data from Twitter and only utilized Twitter's own APIs to collect data. We did not send any Tweets and did not interact with other Twitter accounts. Our only variables extracted from the Twitter data were Tweet IDs, timestamps of when the Tweets were created, and the impression count, which is part of the public metric variable. No Tweet texts, account profile information, or other information that could identify individuals or groups (PII) were analyzed. Reproducibility.All data from the analyses of this article are available online (_www.pfeffer.at/data/halflife_). The data includes all Tweet IDs, Tweet creation time, and for each collection iteration for every Tweet, its collection time, and the number of views. Since the views are a function of when the Tweets are collected, we have expanded the JSON response data from the Twitter API that is stored in files with the exact time of every API query. ## Acknowledgments J.P. wants to thank Dr. Kathleen M. Carley and Dr. Larry Richard Carley for discussing historical literature related to the topic as well as the potential mathematical operationalization of information half-life. Figure 4: Average views per second within the first \(\sim\)\(20\) min. Figure 5: Distribution of half-life values with mean = 131.6 min. (dashed line) and median = 79.5 min. (dotted line).
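As a schematic illustration of the analysis described above, the sketch below fits both cumulative-view models (Eqs. 1 and 2) to a single impression time series, selects the better one by MSE, and reads off the half-life as the first time at which 50% of the 24-hour view count is reached. The time series is synthetic, and the starting values, bounds, and sampling grid are our own choices rather than those used for the paper's results.

```python
import numpy as np
from scipy.optimize import curve_fit

def views_sigmoid(t, a, b):
    # Eq. (1): normalized cumulative views for sigmoid-type diffusion
    return 1.0 / (1.0 + b * np.exp(-a * t)) - 1.0 / (1.0 + b)

def views_log(t, a, b):
    # Eq. (2): normalized cumulative views for log-type diffusion
    return b * np.log(a * t + 1.0) / np.log(a + 1.0)

rng = np.random.default_rng(7)

# Synthetic cumulative impression count, one sample per minute over 24 hours
t_min = np.arange(0, 24 * 60 + 1, 1.0)
cum_views = 400.0 * np.log(0.05 * t_min + 1.0) / np.log(0.05 * t_min[-1] + 1.0)
cum_views = np.maximum.accumulate(cum_views + rng.normal(0.0, 2.0, t_min.size))

# Classify the early phase (first 30 minutes) as sigmoid- or log-type by MSE
t_fit = t_min[:31]
y_fit = cum_views[:31] / max(cum_views[30], 1.0)
mse = {}
for name, model, p0 in [("sigmoid", views_sigmoid, (0.2, 20.0)),
                        ("log", views_log, (0.5, 1.0))]:
    popt, _ = curve_fit(model, t_fit, y_fit, p0=p0, bounds=(1e-6, np.inf))
    mse[name] = float(np.mean((model(t_fit, *popt) - y_fit) ** 2))
print("better-fitting model:", min(mse, key=mse.get), mse)

# Half-life: first time the cumulative count reaches 50% of its 24-hour value
half_life = t_min[np.argmax(cum_views >= 0.5 * cum_views[-1])]
print("half-life of this synthetic Tweet:", half_life, "minutes")
```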
2301.10655
Excited-State Phase Diagram of a Ferromagnetic Quantum Gas
The ground-state phases of a quantum many-body system are characterized by an order parameter, which changes abruptly at quantum phase transitions when an external control parameter is varied. Interestingly, these concepts may be extended to excited states, for which it is possible to define equivalent excited-state quantum phase transitions. However, the experimental mapping of a phase diagram of excited quantum states has not yet been demonstrated. Here we present the experimental determination of the excited-state phase diagram of an atomic ferromagnetic quantum gas, where, crucially, the excitation energy is one of the control parameters. The obtained phase diagram exemplifies how the extensive Hilbert space of quantum many-body systems can be structured by the measurement of well-defined order parameters.
Bernd Meyer-Hoppe, Fabian Anders, Polina Feldmann, Luis Santos, Carsten Klempt
2023-01-25T15:57:17Z
http://arxiv.org/abs/2301.10655v3
# Excited-state phase diagram of a ferromagnetic quantum gas ###### Abstract The ground-state phases of a quantum many-body system are characterized by an order parameter, which changes abruptly at quantum phase transitions when an external control parameter is varied. Interestingly, these concepts may be extended to excited states, for which it is possible to define equivalent excited-state quantum phase transitions. However, the experimental mapping of a phase diagram of excited quantum states has not yet been demonstrated. Here we present the experimental determination of the excited-state phase diagram of an atomic ferromagnetic quantum gas, where, crucially, the excitation energy is one of the control parameters. The obtained phase diagram exemplifies how the extensive Hilbert state of quantum many-body systems can be structured by the measurement of well-defined order parameters. ## I Introduction Last century's experimental advancement to cool quantum systems close to absolute zero temperature, where thermal fluctuations are frozen out, has led to a revolution in the understanding of quantum phases and phase transitions [1]. Nowadays, quantum systems may be shielded from the surrounding environment to prevent thermalization. As a result, it is possible to investigate the properties and dynamics of quantum systems prepared at a non-zero energy. Major theoretical efforts, followed by experimental demonstrations, have focused on extending the concept of quantum phases and phase transitions to the realm of finite-energy systems. This includes dynamical systems, where phase transitions can be associated with a sudden change of the long-term average of observables [2; 3; 4; 5; 6; 7]. The dynamics can also result in a sudden change of oscillatory behaviour, termed time crystal [8; 9], and can show universal features [10; 11; 12]. Furthermore, in open quantum systems, the interplay between driving and dissipation may lead to dissipative phase transitions, characterized by a sudden change of the steady-state properties [13]. Interestingly, also excited eigenstates of isolated quantum systems may exhibit differentiated quantum phases, so-called excited-state phases [14], separated by excited-state quantum phase transitions (ESQPTs). The presence of ESQPTs, with their characteristic divergence of the density of states, has been experimentally revealed in microwave Dirac billiards [15] and molecular spectra [16; 17; 18]. In contrast, despite some theoretical proposals that have identified possible order parameters in some scenarios [19; 20; 21; 22; 23], the direct experimental exploration of an excited-state phase diagram by measuring appropriate order parameters has remained up to now elusive. In our experiments, we explore excited-state phases and ESQPTs employing an atomic spinor Bose-Einstein condensate (sBEC). Our ferromagnetic rubidium \(F=1\) sBEC presents three distinct ground-state phases. The transitions between them may be driven by the variation of the quadratic Zeeman energy \(q\), which acts as the control parameter [24; 25]. Recent experiments, performed with a sodium sBEC [26], have revealed a modification of the spin dynamics closely associated to an ESQPT, but were limited to changing \(q\) of the highest-energy level. In this article, we map out the excited-state phase diagram of our ferromagnetic sBEC. 
In contrast to ground-state transitions [27; 28], or the equivalent transition in the highest-energy state [26], general ESQPTs can be crossed not only by varying \(q\), but also as a function of the excitation energy, which acts as a second control parameter. This is achieved in our experiments by creating coherent-state spin superpositions with an energy that can be adjusted by both the population of the spin states and their relative phase. In this way, we are able to reach any point of the excited-state phase diagram. Furthermore, for each point, we measure an interferometric order parameter, which is a variant of that theoretically proposed in Ref. [22], but is robust with respect to magnetic field noise. With the help of this order parameter, we characterize the complete excited-state phase diagram as a function of the two control parameters, identifying three distinct excited-state quantum phases. ## II Quantum phases in spinor Bose-Einstein condensates We initially prepare an sBEC of \(7\times 10^{4}\) rubidium atoms in the hyperfine state \(|F,m\rangle=|1,0\rangle\) in a crossed-beam optical dipole trap, where the atomic spin can freely evolve. Because of its small size, the system is well described in the single-mode approximation. The spin dynamics, characterized by the creation and annihilation of pairs of atoms in \(|1,\pm 1\rangle\), is modeled by the Hamiltonian [24] \[\hat{H}=\frac{q}{|\Omega|}\left(\hat{N}_{+1}+\hat{N}_{-1}\right)-\frac{1}{N}\left(\hat{N}_{0}-\frac{1}{2}\right)\left(\hat{N}_{+1}+\hat{N}_{-1}\right)-\frac{1}{N}\left(\hat{a}_{1}^{\dagger}\hat{a}_{-1}^{\dagger}\hat{a}_{0}\hat{a}_{0}+\hat{a}_{0}^{\dagger}\hat{a}_{0}^{\dagger}\hat{a}_{1}\hat{a}_{-1}\right)\!, \tag{1}\] where \(\hat{a}_{m}^{\dagger}\) and \(\hat{a}_{m}\) are the bosonic creation and annihilation operators for state \(m\), and \(\hat{N}_{m}\equiv\hat{a}_{m}^{\dagger}\hat{a}_{m}\) with \(\sum_{m}\hat{N}_{m}=N\). Note that we assume the magnetization-free subspace, \(\langle N_{+1}-N_{-1}\rangle=0\). The interaction strength \(|\Omega|=h\times 13.9\,\mathrm{Hz}\) is experimentally determined and depends on the atom number, the spatial wave function and the atomic properties. The quadratic Zeeman energy (QZE) is positive, \(q=h\times 64.7\,\mathrm{Hz}\) for the applied magnetic field of \(0.95\,\mathrm{G}\), but is tuned by a microwave dressing field on the transition \(|1,0\rangle\leftrightarrow|2,0\rangle\), and can thus be adjusted to positive and negative values [29]. The linear Zeeman effect has been eliminated by moving to a rotating frame. The Hamiltonian (1) features three ground-state phases [24; 30] depending on the QZE: (i) the twin-Fock (TF) phase for \(q/|\Omega|<-2\), characterized by an equal population of \(m=\pm 1\), and no population in \(m=0\); (ii) the Polar (P) phase for \(q/|\Omega|>2\), where all the population is in \(m=0\); and (iii) the intermediate Broken-Axisymmetry (BA) phase for \(|q/\Omega|<2\), where all spin states are populated. Figure 1(a) shows the energy of the ground state and a series of exemplary excited states, as obtained by exact diagonalization from Eq. (1) [22]. The vanishing gap between the ground and first excited state at \(q/|\Omega|=\pm 2\) marks the ground-state phase transitions. A vanishing gap between adjacent energy eigenstates persists at increasing excitation energy per particle \(\eta=\langle\hat{H}\rangle/N-\eta_{0}\), with the ground-state energy per particle \(\eta_{0}\), but shifts towards smaller \(|q/\Omega|\).
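Because the Hamiltonian of Eq. (1) conserves the atom number and the magnetization, it can be diagonalized numerically in the Fock basis \(|N_{+1}=k,N_{0}=N-2k,N_{-1}=k\rangle\). The following sketch (our own illustration, not the authors' code) builds this matrix for a modest atom number and, for a few values of \(q/|\Omega|\), prints the smallest spacing between adjacent levels of the excitation energy per particle; near an ESQPT this spacing is strongly reduced, reflecting the increased density of states.

```python
import numpy as np

def spin1_hamiltonian(N, q_over_Omega):
    """Eq. (1) in the magnetization-free Fock basis |N_+1=k, N_0=N-2k, N_-1=k>,
    k = 0 .. N//2, with energies in units of |Omega|."""
    kmax = N // 2
    H = np.zeros((kmax + 1, kmax + 1))
    for k in range(kmax + 1):
        n0 = N - 2 * k
        H[k, k] = q_over_Omega * 2 * k - (n0 - 0.5) * 2 * k / N
        if k < kmax:
            # spin-changing collisions a_1^dag a_-1^dag a_0 a_0 (+ h.c.) couple k and k+1
            H[k, k + 1] = H[k + 1, k] = -(k + 1) * np.sqrt(n0 * (n0 - 1)) / N
    return H

N = 1000                           # small compared to the experiment, enough for the sketch
for q in (-1.0, 0.5, 1.25):
    energies = np.linalg.eigvalsh(spin1_hamiltonian(N, q))
    eta = (energies - energies[0]) / N             # excitation energy per particle
    gaps = np.diff(eta)
    k_min = int(np.argmin(gaps))
    print(f"q/|Omega| = {q:+.2f}: minimal level spacing {gaps[k_min]:.2e} "
          f"at eta = {eta[k_min]:.3f}")
```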
This diverging density of states marks the ESQPTs, which separate the excited-state spectrum into three qualitatively different excited-state phases. For simplicity, and since the ground states can be considered the zero-energy limit of the excited phases, we label these phases as TF', BA', and P'. Note that contrary to the ground-state transitions, or the equivalent transition in the most energetic excited state [26], the ESQPTs can be crossed not only by quenching \(q\), but, crucially, also by a controlled change of the excitation energy \(\eta\) at a fixed \(q\) value [Fig. 1(e-f)]. Figure 1: **Excited-state quantum phase diagram.** (a) Three excited-state quantum phases TF' (red, left), BA' (orange, center) and P' (blue, right) appear for two control parameters, the excitation energy per particle \(\eta\) and the quadratic Zeeman energy \(q\). Gray lines represent the eigenenergies of Eq. (1) calculated for \(N=70000\) atoms, where every 500th eigenvalue is plotted. The ESQPTs are indicated by a light yellow line and the ground-state quantum phase transition is highlighted by light yellow dots. (b-d) Depending on the two control parameters, different trajectories appear on the Bloch sphere: For the TF' (b) and P' (d) phases, full \(2\pi\) phase oscillations are obtained, while the BA' (c) phase is limited to one side of the sphere. Black dots indicate different fractions of a full population oscillation \(T\), i.e., the time where the relative population \(n_{0}\) equals the initial population distribution. (e-f) For distinct excitation energies \(\eta\), different quantum phases arise for the same external control parameter \(q\). The affiliation to an excited-state quantum phase can be quantified by either measuring the phase \(\varphi\) after half a population oscillation period \(T/2\) (left axis) or after a full period \(T\) (right axis). ## III State preparation The main purpose of this work is to experimentally characterize the three excited-state phases at an arbitrary excitation energy. This requires, in addition to the introduction of an appropriate robust order parameter discussed below, the capability of preparing states with a controllable and well-defined non-zero energy. In order to achieve the latter, the sBEC is initialized in our experiments in the state \(|1,0\rangle\) at the desired QZE \(q\) by a sudden quench of the microwave dressing field. Subsequently, the energy is set by a variable population transfer and phase adjustment which generate a coherent spin state. Radio-frequency radiation resonant to the \(|1,0\rangle\leftrightarrow|1,\pm 1\rangle\) transition leaves a relative population of \(\mathrm{n}_{0}=\cos^{2}(\frac{1}{2}\Omega_{R}\tau)\) in level \(|1,0\rangle\) and symmetrically transfers atoms to \(|1,\pm 1\rangle\). Here, \(\Omega_{R}\) is the Rabi frequency and \(\tau\) the pulse duration. The resulting state can be visualized on the generalized Bloch sphere [Fig. 1(b-d)], as only two levels are effectively populated, the initial level \(|1,0\rangle\) and the symmetric superposition \(|g\rangle\equiv\frac{1}{\sqrt{2}}(|1,1\rangle+|1,-1\rangle)\). The antisymmetric superposition \(|h\rangle\equiv\frac{1}{\sqrt{2}}(|1,1\rangle-|1,-1\rangle)\) is not populated and therefore remains negligible under the action of Eq. (1).
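The population transfer and phase imprint just described amount to preparing a coherent superposition of \(|1,0\rangle\) and \(|g\rangle\). A minimal sketch of the corresponding amplitudes and the resulting point on the generalized Bloch sphere is shown below; the Rabi frequency and the phase value are illustrative placeholders, not the experimental control settings.

```python
import numpy as np

def prepared_state(Omega_R, tau, phi):
    """Coherent spin state after the RF pulse (population) and MW pulse (relative phase phi),
    written in the effective two-level basis {|1,0>, |g>}."""
    n0 = np.cos(0.5 * Omega_R * tau) ** 2            # relative population left in |1,0>
    psi = np.array([np.sqrt(n0), np.sqrt(1 - n0) * np.exp(1j * phi)])
    polar_angle = np.arccos(2 * n0 - 1)              # Bloch-sphere polar angle
    return n0, psi, polar_angle

Omega_R = 2 * np.pi * 5e3          # illustrative Rabi frequency (rad/s)
tau = 0.5 * np.pi / Omega_R        # pi/2 pulse area -> n0 = 0.5 (equator of the sphere)
n0, psi, theta = prepared_state(Omega_R, tau, phi=0.0)
print(f"n0 = {n0:.2f}, amplitudes = {np.round(np.abs(psi), 3)}, polar angle = {theta:.2f} rad")
```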
The relative phase difference \(\varphi\) between \(|g\rangle\) and \(|1,0\rangle\), i.e., the azimuthal angle of the Bloch sphere, can be adjusted from its reference value \(\varphi=-\pi/2\) directly after the radio-frequency pulse to any chosen value by an off-resonant microwave pulse addressing the \(|1,0\rangle\leftrightarrow|2,0\rangle\) transition. This leaves the population unchanged but induces a well-controlled phase shift. The described sequence allows for the preparation of an atomic probe at any point in the phase diagram. The created state is not a stationary eigenstate, which turns out to be advantageous, as discussed below. It is however a superposition of excited eigenstates within a narrow energy window. This is crucial for the precise determination of the excited state phases in the vicinity of the ESQPTs. ## IV Interferometric order parameter After preparation, spin-changing collisions result in a time evolution according to Eq. (1), as visualized on the Bloch spheres in Fig. 1(b-d). The different excited-state quantum phases are characterized by trajectories of differing topology. Besides the fixed points, the dynamics always leads to an oscillation of the population with variable amplitude, while the phase evolution is either bounded (BA' phase) or stretches over a full \(2\pi\) rotation, either in positive (P') or negative (TF') direction. If the initial state is prepared at \(\mathrm{n}_{0}=0.5\) and \(\varphi=0\), the state evolves according to the trajectories displayed in Fig. 2 for \(q/|\Omega|=\{-1.5,-0.5,1.25\}\). The highlighted association with its quantum phase can be determined from a measurement of the azimuthal phase after a full oscillation cycle of the population. Reference [22] proposes to obtain this phase as an order parameter from an interferometric readout, where final microwave and radio-frequency coupling pulses lead to a \(\pi/2\) rotation about the y-axis of the Bloch sphere and map the phase onto a population imbalance. However, this protocol is hindered by the detrimental effect of magnetic-field fluctuations [23], in our case \(\Delta B=47\,\mu\mathrm{G}\), which couple the symmetric level \(|g\rangle\) with the antisymmetric level \(|h\rangle\) and lead to a dephasing after \(5-10\) ms [Fig. 3(d)]. The data in Fig. 2 are therefore recorded by dividing the evolution time into \(4-6\) ms long sections, during which the phase coherence has not degraded (Appendix A). After a full oscillation of \(\mathrm{n}_{0}\), this iterative method yields interferometric measurements of \(\varphi=\{-0.93,0.03,0.94\}\,\pi\) for the three phases respectively, which is close to the ideal values of \(\varphi=\{-1,0,1\}\,\pi\). The protocol thus enables a clear identification of the three excited-state phases, as indicated by the large hexagons in Fig. 4(a). However, the iterative method is time-consuming and a direct measurement of the order parameter is required for a full characterization of the excited-state quantum phase diagram. In order to overcome these limitations, we introduce Figure 2: **Iterative measurement of an interferometric order parameter.** (a-c) The relevant quantum states can be represented on a multi-particle Bloch sphere, where the time evolution follows the depicted trajectories. Depending on the QZE \(q\), the trajectories have different topologies that can be associated with the TF' (red), BA' (orange), and P' (blue) quantum phases with a separatrix in between (black lines).
(d) Measurement of the relative population \(\mathrm{n}_{0}\) (circles) as a function of evolution time for \(q/|\Omega|=\{-1.5,-0.5,1.25\}\) corresponding to the highlighted trajectories in (a-c). The data are recorded by an iterative measurement with reinitialization at each data point. Statistical error bars are smaller than the symbols. Solid lines and shading represent the theoretical prediction with uncertainties of \(0.1\) in \(q/|\Omega|\). The dashed lines mark a full population oscillation period T. (e) Iterative phase measurement obtained by an interferometric readout. The color bar at time T indicates an order parameter. the following protocol that is largely insensitive to magnetic field fluctuations and thus allows the desired full characterization [see Fig. 3(a) and Appendix B]. The phase measurement protocol starts after the interferometric evolution by transferring the atoms in level \(|1,0\rangle\) to \(|2,0\rangle\). A subsequent radio-frequency \(\pi\) pulse transfers all atoms from level \(|g\rangle\) to \(|1,0\rangle\), while leaving all \(|h\rangle\)-atoms in \(|1,\pm 1\rangle\). Measuring the atoms in \(|1,\pm 1\rangle\) enables a post-selection on experimental realizations with negligible population in level \(|h\rangle\), which is an effective post-selection on vanishing magnetic-field fluctuations [31]. While the radio-frequency pulse also reduces the number of atoms in \(|2,0\rangle\), they can still be employed to determine the interferometric phase by a microwave \(\pi/2\) pulse on the clock transition \(|1,0\rangle\) to \(|2,0\rangle\). Figure 3(c-e) show the experimental result for an initial state of n\({}_{0}=0.5\) and \(\varphi=0\) at \(q=1.25|\Omega|\) (P' phase). The relative population n\({}_{0}\) [Fig. 3(c)] shows a clear oscillation as it is not affected by magnetic field noise. In Fig. 3(d), the relative population after the interferometric sequence, i.e., the phase signal starts according to the ideal trajectory but picks up substantial noise after \(5-10\) ms. A post-selection on measurement runs where the relative population of \(|h\rangle\) versus \(|g\rangle\) is less than 35% reduces the fluctuations substantially and collapses the measurements onto the prediction. However, a mirrored signal appears whenever the magnetic field deviation is large enough for the \(|g\rangle\)-atoms to cycle once to \(|h\rangle\) and back, which is associated with a phase shift of \(\pi\). A meaningful order parameter can be defined to be \(|\sin\varphi|\) [Fig. 3(e)] as measured after a half-period of n\({}_{0}\). This parameter exploits the number of atoms in \(|h\rangle\) for a calculation of the interferometric phase (Appendix C). The experimental data for variable evolution time agree with the expectation. The mean value of 0.78(3) at a half-period allows for a significant discrimination of the P' phase versus the adjacent BA' phase with an expected order parameter of 0. The residual noise could be further reduced by a stricter post-selection parameter at the expense of prolonged data acquisition. We note that for both the P' and the TF' phases, the ideal value of the presented order parameter is 1. These phases, which are not adjacent anyhow, could nevertheless be distinguished by taking the direction of the evolution into account. 
The proposed order parameter together with a time-resolved measurement allows for a discrimination of all three excited-state quantum phases and thus for an experimental verification of the complete excited-state quantum phase diagram in Fig. 1(a). ## V Mapping out the phase diagram We employ the developed order parameter to map out the phase diagram along seven different paths that are shown as colored lines in Fig. 4(a). The paths are conceptually different, as they exploit the variation of three different experimental parameters: the QZE \(q\), the initial relative population n\({}_{0}\), and the initial relative phase \(\varphi\). At fixed QZE, the latter two correspond to a variation of the excitation energy per particle \[\eta=\frac{q}{|\Omega|}\left(1-\mathrm{n}_{0}\right)-2\mathrm{n}_{0}(1- \mathrm{n}_{0})\mathrm{cos}^{2}\varphi+\frac{1}{2}\left(\frac{q}{2|\Omega|}- 1\right)^{2}, \tag{2}\] where the last term corresponds to the ground-state energy per particle \(\eta_{0}\). First, we vary the QZE \(q\) while maintaining an initial state with n\({}_{0}=0.5\) and \(\varphi=0\) [Fig. 4(b)]. A variation of the QZE \(q\) leads to a variation of the state's trajectories as displayed on the Bloch sphere. The order parameter shows the expected behaviour; it exhibits large values close to 1 in the TF' and the P' phase, and small values Figure 3: **Measurement of an improved interferometric order parameter.** (a) Illustration of a sequence (see text) to measure the phase of the atoms in \(|g\rangle\) with respect to \(|1,0\rangle\), while using the atoms in \(|h\rangle\) for post-selection to reduce magnetic-field sensitivity. (b) An examined trajectory in the P’ phase. (c-e) Measurement of the relative population n\({}_{0}\) (projection onto vertical axis in (b)), the relative population for phase readout n\({}_{0}^{\prime}\) (horizontal axis), and the order parameter \(|\sin\varphi|\). Measurement results excluded by the post-selection are marked in gray. Light blue crosses indicate mean values of the remaining measurement data (dark blue) for \(|\sin\varphi|\). Solid lines are obtained from the \(N\rightarrow\infty\) limit of Eq. (1), dashed lines include an additional phase of \(\pi\) on \(|g\rangle\), the shaded area indicates half a population oscillation T/2. close to 0 in the BA' phase, with sharp ESQPTs in between. The error bars prove the stability of the results with respect to a variation of the post-selection parameter. It is varied within a range of 0-100% maximum relative population of \(|h\rangle\) and usually fixed to 35%. The ESQPTs are broadened by technical fluctuations, most dominantly of \(q/|\Omega|\), which is influenced by the atom number and the magnetic field. The final result is also displayed in the phase diagram Fig. 4(a), which also includes corresponding measurements for different values of n\({}_{0}\). Secondly, we vary the excitation energy \(\eta\) by preparing initial states with different relative population n\({}_{0}\), while keeping the QZE \(q\) and initial phase \(\varphi\) constant [Fig. 4(c)]. For \(\eta>0.05\), the data confirm the ESQPT from a small to a large order parameter. For \(\eta<0.05\), we observe strong fluctuations which are caused by a low relative population n\({}_{0}\) of a few percent after the evolution which deteriorates the results due to dominant detection noise. This is also visible in an instability with respect to the post-selection parameter. Therefore, our method remains inconclusive in this case of very small populations. 
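The excitation energy of Eq. (2) is a simple closed-form function of the three experimental knobs, so the position of any prepared state in the phase diagram can be computed directly. The following sketch evaluates Eq. (2) along the three types of scans used here (varying \(q\), \(\mathrm{n}_{0}\), or \(\varphi\)); the grids are illustrative, and the value \(\mathrm{n}_{0}=0.79\) anticipates the stationary point determined in Appendix D.

```python
import numpy as np

def excitation_energy(q_over_Omega, n0, phi):
    """Excitation energy per particle, Eq. (2), in units of |Omega|."""
    q = q_over_Omega
    return q * (1 - n0) - 2 * n0 * (1 - n0) * np.cos(phi) ** 2 + 0.5 * (q / 2 - 1) ** 2

# The three measurement paths through the phase diagram (illustrative grids):
eta_vs_q   = excitation_energy(np.linspace(-3, 3, 7), n0=0.5, phi=0.0)       # vary QZE q
eta_vs_n0  = excitation_energy(1.25, np.linspace(0.05, 0.95, 7), phi=0.0)    # vary population
eta_vs_phi = excitation_energy(1.25, 0.79, np.linspace(0.0, np.pi, 7))       # vary phase
for name, eta in [("q-scan", eta_vs_q), ("n0-scan", eta_vs_n0), ("phi-scan", eta_vs_phi)]:
    print(name, np.round(eta, 3))
```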
Finally, we vary the excitation energy \(\eta\) by adjusting the initial phase \(\varphi\) at the relative population n\({}_{0}\) of the ground-state stationary point (Appendix D), as shown in Fig. 4(d). The phase measurement is now evaluated after a quarter-period of the n\({}_{0}\)-evolution to approximate the presented order parameter with a good contrast. The data show a clear transition from the BA' to the P' phase for increasing energy. The results presented in Fig. 4(c-d) appear as vertical lines in the phase diagram Fig. 4(a). Together, the evaluation of the order parameter enables a precise determination of the excited-state quantum phases and their transition lines, which presents the main result of this article. ## VI Conclusion In summary, we have experimentally mapped the excited-state phase diagram of an atomic spin-1 Bose-Einstein condensate. This mapping was possible due to the careful preparation of excited states with a controlled well-defined energy, and the introduction of a robust protocol, largely insensitive to magnetic-field fluctuations, for the determination of an interferometric order parameter. Our experiments probe a crucial feature of excited-state transitions: they can be crossed Figure 4: **Measurement of the order parameter as a function of various control parameters.** (a) For the description of the phase diagram, see Fig. 1(a). An interferometric order parameter \(|\sin\varphi|\) is recorded (see text) and included according to the color scale, enabling a clear determination of the quantum phases. Small circles correspond to a variation of the QZE \(q\), and small diamonds and squares result from a variation of the excitation energy \(\eta\), either by adjusting the population or the phase, respectively. The colored lines illustrate the different measurement paths. The large hexagons represent an iterative measurement of an equivalent phase-dependent order parameter (see text and Fig. 2). (b) A variation of the QZE \(q\) leads to qualitatively different Bloch-sphere trajectories. The order parameter (circles) after T/2 distinguishes between the quantum phases TF’, BA’, and P’. Error bars indicate the sensitivity to the post-selection. All colors reflect the order parameter, as employed for (a). (c) Equivalent measurement for a variation of the relative population n\({}_{0}\). The gray circles on the Bloch sphere indicate different starting points for the trajectories. (d) The order parameter as a function of the initial phase \(\varphi\) is evaluated at T/4 (see text). We look at the back of the Bloch sphere here. not only by quenching the control parameter (in our case the quadratic Zeeman effect), as ground-state transitions, but also by a controllable precise change in the excitation energy, which acts as a second control parameter. The probed abrupt change in the qualitative nature of the excited states at a critical excitation energy, experimentally extends the powerful concept of quantum phase to the entire Hilbert space of the spinor quantum gas. ## Data availability The data that support the findings of this study are available from the corresponding author upon reasonable request. ###### Acknowledgements. We thank A. Smerzi, L. Pezze and M. Gessner for inspiring discussions. We acknowledge financial support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 274200144 - SFB 1227 DQ-mat within the project A02 and under Germany's Excellence Strategy - EXC-2123 Quantum-Frontiers - 390837967. F.A. and B.M.-H. 
acknowledge support from the Hannover School for Nanotechnology (HSN). P.F. acknowledges support from the Canada First Research Excellence Fund, Quantum Materials and Future Technologies Program. ## Author contributions B.M.-H. and F.A. performed the experiments. B.M.-H., F.A. and P.F. analyzed the data. C.K. and L.S. conceived and designed the experiments with all authors contributing. The manuscript was written by all authors. ## Competing interests The authors declare no competing interests. ## Appendix A Iterative measurement method To avoid the effect of magnetic-field induced dephasing, we apply an iterative method to track trajectories on the multi-particle Bloch sphere. Magnetic field fluctuations lead to a coupling between \(\ket{g}\) and \(\ket{h}\). The time evolution of the quantum state remains unchanged, as the Hamiltonian does not discriminate between \(\ket{g}\) and \(\ket{h}\), but the phase read-out by the radio-frequency pulse is strongly affected because it couples to \(\ket{g}\) only. Consequently, we observe a dephasing of the measurement results for more than \(4-6\) ms. The iterative method involves measurements of relative population and phase after an evolution time where the dephasing remains negligible. After several identical measurement repetitions, the relative population n\({}_{0}\) and phase \(\varphi\) are sufficiently well determined to allow for a subsequent experimental initialisation of the system at the measured parameters. Starting with the initial probe state, the procedure is iterated to acquire the full trajectory. Figure 2 in the main text shows that the iterative method yields trajectories that are well reproduced by our model. However, the method is based on a reinitialisation with clean coherent states, which neglects the squeezing effects in close vicinity of the separatrix. Furthermore, the analysis requires a large number of experimental realisations for an accurate estimation of number and phase for all time steps, rendering an exploration of the full phase diagram impractical. In the following section, we thus report on a more efficient magnetic-field insensitive method. ## Appendix B Magnetic-field insensitive measurements Our magnetic-field insensitive measurement protocol relies on an individual addressing of the symmetric superposition \(\ket{g}\), while using the number of atoms in the antisymmetric superposition \(\ket{h}\) for post-correction. The relative population n\({}_{0}\) follows a magnetic-field insensitive oscillation, such that the oscillation period can be measured experimentally, and is well predicted by our model [Fig. 5]. The prediction is employed to perform phase measurements at half the oscillation period, where the relative population n\({}_{0}\) is recorded. For the measure Figure 5: **Measurement of the oscillation period.** The half oscillation period is extracted from parabolic fits at the turning points of the relative population as a function of time. The result of the fitting procedure (dots) agrees well with our model (solid line), where the interaction strength \(\ket{\Omega}\) is determined by an independent measurement. ment of the phase \(\varphi\), the atoms in \(\left|1,0\right\rangle\) are first transferred to \(\left|2,0\right\rangle\) [Fig. 6(a)]. 
A subsequent radio-frequency \(\pi\) pulse transfers the population of the symmetric superposition \(\left|g\right\rangle\) to the level \(\left|1,0\right\rangle\), while the antisymmetric superposition \(\left|h\right\rangle\) remains in the levels \(\left|1,\pm 1\right\rangle\) [Fig. 6(b)]. Due to its linear polarisation, the radio-frequency is also resonant with the F=2 manifold, transferring 37.5% to \(\left|2,\pm 2\right\rangle\) each and causing a phase shift of \(\pi\) on all \(\left|2,m\right\rangle\) states [32]. As the quadratic Zeeman effect leads to a small deviation from these ideal values, we calibrated these transfer efficiencies \(\nu_{2}=35\%\) and \(\nu_{-2}=36\%\) and the residual population \(\nu_{0}=1-\nu_{2}-\nu_{-2}\) experimentally. The desired phase relation is now placed between the two clock states \(\left|1,0\right\rangle\) and \(\left|2,0\right\rangle\), which can be rotated into a number difference by a microwave (MW) \(\pi/2\) pulse [Fig. 6(c)]. Before a state-dependent detection, the \(\left|h\right\rangle\) atoms are transferred to \(\left|2,\pm 1\right\rangle\) by two MW \(\pi\) transitions [Fig. 6(d)]. From the final number detection of all Zeeman levels in F=2, all relevant information is available: The number of atoms in \(\left|2,\pm 1\right\rangle\) identifies the original population in \(\left|h\right\rangle\). The measured number in \(\left|2,\pm 2\right\rangle\) allows for an estimation of the original population of \(\left|1,0\right\rangle\). From the prior measurement of the relative population, the number of atoms in \(\left|g\right\rangle\) and equivalently the total number can be obtained. These measurements fix the occupation of the clock levels before the MW \(\pi/2\) pulse, and thus render the measurement of \(\left|2,0\right\rangle\) after the MW pulse an effective measurement of the phase \(\varphi\). Taking the atoms in \(\left|h\right\rangle\) into account, this phase measurement can be exploited as an order parameter for the given excited-state phase diagram. In the next section, we derive how the phase is computed from the measured atom numbers. ## Appendix C Extracting the order parameter from experimental data The order parameter \(\left|\sin\varphi\right|\) can be extracted from the final atom numbers \(N_{m}\) in the Zeeman sublevels \(\left|2,m\right\rangle\) of the \(F=2\) manifold. After the time evolution and thus at the start of our measurement protocol (see previous section), the state of the sBEC is well approximated by a product of single-particle states \[\left|\psi\right\rangle=\sqrt{\mathrm{n_{0}}}\mathrm{e}^{-i\varphi}\left|1,0 \right\rangle+\sqrt{\mathrm{1-n_{0}}}(\cos\beta\left|g\right\rangle-i\sin \beta\left|h\right\rangle), \tag{10}\] where \(\beta\) represents an unknown mixing angle between \(\left|g\right\rangle\) and \(\left|h\right\rangle\) due to magnetic-field fluctuations and \(\varphi\) is the phase that we would like to extract. 
Our measurement protocol yields \[N_{\pm 2}=\mathrm{n_{0}}\nu_{\pm 2}N\qquad\quad\Rightarrow\quad\quad N= \frac{N_{2}+N_{-2}}{\mathrm{n_{0}}(\nu_{2}+\nu_{-2})}, \tag{11}\] \[N_{\pm 1}=\frac{1-\mathrm{n_{0}}}{2}\sin^{2}\!\beta\,N\;\;\;\Rightarrow\;\; \;\sin^{2}\!\beta=\frac{N_{1}+N_{-1}}{(1-\mathrm{n_{0}})N}, \tag{12}\] and \[N_{0} =\frac{1}{2}\left(\nu_{0}\mathrm{n_{0}}+(1-\mathrm{n_{0}})\cos^{ 2}\!\beta\right)N \tag{13}\] \[+\sqrt{\nu_{0}\mathrm{n_{0}}(1-\mathrm{n_{0}})}\cos\beta\sin( \varphi-\alpha_{0})N.\] Here, \(\nu_{m}\) and \(\alpha_{m}\) describe the effect on the atoms in \(\left|2,0\right\rangle\) when separating \(\left|g\right\rangle\) from \(\left|h\right\rangle\) via a radio-frequency pulse (see previous section). Inserting the expression for \(\sin^{2}\!\beta\) from Eq. 12 into Eq. 13, we obtain \[\left|\sin\varphi\right|=\frac{\left|\left(N_{0}+\frac{1}{2}(N_{1}+N_{-1}) \right)/N+\frac{1}{2}\left((1-\nu_{0})\mathrm{n_{0}}-1\right)\right|}{\sqrt{ \nu_{0}\mathrm{n_{0}}(1-\mathrm{n_{0}}-(N_{1}+N_{-1})/N)}}. \tag{14}\] Note that we can extract only the absolute value of \(\sin\varphi\) because we do not have access to the sign of \(\cos\beta\). Thus, Eq. 14 together with the expression for \(N\) from Eq. 11 can be directly used to evaluate the order parameter \(\left|\sin\varphi\right|\) as shown in Fig. 4(b-d) in the main text. ## Appendix D Determination of stationary points The phase transition by a variation of the initial phase \(\varphi\) is best explored when initializing the system at the relative population of the stationary point [Fig. 4(d)]. In this way, the variation occurs as orthogonal to the trajectories as possible and thus the contrast after a quarter-period is favourable. As an example, we determine a stationary point at \(q/|\Omega|=1.25\), \(\varphi=0\) and varying \(\mathrm{n_{0}}\). The resulting population oscillations are presented in Fig. 7(a). The value of \(\mathrm{n_{0}}=0.79\), where zero oscillation amplitude is expected (and the sign changes) corresponds to the stationary point [Fig. 7(b)]. This stationary point determination serves as the starting point of the characterisation of the phase transition in Fig. 4(d) in the main text. Figure 6: **Illustration of the magnetic-field insensitive measurement protocol.** With this protocol, the order parameter can be extracted (see text). Dashed circles indicate the previous state of the atoms before radiofrequency (RF) or microwave (MW) pulses.
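The extraction of the order parameter from the measured atom numbers, Eqs. (11)-(14), is a short calculation that can be sketched directly. The transfer efficiencies below are the calibrated values quoted in Appendix B; the atom numbers are made-up illustrative counts, not experimental data.

```python
import numpy as np

NU_P2, NU_M2 = 0.35, 0.36             # calibrated transfer efficiencies (Appendix B)
NU_0 = 1.0 - NU_P2 - NU_M2            # residual population nu_0

def abs_sin_phi(counts, n0):
    """|sin(phi)| from the final F=2 Zeeman-level counts, following Eqs. (11)-(14)."""
    N2, N1, N0, Nm1, Nm2 = (counts[m] for m in (2, 1, 0, -1, -2))
    N_tot = (N2 + Nm2) / (n0 * (NU_P2 + NU_M2))            # Eq. (11)
    side = (N1 + Nm1) / N_tot                              # (N_1 + N_-1)/N, cf. Eq. (12)
    numerator = abs((N0 + 0.5 * (N1 + Nm1)) / N_tot + 0.5 * ((1 - NU_0) * n0 - 1))
    denominator = np.sqrt(NU_0 * n0 * (1 - n0 - side))     # Eq. (14)
    return numerator / denominator

counts = {2: 12000, 1: 600, 0: 9000, -1: 650, -2: 12400}   # illustrative single shot
print(f"|sin(phi)| = {abs_sin_phi(counts, n0=0.5):.2f}")
```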
2303.01121
An interpretable machine-learned model for international oil trade network
Energy security and energy trade are the cornerstones of global economic and social development. The structural robustness of the international oil trade network (iOTN) plays an important role in the global economy. We integrate the machine learning optimization algorithm, game theory, and utility theory for learning an oil trade decision-making model which contains the benefit endowment and cost endowment of economies in international oil trades. We have reconstructed the network degree, clustering coefficient, and closeness of the iOTN well to verify the effectiveness of the model. In the end, policy simulations based on game theory and agent-based model are carried out in a more realistic environment. We find that the export-oriented economies are more vulnerable to be affected than import-oriented economies after receiving external shocks. Moreover, the impact of the increase and decrease of trade friction costs on the international oil trade is asymmetrical and there are significant differences between international organizations.
Wen-Jie Xie, Na Wei, Wei-Xing Zhou
2023-03-02T10:04:42Z
http://arxiv.org/abs/2303.01121v1
# An interpretable machine-learned model for international oil trade network ###### Abstract Energy security and energy trade are the cornerstones of global economic and social development. The structural robustness of the international oil trade network (iOTN) plays an important role in the global economy. We integrate the machine learning optimization algorithm, game theory, and utility theory for learning an oil trade decision-making model which contains the benefit endowment and cost endowment of economies in international oil trades. We have reconstructed the network degree, clustering coefficient, and closeness of the iOTN well to verify the effectiveness of the model. In the end, policy simulations based on game theory and agent-based model are carried out in a more realistic environment. We find that the export-oriented economies are more vulnerable to be affected than import-oriented economies after receiving external shocks. Moreover, the impact of the increase and decrease of trade friction costs on the international oil trade is asymmetrical and there are significant differences between international organizations. keywords: Global oil market, Oil trade network, Machine learning, Policy simulation JEL: C1, P4, Z13 ## 1 Introduction Energy security and energy cooperation are vital factors affecting global economic stability and development. In the face of natural disasters, local wars, and climate changes, how to ensure energy security and stable energy cooperation is an issue that needs to be addressed [1; 2; 3; 4]. The distribution of petroleum resources is markedly imbalanced. The large-scale and long-distance transportation has caused tight capacity and increased costs, which have affected the coordinated development of the energy industry [5; 6; 7; 8]. In addition to the fragile global energy supply-demand balance and frequent oil market fluctuations, local wars and sanctions, international oil price shocks and various other non-economic factors also affect international cooperation in energy [9; 10; 11]. How to solve these problems requires cross-disciplinary cooperation. The remarkable development of artificial intelligence technologies and methods have provided new tools and methods for addressing the problem of irrational energy structure and for expanding international energy cooperation. In recent years, the popularity of artificial intelligence and machine learning algorithms has extended to economics and finance research [12; 13; 14]. Increasingly, scholars adopt machine learning algorithms to analyze and solve the complex problems encountered in their respective research areas, achieving many good results and encountering considerable challenges [15]. For example, deep learning in economic forecasting suffer from a well-known and criticized black box problem. Even though deep learning algorithms can improve the accuracy of predictions, the economic implications of the model cannot be reasonably explained [16]. The scholars in the field of financial economics are very much focused on analyzing the causal relationships between variables, rather than just being satisfied with the correlations found by most deep learning algorithms [17]. However, many scholars believe that causal relationships and correlations can be combined for analysis [18; 19]. 
While machine learning methods are mostly used to accomplish prediction tasks in the field of economics [20; 21; 22], the biggest highlight of this paper's approach is to provide interpretable economic models, which have very large application scenarios, by fitting interpretable model parameters through machine learning optimization methods. Although the application of machine learning algorithms in the economic field has certain limitations, scholars have achieved many research results in recent years by using machine learning methods to study problems in the economic field [23; 24; 25]. Zhao et al. use machine learning methods to predict energy prices and evolutionary trends [26; 27]. In addition, the study of energy consumption or energy supply and demand based on machine learning methods is also a hot research topic [28]. How to produce convincing results applying machine learning methods has always been a concern in machine learning and computer science. As a result, a large number of studies and tools addressing the interpretability of deep learning algorithms have also emerged [19]. We believe that more practical machine learning tools will be widely used in the economic field [29]. In a complex trading environment, economies' decision-making is affected by many factors. How to represent these heterogeneous economies' attributes and environmental factors is an important issue. In different decision-making environments, the factors that affect the attributes of the economies are different, and many factors are unquantifiable due to privacy and difficulty of collection. We introduce a representation method of heterogeneous economies in complex environments [19]. Based on the decision-making process of economies, we integrate machine learning optimization algorithms with the oil trade network to directly learn and reproduce the decision-making process of the economies. In order to simulate the oil trade system more realistically and provide more practical strategic support for governments and organizations in decision-making, we provide a new analysis framework to directly learn the representation and decision-making process of the economies from the iOTNs, and then integrate game theory with the agent-based model, and finally achieve the purpose of policy simulation and evaluation. The main contributions can be summarized in three respects: 1) We construct a heterogeneous economies decision-making model based on utility theory and game theory, considering the endowments of heterogeneous economies in oil trade. 2) We use machine learning optimization algorithms to learn the endowment feature vectors of heterogeneous economies and examine the interpretability of model parameters. 3) Under the "scenario" of trade friction changes, we explore the evolution of economic entities. For example, policy simulations and policy evaluations are conducted, considering the impact of trade war and the COVID-19 epidemic. This article is organized as follows. Section 2 is a literature review. Section 3 introduces the research data and methods. Section 4 uses international oil trade data to construct iOTN, and conduct empirical analysis based on the relevant methods in section 3. Section 5 conducts a policy simulation on the iOTN. Section 6 is discussion and application. ## 2 Literature review Energy is the blood of an economic system, and oil is an essential energy resource. Since the oil resources distribution is uneven, economies need trading to balance oil supply and demand [2; 3; 4]. 
As a result, economies interact and become inextricably intertwined because of the complex oil trade relationship [30]. The oil trading system, which is crucial to the development of the global economy, has received considerable attention from scholars [31; 32; 33]. As a complex system, the oil trading system is well suited to modeling and analysis with complex network methods [34; 35]. We can model international oil trade as a complex system. In the modeling process, each energy economy can be regarded as a node, and the trade relationship between economies can be abstracted as a network connection [36; 37; 38; 39]. The formation of the iOTN depends on the development of the economies and their trade relations. The international oil trading system is evolving into a stable, orderly, and integrated system [40; 41]. Many factors affect the formation of the oil trade network. Zhang et al. used spatial econometric models to examine significant factors such as supply and demand, technological progress, and energy efficiency [6]. Zhang et al. investigated the influence of competition and dependence between economies and geographic factors on the oil trade [42; 43]. Kitamura et al. studied the gravity equation and described the flow of oil trade, pointing out that the bilateral trade volume is directly proportional to the total production value of the two sides and inversely proportional to the distance between them [44]. Some scholars also studied each economy's status in the iOTNs [10; 45], and analyzed the influence of different factors on the oil trade of economies. The oil trade relationships are extraordinarily complex and are affected by many observable and unobservable factors. The oil trade relationships are affected not only by the attributes of the economies but also by the structure of the oil trade network. Gomez et al. used the traditional gravity model when studying trade network relations [46]. The gravity model is also used to discover the potential trade flows of agricultural products [47; 48]. Feng et al. applied the prediction method of potential trade relations to the natural gas market [49; 50]. In most of the literature, the factors affecting trade relations cannot be fully examined; doing so is an almost impossible task. To give more consideration to multiple factors such as individual attributes and trade network structure, we have adopted a different approach. First, it is assumed that the existing trade network is influenced by almost all the relevant factors, whose information is therefore also embedded in the oil trade network. We introduce a machine learning approach akin to reverse engineering, using optimization algorithms to learn the endowment representation of the economies directly from the oil trade network. Then, we can reproduce the oil trade decision-making process of the economy and predict the trade network relationship. Finally, the network evolution can be simulated and analyzed, and the network evolution index can be quantified, such as network effectiveness [51; 52; 53; 54]. ## 3 Data and methodology ### Construction of iOTN The global oil trade data from 1990 to 2019 comes from UN Comtrade, and the oil data code is HS270900. The data comes from the official data reported by both sides of the trade. The United Nations Comtrade database aggregates detailed global annual and monthly trade statistics by product and trading partner for use by governments, academia, research institutes, and enterprises.
Data compiled by the United Nations Statistics Division covers approximately 200 countries and represents more than 99% of the world's merchandise trade. Many studies [40; 43; 45; 9; 11] have used the data to explore the international oil trade, with instructive results. The oil trading economies refer to almost all the countries or regions around the world involved in oil trading. This study uses more complete oil trade import data [55]. We extract the required data from the original data, including the data of time, exporting economies, importing economies, and transaction volumes (converted into US dollars according to the UN Statistics standard). The global oil trade market can be abstracted as an iOTN, with economies as network nodes and trade relations as network connections. We construct weighted oil trade networks from 1990 to 2019. The iOTN can be represented by the matrix \(\mathbf{W}(t)=[w_{ij}(t)]\), and \(w_{ij}(t)+w_{ji}(t)\) denotes the sum of the import and export trade volume between the economy \(i\) and \(j\) in the year \(t\). In order to remove some noise, it is necessary to filter the network edges and extract the backbone of the network. The simplest way to filter is to set a transaction volume as a threshold and delete trade relationships with volumes below the threshold. We use the threshold filtering method to construct the skeleton of the iOTN and obtain the adjacency matrix of the oil trade network \(\mathbf{A}=[a_{ij}]\), and the adjacency matrix element \(a_{ij}=a_{ji}=1\) means that the trade volume \(w_{ij}(t)+w_{ji}(t)\) between economy \(i\) and economy \(j\) exceeds one million U.S. dollars in at least one of the 30 years. Figure 1 is a schematic diagram of the topology of the iOTN. We only considered the most basic trade relation filtering method and did not compare the impact of different network edge filtering methods on the results. ### Decision-making model of international oil trade In the trade network, the formation of oil trade relations between economies needs to consider many trade factors, such as oil prices, oil import and export routes, energy production, and economic culture. Many factors affect the stability and vulnerability of trade relations between economies. In order to characterize the oil trade relationship between economies, we use an endowment vector to represent these trade-related factors, each dimension of the vector representing an attribute. The endowment vectors of the two economies are used as decision variables when establishing oil trade relations. The benefits and costs of oil trade relations can be measured by utility functions [19]. The utility function of the economy \(i\) is defined as follows: \[U_{i}(S;\mathbf{E},\mathbf{b},\mathbf{c})=F_{i}(S;\mathbf{E},\mathbf{b})-G_{i}(S;\mathbf{E},\mathbf{c}),\ \ \forall S\subset\mathcal{I}/i, \tag{1}\] where \(S\) is the set of trading partners of \(i\), and \(\mathcal{I}\) is the set of economies. \(\mathcal{I}/i\) represents the set of economies that does not include the economy \(i\). The matrix \(\mathbf{E}\) is the endowment of all economies in the oil trade network. Each row of the matrix corresponds to an endowment vector of an economy, and each column corresponds to an endowment dimension, which represents a trade factor. **b** stands for the importance of the benefit attribute. The benefit endowment refers to factors such as resource reserves. The trade between economies with large differences in resource reserves can bring benefits.
In addition, **c** is the importance of cost attributes. The cultural differences and language barriers will increase trade costs, and the costs are not conducive to trade relations. Therefore, the difference in the cost endowment in the trade relationship will cause a large trade loss [19]. The utility function Eq. (1) distinguishes between the benefit endowment and the cost attribute. The benefit endowment is the first \(D_{b}\) columns of the **E** matrix. The cost-related endowment is the last \(D-D_{b}\) columns of the matrix. Following Yuan et al., we also refer to the matrix **E** as the endowment matrix [19]. When the energy endowment of the economy \(j\) is greater than that of the economy \(i\), the economy \(i\) will trade with the economy \(j\). The matrix **E** and the weight coefficients **b** and **c** are learnable variables [19], and the benefit function is recorded as \[F_{i}(S^{*};\textbf{E},\textbf{b})=\sum_{j\in S^{*}_{i}}\sum_{d=1}^{D_{b}}b_{d}\max(e_{jd}-e_{id},0). \tag{2}\] The cost in the decision-making process is recorded as \[G_{i}(S;\textbf{E},\textbf{c})=\sum_{j\in S}\big{\|}\textbf{c}\circ(\textbf{e}_{j}-\textbf{e}_{i})\big{\|}_{2}, \tag{3}\] where \(\textbf{e}_{i}\) and \(\textbf{e}_{j}\) represent the endowment vectors of the economies \(i\) and \(j\), respectively. Based on the definition of the benefit function in Eq. (2) and the cost function in Eq. (3), it can be seen that for two economies to have trade relations, at least one of them needs to be able to profit, that is, the new trade relations must bring positive benefits to one party, as shown below [19]: \[\Delta u_{i}(j)=U_{i}(S^{*}_{i};\textbf{E},\textbf{b},\textbf{c})-U_{i}(S^{*}_{i}/j;\textbf{E},\textbf{b},\textbf{c}),\ \ \text{if}\ \ j\in S^{*}_{i}, \tag{4}\] Figure 1: An example of the topology of an iOTN. For clarity, we randomly display 20% of trade relations. the above formula represents the incremental utility of economy \(i\) when \(j\) is among the best trading partners of economy \(i\). \[\Delta u_{i}(j)=U_{i}(S^{*}_{i}\cup j;\mathbf{E},\mathbf{b},\mathbf{c})-U_{i}(S^{*}_{i};\mathbf{E},\mathbf{b},\mathbf{c}),\ \ \text{if}\ \ j\notin S^{*}_{i}, \tag{5}\] the above formula represents the incremental utility of the economy \(i\) when the economy \(j\) is not among the best trading partners of the economy \(i\). Substituting Eq. (2) and Eq. (3) into Eq. (4) and Eq. (5), we get: \[\Delta u_{i}(j)=\sum_{d=1}^{D_{b}}b_{d}\max(e_{jd}-e_{id},0)-\left\|\mathbf{c}\circ(\mathbf{e}_{j}-\mathbf{e}_{i})\right\|_{2}. \tag{6}\] In the formation of trade cooperation, when the increment \(\Delta u_{i}(j)\) is greater than a certain threshold, the two economies establish a trade relationship. ### Machine learning model for parameter fitting The oil trade decision-making model \(U_{i}(S;\mathbf{E},\mathbf{b},\mathbf{c})\), which is based on the heterogeneous economy \(i\), needs to learn the endowment matrix \(\mathbf{E}\) and the weight coefficients \(\mathbf{b}\) and \(\mathbf{c}\) from the original iOTN to reconstruct the oil trade relationships. Numerical simulations can then be conducted to explore the evolutionary dynamics of the oil trade network, simulate contingency shocks, and evaluate policy effects. Therefore, for the oil trade decision-making model of heterogeneous economies \(U_{i}(S;\mathbf{E},\mathbf{b},\mathbf{c})\), it is vital to obtain the endowment matrix \(\mathbf{E}\). A machine learning optimization algorithm is used for parameter learning.
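Before turning to the learning step, the decision rule of Eqs. (2)-(6) itself can be sketched in a few lines. The endowment matrix and the weights below are random stand-ins for the learned quantities (the fitted values are only obtained by the optimization of Section 3.3), and the zero threshold is an illustrative choice.

```python
import numpy as np

def utility_increment(E, b, c, i, j, D_b):
    """Delta u_i(j) of Eq. (6): benefit over the first D_b (benefit) dimensions in which
    partner j's endowment exceeds i's, minus the weighted cost norm of Eq. (3)."""
    gain = np.sum(b * np.maximum(E[j, :D_b] - E[i, :D_b], 0.0))
    cost = np.linalg.norm(c * (E[j] - E[i]))    # c >= 0, one weight per dimension (Eq. 8)
    return gain - cost

rng = np.random.default_rng(0)
D, D_b = 6, 4                                   # dimensions also used in the calibration
E = rng.normal(size=(5, D))                     # toy endowment matrix for 5 economies
b = np.abs(rng.normal(size=D_b))                # benefit weights
c = np.abs(rng.normal(size=D))                  # cost weights

threshold = 0.0                                 # illustrative decision threshold
du_01 = utility_increment(E, b, c, 0, 1, D_b)
du_10 = utility_increment(E, b, c, 1, 0, D_b)
print(f"du_0(1) = {du_01:.2f}, du_1(0) = {du_10:.2f}, "
      f"trade link formed: {max(du_01, du_10) > threshold}")
```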
Our basic assumption is that a better endowment matrix \(\mathbf{E}\) can better reconstruct the original oil trade network based on the oil trade model. The endowment matrix \(\mathbf{E}\) and the weight coefficients \(\mathbf{b}\) and \(\mathbf{c}\) of the model are learned through the objective optimization function. The objective function of machine learning optimization is the loss function \(\mathcal{L}\). A smaller loss function \(\mathcal{L}\) indicates a better reconstruction of the iOTN based on the endowment matrix \(\mathbf{E}\) and the decision model, and also shows that the endowment vector contains the attributes of the economy and the factors that affect trade. The subsequent numerical simulation and policy simulation can then be more realistic and closer to the real trading environment. The loss function \(\mathcal{L}\) takes the following form [19]: \[\mathcal{L}=\mathcal{L}_{\text{pos}}+\mathcal{L}_{\text{neg}}+\mathcal{L}_{\text{fp}}+\mathcal{L}_{\text{reg}}. \tag{7}\] The smaller the \(\mathcal{L}_{\text{pos}}\) is, the higher the accuracy of predicting the existing connections based on the learned parameters is. A smaller \(\mathcal{L}_{\text{neg}}\) indicates that those non-existent links have a smaller probability of occurring in the prediction process and that the model reconstruction is better. \(\mathcal{L}_{\text{fp}}\) is a penalty item, which is to penalize the situation where a connection is a non-existent trade relationship, but the model predicts that it is an existing trade relationship. \(\mathcal{L}_{\text{reg}}\) is a regularization term. To make the model more stable, when the dimension \(D\) of the endowment matrix \(\mathbf{E}\) is too large, some unnecessary dimensional values can be made close to zero, which can reduce the dimensionality of the embedding space. The specific details and explanation of \(\mathcal{L}_{\text{pos}}\), \(\mathcal{L}_{\text{neg}}\), \(\mathcal{L}_{\text{fp}}\), \(\mathcal{L}_{\text{reg}}\) can be found in the supplementary materials of the literature [19]. In order to find the minimum of the loss function \(\mathcal{L}\), the optimization model of machine learning is as follows: \[\mathbf{\hat{E}},\mathbf{\hat{b}},\mathbf{\hat{c}}=\arg\min_{\mathbf{E},\mathbf{b},\mathbf{c}}\mathcal{L}(\mathbf{E},\mathbf{b},\mathbf{c}|\mathbf{A}) \tag{8}\] subject to: \[c_{d}\geq 0,\ \forall d=1,2,...,D,\qquad\sum_{i=1}^{N}e_{id}=0,\ \forall d=1,2,...,D,\qquad\left\|\mathbf{E}_{\cdot d}\right\|_{2}^{2}=\sum_{i=1}^{N}e_{id}^{2}=N,\ \forall d=1,2,...,D.\] In the model training process, the selection of hyperparameters refers to the parameter settings in the literature [19], the optimization process is iterated 10,000 times, and the learning rate is 0.01. We averaged the endowments among economies. The endowments of all economies in the same dimension have a mean value of zero and are standardized. Therefore, the endowment vectors of different economies in the learned \(\mathbf{E}\) are comparable. ## 4 Calibration results ### The spatial distribution of economies in endowments space The analysis of the oil trade decision-making process based on the endowment vectors of heterogeneous economies has some interpretability. We find that economies have different trade endowments. To be able to visualize and interpret the learned endowment feature vectors, we draw the endowment vectors of the economies as scatter plots for visualization, as shown in Fig. 2(a,b,c). Fig. 2(a,b,c) show the endowment vectors of the economies for the embedding dimension \(D=6\) and the benefit endowment dimension \(D_{b}=4\).
The major energy demand and supply economies such as the USA, the United Arab Emirates, Nigeria, Indonesia, China, and Singapore are in a relatively remote location. Their endowments are different from other economies. We have to admit that it is difficult to explain each endowment dimension in an economic sense. Because the coordinate vector embedded as a variable contains many unmeasured trade factors, so it is basically impossible for a certain indicator to be completely correlated with the economic variables in reality. The embedded dimension is directly related to the decision model's decision variables, including the benefit and cost endowment variables. In order to select the most suitable endowment dimension, we have chosen different embedding dimensions \(D=4,5,6,7\) and the corresponding benefit endowment dimension \(D_{b}\). In order to obtain a smaller loss function value during the optimization process, we calculated the loss value \(\mathcal{L}\) for different benefit dimensions \(D_{b}\). Fig. 2(d) shows the objective function value \(\mathcal{L}\) under different embedding dimension \(D\) and benefit Figure 2: The endowments in the case of the embedded dimension \(D=6\) and the benefit endowment dimension \(D_{b}=4\). a) The first and second columns of the benefit endowment of economies in the iOTN. b) The third and fourth columns of benefit attributes. c) The first and second columns of the cost attributes. d) The relationship between the value of the objective function \(\mathcal{L}\) and the benefit endowment dimension \(D_{b}\) under different embedded dimensions \(D\). endowment dimension \(D_{b}\). It can be seen that the higher the dimension is, the smaller the loss function is. In Fig. 2(d), each solid line represents a case of the dimension \(D\). The abscissa represents the benefit dimension with the size \(D_{b}\), and the ordinate represents the loss function value \(\mathcal{L}\). The lowest point of each solid line can be used to determine the optimal benefit dimension \(D_{b}\). For the iOTN, when the embedded dimensions are \(D=4,5,6,7\), the optimal benefit dimensions are \(D_{b}^{*}=2,3,4,5\) respectively. In the subsequent analysis, under the corresponding embedding dimension, we only considered the case with the optimal benefit dimension \(D_{b}^{*}\). The coordinates in Figure 2(a,b) indicate the benefit endowments, while Figure 2(c) indicates the cost attributes. The economies marked in Figure 2 are those with very large attribute values. The larger the cost attribute of the economies in (c), the more difficult it is to establish trade relations with other economies. While the larger economies in Figure 2(a,b) correspond to some economies with larger trade volumes. Thus, the inconsistency of the economies shown in Figure 2 is another indication that the algorithm identifies the heterogeneity among the economies. Figure 2 intuitively shows the interpretability of the feature vectors of the economies. However, interpretability has limitations. The feature vectors cannot fully correspond to some observable economic variables or attribute variables of the economies. ### Interpretability analysis of endowments The black box problem of machine learning and deep learning, namely interpretability, has always been challenging, and it is impossible to decompose the endowment vector's connotation by a simple method. Therefore, the limitations of the method used in the paper are also reflected here. 
To further explore the specific economic meaning of the endowment vectors, they can be combined with existing economic variables through correlation analysis. We define two measurements based on the endowment vector to extract more meaningful economic information from it, to discover characteristics of the iOTN, and to provide more options for analyzing the network. First, we calculate the economy's oil trade power index [19]: \(\text{Power}(i)=\sum_{d=1}^{D_{b}}b_{d}e_{id}\). Second, to characterize the exclusion of different economies, we define an economy exclusion index, which captures the tendency of other economies not to trade with a given economy, or of that economy not to trade with others [19]. The economic exclusion index is defined as \(\text{Exclusion}(i)=\left[\sum_{d=D_{b}+1}^{D}(c_{d}e_{id})^{2}\right]^{\frac{1}{2}}.\) The exclusion or power of an economy is informative for other economies, and studying them allows us to provide recommendations for the economies' oil trade decisions. To verify the validity of the trade exclusion and trade power measurements, we analyze the Spearman correlation coefficients between them and the trade network structure variables [56]. **Degree centrality** measures the number of oil trading partners of an economy \(i\) and is defined as \(\text{Degree}(i)=\sum_{j=1}^{N}a_{ij}\). **Closeness centrality** measures the average distance of an economy to other economies and is defined as \(\text{Closeness}(i)=\frac{N-1}{C_{i}}\), where \(N\) is the number of economies in the network and \(C_{i}\) is the sum of the distances from economy \(i\) to the \(N-1\) other economies. For an isolated economy \(i\), \(\text{Closeness}(i)\) is \(0\). **Clustering** is a measurement of local centrality: the clustering coefficient measures the density of triangular trade relations among the trade partners of an economy. It is defined as \(\text{Clustering}(i)=\frac{\sum_{j,k}a_{ij}a_{jk}a_{ki}}{\text{Degree}(i)\left(\text{Degree}(i)-1\right)}\), where \(a_{ij}=1\) and \(a_{ij}=0\) correspond to connection and disconnection for nodes \(i\) and \(j\). The correlations between power and exclusion, oil import, export and network structure measurements are shown in Table 1. First, trade power is negatively correlated with trade exclusion, and the absolute value of the correlation coefficient decreases with the increase of \(D\). Trade power is positively correlated with import and export volume, the number of trading partners, closeness centrality, and clustering coefficient, and the correlation coefficient increases with an increase of \(D\). Trade exclusion is negatively correlated with import and export volume, the number of trading partners, and closeness centrality, and the absolute value of the correlation coefficient decreases as \(D\) increases. Trade exclusion and the clustering coefficient show a weak negative correlation, whose absolute value increases as \(D\) increases. According to the results for different embedding dimensions, the correlations between the structural variables of the original network and economic power and exclusion are robust. 
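A short sketch of how the power and exclusion indices and their rank correlations with the network measurements can be computed, reusing the toy \(\mathbf{E}\), \(\mathbf{b}\), \(\mathbf{c}\), \(D_{b}\) and adjacency matrix from the sketch above (networkx and scipy are assumed to be available; the data are synthetic, so the printed coefficients only illustrate the procedure, not the values of Table 1):

```python
import numpy as np
import networkx as nx
from scipy.stats import spearmanr

# E, b, c, D_b and the adjacency matrix A are taken from the optimization sketch above.
power = E[:, :D_b] @ b                                    # Power(i) = sum_d b_d e_id
exclusion = np.linalg.norm(c[D_b:] * E[:, D_b:], axis=1)  # Exclusion(i) over the cost dimensions

G = nx.from_numpy_array(A)
degree = np.array([deg for _, deg in G.degree()])
closeness = np.array([nx.closeness_centrality(G, u) for u in G.nodes()])
clustering = np.array([nx.clustering(G, u) for u in G.nodes()])

for name, x in [("Degree", degree), ("Closeness", closeness), ("Clustering", clustering)]:
    rho_p, _ = spearmanr(power, x)
    rho_e, _ = spearmanr(exclusion, x)
    print(f"{name:10s}  Spearman(power) = {rho_p:+.2f}   Spearman(exclusion) = {rho_e:+.2f}")
```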
The results also prove that the endowment vector learned by the optimization algorithm has economic implications, which can be used to measure the attributes of the economy and guarantees the validity of subsequent policy simulations. ### Model validation To further verify that the endowment vectors of economies have certain practical significance, we reconstructed the network based on the learned model parameters to verify whether some structural measurements in the reconstructed network have some similarity with the original network. We use Eq. (6) to calculate \(\Delta_{ij}=\min(\Delta u_{i}(j),\Delta u_{j}(i))\), and take the \(N_{\text{edges}}\) edge with the largest value of \(\Delta_{ij}\) to reconstruct the \(N_{\text{edges}}\) trade relations in the original iOTN. \(N_{\text{edges}}\) is the number of original trade relationships in iOTN. Then, we statistically analyze the network structure parameters such as the network degree of the economy, the clustering coefficient, and the closeness. They are respectively denoted as reconstructed Degree(\(i\)), reconstructed Clustering(\(i\)), reconstructed Closeness(\(i\)). To quantify the performance of reconstruction, Spearman's correlation coefficient is used to measure the rank correlation between the structural variables of the reconstructed iOTN and that of the original iOTN [56]. \[\rho=1-\frac{6\sum_{i=1}^{N}\left(X_{i}-Y_{i}\right)^{2}}{n(n^{2}-1)}, \tag{9}\] where \(X_{i}\) and \(Y_{i}\) are the ranks of the two variables sorted by size. In Fig. 3, the network degree, clustering coefficient, and closeness of the reconstructed network are highly correlated with the original network, which shows that the endowment vector learned by qualitative individuals can reflect the hidden variables of the economy in the iOTN and provides a guarantee for the effectiveness of our subsequent simulation. From left to right in Fig. 3, the embedding dimensions are \(D=4,5,6,7\), and the optimal benefit endowment dimensions are their respective \(D_{b}^{*}\). The comparison shows that the higher the dimension is, the better the correlation is, and the better the structure and properties of the original network can be reproduced. ## 5 Policy simulation of oil trade It can be concluded from the interpretability analysis of the endowment attributes and the verification of the model that the model has learned the characteristics of the iOTN. The adaptive decision-making model of an economy has a certain validity. Based on the endowment of economies, it is feasible to conduct policy simulations on decision-making models. Changing the model parameters can simulate different market environments' changes, the evolution of the iOTN, and the future development of global oil trade. With the trade endowment of the economy learned, the adaptive decision model of the economy is able to simulate the evolution and development patterns of trade networks under different trade environments. 
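Before turning to the policy simulations, the reconstruction-and-comparison step just described can be sketched as follows, continuing with the toy setup and the `utility_gain` function from the earlier sketches (the edge count and the degree comparison are illustrative; the text also compares clustering and closeness in the same way):

```python
# Keep the N_edges pairs with the largest Delta_ij = min(Delta u_i(j), Delta u_j(i))
# and compare the reconstructed network with the original one via Spearman's rho (Eq. (9)).
du = utility_gain(E, b, c)
delta = np.minimum(du, du.T)
np.fill_diagonal(delta, -np.inf)                       # exclude self-trade

n_edges = int(A.sum() // 2)                            # number of original trade relations
iu = np.triu_indices_from(delta, k=1)
keep = np.argsort(delta[iu])[::-1][:n_edges]           # indices of the largest Delta_ij
A_rec = np.zeros_like(A)
A_rec[iu[0][keep], iu[1][keep]] = 1
A_rec = A_rec + A_rec.T

rho, _ = spearmanr(A.sum(axis=1), A_rec.sum(axis=1))
print(f"Spearman correlation of degrees (original vs reconstructed): {rho:.2f}")
```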
For example, the cost \begin{table} \begin{tabular}{l c c c c c c} \hline \hline & Power & Exclusion & Import & Export & Degree & Closeness & Clustering \\ \hline \hline Panel A: \(D=4\) & & & & & & \\ Power & 1.00 & -0.35 & 0.44 & 0.38 & 0.42 & 0.40 & -0.58 \\ Exclusion & -0.35 & 1.00 & -0.68 & -0.82 & -0.90 & -0.89 & **-0.04** \\ \hline \hline Panel B: \(D=5\) & & & & & & \\ Power & 1.00 & -0.33 & 0.49 & 0.46 & 0.50 & 0.48 & -0.55 \\ Exclusion & -0.33 & 1.00 & -0.67 & -0.81 & -0.87 & -0.87 & **-0.06** \\ \hline \hline Panel C: \(D=6\) & & & & & & \\ Power & 1.00 & -0.29 & 0.56 & 0.53 & 0.58 & 0.55 & -0.55 \\ Exclusion & -0.29 & 1.00 & -0.62 & -0.78 & -0.84 & -0.85 & **-0.12** \\ \hline \hline Panel D: \(D=7\) & & & & & & \\ Power & 1.00 & -0.25 & 0.60 & 0.59 & 0.66 & 0.63 & -0.47 \\ Exclusion & -0.25 & 1.00 & -0.57 & -0.72 & -0.78 & -0.79 & -0.20 \\ \hline \hline \end{tabular} \end{table} Table 1: The correlation between exclusion, power and network degree, closeness centrality, clustering centrality, export volume, and import volume. The bold correlation coefficient is not significant when the significance level is 0.05. From top to bottom, the embedding dimensions are \(D=4,5,6,7\), and the optimal benefit endowment dimensions are set as the corresponding \(D_{b}^{*}\). importance parameter \(\mathbf{c}\) in the model can be adjusted to simulate the oil trade situation after the occurrence of events such as the COVID-19 epidemic, trade wars, trade barriers, and trade tariffs, which all lead to increased trade costs. The new cost importance parameter is defined as \[\mathbf{c}_{\alpha}=(1+\alpha)\mathbf{c}. \tag{10}\] We use the parameter \(\alpha\) to measure the change of the model parameter \(\mathbf{c}\). When \(\mathbf{c}_{\alpha}\) increases (\(\alpha>0\)), it corresponds to the increase in trade costs. When \(\mathbf{c}_{\alpha}\) decreases (\(\alpha<0\)), it corresponds to the reduction of trade costs, such as reducing tariffs and zero tariffs. The value of \(\alpha\) is between -0.4 and 0.4, and the iOTN is re-simulated based on the following formula: \[\Delta u_{i}(j)=\sum_{d=1}^{D_{\mathrm{bs}}}b_{d}\max(e_{jd}-e_{id},0)-\left\| \mathbf{c}_{\alpha}\circ(\mathbf{e}_{j}-\mathbf{e}_{i})\right\|_{2}. \tag{11}\] When \(\alpha=0\), that is, \(\mathbf{c}_{\alpha}=\mathbf{c}\), we calculate \(\Delta_{ij}=\min(\Delta u_{i}(j),\Delta u_{j}(i))\), and take the first \(N_{\mathrm{edges}}\) edge with the largest value in \(\Delta_{ij}\) to reconstruct the trade relations. \(N_{\mathrm{edges}}\) is the number of original trade relationships in iONT. We will use these \(N_{\mathrm{edges}}\) edges to find the minimum value of \(\Delta_{ij}\) and record it as \(\Delta_{\min}\). \(\Delta_{\min}\) can be regarded as the utility threshold value for new oil trade relations in the iOTN. Then, we statistically analyze the network structure parameters, such as the network degree of the economy, the clustering coefficient, and the closeness. They are respectively denoted as reconstructed Degree(\(i\)), Clustering(\(i\)), Closeness(\(i\)). Figure 3: In the embedding spaces with different dimensions, the scatter plots of economies’ network degree, clustering coefficient, and closeness of the restructured network and the original network. The rank correlations between the reconstructed network’s structural variables and the original network are shown in the plots. There is a comparison for the network degree, clustering coefficient, and closeness from top to bottom. 
From left to right, the embedding dimensions are \(D=4,5,6,7\). When \(\alpha=-0.4,...,0.4\), we recalculate \(\Delta_{ij}\). When the trade relationship corresponds to \(\Delta_{ij}\) satisfy \[\Delta_{ij}=\min(\Delta u_{i}(j),\Delta u_{j}(i))>\Delta_{\min}. \tag{12}\] The corresponding economies \(i\) and \(j\) establish oil trade relations. Statistical analysis of the structural variables of the reconstructed network, such as the network degree of the economy, the clustering coefficient, and the closeness are recorded as Degree\({}^{\alpha}(i)\), Clustering\({}^{\alpha}(i)\), Closeness\({}^{\alpha}(i)\). Adjusting the cost importance parameter \(\mathbf{c}\) has a different effect on the structural measurements of trade network, such as degree, clustering coefficient, and closeness. The changes in degree affect the number of trade partners of the economy. Under trade frictions or trade shocks, the changes in the number of partners can be used to measure the vulnerability of the economy. When the amount of change is large, the vulnerability is relatively large. The vulnerability based on degree, clustering, and closeness is defined as [57; 19] \[V_{\text{Degree}}=\frac{\text{Degree}^{\alpha}(i)}{\text{Degree}(i)},\ \ \ \ \ V_{\text{ Clustering}}=\frac{\text{Clustering}^{\alpha}(i)}{\text{Clustering}(i)},\ \ \ \ \ V_{\text{ Closeness}}=\frac{\text{ Closeness}^{\alpha}(i)}{\text{ Closeness}(i)}, \tag{13}\] where Degree(\(i\)), Clustering(\(i\)), Closeness(\(i\)) respectively represent the restructured network's degree, clustering coefficient, and closeness when \(\alpha=0\). Different cost coefficients \(\alpha\) have different effects on network structure variables. Fig. 4 shows the impact of cost coefficient changes on the three network structure variables of the international organizations in the iOTN, including the degree, clustering coefficient, and closeness. The network degree can be used to measure the prosperity of the global energy trade network. A larger network degree indicates greater edge density, more trade relations, and more trade activities. The clustering coefficient is used to measure the density of triangular trade relations between economies. A larger clustering coefficient indicates a larger number of triangular oil trade relations in the iOTN. The closeness characterizes the degree of connection between trade network economies. If the closeness is greater, the path between the two economies will be shorter, and this is more conducive to the effectiveness of oil trade. Figure 4: The vulnerability \(V\) of the economy under the influence of trade friction factors \(\alpha\). Under different \(\alpha\) values, there are the average network degree, the average clustering coefficient, and the average closeness from top to bottom. From left to right, the embedding dimensions are \(D=4,5,6,7\), and the optimal benefit endowment dimensions are their respective \(D_{b}^{*}\). In Fig. 4, the change of cost coefficient \(\mathbf{c}\) has a greater impact on the average degree of international organizations. When \(\mathbf{c}_{\alpha}=0.6\mathbf{c}\), the average degree of the entire network has increased by more than \(60\%\). The increase and decrease of the trade cost coefficient \(\mathbf{c}_{\alpha}\) have an asymmetric effect on the average degree of the network. When \(\mathbf{c}_{\alpha}=1.4\mathbf{c}\), the average degree of the entire network is reduced by nearly \(40\%\). 
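A compact sketch of the shock experiment described above: rescale the cost weights by \(1+\alpha\) as in Eq. (10), rebuild the network above the utility threshold \(\Delta_{\min}\) as in Eq. (12), and report an average-degree vulnerability. It continues from the toy setup and `utility_gain` above; averaging degrees before taking the ratio is a simplification of Eq. (13).

```python
def rebuild(E, b, c_eff, delta_min=None, n_edges=None):
    """Rebuild the trade network from Delta_ij; with delta_min=None, keep the top
    n_edges pairs (the alpha = 0 baseline) and return the resulting threshold."""
    du = utility_gain(E, b, c_eff)
    delta = np.minimum(du, du.T)
    np.fill_diagonal(delta, -np.inf)
    A_new = np.zeros_like(delta)
    if delta_min is None:                              # baseline: keep the N_edges largest Delta_ij
        iu = np.triu_indices_from(delta, k=1)
        keep = np.argsort(delta[iu])[::-1][:n_edges]
        A_new[iu[0][keep], iu[1][keep]] = 1
        delta_min = delta[iu][keep].min()              # utility threshold Delta_min of the text
    else:                                              # shocked network: keep pairs above the threshold
        A_new[np.triu(delta >= delta_min, k=1)] = 1    # Eq. (12); ">=" keeps the alpha = 0 baseline intact
    return A_new + A_new.T, delta_min

A0, delta_min = rebuild(E, b, c, n_edges=int(A.sum() // 2))
deg0 = A0.sum(axis=1).mean()
for alpha in (-0.4, -0.2, 0.0, 0.2, 0.4):
    A_alpha, _ = rebuild(E, b, (1.0 + alpha) * c, delta_min=delta_min)   # c_alpha of Eq. (10)
    V = A_alpha.sum(axis=1).mean() / deg0              # average-degree analogue of V_Degree
    print(f"alpha = {alpha:+.1f}:  average-degree vulnerability V = {V:.2f}")
```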
The reduction of \(\mathbf{c}_{\alpha}\) (\(\alpha<0\)) corresponds to the lifting of trade wars, the lifting of trade barriers, and the reduction of trade tariffs. The impact of such policies varies across organizations, with the average degree of impact being highest for all global economies, followed by the OECD, and the G7 having the least impact. In general, the impact of trade cost on powerful economies is less than that on weak economies, and the impact on OPEC is greater than that on G7. The increase of \(\mathbf{c}_{\alpha}\) (\(\alpha>0\)) corresponds to situations such as trade wars, trade barriers, and trade tariffs. Such policies that increase trade costs have less impact on the economy compared to policies that reduce trade costs. The small difference in the impact on different organizations suggests a slower recovery in trade. In Fig. 4, the change of the cost coefficient \(\mathbf{c}\) has a smaller effect on the average clustering coefficient of the economies in the organization than on the average degree. When \(\mathbf{c}_{\alpha}=0.6\mathbf{c}\), the average clustering coefficient of the entire network increases by no more than \(15\%\). The increase and decrease of the trade cost coefficient \(\mathbf{c}_{\alpha}\) also have an asymmetric effect on the average clustering coefficient. When \(\mathbf{c}_{\alpha}=1.4\mathbf{c}\), the average clustering coefficient of the entire network is reduced by approximately \(15\%\). Similar to the previous results, changes in trade costs have less impact on powerful economies than on weak ones. The closeness centrality of different organizations are affected similarly, and strong economies are also less affected than weak ones. These results all point to the increased risks and greater vulnerability of smaller economies in the crisis and the additional challenges they will encounter in the later recovery process. Since \(\alpha\) is greater than \(0\), it means that trade frictions and trade costs are increased, so trade relations will be reduced, so vulnerability \(V\) is less than \(1\). However, all curve contains all economies, and it contains a large number of small economies, which are more affected by trade shocks, so the curve is smaller than that of other organizations, which contain a large number of powerful economies. Stronger economies are also less affected than weaker ones, resulting in a \(V\) value close to \(1\). To verify whether smaller economies are more affected in the simulation, we divided the economies into three Figure 5: The vulnerability analysis of economies in large, medium, and small groups under the influence of trade friction factors \(\alpha\). According to the volumes of oil export and import, the economies can be divided into large, medium, and small ones. The settings in the figure are the same as those in Fig. 4. groups according to the oil trade volume of export and import. For statistical validity, we divided all economies into three groups, each containing the same number of economies. Fig. 5 shows the changes in the average network degree, average clustering coefficient, and average closeness of economies in different groups under different \(\alpha\) values. From left to right in Fig. 5, the embedding dimensions are \(D=4,5,6,7\), and the optimal benefit endowment dimensions are their respective \(D_{b}^{*}\). There are the average network degree, the average clustering coefficient, and the average closeness from top to bottom. As shown in Fig. 
5, the traditional powerful oil import and export economies will be less affected when they encounter shocks. The economies with small oil imports and exports are more susceptible to shocks. Further comparison and analysis of Fig. 5 shows that oil-exporting economies are more susceptible to shocks. The influence of the importing economies is relatively small. To a certain extent, this finding also shows that different economies need to diversify their strategies to respond to shocks in the face of changes in the international environment. Risk prevention and risk mitigation should be based on their trade attributes. Similarly, we can use simulation to study individual economies' impact when external emergencies or disasters occur. The simulation can provide certain predictions and preliminary preparations for disaster crises. In Fig. 5 we can also find that under different trade cost coefficients \(\mathbf{c}_{\alpha}\) value, different embedded dimension simulation, and different structural variables, the vulnerability has a relative similarity. This finding illustrates to a certain extent that the numerical simulation results of our method have certain robustness. It further illustrates that our method has certain credibility for measuring the endowments of different economies. ## 6 Discussion and application The importance and vulnerability of the oil trade necessitates a move toward diversification and efficiency. We constructed an undirected weighted international oil trade network based on the oil trade data from 1990 to 2019. The main contribution of this paper is that the iOTN is reconstructed by integrating game theory and utility theory, the trade network data being directly put into the machine learning algorithm as a data representation. Based on the network representation learning, the large-scale optimization method of machine learning is used to perform directly representation learning on heterogeneous economies in the iOTN. Simulation based on game theory and agent-based model provides suggestions for selecting trade partners and the formulation of trade cooperation for various economies. The main finding is that export-oriented economies have a greater impact than import-oriented economies after receiving external shocks in the iOTN. The impact of external events that leads to an increase or decrease in the cost of trade friction has an asymmetric impact on the iOTN, and the reduction in trade friction has a greater impact on oil trade. Smaller economies are more severely affected when they are hit by external events than larger economies. For international organizations, the dynamic evolution trend of international oil trade relations can be obtained through numerical simulation. For individual economies, it is possible to find cooperation strategies in their own economies' international trade status under specific trading environment changes. From the numerical simulation results, it can be found that different economies have to formulate and respond to different trade strategies based on their characteristics and cannot merely copy the trade response of other economies. Under our model framework, more detailed simulation and analysis can be conducted for individual economies. For example, the trade cost coefficient of a particular economy can be changed separately, instead of changing the overall cost coefficient as in this article. In addition, when an economy becomes politically unstable, the increase in its cost coefficient will definitely impact other economies. 
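As noted above, the same machinery can be aimed at an individual economy. The following purely illustrative variant (a hypothetical helper built on the sketches above, not a procedure taken from the text) inflates the cost term only for trades involving one economy \(k\):

```python
def rebuild_single_economy_shock(E, b, c, k, alpha, delta_min):
    """Illustrative single-economy shock: apply the factor (1 + alpha) to the cost
    only for utility gains of pairs that involve economy k, then rebuild above Delta_min."""
    du = utility_gain(E, b, c)                         # baseline Delta u_i(j)
    du_shock = utility_gain(E, b, (1.0 + alpha) * c)
    du[k, :], du[:, k] = du_shock[k, :], du_shock[:, k]   # only trades involving k are shocked
    delta = np.minimum(du, du.T)
    np.fill_diagonal(delta, -np.inf)
    A_new = np.zeros_like(delta)
    A_new[np.triu(delta >= delta_min, k=1)] = 1
    return A_new + A_new.T

A_k = rebuild_single_economy_shock(E, b, c, k=0, alpha=0.4, delta_min=delta_min)
print("trade partners of economy 0 before/after its own cost shock:",
      int(A0[0].sum()), "->", int(A_k[0].sum()))
```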
Our finding shows that there is a need for more cooperative relationships between different economies to diversify their trading partners to increase their ability to resist and mitigate risks. Based on the results of our vulnerability analysis, we can give early warning signals to some of the more vulnerable economies, and those economies can make trade strategy adjustments in advance. Considering the learned attributes of economies, we can classify economies into different groups, so that we can determine trade object attributes more precisely and specify trade strategies to provide trade efficiency and stability. Follow-up work can integrate more economic entity endowment data, conduct early warning of systematic risk and critical risk path identification, and propose more accurate response strategies. Based on the heterogeneous subject's adaptive decision model, the pattern of global energy trade operation can be monitored dynamically and in real time. Integrating emergencies (national bankruptcy, economic sanctions, and epidemic-affected economies), computational experiment methods and "scenario-response" models can make the simulated "scenarios" more in line with the real domestic trade system settings. Risk warning and risk response strategy evaluation will be conducted, corresponding plans will be made, and scientific and feasible strategies will be formulated. ## Declaration of competing interest The authors declare that they have no conflict of interest. ## Data Availability Oil data sets related to this article can be found at [https://comtrade.un.org/](https://comtrade.un.org/), an open-source online data repository. The geographic distance data sets of the CEPII database come from [http://www.cepii.fr/](http://www.cepii.fr/).
2310.08983
Heavy flavor production in the Parton-Hadron-String Dynamics (PHSD)
Relativistic heavy-ion collisions produce a hot and dense nuclear matter, through which one can study the phase diagram of QCD. Open and hidden heavy flavors are promising probes to search for the properties of the hot and dense nuclear matter under extreme conditions. We present how the production and interactions of open and hidden heavy flavors in heavy-ion collisions are realized in the Parton-Hadron-String Dynamics, which is a non-equilibrium microscopic transport approach for the description of the dynamics of strongly interacting hadronic and partonic matter.
Taesoo Song, Joerg Aichelin, Elena Bratkovskaya
2023-10-13T09:59:30Z
http://arxiv.org/abs/2310.08983v1
# Heavy flavor production in the Parton-Hadron-String Dynamics (PHSD) ###### Abstract: Relativistic heavy-ion collisions produce a hot and dense nuclear matter, through which one can study the phase diagram of QCD. Open and hidden heavy flavors are promising probes to search for the properties of the hot and dense nuclear matter under extreme conditions. We present how the production and interactions of open and hidden heavy flavors in heavy-ion collisions are realized in the Parton-Hadron-String Dynamics, which is a non-equilibrium microscopic transport approach for the description of the dynamics of strongly interacting hadronic and partonic matter. Introduction Relativistic heavy-ion collisions produce nuclear matter in extreme conditions. The properties of the matter produced in heavy-ion collisions can be probed by bulk particles such as pions and kaons, electromagnetic particles and hard particles like jets and heavy flavors. Heavy flavors have the following advantages: First, their production requires a large energy-momentum transfer, i.e. a hard scattering. Thus the production can be described in pQCD and is model-independent. Second, since heavy flavors are produced early in heavy-ion collisions, they potentially contain the information of the produced matter at the initial stage of heavy-ion collisions. In fact heavy flavors are not fully thermalized in heavy-ion collisions due to their large mass and some memory of the initial stage survives. In experiment it is observed that single electrons from \(B\) meson decays are suppressed at large transverse momentum [1] and the excited states of bottomonium are strongly suppressed - compared to the ground state [2] - in heavy-ion collisions at LHC. The former attributes to the interaction of heavy quarks with partons in the QGP, which leads to an energy loss of fast heavy quarks, and the latter to the dissolution of excited states in a QGP at high temperature. Therefore, both reveal the properties of matter in extreme conditions produced in relativistic heavy-ion collisions. ## 2 Parton-hadron-string dynamics (PHSD) For more precise quantitative studies of heavy flavor production and dynamics it is necessary to describe the space-time evolution of the matter produced in heavy-ion collisions. The Parton-Hadron-String Dynamics (PHSD) is a non-equilibrium microscopic transport approach for the description of the dynamics of strongly interacting hadronic and partonic matter [3]. The dynamics is based on the solution of generalized off-shell transport equations derived from Kadanoff-Baym many-body theory which is beyond the semi-classical BUU transport. Heavy-ion collisions in the PHSD start by nucleon-nucleon scattering. When the collision energy is relatively low, it leads to elastic scattering or the excitation of nucleon(s) or at most the production of a couple of particles. As the collision energy increases, however, more and more particles are produced and the LUND string model is useful to describe the multiparticle production. The number of strings grows with increasing bombarding energies. In PHSD, if the local energy density is above the critical value for a phase transition (according to the lQCD), strings melt into quarks and antiquarks and form a QGP. Otherwise, strings fragment into hadrons, which is called string fragmentation. The equation-of-state (EoS) of the QGP at small baryon chemical potential is available from lattice QCD calculations. 
However, the EoS is a macroscopic property of the QGP and provides neither a microscopic picture nor information on the degrees of freedom of the QGP. Therefore, PHSD adopts the dynamical quasi-particle model (DQPM), where (anti)quarks and gluons are described by spectral functions whose pole masses and spectral widths depend on temperature and baryon chemical potential [4]. The pole masses and spectral widths follow those from hard-thermal-loop calculations, but the strong coupling is extracted from the lattice EoS in order to access the non-perturbative region close to \(T_{c}\). As a result, the DQPM reproduces the lattice EoS and additionally provides a microscopic view of the QGP. Furthermore, the microscopic picture enables us to calculate transport coefficients of the QGP which are consistent with lattice QCD results in thermodynamic equilibrium [5]. When the QGP expands with time and the temperature drops to \(T_{c}\), partons hadronize while conserving energy-momentum and all quantum numbers. Since partons are off-shell, the hadronized mesons and (anti)baryons are also off-shell by energy conservation. The scattering of off-shell hadrons is based on many-body theory (G-matrix), chiral models or experimental data on scattering cross sections. We recall that the PHSD provides a good description of many bulk observables such as rapidity and transverse momentum distributions and flow coefficients of hadrons from Schwerionensynchrotron (SIS) to Large Hadron Collider (LHC) energies [6]. ## 3 Open heavy flavor production In PHSD the production point and initial momentum of a heavy quark pair are provided by the Glauber model and the PYTHIA event generator [7]. Since the PYTHIA event generator is based on leading-order pQCD calculations, though initial- and final-state showers are included, the momentum and rapidity of heavy quarks from PYTHIA are rescaled such that their distributions are similar to those from the fixed-order next-to-leading logarithm (FONLL) calculations [8]. In \(pp\) collisions the produced heavy quark hadronizes through heavy quark fragmentation, using the Peterson function [9] for the transverse momentum, while its chemistry is decided based on experimental data, which are generally energy-independent. The parton distribution function (PDF) in a nucleus is modified from that in a single nucleon, which is called the (anti)shadowing effect. This modification affects heavy flavor production, because the main production processes are \(g+g\to Q+\bar{Q}\) or \(q+\bar{q}\to Q+\bar{Q}\). This (anti)shadowing effect is realized in PHSD using the EPS09 package [10]. Heavy quarks interact in the QGP through elastic scattering with off-shell massive partons. The scattering cross sections are calculated from leading-order Feynman diagrams based on the DQPM, where the strong coupling depends on temperature and baryon chemical potential, while the off-shell mass of the exchanged parton plays the role of a regulator. The resulting spatial diffusion coefficient is consistent with that from lattice QCD calculations [11]. As the local energy density approaches 0.75 \(\mathrm{GeV/fm^{3}}\) during the expansion of the matter, a heavy (anti)quark attempts coalescence with light partons that are close in both coordinate and momentum space [12]. If a heavy (anti)quark has not coalesced by the time the energy density drops below 0.4 \(\mathrm{GeV/fm^{3}}\), it is forced to hadronize by fragmentation as in \(pp\) collisions. 
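For orientation, the Peterson fragmentation function referred to above has the standard form (quoted from the general literature rather than from Ref. [9] directly; \(z\) denotes the momentum fraction carried by the heavy hadron and \(\epsilon_{Q}\) is a flavor-dependent parameter): \[D_{Q}(z)\propto\frac{1}{z\left(1-\frac{1}{z}-\frac{\epsilon_{Q}}{1-z}\right)^{2}}.\]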
An energetic heavy quark can hardly find a neighboring light quark in momentum space, even though it is surrounded by light partons in coordinate space. Therefore the coalescence probability is large at low momentum and decreases with increasing heavy quark momentum. The probability also decreases with increasing centrality of heavy-ion collisions, and hadronization proceeds dominantly by fragmentation in extremely peripheral collisions. The hadronized \(D\) mesons interact with light mesons and baryons with scattering cross sections calculated in a unitarized effective chiral Lagrangian approach with heavy-quark spin symmetry [12]. Fig. 1 shows the PHSD results [13] for the nuclear modification factor \(R_{\mathrm{AA}}\) and the elliptic flow \(v_{2}\) of \(D\) mesons, respectively in 0-10 % and 30-50 % central Pb+Pb collisions at \(\sqrt{s_{\mathrm{NN}}}\) = 2.76 TeV, in comparison to the experimental data from the ALICE Collaboration [14]. Figure 1: (Left) \(R_{\rm AA}\) and (right) \(v_{2}\) of \(D\) mesons respectively in 0-10 % and 30-50 % central Pb+Pb collisions at \(\sqrt{s_{\rm NN}}\) = 2.76 TeV from the PHSD [13], compared to experimental data from the ALICE Collaboration [14]. One can see that the (anti)shadowing effect is necessary to explain \(R_{\rm AA}\) in central collisions, but its influence on \(v_{2}\) is insignificant. ## 4 Hidden heavy flavor production Quarkonium production in \(pp\) collisions proceeds in two steps: first, a heavy quark pair is produced in a hard scattering, and then the produced pair forms a bound state. The former is a hard process where pQCD is applicable, while the latter is a soft process which must rely on a suitable model. In PHSD the former is realized by the PYTHIA event generator and the latter is based on the Wigner projection, where the only parameter is the quarkonium radius. This successfully describes charmonium and bottomonium production in \(pp\) collisions [15, 16], as shown in the left panel of Fig. 2. In heavy-ion collisions there are a couple of differences with respect to \(pp\) collisions. First, the quarkonium radius depends on temperature because the heavy quark potential changes in the QGP. We take the free energy of the heavy quark system from lattice QCD calculations as the heavy quark potential [17] and solve the Schrödinger equation to obtain the eigenvalue and eigenfunction of each quarkonium state. Second, a heavy quark and a heavy antiquark - from two different initial pairs - may form a bound state in heavy-ion collisions, which is called off-diagonal recombination. For the description of quarkonium production in heavy-ion collisions, we use Remler's formalism, where the Wigner projection is carried out first when the local temperature of the QGP drops to the dissociation temperature of each quarkonium state, and is updated whenever a heavy quark or heavy antiquark interacts in the QGP [16]. The right panel of Fig. 2 shows \(R_{\rm AA}\) of \(\Upsilon\)(1S) and \(\Upsilon\)(2S) in 0-80 % central Pb+Pb collisions at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV. The upper limit of the magenta band indicates the initial production of \(\Upsilon\)(1S) at its dissociation temperature, which is about 3.3 \(T_{c}\) for the free-energy heavy quark potential. On the other hand, the lower limit displays the final \(\Upsilon\)(1S) at the end of the QGP, which shows that heavy quark interactions in the QGP suppress the \(\Upsilon\)(1S). 
One can see that the experimental data from the CMS Collaboration [18] lie between the upper and lower limits. Assuming that only 10 % of heavy (anti)quark scatterings in the QGP affect bottomonium production and dissociation, one obtains the red solid line, which is consistent with the CMS data. It is reasonable to conclude that the scattering cross section of a heavy quark and a heavy antiquark forming an \(\Upsilon\)(1S) must be much smaller than twice the heavy quark scattering cross section, because the \(\Upsilon\)(1S) is a color singlet of small size. On the other hand, the \(\Upsilon\)(2S) has a very narrow band, because its dissociation temperature is close to \(T_{c}\). Figure 2: (Left) The PHSD results for the rapidity distribution of \(\Upsilon\)(nS) in \(pp\) collisions at \(\sqrt{s_{\rm NN}}\) = 5.02 TeV and (right) \(R_{\rm AA}\) of \(\Upsilon\)(1S) and \(\Upsilon\)(2S) for the free energy potential in 0-80 % central Pb+Pb collisions at the same energy, compared with experimental data from the CMS and ALICE Collaborations [18, 19]. ## 5 Summary The PHSD describes the hadronic and partonic matter produced in heavy-ion collisions in terms of strongly interacting off-shell particles which reproduce lattice and experimental data. Open heavy flavor production in heavy-ion collisions is realized in PHSD with the help of the PYTHIA event generator for the initial production, the EPS09 package for (anti)shadowing effects, the DQPM for the interaction cross sections in the QGP, coalescence and fragmentation for hadronization, and the unitarized effective chiral Lagrangian for hadronic scattering. According to Remler's formalism, quarkonium production is closely related to the interaction of open heavy flavors in the QGP, and we have found that the scattering cross section is reduced to about 10 % of that of two open heavy flavors for hidden heavy flavor in heavy-ion collisions. ## Acknowledgements We acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the grant CRC-TR 211 'Strong-interaction matter under extreme conditions' - Project number 315477589 - TRR 211. This work is supported by the European Union's Horizon 2020 research and innovation program under grant agreement No 824093 (STRONG-2020). The computational resources have been provided by the LOEWE-Center for Scientific Computing and the "Green Cube" at GSI, Darmstadt and by the Center for Scientific Computing (CSC) of the Goethe University.
2301.00446
The codegree isomorphism problem for finite simple groups
We study the codegree isomorphism problem for finite simple groups. In particular, we show that such a group is determined by the codegrees (counting multiplicity) of its irreducible characters. The proof is uniform for all simple groups and only depends on the classification by means of Artin-Tits' simple order theorem.
Nguyen N. Hung, Alexander Moretó
2023-01-01T17:27:51Z
http://arxiv.org/abs/2301.00446v2
# The Codegree isomorphism problem ###### Abstract. We study the codegree isomorphism problem for finite simple groups. In particular, we show that such a group is determined by the codegrees (counting multiplicity) of its irreducible characters. The proof is uniform for all simple groups and only depends on the classification by means of Artin-Tits' simple order theorem. Key words and phrases:Character codegrees, isomorphism problem, finite simple groups, Huppert's conjecture 2010 Mathematics Subject Classification: Primary 20C15, 20C30, 20C33, 20D06 Part of this work was done while the first author was visiting the Vietnam Institute for Advanced Study in Mathematics (VIASM), and he thanks the VIASM for its hospitality and financial support. The research of the second author is supported by Ministerio de Ciencia e Innovacion (Grant PID2019-103854GB-I00 funded by MCIN/AEI/ 10.13039/501100011033) and Generalitat Valenciana CIAICO/2021/163. irreducible characters of \(G\). (CIC) The approach so far to this problem is more or less similar to Huppert's original method, and therefore, unfortunately, is still case-by-case [1, 1, 1, 2, 3]. Let \(\operatorname{cod}(G)=\{c_{1}<c_{2}<...<c_{k}\}\) and \(m_{G}(c_{i})\) be the number of irreducible characters of \(G\) with codegree \(c_{i}\). The multiset \[C(G):=\{(c_{i},m_{G}(c_{i})):1\leqslant i\leqslant k\}\] is called the group pseudo-algebra of \(G\), which can be viewed as the codegree counterpart of the aforementioned complex group algebra \(\mathbb{C}G\). A natural weaker version of (CIC) asks whether \(G\) and \(S\) must be isomorphic if \(C(G)=C(S)\). We confirm this in our first main result. **Theorem A**.: _Let \(S\) be a finite simple group and \(G\) a finite group. Then_ \[C(G)=C(S)\text{ if and only if }G\cong S.\] The main novelty of this paper is a more uniform approach to these codegree problems with as little case-by-case analysis as possible. Our proof of Theorem A, somewhat surprisingly, only relies on the classification via the so-called _simple order theorem_ (also known as the Artin-Tits theorem [1, 2]), which states that two non-isomorphic finite simple groups have the same order if and only if they are either \(PSL_{4}(2)\) and \(PSL_{3}(4)\) or \(\Omega_{2n+1}(q)\) and \(PSp_{2n}(q)\) for some \(n\geqslant 3\) and odd \(q\). This is perhaps the first time that a result of this type is proved uniformly for all simple groups. There are two key ingredients in the proof of Theorem A. We find it remarkable that they admit strikingly elementary proofs. The first provides a characterization of perfect groups in terms of codegrees, see Theorem 2.3. (Recall that a group is perfect if it coincides with its derived subgroup.) The second is an order-divisibility property involving character codegrees of finite simple groups, see Theorem 3.1. In general the set of codegrees \(\operatorname{cod}(G)\) of a finite group \(G\) does not determine its set of character degrees. However, this is not the case when \(G=S\) is simple. To see why, consider a prime divisor \(p\) of \(|S|\). Then \(S\) has some nontrivial irreducible character of \(p^{\prime}\)-degree (by the squares of the degrees equation). As a result, \(S\) has some codegree divisible by \(|S|_{p}\) - the \(p\)-part of \(|S|\), and this is the largest power of \(p\) dividing any codegree of \(S\). Therefore both the order \(|S|\) and all the character degrees of \(S\) are indeed determined by \(\operatorname{cod}(S)\). 
This argument also proves that if two simple groups have the same set of codegrees, then they have the same order. Our next main result is a far stronger statement but the proof requires the classification. **Theorem B**.: _Let \(S\) and \(H\) be finite simple groups such that \(\operatorname{cod}(S)\subseteq\operatorname{cod}(H)\). Then \(S\cong H\)._ Theorem B in fact is the first step in proving the codegree isomorphism conjecture (CIC). Let \(G\) be any finite group and \(H\) a simple group such that \(\operatorname{cod}(G)=\operatorname{cod}(H)\), and \(N\) be a maximal normal subgroup of \(G\) so that \(S:=G/N\) is simple. In order to prove \(G\cong H\), one would first need to establish \(S\cong H\), under the assumption \(\operatorname{cod}(S)\subseteq\operatorname{cod}(G)=\operatorname{cod}(H)\). This is precisely what we do in Theorem B (see Theorem 8.3). **Remark 1**.: Using Theorem B, we will prove in a subsequent paper [11] that (CIC) holds for all sporadic groups, alternating groups, groups of Lie type of low rank, and, for the first time in the degree/codegree isomorphism problem, groups of Lie type of arbitrary rank over a field of prime order. Furthermore, perhaps unexpectedly, we reduce (CIC) to a problem on \(p\)-groups. **Remark 2**.: The proof of Theorem B is fairly complicated and combines several techniques. In particular, it essentially utilizes some deep results on the representation theory of finite simple groups, including the classification of prime-power-degree representations [13, 1], lower/upper bounds for the largest degree of irreducible representations [10, 11], and the existence of \(p\)-defect zero characters [14, 15, 16]. Along the way we prove an effective and _explicit_ upper bound for the largest character degree \(b(S)\) of an exceptional group \(S\) of Lie type (Theorem 5.3). (See Lemma 5.4 for previous related work on symmetric and alternating groups [10] and classical groups [11].) **Remark 3**.: We need bounds for the largest character degree \(b(S)\) in order to control the behavior of \(f(S):=|S|/b(S)\) - the smallest nontrivial codegree of \(S\). While the relevance of the smallest (or low-degree in general) characters of (quasi/almost)simple groups is well-known in group representation theory (see [15] for the latest results), the smallest codegree had not been studied much before. This invariant arises naturally in the proof of Theorem B (see Lemma 5.1) and measures the relative growth of \(b(S)\) compared to \(|S|\). Our proof would be much simpler if one can show that \(f\) is _divisibly increasing_ among nonabelian simple groups, by which we mean that, if \(S\) and \(H\) are nonabelian simple of different orders such that \(|S|\) divides \(|H|\), then \(f(S)<f(H)\). We indeed confirm this phenomenon in many cases, particularly when one of the two groups involved is alternating (see Propositions 7.1, 7.3, and 8.1). The layout of the paper is as follows. In Section 2, we prove some results on prime character codegrees and provide a short proof of a theorem of Riese and Schmid on prime-power codegrees. In Section 3 we discuss the order-divisibility property and its consequences involving character codegrees of finite simple groups. Using the results in the preceding sections, we prove Theorem A in Section 4. Results on bounding the largest character degree are presented in Section 5. Finally, the proof of Theorem B is carried out in Sections 6, 7, and 8. ## 2. 
Prime-power codegrees In this section we prove some results on prime-power character codegrees. These results show that, in contrast to character degrees, there are significant restrictions on the structure of groups with faithful irreducible characters of prime/prime-power codegree. We mainly follow the notation from [13] for character theory and [1, 2] for finite simple groups. Throughout, for a positive integer \(n\) and a prime \(p\), we write \(n_{p}\) to denote the maximal \(p\)-power divisor of \(n\) and \(n_{p^{\prime}}:=n/n_{p}\) to denote the maximal divisor not divisible by \(p\) of \(n\). Let \(N\eqeq G\) and \(\theta\in\operatorname{Irr}(N)\). We write \(\operatorname{Irr}(G|\theta)\) for the set of irreducible constituents of \(\theta^{G}\) and \(\operatorname{Irr}(G|N)\) for the set of irreducible characters of \(G\) whose kernels do not contain \(N\). If \(G\) is a group, \(\pi(G)\) is the set of primes that divide \(|G|\). If \(n\) is an integer, \(\pi(n)\) is the set of primes that divide \(n\), and if \(\mathcal{S}\) is a set of integers, then \(\pi(\mathcal{S})\) is the set of primes that divide some member of \(\mathcal{S}\). As usual, \(\operatorname{cd}(G):=\{\chi(1):\chi\in\operatorname{Irr}(G)\}\) is the set of all irreducible character degrees of \(G\). Other notation will be recalled or defined when necessary. We begin by collecting some known facts on character codegrees that we will use without explicit mention. **Lemma 2.1**.: _Let \(G\) be a finite group and \(\chi\in\operatorname{Irr}(G)\). The following hold:_ 1. _If_ \(\chi\) _is not the principal character, then_ \(\operatorname{cod}(\chi)>\chi(1)\)_._ 2. _If_ \(N\vartriangleleft G\) _and_ \(N\leq\ker(\chi)\)_, then the codegree of_ \(\chi\) _as a character of_ \(G\) _coincides with the codegree of_ \(\chi\) _viewed as a character of_ \(G/N\)_._ 3. _If_ \(N\vartriangleleft G\) _and_ \(\theta\in\operatorname{Irr}(N)\) _lies under_ \(\chi\)_, then_ \(\operatorname{cod}(\theta)\) _divides_ \(\operatorname{cod}(\chi)\)_._ 4. \(\pi(G)=\pi(\operatorname{cod}(G))\)_._ 5. _If_ \(G\) _is abelian, then_ \(\operatorname{cod}(G)=o(G)\)_, where_ \(o(G)\) _is the set of orders of the elements of_ \(G\)_._ Proof.: Part (i) is [11, Lem. 2.1]. Parts (ii) and (iii) are contained in [10, Lem. 2.1], and part (iv) is [10, Lem. 2.4]. Now, we prove part (v). The inclusion \(o(G)\subseteq\operatorname{cod}(G)\) follows from [11, Lem. 2.2]. Conversely, if \(d\in\operatorname{cod}(G)\) there exists \(\chi\in\operatorname{Irr}(G)\) such that \(d=|G:\ker(\chi)|\) (note that since \(G\) is abelian, \(\chi\) is linear). Since \(G/\ker(\chi)\) is cyclic, we conclude that \(G\) has elements of order \(d\). ### Prime codegrees: characterizing perfect groups The goal in this subsection is to provide a characterization of perfect groups in terms of the absence of prime codegrees. **Theorem 2.2**.: _Let \(G\) be a finite group. Suppose that there exists \(\chi\in\operatorname{Irr}(G)\) faithful such that \(\operatorname{cod}(\chi)=p\) is a prime number. Then \(G\) is either the cyclic group of order \(p\) or a Frobenius group with Frobenius kernel of order \(p\)._ Proof.: We argue by induction on \(|G|\). Let \(N\vartriangleleft G\) be minimal such that there exists \(\theta\in\operatorname{Irr}(N)\) lying under \(\chi\) with \(\operatorname{cod}(\theta)=p\). 
Then \[\frac{|G|}{\chi(1)}=p=\operatorname{cod}(\theta)=\frac{[N:\ker(\theta)]}{ \theta(1)},\] and we deduce that \[p=\operatorname{cod}(\chi)>\chi(1)=[G:N]|\ker(\theta)|\theta(1).\] In particular, \(p\) does not divide any of the three factors in the right hand side. Suppose first that \(N<G\). By the inductive hypothesis, \(N/\ker(\theta)\) is cyclic of order \(p\) or a Frobenius group with Frobenius kernel \(K/\ker(\theta)\) of order \(p\). In the latter case, if \(\lambda\in\operatorname{Irr}(K/\ker(\theta))\) lies under \(\theta\) then \(\operatorname{cod}(\lambda)=p\). This contradicts the choice of \(N\). (Note that \(K\) is normal in \(G\) because \(K\) is characteristic in \(N\).) Hence, we may assume that \(N/\ker(\theta)\) is cyclic of order \(p\). By Clifford's theorem the faithful character \(\chi_{N}\) is a sum of \(G\)-conjugates of \(\theta\). Let \(T\) be a complete set of representatives in \(G\) for these conjugates. By [16, Lem. 2.21], the intersection of \(\ker(\theta^{g})\), where \(g\) runs over \(T\), is trivial. We conclude that \(N\) embeds into the direct product \[\prod_{g\in T}N/\ker(\theta^{g}).\] Each of the direct factors has order \(p\), and so \(N\) is an elementary abelian \(p\)-group. Since \(p\) does not divide \(|\ker(\theta)|\), we conclude that \(N\) is cyclic of order \(p\). As \(\theta\) is linear and \(\chi(1)=|G|/p\), we now have \(\chi=\theta^{G}\). It follows that \(G\) is a Frobenius group with kernel \(N\), as desired. Now, we consider the case \(N=G\). Let \(M\) be a maximal normal subgroup of \(G\). Since \(N=G\) the codegree of any irreducible character of \(N\) lying under \(\chi\) is \(1\). This means that \(\chi_{M}=\chi(1)\mathbf{1}_{M}\). But \(\chi\) is faithful, so we deduce that \(M=1\) and \(G\) is simple. If \(G\) is abelian, then it is the cyclic group of order \(p\). If \(G\) is not abelian, then \(|G|/p=\chi(1)<\sqrt{|G|}\) and it follows that \(|G|<p^{2}\). By Sylow's theorems, it follows that \(G\) has a normal Sylow \(p\)-subgroup. This contradiction completes the proof. The following consequence of Theorem 2.2 is already mentioned in the introduction. **Theorem 2.3**.: _A finite nontrivial group \(G\) is perfect if and only if \(G\) does not have any prime character codegree._ Proof.: By Lemma 2.2, if \(G\) has an irreducible character \(\chi\) of prime codegree then \(G/\ker(\chi)\) is solvable. In particular, \(G\) is not perfect. Conversely, if \(G\) is not perfect, then the abelian group \(G/G^{\prime}\) has some irreducible character of prime codegree. ### Prime power codegrees: the Riese-Schmid theorem Chillag and Herzog proved in [1, Thm. 1] that a simple group does not possess nontrivial irreducible characters of prime power codegree. The proof relied on a case by case analysis of the simple groups, using the fact that, most of the times, they have \(p\)-blocks of defect zero. This was generalized by Riese and Schmid in [13, Cor. 3] to quasisimple groups, using also block theory and the classification. We offer a short proof of this result that only depends on an easy consequence of the classification, which is due to W. Kimmerle, R. Lyons, R. Sandling, and D. N. Teague [12, Thm. 3.6]: **Lemma 2.4**.: _For every finite simple group \(S\) and prime \(p\), \(|S|<(|S|_{p^{\prime}})^{2}\)._ The following is a restatement of [13, Lem. 1] in the language of codegrees. **Lemma 2.5**.: _Let \(p\) be a prime. Let \(G\) be a finite group and \(\chi\in\operatorname{Irr}(G)\) faithful. 
Then \(\operatorname{cod}(\chi)\) is a power of \(p\) if and only if \(\chi\) is induced from a Sylow \(p\)-subgroup of \(G\)._ **Theorem 2.6**.: _A quasisimple group \(G\) does not possess nonprincipal characters of prime power codegree._ Proof.: Suppose that there exists \(\mathbf{1}_{G}\neq\chi\in\operatorname{Irr}(G)\) of \(p\)-power codegree. Let \(K:=\ker(\chi)\) and note that \(K\leq\mathbf{Z}(G)\). By Lemma 2.5, \(\chi\) is induced from a Sylow \(p\)-subgroup of \(G/K\). Therefore, \[\chi(1)\geq|G/K|_{p^{\prime}}.\] By Lemma 2.4, we know that \[|G/\mathbf{Z}(G)|<(|G/\mathbf{Z}(G)|_{p^{\prime}})^{2}\leq(|G/K|_{p^{\prime}})^ {2}.\] Hence, by [11, Cor. 2.30], \[\chi(1)\leq|G:\mathbf{Z}(G)|^{1/2}<|G/K|_{p^{\prime}},\] which violates the inequality above. The next result is a restatement in terms of character codegrees of Theorem B of [11]. The proof in [11] uses Brauer's first and third main theorems. Recall that if a group \(G\) has trivial \(p^{\prime}\)-core \(\mathbf{O}_{p^{\prime}}(G)\), then it is defined to be \(p\)-constrained if the \(p\)-core \(\mathbf{O}_{p}(G)\) contains its centralizer. **Theorem 2.7** (Riese-Schmid).: _Let \(G\) be a finite group and let \(p\) be a prime. Suppose that \(\chi\in\operatorname{Irr}(G)\) is faithful of \(p\)-power codegree. Then \(\mathbf{O}_{p^{\prime}}(G)=1\) and \(G\) is \(p\)-constrained._ Proof.: By Theorem 2.6, we know that \(G\) is not simple. Let \(N\) be a minimal normal subgroup of \(G\) and let \(\theta\in\operatorname{Irr}(N)\) lying under \(\chi\). Since \(\theta\) has \(p\)-power codegree (by Lemma 2.1(iii)) and \(\operatorname{cod}(\theta)>1\) (note that since \(\chi\) is faithful, \(\theta\neq\mathbf{1}_{N}\)), we deduce that \(N\) is either an elementary abelian \(p\)-subgroup or a direct product of nonabelian simple groups of order divisible by \(p\). In particular, \(\mathbf{O}_{p^{\prime}}(G)=1\). We claim that \(N\) is an elementary abelian \(p\)-group. Suppose that \(N=S_{1}\times\cdots\times S_{t}\), with \(S_{i}\cong S\) for some nonabelian simple group \(S\) of order divisible by \(p\). We wish to reach a contradiction. Since \(\theta\neq\mathbf{1}_{N}\), there exists a nonprincipal \(\psi\in\operatorname{Irr}(S_{i})\) lying under \(\theta\) for some \(i\). Note that \(\operatorname{cod}(\psi)\) is a power of \(p\) and this contradicts Theorem 2.6. The claim follows. Write \(P:=\mathbf{O}_{p}(G)\) and \(C:=\mathbf{C}_{G}(P)\). Note that \(C\cap P=\mathbf{Z}(P)\) and \(\mathbf{O}_{p^{\prime}}(C)=1\) (because \(\mathbf{O}_{p^{\prime}}(G)=1\)). We want to see that \(C\leq P\). Assume not. Take \(K\) subnormal in \(G\) such that \(\mathbf{Z}(P)\leq K\leq C\) and \(K/\mathbf{Z}(P)\) is simple. Since \(\mathbf{O}_{p^{\prime}}(C)=1\) and \(G\) does not have nonabelian minimal normal subgroups, we conclude that \(K^{\prime}\) is quasisimple. Now, take \(\gamma\in\operatorname{Irr}(K^{\prime})\) lying under \(\chi\). Again, we have that \(\gamma\) is not principal and \(\operatorname{cod}(\gamma)\) is a \(p\)-power. This contradicts Theorem 2.6. We end this section with a variation of Theorem 2.6. **Theorem 2.8**.: _Let \(G\) be a finite group. Suppose that \(p\) is a prime and \(\chi\in\operatorname{Irr}(G)\) is faithful of \(p\)-power codegree. Then \(\operatorname{cod}(\chi)\) exceeds the \(p\)-part of the product of the orders of the nonabelian composition factors in a composition series of \(G\). 
In particular, if \(K/L\) is a non-abelian chief factor of \(G\), then \(\operatorname{cod}(\chi)>|K/L|_{p}\)._ Proof.: Let \(n\) be the product of the orders of the non-abelian composition factors in a composition series of \(G\). Using again that \(\chi\) is induced from a Sylow \(p\)-subgroup and [12], we have \[\operatorname{cod}(\chi)>\chi(1)\geq|G|_{p^{\prime}}\geq n_{p^{\prime}}>n_{p},\] as wanted. ## 3. An order-divisibility result for codegrees The following order-divisibility result will be crucial in the proofs of our main theorems. **Theorem 3.1**.: _Suppose that \(S\) is a finite simple group and \(G\) a finite group such that \(\operatorname{cod}(S)\subseteq\operatorname{cod}(G)\). Then \(|S|\) divides \(|G|\)._ Proof.: Let \(d_{1},...,d_{k}\) be all the degrees of nontrivial irreducible characters of \(S\), and let \(m_{i}\) (\(1\leqslant i\leqslant k\)) be the number of those characters of degree \(d_{i}\). By the assumption, for each \(i\), there exists \(\chi_{i}\in\operatorname{Irr}(G)\) such that \[\frac{|S|}{d_{i}}=\frac{[G:\ker(\chi_{i})]}{\chi_{i}(1)}=\frac{|G|}{\chi_{i}(1 )|\ker(\chi_{i})|}.\] It follows that \[\sum_{i=1}^{k}\frac{m_{i}d_{i}^{2}}{|S|^{2}}=\sum_{i=1}^{k}\frac{m_{i}\chi_{i}( 1)^{2}|\ker(\chi_{i})|^{2}}{|G|^{2}},\] and thus \[\frac{\sum_{i=1}^{k}m_{i}\chi_{i}(1)^{2}|\ker(\chi_{i})|^{2}}{|G|^{2}}=\frac{ \sum_{i=1}^{k}m_{i}d_{i}^{2}}{|S|^{2}}=\frac{|S|-1}{|S|^{2}}.\] Therefore \(|S|^{2}\) divides \(|G|^{2}(|S|-1)\), and the theorem follows. We record some consequences of Theorem 3.1 that will be needed in subsequent sections. **Corollary 3.2**.: _Suppose that \(S\) and \(H\) are finite simple groups such that \(\operatorname{cod}(S)=\operatorname{cod}(H)\). Then \(|S|=|H|\)._ Proof.: This directly follows from Theorem 3.1. (Or, alternatively, from the comment in the paragraph that precedes the statement of Theorem B.) **Lemma 3.3**.: _Suppose that \(S\) and \(H\) are finite nonabelian simple groups such that \(\operatorname{cod}(S)\subseteq\operatorname{cod}(H)\). Let \(x:=|H|/|S|\). Then \(x\in\mathbb{N}\) and \(dx\in\operatorname{cd}(H)\) for every \(1\neq d\in\operatorname{cd}(S)\)._ Proof.: We know that \(x\in\mathbb{N}\) by Theorem 3.1. For each \(1\neq d\in\operatorname{cd}(S)\), we have \(|S|/d\in\operatorname{cod}(H)\), and thus there exists some \(\chi\in\operatorname{Irr}(H)\) such that \(|S|/d=|H|/\chi(1)\), implying that \(\chi(1)=dx\), as claimed. **Lemma 3.4**.: _Let \(S\) and \(H\) be finite simple groups of Lie type. Suppose that the defining characteristic of \(H\) is \(p\) and \(\operatorname{cod}(S)\subseteq\operatorname{cod}(H)\). Then \(|S|_{p^{\prime}}=|H|_{p^{\prime}}\) and there exists \(\chi\in\operatorname{Irr}(S)\) such that \(\chi(1)=|S|_{p}\)._ Proof.: We first observe that \(|S|\) is divisible by \(p\) because otherwise every codegree of \(S\) is not divisible by \(p\) but the only nontrivial codegree of \(H\) not divisible by \(p\) is \(|H|_{p^{\prime}}=|H|/\mathrm{St}_{H}(1)\). By [10, 11], \(S\) has an irreducible character, say \(\chi\), of \(p\)-defect \(0\), so that \(\operatorname{cod}(\chi)\) is coprime to \(p\). Therefore we have \[|S|/\chi(1)=|H|_{p^{\prime}}.\] It follows from Theorem 3.1 that \(\chi(1)|H|_{p^{\prime}}=|S|\) divides \(|H|\), implying that \(|S|_{p^{\prime}}=|H|_{p^{\prime}}\) and \(\chi(1)=|S|_{p}\). Remark that when \(p\geqslant 5\), the above result holds for all \(S\). 
This is because, in such case, irreducible \(p\)-defect zero characters still exist in alternating groups (by [1]) and in sporadic simple groups [11]. **Lemma 3.5**.: _Let \(S\) be a finite simple group of Lie type and \(H\) a finite nonabelian simple group. Suppose that \(\operatorname{cod}(S)\subseteq\operatorname{cod}(H)\) and there are primes \(p\neq r\) such that \(H\) has a unique character degree divisible by each \(|H|_{p}\) and \(|H|_{r}\). Then \(|S|=|H|\) and \(\operatorname{cd}(S)\subseteq\operatorname{cd}(H)\)._ Proof.: Repeating the arguments in the proof of Lemma 3.4, we have \(|S|_{p^{\prime}}=|H|_{p^{\prime}}\) and \(|S|_{r^{\prime}}=|H|_{r^{\prime}}\), implying that \(|S|=|H|\) and \(\operatorname{cd}(S)\subseteq\operatorname{cd}(H)\). ## 4. Group pseudo-algebras of simple groups: Theorem A In this section we prove Theorem A, using the results in the preceding sections. Theorem 3.1 is useful in proving results concerning codegrees of finite simple groups. One of them is the next theorem, whose proof makes use of the simple order theorem. Recall that the simple order theorem asserts that two non-isomorphic finite simple groups have the same order if and only if they are either \(PSL_{4}(2)\) and \(PSL_{3}(4)\) (of order \(20,160\)) or \(\Omega_{2n+1}(q)\) and \(PSp_{2n}(q)\) (odd-dimensional orthogonal and symplectic groups, of order \((1/2)q^{n^{2}}\prod_{i=1}^{n}(q^{2i}-1)\)) for some \(n\geqslant 3\) and odd \(q\) (see [10, Thm. 5.1] for instance). It was proved by E. Artin [11, 12] in the fifties for known families of simple groups at the time, and completed by J. Tits for the remaining families discovered later on (see [10]). Artin's method is to consider certain invariants associated to (orders of) simple groups that can be computed explicitly and are able to distinguish the groups easily. Therefore, the simple order theorem currently relies on the classification of finite simple groups. **Theorem 4.1**.: _Suppose that \(S\) and \(H\) are finite simple groups such that \(\operatorname{cod}(S)=\operatorname{cod}(H)\). Then \(S\cong H\)._ Proof.: The statement is trivial when both of \(S\) and \(H\) are abelian. If one of the two groups is nonabelian, then, by Corollary 2.3, so is the other. So assume that both \(S\) and \(H\) are nonabelian. By Theorem 3.2, we have \(|S|=|H|\) and hence it follows that \(\operatorname{cd}(S)=\operatorname{cd}(H)\). Assume to the contrary that \(S\) is not isomorphic to \(H\). By the simple order theorem, we have \[\{S,H\}=\{PSL_{4}(2),PSL_{3}(4)\}\] or \[\{S,H\}=\{\Omega_{2n+1}(q),PSp_{2n}(q)\}\] for some odd prime power \(q=p^{\ell}\) and \(n\geqslant 3\). The former case is eliminated using [12], so we just need to show that \(\operatorname{cd}(\Omega_{2n+1}(q))\neq\operatorname{cd}(PSp_{2n}(q))\) for indicated \(n\) and \(q\). By Lusztig's classification of ordinary characters of finite groups of Lie type (see [1, Chapter 13]), irreducible characters of \(G:=Sp_{2n}(q)\) are parameterized by pairs \(((s),\psi)\) where \((s)\) is the conjugacy class of a semisimple element \(s\in G^{*}:=SO_{2n+1}(q)\) and \(\psi\) is a unipotent character of the centralizer \(C:=\mathbf{C}_{G^{*}}(s)\). Moreover, the degree of the character \(\chi_{((s),\psi)}\) associated to \(((s),\psi)\) is \[\chi_{((s),\psi)}(1)=[G^{*}:C]_{p^{\prime}}\psi(1).\] Let \(\alpha\in\{\pm 1\}\) such that \(4\mid(q^{m}-\alpha)\) and consider a semisimple element \(s\in G^{*}\) with spectrum \(\{1,-1,...,-1\}\) such that \(C\cong GO_{2n}^{\alpha}\) (see [14, Lem. 2.2]). 
Such \(s\) will then belong to \(\Omega_{2n+1}(q)=[G^{*},G^{*}]\), implying that the semisimple character \(\chi_{((s),\mathbf{1}_{C})}\) associated to the pair \(((s),\mathbf{1}_{C})\) is trivial on \(\mathbf{Z}(Sp_{2n}(q))\), by [10, Lem. 4.4]. We therefore have an irreducible character of \(PSp_{2n}(q)\) of degree \[\chi_{(s)}(1)=(|SO_{2n+1}(q)|/|GO_{2n}^{\alpha}|)_{p^{\prime}}=(q^{n}+\alpha)/2.\] To see that \(\operatorname{cd}(PSp_{2n}(q))\neq\operatorname{cd}(\Omega_{2n+1}(q))\) (for odd \(q\) and \(n\geqslant 3\)), it is enough to show that \((q^{n}+\alpha)/2\) is not a character degree of \(\Omega_{2n+1}(q)\). By [12, Thm. 6.1], under our assumptions on \(n\) and \(q\) and the additional condition \((n,q)\neq(3,3)\), the minimal (nontrivial) irreducible character of \(Spin_{2n+1}(q)\) has degree \[d(Spin_{2n+1}(q))=\left\{\begin{array}{ll}(q^{2n}-1)/(q^{2}-1)&\text{if }q \geqslant 5\\ (q^{n}-1)(q^{n}-q)/2(q+1)&\text{if }q=3.\end{array}\right.\] (For the definition of classical groups of various isogeny types, including the odd-dimensional spin groups, we refer the reader to [1, p. 40].) Note that \(Spin_{2n+1}(q)\) is a central extension of \(\Omega_{2n+1}(q)\) and so every character degree of \(\Omega_{2n+1}(q)\) is one of \(Spin_{2n+1}(q)\). It is now easy to check that \(d(Spin_{2n+1}(q))>(q^{n}+\alpha)/2\) for \(n\geqslant 3\). For the remaining case \((n,q)=(3,3)\), we note that \(d(Spin_{2n+1}(q))=27\), which is still greater than \((q^{n}+\alpha)/2=13\). Certainly, one has to do much more work to relax the hypothesis in Theorem 4.1 to \(\operatorname{cod}(S)\subseteq\operatorname{cod}(H)\); that is, to obtain Theorem B. **Lemma 4.2**.: _Let \(n\geqslant 3\) and \(q\) be an odd prime power. Then \(\operatorname{cd}(\Omega_{2n+1}(q))\nsubseteq\operatorname{cd}(PSp_{2n}(q))\) and \(\operatorname{cd}(PSp_{2n}(q))\nsubseteq\operatorname{cd}(\Omega_{2n+1}(q))\)._ Proof.: We have seen in the proof of Theorem 4.1 that \(PSp_{2n}(q)\) possesses an irreducible character of degree \((q^{n}+\alpha)/2\), where \(\alpha\in\{\pm 1\}\) such that \(4\mid(q^{n}-\alpha)\), and furthermore \((q^{n}+\alpha)/2\notin\operatorname{cd}(\Omega_{2n+1}(q))\). Therefore, it suffices to show \(\operatorname{cd}(\Omega_{2n+1}(q))\nsubseteq\operatorname{cd}(PSp_{2n}(q))\). We claim that both \[(q^{2n}-1)/(q^{2}-1)\text{ and }q(q^{2n}-1)/(q^{2}-1)\] are elements of \(\operatorname{cd}(\Omega_{2n+1}(q))\) for \(q\) odd. Let \(G:=Spin_{2n+1}(q)\), the universal cover of \(\Omega_{2n+1}(q)\). The dual group \(G^{*}\) of \(G\) (in the sense of [1, Def. 13.10]) is the projective conformal symplectic group \(PCSp_{2n}(q)\), which is the quotient of \(\widetilde{G}=CSp_{2n}(q)\) by its center \(\mathbf{Z}(\widetilde{G})\simeq C_{q-1}\). Consider a semisimple element \(s\in\widetilde{G}\) with spectrum \(Spec(s)=\{-1,-1,1,...,1\}\) and \[\mathbf{C}_{\widetilde{G}}(s)\cong(Sp_{2}(q)\times Sp_{2n-2}(q))\cdot C_{q-1}\] (see [10, Lem. 2.4]). Let \(s^{*}\) be the image of \(s\) under the natural homomorphism from \(\widetilde{G}\) to \(G^{*}\). 
It is easy to see that, by the choice of \(s\), \(\mathbf{C}_{\widetilde{G}}(s)\) is the complete inverse image of \(\mathbf{C}_{G^{*}}(s^{*})\) under this homomorphism, and thus \(\mathbf{C}_{G^{*}}(s^{*})=\mathbf{C}_{\widetilde{G}}(s)/\mathbf{Z}(\widetilde {G})\) and \[[G^{*}:\mathbf{C}_{G^{*}}(s^{*})]_{p^{\prime}}=[\widetilde{G}:\mathbf{C}_{ \widetilde{G}}(s)]_{p^{\prime}}=\frac{|Sp_{2n}(q)|_{p^{\prime}}}{|Sp_{2}(q)|_{ p^{\prime}}|Sp_{2n-2}(q)|_{p^{\prime}}}=\frac{q^{2n}-1}{q^{2}-1},\] where \(p\) is the defining characteristic of \(G\). Consider the canonical homomorphism \(f:Sp_{2}(q)\times Sp_{2n-2}(q)\hookrightarrow\mathbf{C}_{\widetilde{G}}(s) \rightarrow\mathbf{C}_{G^{*}}(s^{*})\). Using [1, Prop. 13.20], we know that unipotent characters of \(Sp_{2}(q)\times Sp_{2n-2}(q)\) are of the form \(\theta\circ f\) where \(\theta\) runs over the unipotent characters of \(\mathbf{C}_{G^{*}}(s^{*})\). In particular, as \(Sp_{2}(q)\cong SL_{2}(q)\) has unipotent characters of degrees \(1\) and \(q\), \(\mathbf{C}_{G^{*}}(s^{*})\) has two unipotent characters of degrees \(1\) and \(q\) as well. By the conclusion of the previous paragraph, the Lusztig series \(\mathcal{E}(G,(s^{*}))\) of \(G\) associated to the conjugacy class \((s^{*})\) of \(G^{*}\) contains two characters of degrees \((q^{2n}-1)/(q^{2}-1)\) and \(q(q^{2n}-1)/(q^{2}-1)\). Note that \(s^{*}\in PSp_{2n}(q)=[G^{*},G^{*}]\) and \(|\mathbf{Z}(G)|=|G^{*}/(G^{*})^{\prime}|=2\), and therefore every character in the Lusztig series \(\mathcal{E}(G,(s^{*}))\) restricts trivially to \(\mathbf{Z}(G)\) (see [13, Lem. 4.4]), and so can be viewed as a character of \(G/\mathbf{Z}(G)=\Omega_{2n+1}(q)\). The claim is completely proved. Suppose first that \(q>3\) and assume to the contrary that \(\operatorname{cd}(\Omega_{2n+1}(q))\subseteq\operatorname{cd}(PSp_{2n}(q))\). Then we have \((q^{2n}-1)/(q^{2}-1)\in\operatorname{cd}(PSp_{2n}(q))\), so that \((q^{2n}-1)/(q^{2}-1)\in\operatorname{cd}(Sp_{2n}(q))\). Let \(\chi\in\operatorname{Irr}(Sp_{2n}(q))\) such that \(\chi(1)=(q^{2n}-1)/(q^{2}-1)\). Now, as \(q>3\), we have \((q^{2n}-1)/(q^{2}-1)<(q^{2n}-1)/2(q+1)\), and therefore, by the classification of irreducible characters of \(Sp_{2n}(q)\) of degrees up to \((q^{2n}-1)/2(q+1)\) ([12, Thm. 5.2]), we deduce that \(\chi\) must be one of the Weyl characters of degree \((q^{n}\pm 1)/2\) or the minimal unipotent one of degree \((q^{n}-1)(q^{n}-q)/2(q+1)\). A simple check reveals that none of these degrees matches the degree of \(\chi\). Now we suppose \(q=3\). (In this case, \((q^{2n}-1)/(q^{2}-1)\) is indeed a character degree of \(PSp_{2n}(q)\).) As the case of \(\Omega_{7}(3)\) and \(PSp_{6}(3)\) can be checked directly, we suppose furthermore that \(n\geq 4\). We then have \(q(q^{2n}-1)/(q^{2}-1)<(q^{2n}-1)(q^{n-1}-q)/2(q^{2}-1)\). Examining the degrees up to \((q^{2n}-1)(q^{n-1}-q)/2(q^{2}-1)\) of irreducible characters of \(Sp_{2n}(q)\) available in [10, Cor. 4.2], we observe that none of them is equal to \(q(q^{2n}-1)/(q^{2}-1)\). We have shown that \(q(q^{2n}-1)/(q^{2}-1)\notin\operatorname{cd}(Sp_{2n}(q))\), and therefore, by the above claim, \(\operatorname{cd}(\Omega_{2n+1}(q))\nsubseteq\operatorname{cd}(PSp_{2n}(q))\), as desired. We can now prove our first main Theorem A, which in fact follows from the following slightly stronger result. If \(G\) and \(H\) are groups we say that \(C(G)\subseteq C(H)\) if \(\operatorname{cod}(G)\subseteq\operatorname{cod}(H)\) and \(m_{G}(c)\leq m_{H}(c)\) for every \(c\in\operatorname{cod}(G)\). 
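To fix ideas, consider \(G=\mathsf{A}_{5}\). Its irreducible character degrees are \(1,3,3,4,5\), and since every nontrivial irreducible character of a nonabelian simple group is faithful, its codegree is \(\operatorname{cod}(\chi)=|G:\ker\chi|/\chi(1)=|G|/\chi(1)\). Hence \[\operatorname{cod}(\mathsf{A}_{5})=\{1,12,15,20\},\] with multiplicities \(m_{\mathsf{A}_{5}}(1)=m_{\mathsf{A}_{5}}(12)=m_{\mathsf{A}_{5}}(15)=1\) and \(m_{\mathsf{A}_{5}}(20)=2\), the codegree \(20\) arising from the two characters of degree \(3\). In particular, the group pseudo-algebra \(C(\mathsf{A}_{5})\) records strictly more information than the set \(\operatorname{cod}(\mathsf{A}_{5})\) alone.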
**Theorem 4.3**.: _Let \(H\) be a finite simple group and \(G\) a nontrivial finite group such that \(C(G)\subseteq C(H)\). Then \(G\cong H\)._ Proof.: Suppose first that \(H\) is abelian of prime order \(p\). Then \(\operatorname{cod}(G)\subseteq\operatorname{cod}(H)=\{1,p\}\). Therefore, by Lemma 2.4 of [16], \(G\) is an elementary abelian \(p\)-group and since \(k(G)\leq p\), we conclude that \(G\) is cyclic of order \(p\), as wanted. So we may assume that \(H\) is nonabelian. By Corollary 2.3 and the assumption \(C(G)\subseteq C(H)\), we have that \(G\) is perfect. Let \(N\trianglelefteq G\) be such that \(S:=G/N\) is nonabelian simple. Now \(\operatorname{cod}(S)\subseteq\operatorname{cod}(G)\subseteq\operatorname{cod }(H)\) and therefore, by Theorem 3.1, we have \(|S|\) divides \(|H|\). Note that \(C(S)\subseteq C(G)\subseteq C(H)\). Therefore there exists a subset \(I\subseteq\operatorname{Irr}(H)\backslash\{\mathbf{1}_{H}\}\) and a bijection \(f:\operatorname{Irr}(S)\backslash\{\mathbf{1}_{S}\}\to I\) such that \[\frac{|S|}{\chi(1)}=\frac{|H|}{f(\chi)(1)}\] for every \(\chi\in\operatorname{Irr}(S)\backslash\{\mathbf{1}_{S}\}\). It follows that \[\sum_{\chi\in\operatorname{Irr}(S)\backslash\{\mathbf{1}_{S}\}}\frac{\chi(1)^ {2}}{|S|^{2}}=\sum_{\psi\in I}\frac{\psi(1)^{2}}{|H|^{2}},\] and thus \[\frac{|S|-1}{|S|^{2}}\leq\frac{|H|-1}{|H|^{2}}.\] As the function \((x-1)/x^{2}\) is decreasing on \([2,\infty)\), we deduce that \(|S|\geq|H|\). The conclusions of the last two paragraphs show that \(|S|=|H|\). If \(S\cong H\) then \(C(G/N)=C(S)=C(H)\supseteq C(G)\) and so \(G/N\) and \(G\) have the same number of conjugacy classes, which is possible only when \(N=1\), and we are done. So assume by contradiction that \(S\not\cong H\). Using again the simple order theorem, we have \(\{S,H\}=\{PSL_{4}(2),PSL_{3}(4)\}\) or \(\{S,H\}=\{\Omega_{2n+1}(q),PSp_{2n}(q)\}\) for some \(n\geqslant 3\) and odd \(q\). For the former pair, using the character tables of both \(PSL_{4}(2)\) and \(PSL_{3}(4)\) available in [1], one observes that \(7\in\operatorname{cd}(PSL_{4}(2))\backslash\operatorname{cd}(PSL_{3}(4))\) and \(63\in\operatorname{cd}(PSL_{3}(4))\backslash\operatorname{cd}(PSL_{4}(2))\), implying that neither of \(\operatorname{cod}(PSL_{3}(4))\) and \(\operatorname{cod}(PSL_{4}(2))\) contains the other, and this violates the fact that \(\operatorname{cod}(S)\subseteq\operatorname{cod}(H)\). The latter pair was already handled in Lemma 4.2. ## 5. The largest character degree of finite simple groups Let \(b(G)\) denote the largest degree of an irreducible character of a finite group \(G\). Recall from the Introduction that if \(S\) is a simple group, then \(f(S):=|S|/b(S)\) is the smallest nontrivial character codegree of \(S\). The following elementary fact explains the relevance of \(f(S)\), and therefore \(b(S)\). **Lemma 5.1**.: _Let \(S\) and \(H\) be finite simple groups such that \(\operatorname{cod}(S)\subseteq\operatorname{cod}(H)\). Then \(f(S)\geqslant f(H)\)._ Proof.: The hypothesis implies that \(f(S)\in\operatorname{cod}(S)\subseteq\operatorname{cod}(H)\). Since \(f(H)\) is the smallest nontrivial member of \(\operatorname{cod}(H)\), it follows that \(f(S)\geqslant f(H)\). Under the hypothesis of Lemma 5.1, we showed in Theorem 3.1 that \(|S|\) divides \(|H|\). We will see in later sections that, in many cases, the two conditions \(f(S)\geqslant f(H)\) and \(|S|\) divides \(|H|\) are enough to force \(S\cong H\), as stated in Theorem B. 
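For instance, \(b(\mathsf{A}_{5})=5\) and \(b(\mathsf{A}_{6})=10\) (the character degrees of \(\mathsf{A}_{6}\) being \(1,5,5,8,8,9,10\)), so \[f(\mathsf{A}_{5})=60/5=12\quad\text{and}\quad f(\mathsf{A}_{6})=360/10=36.\] Here \(|\mathsf{A}_{5}|\) divides \(|\mathsf{A}_{6}|\) but \(f(\mathsf{A}_{5})<f(\mathsf{A}_{6})\), so Lemma 5.1 already rules out \(\operatorname{cod}(\mathsf{A}_{5})\subseteq\operatorname{cod}(\mathsf{A}_{6})\) (the reverse containment fails by Theorem 3.1, as \(|\mathsf{A}_{6}|\) does not divide \(|\mathsf{A}_{5}|\)); this is the pattern established for all alternating groups in Proposition 8.1 below.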
Browsing through character tables of small-order simple groups in [1], one notices that if \(|H|\) is a multiple of \(|S|\) and \(|H|>|S|\), then \(b(H)>b(S)\). However, it seems that the largest character degree grows slower than the order - that is, \(f(H)>f(S)\). This is not so easy to prove generally, but we do confirm it in several cases, particularly when either \(S\) or \(H\) is an alternating group (see Sections 7 and 8). We shall need effective (both lower and upper) bounds for the largest degree of an irreducible character of simple groups. For symmetric groups, asymptotic and explicit bounds were obtained by A. M. Vershik and S. V. Kerov in [13] which can be used to derive the corresponding bounds for alternating groups. For a group \(S\) of Lie type in characteristic \(p\), an obvious (and in fact very tight!) lower bound for \(b(S)\) is the degree \(\mathsf{St}_{S}(1)=|S|_{p}\) of the Steinberg character \(\mathsf{St}_{S}\). When \(S\) is of classical type, explicit upper bounds have been worked out by M. Larsen, G. Malle, and P. H. Tiep in [11]. Unfortunately, the upper bounds for exceptional groups achieved in [11] are only asymptotic, and the proof there does not allow one to obtain an explicit bound. We obtain Theorem 5.3 below, which we believe will be useful in other applications. **Lemma 5.2**.: _Let \(\mathbf{G}\) be a simple algebraic group over the algebraic closure of a finite field of order \(q\) in characteristic \(p\), \(F:\mathbf{G}\to\mathbf{G}\) a Steinberg endomorphism, and \(G:=\mathbf{G}^{F}\) be the corresponding finite group of Lie type. Let \(r\) be the rank of \(\mathbf{G}\). Then_ \[(q-1)^{r}\cdot|G|_{p}\leqslant|G|_{p^{\prime}}\leqslant q^{r}\cdot|G|_{p}.\] Proof.: Note that finite groups \(\mathbf{G}^{F}\) of the same isogeny type have the same order, so we may work with \(\mathbf{G}\) being of simply-connected type. The inequalities are then straightforward to verify using the order formulas for finite groups of Lie type available in [1, p. xvi]. **Theorem 5.3**.: _Let \(S\) be a simple exceptional group of Lie type defined over a field of order \(q\) in characteristic \(p\). Then the following hold:_ 1. \(b(S)<256|S|_{p}\)_._ 2. _If_ \(q>2\)_, then_ \(b(S)<26|S|_{p}\)_._ Proof.: The Tits group can be verified easily, so we assume that \(S\neq{}^{2}F_{4}(2)^{\prime}\). We then may find a simple algebraic group \(\mathbf{G}\) of adjoint type and a Steinberg endomorphism \(F:\mathbf{G}\to\mathbf{G}\) such that \(S=[G,G]\) where \(G:=\mathbf{G}^{F}\) (see [11, Prop. 24.21]). Clearly it suffices to show that \(b(G)<256\mathsf{St}_{G}(1)\). Let \((\mathbf{G}^{*},F^{*})\) be the dual pair of \((\mathbf{G},F)\), so \(\mathbf{G}^{*}\) will be the corresponding simple algebraic group of simply connected type, and set \(G^{*}:=(\mathbf{G}^{*})^{F^{*}}\). As mentioned before, Lusztig's classification of complex characters of finite groups of Lie type implies that the set of irreducible complex characters of \(G\) is partitioned into Lusztig series \(\mathcal{E}(G,(s))\) associated to various conjugacy classes \((s)\) of semisimple elements of \(G^{*}\). Furthermore, there is a bijection \(\chi\mapsto\psi\) from \(\mathcal{E}(G,(s))\) to \(\mathcal{E}(\mathbf{C}_{G^{*}}(s),(1))\) such that \[\chi(1)=\frac{|G|_{p^{\prime}}}{|\mathbf{C}_{G^{*}}(s)|_{p^{\prime}}}\psi(1). \tag{1}\] The detailed structure of centralizers of semisimple elements in finite exceptional groups of Lie type was determined by Carter [14], Deriziotis [15], and Deriziotis-Liebeck [16]. 
A well-known result of Steinberg states that the centralizer \(\mathbf{C}_{\mathbf{G}^{*}}(s)\) of a semisimple element \(s\) is a connected reductive subgroup of maximal rank in \(\mathbf{G}^{*}\). Such a connected subgroup has a decomposition \(\mathbf{C}_{\mathbf{G}^{*}}(s)=\mathbf{ST}\), where \(\mathbf{S}\) is a semisimple subgroup, \(\mathbf{T}\) is a central torus, \(\mathbf{S}\cap\mathbf{T}\) is finite, and \(|(\mathbf{C}_{\mathbf{G}^{*}}(s))^{F^{*}}|=|\mathbf{S}^{F^{*}}||\mathbf{T}^{F^{*}}|\) (see [16, p. 48]). When \(s\) is in \(G^{*}\), the centralizer \(\mathbf{C}_{\mathbf{G}^{*}}(s)\) is \(F^{*}\)-stable and \(\mathbf{C}_{G^{*}}(s)=\left(\mathbf{C}_{\mathbf{G}^{*}}(s)\right)^{F^{*}}\), and so \[|\mathbf{C}_{G^{*}}(s)|=|S||T|\] where \(S:=\mathbf{S}^{F^{*}}\) and \(T:=\mathbf{T}^{F^{*}}\). Let \(r\) be the semisimple rank of \(\mathbf{G}^{*}\) and \(q\) (that will be a power of \(p\)) the absolute value of all eigenvalues of \(F\) on the character group of an \(F\)-stable maximal torus of \(\mathbf{G}\). Possible values for \(|S|\) and \(|T|\) are available in [15, 16]. In particular, we have \[|S|=\prod_{i}|L_{r_{i}}(q^{a_{i}})|\] and \[|T|=\prod_{j}\Phi_{j}(q),\] where the \(L_{r_{i}}(q^{a_{i}})\) are finite groups of Lie type (of simply-connected type) of rank \(r_{i}\) defined over a field of order \(q^{a_{i}}\) and the \(\Phi_{j}(q)\) are cyclotomic polynomials (and also polynomials of the forms \(q^{2}\pm\sqrt{2}q+1\), \(q^{2}\pm\sqrt{3}q+1\), or \(q^{4}\pm\sqrt{2}q^{3}+q^{2}\pm\sqrt{2}q+1\) for Suzuki and Ree groups) evaluated at \(q\). As \(\mathbf{C}_{\mathbf{G}^{*}}(s)\) has maximal rank, we furthermore have \[\sum_{i}a_{i}r_{i}+\sum_{j}\deg(\Phi_{j})=r. \tag{2}\] Now formula (1) implies that the typical degree of an irreducible character of \(G\) is of the form \[\chi(1)=\frac{|G|_{p^{\prime}}}{\prod_{i}|L_{r_{i}}(q^{a_{i}})|_{p^{\prime}} \prod_{j}\Phi_{j}(q)}\psi(1),\] where \(\psi\in\mathcal{E}(\mathbf{C}_{G^{*}}(s),(1))\), a unipotent character of \(\mathbf{C}_{G^{*}}(s)\). By [13, Thm. 3.1], for any finite group of Lie type \(\mathbf{G}^{F}\), where \(\mathbf{G}\) is a simple algebraic group in characteristic \(p\) and \(F\) a Steinberg endomorphism on \(\mathbf{G}\), the degree \(\mathsf{St}(1)=|\mathbf{G}^{F}|_{p}\) of the Steinberg character \(\mathsf{St}\) of \(\mathbf{G}^{F}\) is strictly larger than the degree of any other unipotent character. Therefore, the degrees of unipotent characters of \(\mathbf{C}_{G^{*}}(s)\), which are in fact the same as those of the semisimple group \(S\), are bounded by \(\prod_{i}|L_{r_{i}}(q^{a_{i}})|_{p}\). It follows that \[b(G)\leqslant\frac{|G|_{p^{\prime}}}{\prod_{i}|L_{r_{i}}(q^{a_{i}})|_{p^{ \prime}}\prod_{j}\Phi_{j}(q)}\prod_{i}|L_{r_{i}}(q^{a_{i}})|_{p}.\] By Lemma 5.2, \[\frac{|L_{r_{i}}(q^{a_{i}})|_{p}}{|L_{r_{i}}(q^{a_{i}})|_{p^{\prime}}} \leqslant\frac{1}{(q^{a_{i}}-1)^{r_{i}}}\leqslant\frac{1}{(q-1)^{a_{i}r_{i}}}.\] Also, it is easy to see that \[\Phi_{j}(q)\geqslant(q-1)^{\deg\Phi_{j}}.\] We therefore deduce that \[b(G)\leqslant\frac{|G|_{p^{\prime}}}{(q-1)^{\sum_{i}a_{i}r_{i}+\sum_{j}\deg( \Phi_{j})}}=\frac{|G|_{p^{\prime}}}{(q-1)^{r}}.\] On the other hand, we have \(|G|_{p^{\prime}}\leqslant|G|_{p}q^{r}\) by again Lemma 5.2, and it follows that \[\frac{b(G)}{|G|_{p}}\leqslant\frac{q^{r}}{(q-1)^{r}}.\] As the rank \(r\) is at most \(8\) for exceptional groups, the desired inequalities follow. Bounds for \(b(S)\) of alternating groups and classical groups are collected in the following. 
**Lemma 5.4**.: _Let \(S\) be a finite simple group, \(n\) a positive integer, and \(q\) a prime power._ 1. _For_ \(S=\mathsf{A}_{n}\) _with_ \(n\geqslant 5\)_,_ \[\frac{1}{2}e^{-1.28255\sqrt{n}}\sqrt{n!}\leqslant b(S)\leqslant e^{-0.11565 \sqrt{n}}\sqrt{n!}.\] _In particular,_ \[b(S)>\frac{1}{2}e^{-1.28255\sqrt{n}}(2\pi n)^{1/4}(n/e)^{n/2}.\] 2. _For_ \(S=\mathsf{A}_{n}\) _with_ \(n\geqslant 5\)_,_ \[b(\mathsf{A}_{n+1})\geqslant\frac{2(n+1)}{\sqrt{8n+1}+3}b(\mathsf{A}_{n}).\] 3. _For_ \(S=PSL_{n}(q)\) _with_ \(n\geqslant 2\)_,_ \[b(S)<13(1+\log_{q}(n+1))^{2.54}\mathsf{St}_{S}(1).\] 4. _For_ \(S=PSU_{n}(q)\) _with_ \(n\geqslant 3\)_,_ \[b(S)<2(2+\log_{q}(n+1))^{1.27}\mathsf{St}_{S}(1).\] 5. _For_ \(S\in\{\Omega_{2n+1}(q),PSp_{2n}(q),P\Omega_{2n}^{\pm}(q)\}\) _with_ \(n\geqslant 2\) _and_ \(q\) _odd,_ \[b(S)<38(1+\log_{q}(2n+1))^{1.27}\mathsf{St}_{S}(1).\] 6. _For_ \(S\in\{\Omega_{2n+1}(q),PSp_{2n}(q),P\Omega_{2n}^{\pm}(q)\}\) _with_ \(n\geqslant 2\) _and_ \(q\) _even,_ \[b(S)<8(1+\log_{q}(2n+1))^{1.27}\mathsf{St}_{S}(1).\] Proof.: Part (i) follows from [22, Thm. 1] and Parts (iii)-(vi) follow from [1, Thm. 5.1, 5.2, and 5.3]. Let us prove Part (ii). Let \(\chi\in\operatorname{Irr}(\mathsf{S}_{n})\) such that \(\chi(1)=b(\mathsf{S}_{n})\). Let \(\lambda\) be the partition of \(n\) corresponding to \(\chi\) and \(Y_{\lambda}\) be the Young diagram associated to \(\lambda\). By the well-known branching rule, the induction \(\chi^{\mathsf{S}_{n+1}}\) of \(\chi\) from \(\mathsf{S}_{n}\) to \(\mathsf{S}_{n+1}\) is the sum of irreducible characters corresponding to the partitions of \(n+1\) whose associated Young diagrams are obtained from \(Y_{\lambda}\) by adding a suitable node. The number of those suitable nodes is at most \((\sqrt{8n+1}+1)/2\) (see [1, p. 1950]) and at most one of the resulting Young diagrams is symmetric. We deduce that \[(n+1)b(\mathsf{A}_{n})\leqslant(n+1)\chi(1)=\chi^{\mathsf{S}_{n+1}}(1) \leqslant\frac{\sqrt{8n+1}+3}{2}b(\mathsf{A}_{n+1}),\] and the result follows. ## 6. Theorem B: Groups of Lie type In this section we prove Theorem B when the groups involved are of Lie type. In the following, for simplicity, we say that two groups have the same defining characteristic if there is a common characteristic over which the groups can be defined. **Proposition 6.1**.: _Let \(S\) and \(H\) be finite simple groups of Lie type. Suppose that \(\operatorname{cod}(S)\subseteq\operatorname{cod}(H)\). Then \(S\) and \(H\) have the same defining characteristic._ Proof.: Suppose that the defining characteristic of \(H\) is \(p\). By Lemma 3.4, \(|S|_{p^{\prime}}=|H|_{p^{\prime}}\) and there is \(\chi\in\operatorname{Irr}(S)\) such that \(\chi(1)=|S|_{p}\). By Lemma 3.3, it follows that \[d\cdot\frac{|H|_{p}}{\chi(1)}\in\operatorname{cd}(H)\text{ for every }d\in \operatorname{cd}(S). \tag{3}\] Certainly if \(\chi\) is the Steinberg character of \(S\) then we are done. So we assume otherwise and aim to find a contradiction or end up with a case where \(H\) can be defined in another characteristic not equal to \(p\). By the classification of prime-power-degree representations of quasi-simple groups [16, Thm. 1.1], we arrive at the following possibilities for \(S\) and \(\chi(1)\). (i) \(S=PSL_{2}(q)\), \(\chi(1)\in\{q\pm 1\}\) or \(q\) is odd and \(\chi(1)\in\{(q\pm 1)/2\}\). Observe that \(\chi(1)\) cannot be \((q\pm 1)/2\) because otherwise, by taking \(d=2\chi(1)\), we would have \(2|H|_{p}\in\operatorname{cd}(H)\) which is impossible. 
So \(\chi(1)=q+\alpha=p^{x}\) for some \(\alpha\in\{\pm 1\}\) and \(x\in\mathbb{N}\). Suppose first that \(q=2^{t}\) for some \(t\geq 2\). Then \(2^{t}+\alpha=p^{x}\). By Mihailescu's theorem [14] (previously known as Catalan's conjecture), either \(x=1\) so that \(2^{t}+\alpha\) is a (Mersenne or Fermat) prime or \(\alpha=1\) and \(t=3\). In the latter case, \(p=3\) and \(|H|_{3^{\prime}}=|S|_{3^{\prime}}=56\), forcing \(H\) to be \({}^{2}G_{2}(3)^{\prime}\), which turns out to be isomorphic to \(S=PSL_{2}(8)\), as desired. So it remains to consider the former case: \(q+\alpha=2^{t}+\alpha=p\) is the defining characteristic of \(H\). Now \(|H|_{p^{\prime}}=|S|_{p^{\prime}}=q(q-\alpha)\). Let \(p^{a}\) be the order of the underlying field of \(H\). It is clear from the order formulas of simple groups of Lie type (see [1, p. xvi]) that \(|H|_{p}<|H|_{p^{\prime}}/(p^{a}-1)\leq|H|_{p^{\prime}}/(p-1)\). We therefore deduce that \[|H|_{p}<\frac{q(q-\alpha)}{q+\alpha-1}.\] Thus we must have \(\alpha=-1\) and \(|H|_{p}=p\). Now \(H\) is a simple group of Lie type in characteristic \(p\) such that \(|H|=p(p+1)(p+2)\). This is impossible as for such a group \(H\), one can check from the order formula that \(|H|_{p^{\prime}}<(|H|_{p})^{2}\). Next we suppose \(q\geq 5\) is odd. Then \(p=2\) and \(q+\alpha=|S|_{2}\). Again by Mihailescu's theorem, either \(q\) is a prime or \(\alpha=-1\) and \(q=9\). The case \(q=9\) is eliminated in the same way as before. So assume that \(|H|_{2^{\prime}}=q(q-\alpha)/2\) and \(q\) is a prime. Note that when \(H\) is not of type \(A_{1}\), every prime divisor of \(|H|_{p^{\prime}}\) is smaller than \(\sqrt{|H|_{p^{\prime}}}\). Therefore our group \(H\) must be \(PSL_{2}(q_{1})\) for some 2-power \(q_{1}\), implying that \(q(q-\alpha)/2=q_{1}^{2}-1\). This, however, returns no relevant solutions. (ii) For the remaining possibilities of \(S\) and \(\chi\), the character \(\chi\) has a decent small degree and we are able to produce an irreducible character of \(S\) whose degree is a proper multiple of \(\chi(1)\). Condition (3) then implies that a proper multiple of \(|H|_{p}\) is a character degree of \(H\), which is impossible. This required character turns out to be chosen as a unipotent character in most cases. We refer the reader to [12, SS13.8] for the description of unipotent characters of finite classical groups. The next possibility of \(S\) and \(\chi(1)\) is \(S=PSL_{n}(q)\), \(q>2\), \(n\) is an odd prime, \((n,q-1)=1\), and \(\chi(1)=(q^{n}-1)/(q-1)\). If \(n=3\) then \(SL_{3}(q)\) has an irreducible character of degree \(q^{3}-1\) (see [13]), which is a proper multiple of \(\chi(1)=q^{2}+q+1\). For \(n\geq 5\), the unipotent character parameterized by the partition \((2,n-2)\) with degree \[\chi^{(2,n-2)}(1)=\frac{(q^{n}-1)(q^{n-1}-q^{2})}{(q-1)(q^{2}-1)}\] fulfills our requirement. Another possibility is \(S=PSU_{n}(q)\), \(n\) is an odd prime, \((n,q+1)=1\), and \(\chi(1)=(q^{n}+1)/(q+1)\). This case is handled similarly as in the previous one. Here we note that, when \(n\geq 5\) is odd, the unipotent character parameterized by the partition \((2,n-2)\) has degree \[\chi^{(2,n-2)}(1)=\frac{(q^{n}+1)(q^{n-1}-q^{2})}{(q+1)(q^{2}-1)}.\] The next case is \(S=PSp_{2n}(q)\), \(n\geq 2\), \(q=\ell^{t}\) with \(\ell\) an odd prime, \(tn\) is a 2-power, and \(\chi(1)=(q^{n}+1)/2\). 
Now the unipotent character parameterized by the symbol \(\binom{0}{n}\) has the required degree \[\chi^{\binom{0}{n}}(1)=\frac{(q^{n}+1)(q^{n}+q)}{2(q+1)}.\] The last possibility involving a family of groups is \(S=PSp_{2n}(3)\), \(n\geq 3\) is a prime, and \(\chi(1)=(3^{n}-1)/2\). Then \(S\) has a unipotent character with degree \[\chi^{\binom{0\;1}{n}}(1)=\frac{(3^{n}-1)(3^{n}-3)}{8}.\] (iii) \((S,\chi(1))\in\{(Sp_{6}(2),7)\), \((Sp_{6}(2),27)\), \(({}^{2}F_{4}(2)^{\prime},27)\), \((G_{2}(2)^{\prime},7)\), \((G_{2}(2)^{\prime},27)\), \((G_{2}(3),64)\}\). First assume that \(S=G_{2}(2)^{\prime}\) and \(\chi(1)=27\). Then \(p=3\) and \(|H|_{3^{\prime}}=|S|_{3^{\prime}}=224\). It is easy to check that there is no such simple group \(S\) of Lie type in characteristic \(3\) with \(27\mid|S|_{3}\). In all other cases one can find a character \(\psi\in\operatorname{Irr}(S)\) such that \(\chi(1)\mid\psi(1)\) but \(\chi(1)<\psi(1)\). Therefore \(\psi(1)|H|_{p}/\chi(1)\) is a proper multiple of \(|H|_{p}\) and thus cannot be a character degree of \(H\), violating condition (3). As seen in Lemma 3.4 and Proposition 6.1, we face the situation where two simple groups \(S\) and \(H\) of Lie type have the same characteristic \(p\) and \(|S|_{p^{\prime}}=|H|_{p^{\prime}}\). It is surprising to us that this turns out to happen only when \(|S|=|H|\) (see Proposition 6.3 below), and therefore they are among the coincidences that appear in the simple order theorem. We slightly modify two of the invariants in Artin's proof of the simple order theorem (for classical groups) to prove our results. **Definition 6.2**.: Let \(S\) be a finite group of Lie type in characteristic \(p\). Let \(\omega=\omega(S)\) and \(\psi=\psi(S)\) respectively denote the largest and the second largest orders of \(p\) modulo a prime divisor of \(|S|_{p^{\prime}}\). We will refer to \(\omega(S)\) and \(\psi(S)\) as the Artin invariants of \(S\). In fact, when \(p\) is the dominant prime of \(|S|\), these \(\omega(S)\) and \(\psi(S)\) coincide with Artin's invariants defined in [1]. We remark that there are only a few cases involving Mersenne and Fermat primes where \(p\) is not dominant in \(|S|\), and they are listed in [10, Thm. 3.3]. Assume for now that \(S\) is not one of \(G_{2}(2)^{\prime}\), \({}^{2}G_{2}(3)^{\prime}\), and \({}^{2}F_{4}(2)^{\prime}\). (Note that \(S_{1}=G_{2}(2)^{\prime}\cong PSU_{3}(9)\) and \(S_{2}={}^{2}G_{2}(3)^{\prime}\cong PSL_{2}(8)\) and, even though we do not allow \(S_{1}\) (or \(S_{2}\)) to be viewed as a Lie type group over a field of order \(2\) (or \(3\)), we do allow it to be viewed as one over the field of order \(9\) (or \(8\)).) Let \(q=p^{t}\) be the order of the underlying field of \(S\). It is well-known that the order \(|S|\) then has the standard cyclotomic factorization in terms of \(q\) as \[|S|=\frac{1}{d}q^{k}\prod_{i}\Phi_{i}(q)^{e_{i}},\] where \(d\), \(k\), and the \(e_{i}\) depend on \(S\) and can be found in [1, Table 6] and [10, Tables C1, C2, and C3] for instance, and \(\Phi_{i}(q)\) is the value of the \(i\)th cyclotomic polynomial evaluated at \(q\). Replacing \(q\) by \(p^{t}\) and factorizing each \(\Phi_{i}(x^{t})\) further into cyclotomic polynomials of \(x\), one has \[|S|=\frac{1}{d}p^{kt}\prod_{i}\Phi_{i}(p)^{f_{i}}\] for certain positive integers \(f_{i}\) depending on \(S\). 
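As an illustration of Definition 6.2, take \(S=PSL_{4}(2)\), so that \(p=2\) and \(|S|_{2^{\prime}}=20160/2^{6}=315=3^{2}\cdot 5\cdot 7\). The orders of \(2\) modulo \(3\), \(5\), and \(7\) are \(2\), \(4\), and \(3\), respectively, so \[\omega(PSL_{4}(2))=4\quad\text{and}\quad\psi(PSL_{4}(2))=3,\] and indeed \(\Phi_{4}(2)=5\) and \(\Phi_{3}(2)=7\) are the cyclotomic factors of largest index appearing in the factorization of \(|S|\). These are the same values obtained for \(PSL_{3}(4)\) in the proof of Proposition 6.3 below.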
Using Zsigmondy's theorem, it is not difficult to see that, except for some 'small' cases, the invariants \(\omega(S)\) and \(\psi(S)\) are precisely the largest and the second largest index \(i\) such that \(\Phi_{i}(p)\) appears in the cyclotomic factorization of \(|S|\) (see [10, Lem. 4.6]). We refer the reader to [10, Tables A1, A2 and A3] for the list of exceptions and the values of their Artin invariants, including the groups \(G_{2}(2)^{\prime}\), \({}^{2}G_{2}(3)^{\prime}\), and \({}^{2}F_{4}(2)^{\prime}\) we excluded earlier. We reproduce in Table 1 the values of \(\omega(S)\) and \(\psi(S)\) for the generic case only. **Proposition 6.3**.: _Let \(p\) be a prime. Suppose that \(S\) and \(H\) are two non-isomorphic simple groups of Lie type in characteristic \(p\) and \(|S|_{p^{\prime}}=|H|_{p^{\prime}}\). Then \(\{S,H\}=\{PSL_{4}(2),PSL_{3}(4)\}\) or \(\{S,H\}=\{\Omega_{2n+1}(q),PSp_{2n}(q)\}\) for some \(n\geqslant 3\) and odd \(q\). In particular, \(|S|=|H|\)._ Proof.: By the assumptions, we have \(\omega(S)=\omega(H)\) and \(\psi(S)=\psi(H)\). First we consider the case where both \(S\) and \(H\) are generic so that their invariants \(\omega\) and \(\psi\) are available in Table 1. Comparing those values, we can find all the collections of groups with equal values of \(\omega\) and \(\psi\). We list these collections in Table 2 (each row in the table is one such collection). Now one simply compares the \(p^{\prime}\)-parts of orders of groups in each collection. It turns out that the only pair with the same \(p^{\prime}\)-parts of orders is \(\{\Omega_{2n+1}(q),PSp_{2n}(q)\}\) for some \(n\geqslant 3\) and odd \(q\). Assume now that at least one of the two groups, say \(S\), is non-generic. That is, \(S\) is among the exceptions listed in the second column of Table 1. The values of the invariants \(\omega\) and \(\psi\) of these groups are available in [10, Tables A2 and A3]. The analysis is basically the same as in the generic case, but more tedious. We first find all the possible groups \(H\) with \(\omega(S)=\omega(H)\) and \(\psi(S)=\psi(H)\), and then compare \(|S|_{p^{\prime}}\) and \(|H|_{p^{\prime}}\), where \(p\) is the defining characteristic of \(S\) and \(H\). Let us demonstrate the case \(S=PSL_{3}(4)\) as an example. Then \(\omega(S)=4\) and \(\psi(S)=3\). But there are only two other simple groups of Lie type with the same values of \(\omega\) and \(\psi\), namely \(PSL_{4}(2)\) and \(P\Omega_{8}^{+}(2)\). However, \(|P\Omega_{8}^{+}(2)|_{2^{\prime}}\neq|PSL_{3}(4)|_{2^{\prime}}=|PSL_{4}(2)|_{2 ^{\prime}}\), and so we come up with another possible pair for \(\{S,H\}\), namely \(\{PSL_{4}(2),PSL_{3}(4)\}\), as stated in the proposition. The next theorem improves Theorem 4.1 when the relevant groups are of Lie type. **Theorem 6.4**.: _Let \(S\) and \(H\) be finite simple groups of Lie type such that \(\operatorname{cod}(S)\subseteq\operatorname{cod}(H)\). Then \(S\cong H\)._ Proof.: By Lemma 3.4 and Propositions 6.1 and 6.3, we have that \(S\) and \(H\) fall into one of the two pairs of groups concluded in Proposition 6.3. The result now follows by Lemma 4.2. ## 7. Theorem B: The mixed case of alternating groups and groups of Lie type In this section, we prove Theorem B in the mixed situation where the set of codegrees of an alternating group \(S\) is contained in that of a simple group \(H\) of Lie type, or vice versa. 
\begin{table}
\begin{tabular}{l}
\hline
\(PSL_{n}(p^{2s})\), \(\Omega_{2n+1}(p^{s})\), \(PSp_{2n}(p^{s})\), \(P\Omega_{2(n+1)}^{+}(p^{s})\), \(P\Omega_{2n}^{-}(p^{s})\) \\
\(PSL_{3}(p^{2s})\), \(PSU_{4}(p^{s})\), \(\Omega_{7}(p^{s})\), \(PSp_{6}(p^{s})\), \(P\Omega_{8}^{+}(p^{s})\) \\
\(PSL_{2}(p^{6s})\), \(\Omega_{5}(p^{3s})\), \(G_{2}(p^{2s})\), \({}^{3}D_{4}(p^{s})\) \\
\(PSL_{2}(p^{3s})\), \(G_{2}(p^{s})\) \\
\(PSL_{3}(p^{4s})\), \(PSU_{4}(p^{2s})\), \(\Omega_{7}(p^{2s})\), \(PSp_{6}(p^{2s})\), \(P\Omega_{8}^{+}(p^{2s})\), \(F_{4}(p^{s})\) \\
\(PSL_{2}(2^{6s})\), \(\Omega_{5}(2^{3s})\), \(G_{2}(2^{2s})\), \({}^{3}D_{4}(2^{s})\), \({}^{2}F_{4}(2^{s})\), \(s\geq 3\) odd \\
\(PSL_{4}(p^{3s})\), \(E_{6}(p^{s})\) \\
\(PSL_{4}(p^{6s})\), \(\Omega_{9}(p^{3s})\), \(P\Omega_{10}^{+}(p^{3s})\), \(P\Omega_{8}^{-}(p^{3s})\), \(E_{6}(p^{2s})\) \\
\(PSL_{3}(p^{6s})\), \(PSU_{4}(p^{3s})\), \(\Omega_{7}(p^{3s})\), \(PSp_{6}(p^{3s})\), \(P\Omega_{8}^{+}(p^{3s})\), \({}^{2}E_{6}(p^{s})\) \\
\(PSL_{3}(p^{12s})\), \(\Omega_{7}(p^{6s})\), \(PSp_{6}(p^{6s})\), \(P\Omega_{8}^{+}(p^{6s})\), \(F_{4}(p^{3s})\), \({}^{2}E_{6}(p^{2s})\) \\
\(PSL_{5}(p^{6s})\), \(\Omega_{11}(p^{3s})\), \(PSp_{10}(p^{3s})\), \(P\Omega_{12}^{+}(p^{3s})\), \(P\Omega_{10}^{-}(p^{3s})\), \(E_{8}(p^{s})\) \\
\(PSU_{n}(q)\), \(PSU_{n-1}(q)\), \(n\geq 7\) odd \\
\(PSU_{3}(2^{2s})\), \({}^{2}B_{2}(2^{3s})\), \(s\) odd \\
\(PSU_{3}(3^{s})\), \({}^{2}G_{2}(3^{s})\), \(s\geq 3\) odd \\
\(PSU_{9}(p^{s})\), \(PSU_{8}(p^{s})\), \(E_{7}(p^{s})\) \\
\(\Omega_{2n+1}(q)\), \(PSp_{2n}(q)\), \(n\geq 3\), \(q\) odd \\
\hline
\end{tabular}
\end{table} Table 2. Simple groups of Lie type with the same values of \(\omega\) and \(\psi\): generic case.

In the following proposition we remark that the condition on \(m\) is necessary, due to the coincidences of isomorphic simple groups: \(\mathsf{A}_{5}\cong PSL_{2}(4)\cong PSL_{2}(5)\), \(\mathsf{A}_{6}\cong PSL_{2}(9)\), and \(\mathsf{A}_{8}\cong PSL_{4}(2)\). We also recall that \(f(X):=|X|/b(X)\), where \(b(X)\) is the largest character degree of \(X\). **Proposition 7.1**.: _Suppose \(m=7\) or \(m\geqslant 9\). Let \(H\) be a simple group of Lie type. If \(|\mathsf{A}_{m}|\) divides \(|H|\), then \(f(H)>f(\mathsf{A}_{m})\). As a consequence, \(\operatorname{cod}(\mathsf{A}_{m})\nsubseteq\operatorname{cod}(H)\)._ Proof.: Let \(p\) be the defining characteristic and \(q\), a power of \(p\), the order of the underlying field of \(H\). Consider first the case \(H\) being of exceptional type. Using Theorem 5.3, we have \[f(H)>\frac{|H|}{256|H|_{p}}=\frac{1}{256}|H|_{p^{\prime}}.\] As \(|\mathsf{A}_{m}|\) divides \(|H|\), it follows that \(f(H)>(1/256)|\mathsf{A}_{m}|_{p^{\prime}}.\) Therefore, to prove the proposition, it suffices to show \[b(\mathsf{A}_{m})\geqslant 256|\mathsf{A}_{m}|_{p}.\] Let us assume for now that \(m\geqslant 10\). In particular, the dominant prime in \(|\mathsf{A}_{m}|=m!/2\) is \(2\). We therefore just need to show \(b(\mathsf{A}_{m})\geqslant 256|\mathsf{A}_{m}|_{2}\). As \(|\mathsf{A}_{m}|_{2}\leqslant 2^{m-2}\), for this we want to show \[b(\mathsf{A}_{m})\geqslant 64\cdot 2^{m}.\] Note that \(b(\mathsf{A}_{19})=64,664,600>64\cdot 2^{19}\) (see [10] for the degrees of the largest irreducible characters and associated partitions of symmetric groups of degree up to \(75\), from which one can deduce the exact value or a good bound for that of the corresponding alternating groups). Now one just inducts on \(m\) with the help of Lemma 5.4(ii) to achieve the desired bound for \(m\geqslant 19\). 
Suppose that \(m\leqslant 18\) and recall that we are still dealing with exceptional groups. When \(q=2\), the proposition can be verified directly, so assume that \(q\geqslant 3\). In that case, \(b(H)<26|H|_{p}\) by Theorem 5.3, and hence the above estimate can be refined so that we only need to prove \(b(\mathsf{A}_{m})\geqslant 26|\mathsf{A}_{m}|_{p}\), which turns out to be true for all \(13\leqslant m\leqslant 18\). For the remaining values \(m\leqslant 12\), the arguments go as follows. First we are done if \(f(\mathsf{A}_{m})\leqslant\sqrt{|H|}\), as \(f(H)>\sqrt{|H|}\), so we may assume that \(|H|<f(\mathsf{A}_{m})^{2}\). For each \(m\leqslant 12\), there are indeed no possibilities for \(H\) satisfying \(|H|<f(\mathsf{A}_{m})^{2}\) and \(|\mathsf{A}_{m}|\) divides \(|H|\). Following the same idea as in the case of exceptional groups, but using Lemma 5.4 instead, we can show that in fact \(f(H)>f(\mathsf{A}_{m})\) for every \(H\) of classical type and \(m\geqslant 19\). Let us present the details for only the linear groups. Consider \(H=PSL_{n}(q)\) for some \(n\geqslant 2\) and \(q\) a prime power. By Lemma 5.4(i), we have \[f(\mathsf{A}_{m})=\frac{m!}{2b(\mathsf{A}_{m})}\leqslant e^{1.28255\sqrt{m}}(m!)^{1/2}.\] Thus, if \(|H|\geqslant e^{2.5651\sqrt{m}}m!\) then \(f(H)>\sqrt{|H|}\geqslant e^{1.28255\sqrt{m}}(m!)^{1/2}\geqslant f(\mathsf{A}_{ m})\) and we would be done. We therefore can assume that \(|H|<e^{2.5651\sqrt{m}}m!\), which in particular implies that \(n<m\). Using Lemma 5.4(iii), we see that, as before, it is enough to show that \(b(\mathsf{A}_{m})\geqslant 13(1+\log_{q}(n+1))^{2.54}|\mathsf{A}_{m}|_{p}\). Since \(m\geqslant n+1\) and \(|\mathsf{A}_{m}|_{p}\leqslant 2^{m-2}\), for this it is sufficient to show that \[b(\mathsf{A}_{m})\geqslant 13(1+\log_{2}m)^{2.54}2^{m-2}.\] This last inequality is indeed true for \(m=20\), and therefore is true for all \(m\geqslant 20\), by induction and Lemma 5.4(ii). Checking directly, we see that the inequality \(b(\mathsf{A}_{m})\geqslant 13(1+\log_{q}(n+1))^{2.54}|\mathsf{A}_{m}|_{2}\) is still valid for \(m=19\). As for the exceptional types, we are left to consider the small cases \(m\leqslant 18\). Again we are done if \(f(\mathsf{A}_{m})\leqslant\sqrt{|H|}\), so we may assume that \(|H|<f(\mathsf{A}_{m})^{2}\). For each \(m\), we search for relevant \(H\) satisfying \(|H|<f(\mathsf{A}_{m})^{2}\) and \(|\mathsf{A}_{m}|\) divides \(|H|\) and find that, for such an \(H\), the inequality \(f(H)>f(\mathsf{A}_{m})\) always holds true. We shall need the following result on \(2\)-defect zero and \(3\)-defect zero characters of alternating groups, which easily follows from earlier work of F. Garvan, D. Kim and D. Stanton [1] on the so-called _\(p\)-core partitions_. They are partitions having no hook lengths divisible by \(p\). Using Garvan-Kim-Stanton's result, A. Granville and K. Ono [1] proved the existence of \(p\)-defect zero characters with \(p\geqslant 5\) in symmetric and alternating groups. **Lemma 7.2**.: _Let \(m\) be a positive integer._ 1. \(\mathsf{A}_{m}\) _has a_ \(2\)_-defect zero irreducible character if and only if_ \(m=2k^{2}+k\) _or_ \(m=2k^{2}+k+2\) _for some_ \(k\in\mathbb{N}\)_._ 2. \(\mathsf{A}_{m}\) _has a_ \(3\)_-defect zero irreducible character if and only if there is no prime_ \(\ell\equiv 2(\operatorname{mod}3)\) _such that the exact power of_ \(\ell\) _dividing_ \(3m+1\) _is odd._ Proof.: See the discussion in [1, pp. 333-334]. 
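Concretely, for \(k=2,3,4\) the formulas in part (i) give \(m\in\{10,21,36\}\) and \(m\in\{12,23,38\}\), which is exactly the list appearing in the proof of Proposition 7.3 below. As for part (ii), note for instance that \(3\cdot 23+1=70=2\cdot 5\cdot 7\) is divisible by the primes \(2\) and \(5\) (both congruent to \(2\) modulo \(3\)) to odd powers, so \(\mathsf{A}_{23}\) has no \(3\)-defect zero character, whereas \(3\cdot 21+1=64=2^{6}\) and \(3\cdot 36+1=109\) (a prime congruent to \(1\) modulo \(3\)) pass the test.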
**Proposition 7.3**.: _Let \(S\) be a simple group of Lie type and \(8\neq m\geqslant 7\) an integer. Then \(\operatorname{cod}(S)\nsubseteq\operatorname{cod}(\mathsf{A}_{m})\). In fact, if \(|S|\) divides \(|\mathsf{A}_{m}|\) and \(m\geqslant 44\), then \(f(S)<f(\mathsf{A}_{m})\)._ Proof.: Assume by contradiction that \(\operatorname{cod}(S)\subseteq\operatorname{cod}(\mathsf{A}_{m})\). By Lemma 5.1, we then have \(f(S)\geqslant f(\mathsf{A}_{m})\). Suppose that the defining characteristic of \(S\) is \(p\). Observe that \(f(S)\leqslant|S|/\mathsf{St}_{S}(1)=|S|_{p^{\prime}}\). Furthermore, \(|S|_{p^{\prime}}<|S|_{p}^{2}\) (see [1, Proof of Thm. 12]) and \(|S|_{p}\leqslant|\mathsf{A}_{m}|_{p}\) by Theorem 3.1. Therefore we have \(f(S)<(|\mathsf{A}_{m}|_{p})^{2}\). Assume for a moment that \(m\geqslant 10\) so that \(|\mathsf{A}_{m}|_{p}\leqslant|\mathsf{A}_{m}|_{2}\leqslant 2^{m-2}\). We now have \(f(S)<2^{2m-4}\). On the other hand, it is clear that \(f(\mathsf{A}_{m})>\sqrt{m!/2}\). Therefore, we would be done if \(m!\geqslant 2^{4m-7}\). By the well-known estimate \(m!>\sqrt{2\pi m}(m/e)^{m}\), this is certainly true when \(m\geqslant 44\). So we may now suppose that \(m\leqslant 43\). As mentioned above, every simple group of Lie type, and therefore \(S\) in particular, has a \(2\)-defect zero irreducible character, which means that \(S\) has an odd codegree and so does \(\mathsf{A}_{m}\) as \(\operatorname{cod}(S)\subseteq\operatorname{cod}(\mathsf{A}_{m})\). It follows that \(m=2k^{2}+k\) or \(m=2k^{2}+k+2\) for some \(k\in\mathbb{N}\), by Lemma 7.2(i). This forces \(m\) to be one of \(10,12,21,23,36\), or \(38\). For the same reason, \(\mathsf{A}_{m}\) has a codegree not divisible by \(3\) and so Lemma 7.2(ii) further narrows down the choices for \(m\): \(m\in\{10,12,21,36\}\). In fact, when \(m=21\) or \(36\), we still have \(f(\mathsf{A}_{m})>|\mathsf{A}_{m}|_{2}^{2}\), and since \(|\mathsf{A}_{m}|_{2}^{2}>f(S)\), it follows that \(f(\mathsf{A}_{m})>f(S)\), which is a contradiction. Suppose \(m=10\). The inequality \(f(\mathsf{A}_{m})<(|\mathsf{A}_{m}|_{p})^{2}\) then forces \(p=2\) or \(3\). If \(p=2\) then \(|S|_{2^{\prime}}=|\mathsf{A}_{10}|/\chi(1)\), where \(\chi\in\operatorname{Irr}(\mathsf{A}_{10})\) is one of the two \(2\)-defect zero irreducible characters of equal degree \(384\), implying \(|S|_{2^{\prime}}=10!/(2\cdot 384)=4725\). It is easy to see from [1] that there is no such group of Lie type in characteristic \(2\). If \(p=3\) then \(|S|_{3^{\prime}}=10!/(2\cdot 567)=3200\) since \(\mathsf{A}_{10}\) has a unique \(3\)-defect zero character of degree \(567\), which again leads to a contradiction as there is no such group in characteristic 3. The case \(m=12\) is treated similarly and we skip the details. ## 8. Theorem B: Alternating and sporadic groups **Proposition 8.1**.: _Let \(m<n\) be positive integers. Then \(f(\mathsf{A}_{m})<f(\mathsf{A}_{n})\). Consequently, \(\operatorname{cod}(\mathsf{A}_{m})\nsubseteq\operatorname{cod}(\mathsf{A}_{m+1})\)._ Proof.: It suffices to show that \(b(\mathsf{A}_{m+1})<(m+1)b(\mathsf{A}_{m})\). Let \(\chi\in\operatorname{Irr}(\mathsf{A}_{m+1})\) such that \(\chi(1)=b(\mathsf{A}_{m+1})\). As shown in [16, p. 1956], such \(\chi\) must be the restriction of an irreducible character, say \(\psi\), of \(\mathsf{S}_{m+1}\) whose associated partition, say \(\lambda\), is not self-conjugate. In particular, \(\chi(1)=\psi(1)\). As in the proof of Lemma 5.4(ii), let \(Y_{\lambda}\) be the Young diagram associated to \(\lambda\). 
The restriction \(\psi_{\mathsf{S}_{m}}\) of \(\psi\) to \(\mathsf{S}_{m}\) is the sum of irreducible characters corresponding to the partitions of \(m\) whose associated Young diagrams are obtained from \(Y_{\lambda}\) by removing a suitable node. The number of those suitable nodes is at most \((\sqrt{8m+9}-1)/2\), so \[b(\mathsf{A}_{m+1})=\psi(1)\leq\frac{\sqrt{8m+9}-1}{2}b(\mathsf{S}_{m}).\] Since \(b(\mathsf{S}_{m})<2b(\mathsf{A}_{m})\) as already mentioned above, it follows that \[b(\mathsf{A}_{m+1})<(\sqrt{8m+9}-1)b(\mathsf{A}_{m}),\] which implies our desired inequality \(b(\mathsf{A}_{m+1})<(m+1)b(\mathsf{A}_{m})\) for \(m\geqslant 5\). The result is easily checked for smaller \(m\). **Proposition 8.2**.: _Theorem B is true when either \(S\) or \(H\) is a sporadic simple group._ Proof.: The case where both \(S\) and \(H\) are sporadic simple groups can be verified by using the available data in [1]. Suppose that \(S\) is a sporadic group and \(H=\mathsf{A}_{m}\) for some \(m\geqslant 5\). Let \(p_{S}\) be the largest prime divisor of \(|S|\). By Theorem 3.1, we have \(|S|\) divides \(|\mathsf{A}_{m}|\), so \(p_{S}\leqslant m\). By Lemma 5.1, we have \(f(S)\geqslant f(\mathsf{A}_{m})>\sqrt{m!/2}\). It follows that \(f(S)\geqslant\sqrt{p_{S}!/2}\). Again using [1], it can be checked that this can never happen. Next we assume that \(S\) is a sporadic group and \(H\) is a simple group of Lie type in characteristic \(p\). Suppose first that \(S\) has an irreducible character, say \(\chi\), of \(p\)-defect zero. Then, as argued in the proof of Lemma 3.4, we have \(|S|_{p^{\prime}}=|H|_{p^{\prime}}\) and \(\chi(1)=|S|_{p}\). In particular, \(\chi(1)\) is a prime power, and therefore, [11, Thm. 1.1] yields \[(S,p,\chi(1))\in\{(M_{11}/M_{12},11,11),(M_{11},2,16),(M_{24}/Co_{2}/Co_{3},23,23)\}.\] (We note that \(M_{12}\) has another irreducible character of prime power degree, namely 16, but the character is not of 2-defect zero and thus does not fit our situation.) However, for each of these possibilities, there is no simple group of Lie type \(H\) in characteristic \(p\) such that \(|H|_{p^{\prime}}=|S|_{p^{\prime}}\). Next, we suppose that \(S\) has no characters of \(p\)-defect zero. By [1, Cor. 2], \[p\in\{2,3\}\text{ and }S\in\{M_{12},M_{22},M_{24},J_{2},HS,Suz,Ru,Co_{1}, Co_{3},BM\}.\] Now we just apply Lemma 5.4 and argue similarly as in the proof of Proposition 7.1, with \(S\) in place of \(\mathsf{A}_{m}\), to arrive at \(f(H)>f(S)\), and thus it follows from Lemma 5.1 that \(\operatorname{cod}(S)\nsubseteq\operatorname{cod}(H)\). Now we consider the case where \(S=\mathsf{A}_{m}\) for some \(m\geq 5\) and \(H\) a sporadic simple group. Using Theorem 3.1, we have \(m!/2\) divides \(|H|\) and so \(m\) is at most \(\overline{p}_{H}-1\), where \(\overline{p}_{H}\) is the smallest prime not dividing \(|H|\). This constraint is enough to ensure that \(f(\mathsf{A}_{m})<f(H)\), and thus \(\operatorname{cod}(\mathsf{A}_{m})\nsubseteq\operatorname{cod}(H)\) by Lemma 5.1. Finally we consider the case where \(S\) is a simple group of Lie type and \(H\) a sporadic simple group. As in the proof of Proposition 7.3, we have \(f(H)\leq(|H|_{p})^{2}\), where \(p\) is the defining characteristic of \(S\). The only possible \(p\) satisfying this condition is \(p=2\). Now \(|S|_{2^{\prime}}\) is an odd codegree of \(S\), and hence of \(H\), and so \(|S|_{2^{\prime}}=|H|/\chi(1)\) for some \(2\)-defect zero character \(\chi\in\operatorname{Irr}(H)\). 
There are in fact only \(16\) sporadic simple groups having a \(2\)-defect zero irreducible character. For such a group and such a character, there are no \(S\) satisfying the indicated condition. This concludes the proof. Theorem B follows from Theorem 6.4 and Propositions 7.1, 7.3, 8.1, and 8.2. For future work on the codegree isomorphism conjecture (CIC), we record the following immediate consequence of Theorem B. **Theorem 8.3**.: _Let \(S\) be a finite nonabelian simple group. Let \(G\) be a minimal counterexample to (CIC) with respect to \(S\) - that is, \(G\) is minimal subject to the conditions \(\operatorname{cod}(G)=\operatorname{cod}(S)\) and \(G\not\cong S\). Then \(G\) has a unique minimal normal subgroup \(N\) and \(G/N\cong S\)._ Proof.: Let \(N\) be a maximal normal subgroup of \(G\). Since \(\operatorname{cod}(G/N)\subseteq\operatorname{cod}(G)=\operatorname{cod}(S)\) and \(G/N\) is simple, it follows from Theorem B that \(G/N\cong S\). Furthermore, by the minimality of \(G\) as a counterexample, we have that \(N\) is a minimal normal subgroup of \(G\). (If \(G\) has a normal subgroup \(M\) with \(1<M<N\), then \(\operatorname{cod}(G/N)\subseteq\operatorname{cod}(G/M)\subseteq\operatorname {cod}(G)\), forcing \(\operatorname{cod}(G/M)=\operatorname{cod}(S)\).) Also, \(N\) is the unique minimal normal subgroup of \(G\) since, otherwise, \(G=S\times S\), which violates the assumption \(\operatorname{cod}(G)=\operatorname{cod}(S)\). We conclude the paper with a couple of remarks. First, the group pseudo-algebra \(C(G)\) seems to better distinguish finite groups than the usual complex group algebra \(\mathbb{C}G\). For instance, while any two abelian groups \(A\) and \(B\) of the same order have the same complex group algebra \(\mathbb{C}A=\mathbb{C}B\), it was shown in [10] that \(A\cong B\) if and only if \(C(A)=C(B)\). It has even been speculated that a finite group \(G\) and an abelian group \(A\) are isomorphic if and only if \(C(G)=C(A)\). This, if true, would indicate that abelian groups have very distinctive character codegrees (counting multiplicities). Theorem A shows that simple groups indeed have very distinctive codegrees. Our results are likely to remain true for quasi-simple and/or almost simple groups. However, at the time of this writing, we do not yet see a uniform proof for these larger families of groups as the one presented in this paper.
2309.02473
A Survey of Imitation Learning: Algorithms, Recent Developments, and Challenges
In recent years, the development of robotics and artificial intelligence (AI) systems has been nothing short of remarkable. As these systems continue to evolve, they are being utilized in increasingly complex and unstructured environments, such as autonomous driving, aerial robotics, and natural language processing. As a consequence, programming their behaviors manually or defining their behavior through reward functions (as done in reinforcement learning (RL)) has become exceedingly difficult. This is because such environments require a high degree of flexibility and adaptability, making it challenging to specify an optimal set of rules or reward signals that can account for all possible situations. In such environments, learning from an expert's behavior through imitation is often more appealing. This is where imitation learning (IL) comes into play - a process where desired behavior is learned by imitating an expert's behavior, which is provided through demonstrations. This paper aims to provide an introduction to IL and an overview of its underlying assumptions and approaches. It also offers a detailed description of recent advances and emerging areas of research in the field. Additionally, the paper discusses how researchers have addressed common challenges associated with IL and provides potential directions for future research. Overall, the goal of the paper is to provide a comprehensive guide to the growing field of IL in robotics and AI.
Maryam Zare, Parham M. Kebria, Abbas Khosravi, Saeid Nahavandi
2023-09-05T11:56:07Z
http://arxiv.org/abs/2309.02473v1
# A Survey of Imitation Learning: Algorithms, Recent Developments, and Challenges ###### Abstract In recent years, the development of robotics and artificial intelligence (AI) systems has been nothing short of remarkable. As these systems continue to evolve, they are being utilized in increasingly complex and unstructured environments, such as autonomous driving, aerial robotics, and natural language processing. As a consequence, programming their behaviors manually or defining their behavior through reward functions (as done in reinforcement learning (RL.)) has become exceedingly difficult. This is because such environments require a high degree of flexibility and adaptability, making it challenging to specify an optimal set of rules or reward signals that can account for all possible situations. In such environments, learning from an expert's behavior through imitation is often more appealing. This is where imitation learning (IL) comes into play - a process where desired behavior is learned by imitating an expert's behavior, which is provided through demonstrations. This paper aims to provide an introduction to IL and an overview of its underlying assumptions and approaches. It also offers a detailed description of recent advances and emerging areas of research in the field. Additionally, the paper discusses how researchers have addressed common challenges associated with IL and provides potential directions for future research. Overall, the goal of the paper is to provide a comprehensive guide to the growing field of IL in robotics and AI. Imitation learning, learning from demonstrations, reinforcement learning, survey, robotics ## I Introduction Traditionally, machines and robots have been manually programmed to learn autonomous behavior [1]. Traditional methods require experts to provide specific, hard-coded rules regarding the actions that a machine must perform, as well as the characteristics of the environment in which the machine operates. However, developing such rules requires considerable time and coding expertise [2]. In order to automate the tedious manual hard-coding of every behavior, a learning approach is required [3]. Imitation learning provides an avenue for teaching the desired behavior by demonstrating it. IL techniques have the potential to reduce the problem of teaching a task to that of providing demonstrations, thus eliminating the need for explicit programming or the development of task-specific reward functions [3]. The concept of IL is based on the premise that human experts are capable of demonstrating the desired behavior even when they are unable to program it into a machine or robot. As such, IL can be leveraged in any system that requires autonomous behavior similar to that of a human expert [1]. The main purpose of IL is to enable agents to learn to perform a specific task or behavior by imitating an expert through the provision of demonstrations [4]. Demonstrations are used to train learning agents to perform a task by learning a mapping between observations and actions. By utilizing IL, agents are able to transition from repeating simple predetermined behaviors in constrained environments to taking optimal autonomous actions in unstructured environments, without imposing too much burden on the expert [2]. As a result, IL approaches have the potential to offer significant benefits to a wide range of industries, including manufacturing [5], health care [6], autonomous vehicles [7, 8], and the gaming industry [9]. 
In these applications, IL allows subject-matter experts, who may not possess coding skills or knowledge of the system, to program autonomous behavior in machines or robots efficiently. Although the idea of learning by imitation has been around for some time, recent achievements in computing and sensing, along with a growing demand for artificial intelligence applications, have increased the significance of IL [10, 11]. Consequently, the number of publications in the field has increased significantly in recent years. Multiple surveys of IL have been published over the past two decades, each focusing on different aspects of the field's development (Fig. 1). Schaal [3] presented the first survey of IL, focusing on IL as a route to create humanoid robots. More recently, Osa et al. [1] provided an algorithmic perspective on IL, while Hussein et al. [12] provided a comprehensive review of the design options for each stage of the IL process. Most recently, Le Mero et al. [7] provided a comprehensive overview of IL-based techniques for end-to-end autonomous driving systems. Despite the existence of a large number of surveys on IL, a new survey is necessary to capture the latest advances in this rapidly evolving field and provide an up-to-date overview of the state of the art. With the field gaining increasing interest and having diverse applications, a comprehensive survey could serve as an essential reference for newcomers, as well as provide an overview of different use cases. We acknowledge that IL is a constantly evolving field, with new algorithms, techniques, and applications being developed. Therefore, our survey aims to consolidate the vast amount of research on IL, making it easier for researchers and practitioners to navigate. Moreover, we aim to identify gaps and challenges in the current research, providing a clear direction for future work. Lastly, we aim to make the concepts and techniques of IL more accessible to a wider audience, including researchers from related fields, to enhance the understanding of this area. Overall, we strongly believe that our survey will make significant contributions to advancing the field of IL and guide future research in this exciting area. The objective of this survey paper is to present a comprehensive overview of the field of IL. To achieve this, we will organize our discussion of IL approaches based on historical and logical reasons. Initially, we will introduce the two broad categories of approaches to IL: behavioral cloning (BC) and inverse reinforcement learning (IRL). We will discuss their formulations, developments, strengths, and limitations. Additionally, we will explore how adversarial imitation learning (AIL) extends IRL by introducing an adversarial context to the learning process. We will underscore the benefits of integrating adversarial training into IL and assess the current progress in the AIL field. Furthermore, we will introduce imitation from observation (IFO) as a novel technique that aims to learn from state-only (no actions) demonstrations. We will discuss the significance of IfO and how it incorporates and extends the previous categories of BC, IRL, and AIL in different methods to tackle the challenge of learning from state-only observations. Finally, we will discuss the challenges that IL techniques encounter in real-world scenarios, such as sub-optimal demonstrations and domain discrepancies between the expert and the learner. 
We will conclude with a discussion of the different approaches to IL, their limitations, and future research directions that can be taken to address them.

Fig. 1: A historical timeline of IL research illustrating key achievements in the field.

## II Behavioral Cloning BC is an IL technique that treats the problem of learning a behavior as a supervised learning task [13, 14]. BC involves training a model to mimic an expert's behavior by learning to map the state of the environment to the corresponding expert action. The expert's behavior is recorded as a set of state-action pairs, also known as demonstrations. During the training process, the model is provided with these demonstrations as inputs and is trained to learn a function that maps the current state to the corresponding expert action. Once the model is trained, it can use the learned function to generate actions for new states that it has not encountered before. One advantage of BC is that it requires no knowledge of the underlying dynamics of the environment [13]. Instead, it relies solely on the provided demonstrations to learn the behavior. Additionally, BC is computationally efficient since it involves training a supervised learning model, which is a well-studied problem in machine learning. Despite its simplicity, the BC approach has a significant drawback - the covariate shift problem [15]. This problem arises because during training, the learner is trained on states generated by the expert policy, but during testing, the learner is tested on states induced by its own actions [16]. As a result, the state distribution observed during testing can differ from that observed during training. The problem with the supervised BC approach is that the agent does not know how to return to the demonstrated states when it drifts and encounters out-of-distribution states [17]. Covariate shift is particularly dangerous in safety-critical situations such as driving [18], as the agent may encounter novel situations that it has not seen during training, and its ability to recover from mistakes can be critical to avoid accidents. To address the covariate shift problem and improve the robustness of the BC approach, three broad research areas have been identified (Fig. 2). The first and most popular area is interactive IL. Algorithms of this type are based on the assumption that the agent has access to an online expert who can be consulted during training. Dataset aggregation (DAgger) [14] is the earliest interactive IL method and proposes to train the agent on its own state distribution to resolve the train and test time mismatch problem. DAgger queries the expert to relabel the data collected by the agent with the appropriate action that should have been taken. However, due to frequent queries, the human expert is subjected to a significant cognitive burden, resulting in inaccurate or delayed feedback that adversely affects the training process [19]. Consequently, determining when and how to engage human subjects is one of the key challenges of interactive IL algorithms [20]. Rather than providing continuous feedback, "human-gated" interactive IL algorithms [21, 22] extend DAgger to allow the expert to decide when to provide the corrective interventions. For example, human-gated DAgger (HG-DAgger) [21] rolls out the agent trajectory until the expert determines that the agent has reached an unsafe region of the state space. In this case, the human expert intervenes by taking control of the system and guiding the agent back to a safe state. 
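As a rough illustration of this interactive scheme, the sketch below shows a human-gated data-aggregation loop in the spirit of DAgger and HG-DAgger. It is not the reference implementation of [14] or [21]: the `env` and `expert` objects, their methods (`reset`, `step`, `action`, `flags_unsafe`), and the scikit-learn regressor standing in for the policy are hypothetical placeholders, and states and actions are assumed to be fixed-length numeric vectors.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def human_gated_dagger(env, expert, n_rollouts=20, horizon=200, n_demo_rollouts=3):
    """Schematic human-gated DAgger-style loop (hypothetical env/expert interfaces).

    Assumed interfaces:
      env.reset() -> state;  env.step(action) -> (next_state, done)
      expert.action(state) -> expert action for that state
      expert.flags_unsafe(state) -> True when the expert chooses to intervene
    """
    states, actions = [], []
    policy = MLPRegressor(hidden_layer_sizes=(64, 64))

    # Seed the dataset with a few pure expert demonstrations (behavioral cloning).
    for _ in range(n_demo_rollouts):
        s = env.reset()
        for _ in range(horizon):
            a = expert.action(s)
            states.append(s); actions.append(a)
            s, done = env.step(a)
            if done:
                break
    policy.fit(np.array(states), np.array(actions))

    # Aggregation phase: the learner drives, the expert takes over when it
    # judges the current state unsafe, and only the intervention data is kept.
    for _ in range(n_rollouts):
        s = env.reset()
        for _ in range(horizon):
            if expert.flags_unsafe(s):
                a = expert.action(s)                   # corrective label is recorded
                states.append(s); actions.append(a)
            else:
                a = policy.predict(np.array([s]))[0]   # learner stays in control
            s, done = env.step(a)
            if done:
                break
        policy.fit(np.array(states), np.array(actions))  # retrain on aggregated data
    return policy
```

The design choice that distinguishes this gated variant from plain DAgger is that new labels are collected only while the expert is in control, so the aggregated dataset concentrates on exactly the states where the learner needed correction.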
Using this method, no constraints limit the amount of human intervention. Li et al. [19] propose a method that learns to minimize human intervention and adaptively maximize automation during training. To accomplish this, when the human expert issues intervention, it incurs a cost to the agent, which the agent learns to minimize during its training process. However, the use of these algorithms depends on human experts constantly monitoring the agent to decide when to intervene, which imposes a significant burden on them. To tackle this challenge, there has been an increasing interest in "robot-gated" algorithms [23, 24, 25, 20] that allow robots to actively ask humans for intervention. For example, SafeDAgger [23] uses an auxiliary safety policy, which determines the likelihood of the agent deviating from the expert's trajectory, as a signal for the agent to transfer control over to the expert. LazyDAgger [24] extends SafeDAgger to reduce the number of context switches between the expert and autonomous control. A recent robot-gated approach called ThriftyDAgger [20] aims to intervene only when states are sufficiently novel (out-of-distribution) or risky (prone to result in task failure). In addition, the intervention burden is reduced by limiting the total number of interventions to a budget specified by the user. The second area of research addressing the covariate shift problem consists of algorithms that estimate the support of the expert occupancy measure and then specify a reward that encourages the agent to remain on the support of the expert [26, 27, 17]. The reward function is then optimized using RL. Unlike interactive IL, these algorithms do not assume access to an online expert and only rely on demonstrations and further interactions with the environment. The most popular algorithms are based on IRL, which addresses the covariate shift by training an RL agent to consistently match the demonstrations over time. A detailed discussion of these methods will be provided in section III and IV. However, these methods often use complex and unstable approximation techniques involving adversarial training to learn a reward function from demonstrations [28, 17]. In this section, a recent alternative line of research is reviewed that also employs RL, but instead of learning a reward function, it uses a simple fixed reward function. The key idea is to incentivize the agent to consistently remain on the support of the expert policy over time by encouraging it to return to demonstrated states when facing new states outside the expert's support [29]. Wang et al. [26] estimate the support of the expert policy using a kernelized version of principal component analysis. The support estimation process produces a score that increases as a state-action pair moves closer to the support of the expert policy. The score is then used to construct an intrinsic reward function. Reddy et al. [17] propose soft Q IL (SQIL). SQIL encourages the agent to imitate the expert in demonstrated states by using an extremely sparse reward function - assigns a constant reward of +1 to transitions inside expert demonstrations and a constant reward of 0 to all other transitions. The reward function encourages the agent to return to demonstrated states after encountering out-of-distribution states. The proposed model outperforms simple BC and shows good performance even with a limited number of demonstrations. Brantley et al. 
[27] use expert demonstrations to train an ensemble of policies, with the variance of their predictions used as the cost. Inherently, the variance (cost) outside of expert support would be higher since ensemble policies will be more likely to disagree on states not seen in the demonstrations. An RL algorithm minimizes this cost in combination with a supervised BC cost. As a result, the RL cost assists the agent in returning to the expert distribution, whereas the supervised cost ensures that the agent mimics the expert within the expert's distribution. Lastly, the third area of algorithms aims to constrain the agent to known regions of the space covered by the demonstrator without relying on an interactive expert or leveraging RL. These methods are particularly beneficial and practical for real-world applications where safety constraints must be met, such as in healthcare, autonomous driving, and industrial Fig. 2: A categorization of methods addressing the covariate shift problem. Interactive IL assumes access to an online expert. DAgger like algorithms require the expert to provide corrective labels for each action taken by the agent. On the other hand, human-gated and robot-gated methods provide corrective labels only when they are requested by the expert or agent, respectively. Unlike interactive IL, IRL methods do not require access to an online expert. These methods require an underlying RL algorithm to optimize a reward function (either learned from demonstrations or fixed). Lastly, by constraining the agent to known regions of the space covered by demonstrations, constrained IL attempts to address covariate shift problems that cannot be expressed or solved using the other two categories. processes [30]. In [31], the authors attempt to overcome the covariate shift problem in autonomous driving by augmenting the imitation loss with additional losses to discourage bad driving. Moreover, additional data is provided to the agent in the form of synthetic perturbations to the expert's trajectory. The perturbations expose the model to non-expert behavior such as collisions and provide an important signal for the added losses to avoid these behaviors. Wong et al. [32] propose a learned error detection system to indicate when an agent is in a potential failure state. In this way, the error detector can constrain the policy to execute only on states seen previously in the demonstrations and prevent potentially unstable behavior by resetting the agent to a well-known configuration or terminating the execution. [33] considers an offline IL setting where the agent is provided with two datasets: one small dataset of expert policy state-action pairs and one large dataset of state-action-next state transitions from a potentially suboptimal behavior policy. Their solution to the covariate shift problem consists of training a dynamics model on the sub-optimal demonstrations and applying a high penalty in regions of the state-action space that are not well covered by the data. It is difficult for the supervised learning approach used in behavioral cloned policies to identify the underlying causes of expert actions, leading to a phenomenon known as "causal misidentification" [35]. In the training procedure, the causal structure of the interaction between the expert and the environment is not taken into account. Therefore, cloned policies might fail to distinguish nuisance correlates from the true causes of expert actions [35]. 
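The ensemble-based support-estimation idea described above can be sketched compactly: several behavior-cloned policies are fit on resampled demonstrations, and their disagreement acts as a cost that is low on the expert's support and grows away from it. This is a simplified, hypothetical illustration (linear policies, synthetic demonstrations), not the cited authors' implementation; in practice the cost would be minimized by an RL algorithm alongside a supervised BC loss.

```python
# Minimal sketch of ensemble disagreement as an out-of-support cost signal.
import numpy as np

rng = np.random.default_rng(1)
STATE_DIM, ACT_DIM, N_DEMOS, ENSEMBLE = 4, 2, 200, 5
W_expert = np.ones((STATE_DIM, ACT_DIM))

# Hypothetical expert demonstrations: (state, action) pairs with mild noise.
demo_states = rng.standard_normal((N_DEMOS, STATE_DIM))
demo_actions = demo_states @ W_expert + 0.1 * rng.standard_normal((N_DEMOS, ACT_DIM))

def fit_member(seed):
    # Each ensemble member is a behavior-cloned policy fit on a bootstrap resample.
    idx = np.random.default_rng(seed).integers(0, N_DEMOS, N_DEMOS)
    W, *_ = np.linalg.lstsq(demo_states[idx], demo_actions[idx], rcond=None)
    return W

ensemble = [fit_member(seed) for seed in range(ENSEMBLE)]

def disagreement_cost(state):
    # Variance of the members' predicted actions: low on the expert's support,
    # higher on out-of-distribution states, so minimizing it (with RL) pushes
    # the agent back toward demonstrated regions.
    preds = np.stack([state @ W for W in ensemble])
    return float(preds.var(axis=0).sum())

in_support = demo_states[0]
out_of_support = in_support * 10.0          # far outside the demonstrated region
print("cost near the demonstrations:", disagreement_cost(in_support))
print("cost out of distribution:   ", disagreement_cost(out_of_support))
```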
When training and testing distributions are the same, ignoring nuisance correlates might not pose a problem as they continue to hold in the test dataset. In BC, however, ignoring causality is particularly problematic because of the distributional shift [35]. The causal structure of the underlying sequential decision process in IL further exacerbates this problem. This is because the causal structure-past actions influence future observations-often creates additional complex nuisance correlates. To address this problem, [35] learns a mapping from causal graphs to policies and then uses targeted interventions-either expert queries or environment interactions-to find the optimal policy. Subsequent work by Wen et al. [36] explores a prominent class of causal confusion problems known as "the copycat problem." This problem happens when the expert actions are highly correlated over time. In this scenario, the agent learns to cheat by copying the expert's previous actions. To address the copycat problem, [36] proposes an adversarial approach for learning a feature representation that ignores the information about the known nuisance correlates-the previous actions-while retaining the necessary information to predict the next action. Chuang et al. [34] extend [36] to high-dimensional image observations. Using a memory extraction module, they attempt to extract historical features from observation histories while removing as much information as possible regarding previous actions (Fig. 3). Traditionally, BC involves training an explicit neural network [37]. Unfortunately, conventional explicit models struggle to model discontinuities, making policies incapable of switching decisively between different behaviors. This issue arises because explicit models are unable to represent discontinuities with neural networks built with continuous activation functions, which is almost always the case with neural networks [37]. In contrast, implicit models are capable of representing sharp discontinuities, despite only having continuous layers in the network [37]. The implicit BC model presented in [38] turns BC into an energy-based modeling problem [39] by training a neural network that takes both observations and actions as input and outputs a single value that is low for expert actions and high for non-expert actions [37]. A trained implicit BC policy selects the action input with the lowest score for a given observation. This method requires more computation than explicit BC models, both during training and during inference. However, the results demonstrate that it can often outperform traditional explicit baselines in robotic manipulation tasks, both in the real world and in simulation. ## III Inverse Reinforcement Learning In addition to behavioral cloning, another key approach to imitation learning is IRL [40]. IRL involves an apprentice agent that aims to infer the reward function underlying the observed demonstrations, which are assumed to come from an expert who acts optimally [41]. Once the reward function is inferred, it is optimized to train an apprentice policy through RL [42]. RL agents, unlike the agents in BC, learn by continually interacting with their environment, observing the consequences of their actions, and altering their behavior to maximize long-term cumulative reward [43, 44]. This process involves using reinforcement signals to learn the long-term consequences of each action, allowing the agent to recover from mistakes [27]. 
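The implicit, energy-based formulation of BC discussed above can be illustrated with a small sketch: a network scores (observation, action) pairs, is trained contrastively so that expert actions receive low energy relative to sampled negatives, and acts at test time by taking the lowest-energy candidate action. The data, dimensions, and InfoNCE-style loss below are assumptions made only to keep the example runnable; they are not the exact architecture or training recipe of the cited work.

```python
# Minimal sketch of an implicit (energy-based) BC policy.
import torch
import torch.nn as nn
import torch.nn.functional as F

OBS_DIM, ACT_DIM, N_NEG = 4, 2, 16
energy = nn.Sequential(nn.Linear(OBS_DIM + ACT_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(energy.parameters(), lr=1e-3)

def expert_action(obs):
    # Toy stand-in for the demonstrator.
    return torch.tanh(obs[..., :ACT_DIM])

for step in range(200):
    obs = torch.randn(32, OBS_DIM)
    pos = expert_action(obs)                                    # one positive action
    neg = torch.rand(32, N_NEG, ACT_DIM) * 2 - 1                # sampled negatives
    cand = torch.cat([pos.unsqueeze(1), neg], dim=1)            # (32, 1+N_NEG, ACT_DIM)
    obs_rep = obs.unsqueeze(1).expand(-1, 1 + N_NEG, -1)
    e = energy(torch.cat([obs_rep, cand], dim=-1)).squeeze(-1)  # (32, 1+N_NEG)
    # Treat negative energy as logits; the expert action (index 0) should win.
    loss = F.cross_entropy(-e, torch.zeros(32, dtype=torch.long))
    opt.zero_grad(); loss.backward(); opt.step()

def act(obs, n_samples=256):
    # Inference: score many candidate actions and return the lowest-energy one.
    cand = torch.rand(n_samples, ACT_DIM) * 2 - 1
    obs_rep = obs.unsqueeze(0).expand(n_samples, -1)
    e = energy(torch.cat([obs_rep, cand], dim=-1)).squeeze(-1)
    return cand[torch.argmin(e)]

obs = torch.randn(OBS_DIM)
print("expert action:     ", expert_action(obs))
print("implicit BC action:", act(obs))
```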
Because of this capability, IRL is less sensitive to covariate shift compared to BC [14]. IRL has been widely used in a variety of applications, such as robotics manipulation, autonomous navigation, game playing, and natural language processing [45, 46, 47]. Nonetheless, devising an effective IRL algorithm for learning from demonstrations is a challenging task, mainly due to two major reasons. Firstly, IRL can be computationally expensive and resource-intensive. In part, this is due to the fact that the agent must Fig. 3: Left: BC might learn a shortcut from prior observations that outputs the previous action as the current action. Right: a copycat-free memory extraction module. The shortcut is no longer available using historical information [34]. interact repeatedly with its environment to accurately estimate the reward function [19, 48]. Additionally, the nature of this process can be inherently unsafe, particularly when dealing with high-risk applications such as autonomous driving or aircraft control [49]. Furthermore, a typical IRL approach follows an iterative process that involves alternating between reward estimation and policy training, which results in poor sample efficiency [46, 47, 50]. Consequently, there has been significant research aimed at addressing these issues to enhance the sample efficiency of IRL algorithms while maintaining the safety and accuracy of the learned policy. Some of these approaches include methods that utilize human guidance to reduce the number of interactions required to estimate the reward function accurately [51]. The second major challenge of IRL arises due to the inherent ambiguity in the relationship between the policy and the reward function. Specifically, a policy can be optimal with respect to an infinite number of reward functions [52, 50]. To address this challenge, researchers have proposed various methods to introduce additional structure into the reward function. There are roughly three categories of IRL methods that aim to address this ambiguity [53]. The first category is maximum-margin methods. The key idea in maximum-margin methods is to infer a reward function that explains the optimal policy more thoroughly than all other policies by a margin. These methods address the discussed ambiguity problem by converging on a solution that maximizes some margin. A foundational method in this category is the work of Ng et al. [50]. They estimate the reward function for which the given policy is optimal using a linear program while maximizing a margin. Another major work is maximum margin planning (MMP) [54] which seeks to find a weighted linear mapping of features to rewards so that the estimated policy is "close" to the demonstrated behaviors. [55, 56] build on and extend MMP to nonlinear hypothesis spaces by utilizing a family of functional gradient techniques. The adoption of feature-based reward functions gave rise to a variety of approaches making use of feature expectations for margin optimization. Abbeel and Ng [45] propose two foundational methods (max-margin and projection) for maximizing the feature expectation loss margin without assuming access to the expert's policy. As with many other IRL methods, these methods have the drawback of limiting the agent's performance to the quality of the expert. To address this limitation, Syed and Schapire [57] propose a game-theoretic approach capable of training a policy with superior performance to an expert. 
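A minimal sketch of the feature-expectation machinery behind the max-margin and projection methods above is given below. The MDP solver is deliberately stubbed out with a noisy rollout; in a real apprenticeship-learning system that step would be an RL or planning routine that is (near-)optimal for the current reward weights. Features, dynamics, and the expert are toy placeholders.

```python
# Minimal sketch of feature-expectation matching with a projection-style update.
import numpy as np

rng = np.random.default_rng(2)
N_FEATURES, HORIZON, GAMMA = 5, 30, 0.95

def features(state):
    return np.tanh(state[:N_FEATURES])

def rollout_feature_expectation(policy_noise):
    # Discounted feature sum along one trajectory (Monte-Carlo estimate of mu).
    state, mu = rng.standard_normal(N_FEATURES), np.zeros(N_FEATURES)
    for t in range(HORIZON):
        mu += (GAMMA ** t) * features(state)
        state = 0.9 * state + policy_noise * rng.standard_normal(N_FEATURES)
    return mu

mu_expert = np.mean([rollout_feature_expectation(0.1) for _ in range(50)], axis=0)

def solve_mdp(weights):
    # Placeholder for "find a policy (near-)optimal for reward r(s) = w . phi(s)".
    return rollout_feature_expectation(0.5)

# Projection loop: reward weights point from the current estimate of the
# learner's feature expectations toward the expert's.
mu_bar = solve_mdp(np.zeros(N_FEATURES))
for it in range(10):
    w = mu_expert - mu_bar
    mu_new = solve_mdp(w)
    # Project mu_bar toward mu_new along the connecting segment.
    d = mu_new - mu_bar
    alpha = np.clip(d @ (mu_expert - mu_bar) / (d @ d + 1e-8), 0.0, 1.0)
    mu_bar = mu_bar + alpha * d
    print(f"iter {it}: ||mu_E - mu_bar|| = {np.linalg.norm(mu_expert - mu_bar):.3f}")
```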
The second category of IRL algorithms aims to solve the ambiguity problem by maximizing the entropy of the resulting policy. MaxEntIRL [47] was the first IRL method to utilize maximum entropy. Ziebart's work [47] demonstrates that the maximum entropy paradigm is capable of handling expert suboptimality and stochasticity by using the distribution over possible trajectories. Subsequent works [58, 59] extend the MaxEntIRL algorithm to continuous state-action spaces using path integral methods. Optimizing a full forward Markov Decision Process (MDP) iteratively becomes intractable in high-dimensional continuous state-action spaces. To overcome this complexity, these studies exploit the local optimality of demonstrated trajectories. In many prior methods, detailed features are manually extracted using domain knowledge, which can be linearly combined into a reward, such as the distance between the ball and the cup for a robotic ball-in-cup game [60]. While linear representations are sufficient in many domains, they may be overly simplistic for complex real-world tasks, particularly when reward values are derived from raw sensory data. Wulfmeier et al. [61] propose maximum entropy deep IRL, a generalization of MaxEntIRL that utilizes neural networks to model complex, nonlinear reward functions. In addition, instead of using pre-extracted features, the deep architecture is further extended to learn features through Convolution layers. This is an important step towards automating the learning process [12]. A further study by Fin et al. [62] proposes guided cost learning (GCL), which improves the sampling efficiency of [61] since [61] relies on a large number of expert transitions to estimate the reward function. GCL learns a nonlinear reward function in the inner loop of a policy optimization (in contrast to early IRL methods). This allows it to scale effectively to complex control problems. A further advantage of this method is that it utilizes the raw state of the system instead of predefined features to construct the reward function, which reduces the engineering burden. Bayesian algorithms constitute the third category of IRL algorithms. Methods under this category use the expert's actions as the evidence for updating the reward function estimate. A posterior distribution over candidate reward functions is derived from a prior distribution over the rewards and a likelihood of the reward hypothesis. Various models of likelihood have been proposed over the years. BIRL [63] is the earliest Bayesian IRL technique, based on a Boltzmann distribution for modeling the likelihood. A variety of distributions can be used as the prior over the reward functions [63]. For example, a Beta distribution is appropriate for planning problems with large rewards dichotomy. Analytically obtaining the posterior in the continuous space of reward functions is extremely difficult. To address this issue, [63] uses Markov chain Monte Carlo (MCMC) to derive a sample-based estimate of the posterior mean. Instead of computing the posterior mean, [64] computes the maximum a posteriori (MAP) reward function. [64] argues that the posterior mean is not the most suitable approach for reward inference as it integrates over the entire reward space, even those not consistent with the observed behavior, in the loss function. Levine et al. [65] propose a Bayesian IRL algorithm that uses a nonlinear function of features to represent the reward. 
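The maximum-entropy gradient discussed earlier in this section, expert feature counts minus the expected feature counts under the current reward, can be written out for a small tabular problem. The chain MDP, horizon, and synthetic expert below are hypothetical placeholders; the backward soft value iteration and forward visitation pass are a tabular sketch in the spirit of MaxEntIRL rather than any specific published implementation.

```python
# Minimal tabular sketch of a maximum-entropy IRL gradient update.
import numpy as np

N_S, N_A, H, GAMMA, LR = 6, 2, 15, 0.95, 0.1
# Deterministic chain MDP: action 0 moves left, action 1 moves right.
next_state = np.array([[max(s - 1, 0), min(s + 1, N_S - 1)] for s in range(N_S)])

# Hypothetical expert demonstrations: the expert always walks to the last state.
expert_counts = np.zeros(N_S)
for _ in range(20):
    s = 0
    for _ in range(H):
        expert_counts[s] += 1
        s = next_state[s, 1]
expert_counts /= 20.0

theta = np.zeros(N_S)              # one reward weight per state (one-hot features)
for it in range(100):
    # Backward pass: soft (maximum-entropy) value iteration under reward theta.
    V = np.zeros(N_S)
    for _ in range(50):
        Q = theta[:, None] + GAMMA * V[next_state]                        # (N_S, N_A)
        Q_max = Q.max(axis=1, keepdims=True)
        V = (Q_max + np.log(np.exp(Q - Q_max).sum(axis=1, keepdims=True))).ravel()
    policy = np.exp(Q - V[:, None])                                       # MaxEnt policy

    # Forward pass: expected state-visitation counts over the horizon.
    d = np.zeros(N_S); d[0] = 1.0
    visits = np.zeros(N_S)
    for _ in range(H):
        visits += d
        d_next = np.zeros(N_S)
        for s in range(N_S):
            for a in range(N_A):
                d_next[next_state[s, a]] += d[s] * policy[s, a]
        d = d_next

    # Gradient step: expert feature counts minus expected feature counts.
    theta += LR * (expert_counts - visits)

print("learned state rewards:", np.round(theta, 2))
```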
They use a Gaussian process prior on reward values and a kernel function to determine the structure of the reward. The kernel's hyperparameters are learned through the Bayesian GP framework, leading to the learning of the reward structure. Classical BIRL algorithms are computationally intractable in complex high-dimensional environments since generating each posterior sample requires solving an entire MDP. This limitation has prevented these methods from scaling beyond small tabular settings. To overcome this limitation, [66] gen erates samples from the posterior distribution without using an MDP solver by proposing an alternative likelihood formulation that leverages preference labels over demonstrations (Fig. 4). Another approach to address the scalability issue is approximate variational reward IL (AVRIL) [67]. This approach simultaneously learns an imitating policy and an approximate posterior distribution over the reward in an offline setting. Unlike traditional sampling or MAP-based techniques, AVRIL relies on variational inference to estimate the posterior distribution accurately. Most existing IRL algorithms rely on the unrealistic assumption that the transition model, and sometimes the expert's policy, are known beforehand [47, 50, 54, 65]. However, in real-world scenarios, the agent often has to estimate the expert policy and transition dynamics from samples, leading to errors in the recovered reward function [68, 69]. In their analysis, [69] breaks down this error into components from estimating the expert policy and transition model. Based on their analysis (in a tabular setting), they propose an efficient sampling strategy focused on transferring the learned reward function to a fully known target environment. It is assumed, however, that the agent can query the transition dynamics for arbitrary states and actions. To remove this assumption, active exploration for IRL (AceIRL) [68] focuses on developing an efficient exploration strategy. This strategy aims to explore both the environment dynamics and the expert policy such that an arbitrary IRL algorithm can infer the reward function as effectively as possible. By utilizing previous observations, AceIRL builds confidence intervals that capture feasible reward functions and find exploration policies that prioritize the most relevant regions of the environment. ## IV Adversarial Imitation Learning Scaling IRL algorithms to larger environments has been a major challenge despite their success in generating policies that replicate expert behavior [62, 70, 71]. This challenge arises due to the computational complexity of many IRL algorithms, which often require RL to be executed in an inner loop [46]. AIL offers a promising solution to the computational challenges of IRL by searching for the optimal policy without fully solving an RL sub-problem at each iteration [46]. AIL involves a two-player game between an agent and an adversary (discriminator) where the adversary attempts to distinguish agent trajectories from expert trajectories [72]. The agent, on the other hand, endeavors to deceive the adversary by generating trajectories that closely resemble expert trajectories. Through this adversarial process, the agent gradually improves its imitation of the expert's behavior until it converges to a policy that closely resembles the expert's policy. AIL has demonstrated statistically significant improvements over existing methods in multiple benchmark environments, including robotics, autonomous driving, and game playing [46, 73, 74]. 
The effectiveness of AIL in addressing the limitations of IRL has spurred continued research in this area. The first AIL method that gained prominence is known as generative AIL (GAIL) [46]. In GAIL, the reward function measures the ability of the agent to imitate the expert's behavior. To do this, GAIL utilizes a discriminator network trained to distinguish between the expert's behavior and the agent's generated trajectories. The reward signal is then derived from the confusion of the discriminator, reflecting how difficult it is to tell whether a given trajectory is generated by the agent or the expert. By maximizing this reward signal, the agent is incentivized to generate trajectories that closely resemble the expert's behavior. Over the years, numerous improvements have been proposed to the original algorithm to improve its sample efficiency, scalability, and robustness [75], including changes to the discriminator's loss function [76] and switching from on-policy to off-policy agents [77]. In AIL, the objective is to enable the agent to generate trajectories that are similar to those of the expert. This involves the use of distance measures to quantify the similarity between the two. Different AIL methods employ various similarity measures to match the distribution over states and actions encountered by the agent with that of the expert [29]. For example, GAIL makes use of the Shannon-Jensen divergence, while some methods, such as AIRL [76], use the Kullback-Leibler divergence. However, recent research by Arjovsky et al. [78] has shown that replacing f-divergences with the Fig. 4: A low-dimensional state feature embedding is pre-trained using ranked demonstrations [66]. A linear combination of learned features is used to derive the reward function. A pairwise ranking likelihood is used by MCMC proposal evaluations to estimate the likelihood of preferences over demonstrations given a proposal (w). Utilizing the pre-computed embeddings of the ranked demonstrations makes MCMC sampling highly efficient; There is no need for data collection during inference or an MDP solver. Wasserstein distance through its dual formulation can result in improved training stability, a technique that several AIL methods have implemented [77, 79]. Given these developments, exploring new similarity measures holds the potential to discover novel AIL methods. Most AIL methods, just like GANs (generative adversarial networks) [80], use a min-max optimization approach to minimize the distance between the state-action distributions of the expert and agent, while maximizing a reward signal derived from the discriminator's confusion. However, this approach can be challenging to train due to issues such as vanishing gradients and convergence failure [28]. To overcome these challenges, methods such as primal wasserstein IL (PWIL) [29] have been developed, which approximates Wasserstein distances through a primal-dual approach. ## V Imitation from Observation The prevailing paradigm in IL assumes that the learner has access to both states and actions demonstrated by an expert [81]. However, this often necessitates collecting data explicitly for IL purposes [81]. In robotics, for instance, the expert must teleoperate the robot or move its joints manually (kinesthetic learning) [82], and in gaming, the expert may require a special software stack. In both cases, considerable operator expertise is required, and useful demonstrations are limited to those recorded under artificial conditions. 
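The adversarial loop of GAIL-style methods described earlier in this section can be sketched as follows: a discriminator is trained to separate expert and agent (state, action) pairs, and a reward derived from its confusion would be handed to an RL learner. The expert and agent batches below are random placeholders, and the policy-improvement step is stubbed out (the original work uses TRPO; later variants use PPO or off-policy learners).

```python
# Minimal sketch of the discriminator step and imitation reward in GAIL-style AIL.
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACT_DIM = 4, 2
disc = nn.Sequential(nn.Linear(STATE_DIM + ACT_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)

def expert_batch(n=64):                       # placeholder expert demonstrations
    s = torch.randn(n, STATE_DIM)
    return s, torch.tanh(s[:, :ACT_DIM])

def agent_batch(n=64):                        # placeholder rollouts of the current policy
    s = torch.randn(n, STATE_DIM)
    return s, torch.rand(n, ACT_DIM) * 2 - 1

for it in range(200):
    # 1) Discriminator step: expert pairs labeled 1, agent pairs labeled 0.
    es, ea = expert_batch(); gs, ga = agent_batch()
    logits_e = disc(torch.cat([es, ea], dim=1))
    logits_g = disc(torch.cat([gs, ga], dim=1))
    d_loss = (F.binary_cross_entropy_with_logits(logits_e, torch.ones_like(logits_e))
              + F.binary_cross_entropy_with_logits(logits_g, torch.zeros_like(logits_g)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Policy step (stub): the imitation reward comes from the discriminator's
    #    confusion and would be maximized by an RL algorithm such as PPO/TRPO.
    with torch.no_grad():
        reward = -F.logsigmoid(-disc(torch.cat([gs, ga], dim=1)))   # -log(1 - D)
    # rl_update(policy, states=gs, actions=ga, rewards=reward)      # hypothetical call

print("mean imitation reward on agent samples:", reward.mean().item())
```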
These limiting factors have motivated recent efforts in IfO [83], where the expert's actions are unknown. In contrast to previous methods, imitation from observation is a more natural way to learn from experts and is more in tune with how humans and animals approach imitation in general. It is common for humans to learn new behaviors by observing other humans without being aware of their low-level actions (e.g., muscle commands). Humans learn a wide range of tasks, from weaving to swimming to playing games, by watching videos online. While there may be huge gaps in body shapes, sensing modalities, and timing, they show an incredible ability to apply the knowledge gained from online demonstrations [9]. Enabling agents to learn from demonstrations without the action information makes a large number of previously inapplicable resources, such as videos on the Internet, available for learning [84]. Additionally, it opens up the possibility of learning from agents with different embodiments whose actions are unknown or cannot be matched. The use of state-only demonstrations for IL is not new [85]. However, recent deep learning and visual recognition developments [86] have equipped researchers with more powerful tools to approach the problem, particularly when dealing with raw visual observations [48]. Liu et al. [83] propose an imitation from observation method that learns an imitator policy from raw videos using context-aware translation. Their algorithm utilizes a context translation model that converts demonstrations from the expert's context (e.g., a third-person viewpoint) to the agent's context (e.g., a first-person viewpoint). The model is then used to predict the expert behavior in the context of the robot (Fig. 5). Using the predicted observations, a reward function is defined that is made up of a penalty for deviating from the expert's translated features - encoded from input observations - and a penalty for encountering observations that are different from the translated observations. RL is then used to optimize the derived reward function. There are two drawbacks that limit the applicability of this method. First, it is assumed that demonstrations from different contexts are aligned in time which is rarely the case in the real world [87]. Second, learning the translation model requires a large number of demonstrations [83]. A further limitation is that it cannot address systematic domain shifts, such as differences in embodiment [83]. Sermanet et al. [88] introduce a self-supervised representation learning method using time-contrastive networks (TCN) that is invariant to different viewpoints and embodiments. TCN trains a neural network to learn an embedding of each video frame to extract features invariant to context differences, such as the camera angle. By using a triplet loss function, two frames occurring at the same time but with different modalities (i.e., viewpoints) are brought closer together in the embedding space while the frames from distant time-steps but with a visually similar frame are pushed apart (Fig. 6). In order to construct the reward function, Euclidean distance is calculated between the embedding of a demonstration and the embedding of an agent's camera images. RL techniques are used to optimize the reward function for learning imitation policies. A limitation of this technique is that it requires multi-viewpoint video for training, which is not readily available (e.g., over the Internet). 
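A small sketch of the time-contrastive triplet objective used by TCN, described above, is given below: frames captured at the same time from different viewpoints are pulled together in the embedding space, a temporally distant frame from the same viewpoint is pushed away, and the resulting embedding distance can then serve as a viewpoint-invariant reward. The "video" tensors and encoder are toy placeholders rather than real multi-view footage.

```python
# Minimal sketch of a time-contrastive (triplet) embedding and a derived reward.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM, FRAME_DIM, MARGIN = 32, 128, 0.2
encoder = nn.Sequential(nn.Linear(FRAME_DIM, 64), nn.ReLU(), nn.Linear(64, EMB_DIM))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Two synchronized "views" of the same 100-frame video (placeholder data).
view_a = torch.randn(100, FRAME_DIM)
view_b = view_a + 0.05 * torch.randn(100, FRAME_DIM)   # same scene, other camera

for step in range(300):
    t = torch.randint(0, 100, (16,))
    t_far = (t + torch.randint(20, 80, (16,))) % 100    # temporally distant frames
    anchor = F.normalize(encoder(view_a[t]), dim=1)
    positive = F.normalize(encoder(view_b[t]), dim=1)       # same time, other view
    negative = F.normalize(encoder(view_a[t_far]), dim=1)   # same view, other time
    loss = F.triplet_margin_loss(anchor, positive, negative, margin=MARGIN)
    opt.zero_grad(); loss.backward(); opt.step()

def imitation_reward(demo_frame, agent_frame):
    # Reward: negative embedding distance between demonstration and agent frames.
    with torch.no_grad():
        d = F.normalize(encoder(demo_frame), dim=0) - F.normalize(encoder(agent_frame), dim=0)
    return -float(d.norm())

print("reward for matching frames:", imitation_reward(view_a[0], view_b[0]))
```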
BC from observation (BCO) [89] aims to minimize the amount of post-demonstration environment interactions required for training RL algorithms of prior methods by taking a behavior cloning approach. BCO first learns an inverse dynamics model by letting an agent, who initially follows a random policy, interact with its environment and collect data [81]. After that, the model is used to infer the missing actions of the expert demonstrations. A BC algorithm is then used to map the states to the inferred actions and solve the problem as a regular IL problem. Using this approach, it is necessary to gather large amounts of data to learn the dynamics model Fig. 5: A context translation model is trained on several videos of expert demonstrations [83]. The robot observes the context of the task it must perform during the learning process. The model then determines what an expert would do in the context of the robot. online, especially in high-dimensional problems. Subsequent work by [81] attempts to reduce the number of environment interactions required in BCO by learning a latent forward dynamics model in an offline manner [48]. The underlying assumption in this work is that predictable, though unknown, causes describe the classes of observed state transitions. The goal is to enable an agent to predict and imitate these latent causes [81]. To achieve this, a latent policy is learned, which estimates the probability that a latent action would be taken in an observed state. They then use a limited number of interactions with the environment to learn a mapping between the real-world actions the agent can take and latent actions identified by the model. Generative adversarial imitation from observation (GAIfO) [84] adapts the GAIL objective to IfO by matching the expert and agent's state-transition distributions. By adopting an adversarial approach, this method can overcome the covariate shift problem encountered in the previous approaches [81, 89]. In addition, it is capable of handling demonstrations that are not time-aligned, unlike previous approaches. Using this approach is most successful when the expert and the agent operate in the same environment, under the same dynamics. However, it becomes more challenging to match state-transition distributions when dynamics differ since the expert's state transitions might not even be feasible in the agent's environment [90]. Jaegle et al. [91] introduce a non-adversarial IRL from observations approach using likelihood-based generative models. In this method, conditional state transition probabilities are matched between expert and learner. According to the authors' findings, their approach of matching conditional state transition probabilities tends to focus less on irrelevant differences between the expert and the learner settings than adversarial approaches such as GAIfO, which matches joint state-next-state probabilities. In particular, they argue that conditional state probabilities are less prone to erroneously penalize features that are not present in the demonstrations but lead to correct transitions. Raychaudhuri et al. [87] propose a framework that learns state maps across the source and target domain from unpaired, unaligned demonstrations. This approach addresses embodiment, viewpoint, and dynamics mismatch. In order to preserve MDP dynamics during domain transformation, local and global alignments are performed. 
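The BCO pipeline described above, which learns an inverse dynamics model from the agent's own interaction, uses it to infer the missing actions in state-only demonstrations, and then runs ordinary BC, can be sketched on a toy linear system. All dynamics, data sizes, and models below are hypothetical placeholders.

```python
# Minimal sketch of BC from observation (BCO) on a toy linear system.
import numpy as np

rng = np.random.default_rng(3)
STATE_DIM, ACT_DIM = 4, 2
A = np.eye(STATE_DIM) * 0.9
B = rng.standard_normal((ACT_DIM, STATE_DIM)) * 0.3

def step(s, a):
    return s @ A + a @ B

# (1) Random interaction to fit an inverse dynamics model f(s, s') -> a.
S, S_next, Acts = [], [], []
s = rng.standard_normal(STATE_DIM)
for _ in range(500):
    a = rng.uniform(-1, 1, ACT_DIM)
    s2 = step(s, a)
    S.append(s); S_next.append(s2); Acts.append(a)
    s = s2
X = np.hstack([np.vstack(S), np.vstack(S_next)])
W_inv, *_ = np.linalg.lstsq(X, np.vstack(Acts), rcond=None)

# (2) A state-only expert demonstration (actions unknown to the learner).
expert_states = [rng.standard_normal(STATE_DIM)]
for _ in range(100):
    expert_states.append(step(expert_states[-1], np.tanh(expert_states[-1][:ACT_DIM])))
pairs = np.hstack([np.vstack(expert_states[:-1]), np.vstack(expert_states[1:])])
inferred_actions = pairs @ W_inv                     # plug in the inverse model

# (3) Ordinary BC on the states and the inferred actions.
W_pi, *_ = np.linalg.lstsq(np.vstack(expert_states[:-1]), inferred_actions, rcond=None)

true_actions = np.vstack([np.tanh(s[:ACT_DIM]) for s in expert_states[:-1]])
print("mean inferred-action error:", float(np.abs(inferred_actions - true_actions).mean()))
print("BC policy fitted on inferred actions, weights shape:", W_pi.shape)
```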
In local alignment, adversarial training is used to minimize divergence between the state-transition distributions of the true and transferred trajectories. Meanwhile, a learned temporal position function is used to enforce global alignment to ensure that states are placed in a consistent temporal position across the two domains. Finally, given the set of transferred demonstrations, BCO is used to learn the final policy. As with [83], this method relies on proxy tasks, i.e., expert demonstrations from both domains, which limits its application. Aytar et al. [9] present a novel self-supervised framework for learning to play hard exploration Atari games without ever being explicitly exposed to the Atari environment by watching YouTube videos (Fig. 7). Learning from YouTube videos poses several challenges due to the existence of domain-specific variations (e.g., in color or resolution), and a lack of frame-by-frame alignment. To address these challenges, they first use self-supervised classification tasks, constructed over both vision and audio to map unaligned videos from multiple sources to a common representation. Then, a single YouTube video is embedded in this representation, and a sequence of checkpoints are placed along the embedding. This is used to create a reward function that encourages the agent to imitate human gameplay. During policy training, the agent is rewarded only when it reaches these checkpoints. Brown et al. [97] introduce an IRL from observation technique for extrapolating the expert's intent from suboptimal ranked demonstrations. This work aims to improve the performance over a suboptimal expert in high-dimensional tasks by inferring the expert's intentions. They learn a state-based reward function such that a greater total return is assigned to higher-ranked trajectories. Utilizing ranking to construct a reward function in this way enables identifying features that are correlated with rankings, allowing for potentially better-than-demonstrator performance. Given the learned reward function, RL is used to optimize a policy. Utilizing large amounts of navigation data from YouTube, [98] proposes a framework for learning scalable driving. First, a model is trained on a small labeled dataset to map monocular images to Bird's Eye View (BEV), facilitating learning from the unconstrained nature of YouTube videos (e.g. in viewpoints or camera parameters). Since many publicly available driving datasets include action labels, this assumption is reasonable. This trained model is used to generate pseudo-labels over a large unlabeled dataset. Lastly, a generalized policy is trained on the pseudo-labeled dataset and fine-tuned on the clean labels of the small labeled dataset. ## VI Challenges and Limitations ### _Imperfect Demonstrations_ A common assumption in IL methods is that the demonstrations will be optimal, performed by an expert demonstrator Fig. 6: The embedding space encourages co-occurring frames from different viewpoints to be in close proximity to each other, while images captured from the same viewpoint but at different times should be far apart [88]. 
\begin{table} \begin{tabular}{c c c c c c c} \hline \hline Ref & Datasets & Inputs & Learning Type & Online/ Offline & Online & Application \\ \hline [93] & Sim & State, Image & Interactive IL & Online & Yes & \begin{tabular}{c} Robotic Locomotion, \\ Dependency Parsing \\ \end{tabular} \\ \hline [17] & Sim & State, Image & Regularized BC & Online & No & \begin{tabular}{c} Car Racing, Atari Games, \\ Locomotion Control Tasks \\ \end{tabular} \\ \hline [14] & Sim & State, Image & BC\DAagger & Online & Yes & Games, Handwriting Recognition \\ \hline [21] & Sim, Real & State & BC\HG-DAagger & Online & Yes & Autonomous Driving \\ \hline [22] & Sim & State & \begin{tabular}{c} BC\\\ Human-gated Interactive IL \\ \end{tabular} & Online & Yes & Robotic Manipulation \\ \hline [19] & Sim & State & Human-in-the-loop RL & Online & Yes & Autonomous Driving \\ \hline [23] & Sim & Image & BC\ SafeDAagger & Online & Yes & Autonomous Driving \\ \hline [24] & Sim, Real & State, Image & BC\ LazyDagger & Online & Yes & Robotic Locomotion, Fabric Manipulation \\ \hline [25] & Sim & State & BC\ EnsembleDagger & Online & Yes & Inverted Pendulum, Locomotion \\ \hline [20] & Sim, Real & State, Image & BC\ThriftyDAagger & Online & Yes & Peg Insertion, Cable Routing \\ \hline [26] & Sim & State & \begin{tabular}{c} IL via \\ Expert Support Estimation \\ \end{tabular} & Online & No & \begin{tabular}{c} Robotic Locomotion, \\ Autonomous Driving \\ \end{tabular} \\ \hline [27] & Sim & State, Image & \begin{tabular}{c} IL via \\ Expert Support Estimation \\ \end{tabular} & Online & No & \begin{tabular}{c} Atari Games, \\ Continuous Control Tasks \\ \end{tabular} \\ \hline [31] & Sim, Real & State & \begin{tabular}{c} IL via \\ Expert Support Estimation \\ \end{tabular} & Online & No & Autonomous Driving \\ \hline [32] & Sim & State & BC & Online & No & Robotic Manipulation \\ \hline [33] & Sim & State & Offline Imitation Learning & Offline & No & Robotic Locomotion \\ \hline [35] & Sim & State & Causal Graph-Parameterized & Online & Yes & \begin{tabular}{c} Atari Games, \\ Robotic Locomotion \\ \end{tabular} \\ \hline [36] & Sim & State & BC & Offline & No & Robotic Locomotion \\ \hline [34] & Sim & State, Image & BC & Offline & No & Robotic Locomotion, Autonomous Driving \\ \hline [38] & Sim, Real & State, Image & Implicit BC & Offline & No & Robotic Manipulation \\ \hline [94] & Sim, Real & State & Maximum Entropy IRL & Online & No & Path Planning, Gridworld \\ \hline [45] & Sim & State & Feature Based IRL & Online & No & Autonomous Driving, Gridworld \\ \hline [47] & Real & State & Maximum Entropy IRL & Online & No & \begin{tabular}{c} Predicting Driving Behavior, \\ Route Recommendation \\ \end{tabular} \\ \hline [54] & Real & State & Maximum-margin IRL & Online & No & Route Planning \\ \hline [55] & Sim, Real & State, Image & \begin{tabular}{c} Maximum-margin IRL\\\ MMPBOOST \\ \end{tabular} & Online & No & \begin{tabular}{c} Path Planning, Legged Locomotion, \\ Driving Obstacle Detection/Avoidance \\ \end{tabular} \\ \hline [56] & Sim, Real & State, Image & Maximum-margin IRL & Online & No & \begin{tabular}{c} Footstep Prediction, Grasp Prediction, \\ Navigation Task \\ \end{tabular} \\ \hline [57] & Sim & State & Maximum-margin IRL & Online & No & Car Driving Game \\ \hline [58] & Sim & State & Maximum Entropy IRL & Online & No & 2-D Point Mass Control System \\ \hline [59] & Real & State & Maximum Entropy IRL & Online & No & Robotic Manipulation \\ \hline [60] & Sim & State & Relative Entropy IRL & Online & No & 
Car Racing, Gridworld, Game \\ \hline [61] & Sim & State & Maximum Entropy Deep IRL & Online & No & Objectworld, Binaryworld \\ \hline [62] & Sim, Real & State & Maximum Entropy IRL & Online & No & Robotic Manipulation \\ \hline [63] & Sim & State & Bayesian IRL & Online & No & Random Generated MDPs \\ \hline [64] & Sim & State & \begin{tabular}{c} Bayesian IRL\\\ MAP Inference \\ \end{tabular} & Online & No & Gridworld, Simplified Car Racing \\ \hline \hline \end{tabular} \end{table} TABLE I: Summary of existing research on imitation learning \begin{table} \begin{tabular}{c c c c c c c} \hline \hline [65] & Sim & State & Nonlinear Bayesian IRL & Online & No & Objectworld, Highway Driving \\ \hline [66] & Sim & Image & Bayesian IRL & Online & No & Atari Games \\ \hline [67] & Sim, Real & State & Bayesian IRL & Offline & No & \begin{tabular}{c} Medical Information Dataset, \\ Physics-based Control \\ \end{tabular} \\ \hline [69] & Sim & Image & IRL & Online & Yes & \begin{tabular}{c} Gridworld, Random Generated MDPs, \\ Chain MDP \\ \end{tabular} \\ \hline [68] & Sim & State & IRL\(\backslash\)Active Exploration & Online & No & \begin{tabular}{c} Random MDP, Gridworld, \\ Chain MDP, Double Chain \\ \end{tabular} \\ \hline [46] & Sim & State & Generative Adversarial IL & Online & No & Physics-based Control, Robotic Locomotion \\ \hline [76] & Sim & State & Adversarial IRL & Online & No & \begin{tabular}{c} Random Generated MDPs, \\ Continuous Control Tasks \\ \end{tabular} \\ \hline [77] & Sim & State & Adversarial IRL & Online & No & Robotic Locomotion and Manipulation \\ \hline [95] & Sim & State & Bayes-GAIL & Online & No & Robotic Locomotion \\ \hline [96] & Sim & State & IRL & Online & No & Robotic Locomotion \\ \hline [29] & Sim & State, Image & Adversarial IRL & Online & No & Robotic Locomotion and Hand Manipulation \\ \hline [79] & Sim & State, Image & Generative Adversarial IL\(\backslash\) InfoGAIL & Online & No & \begin{tabular}{c} Synthetic 2D Example, \\ Autonomous Highway Driving \\ \end{tabular} \\ \hline [83] & Sim, Real & Image & Imitation from Observation & Online & No & Robotic Manipulation \\ \hline [88] & Sim, Real & Image & IfO\(\backslash\)Self-supervised Learning & Online & No & Robotic Manipulation, Human Pose Imitation \\ \hline [89] & Sim & State & BC from Observation & Online & No & Physics-based Control, Robotic Locomotion \\ \hline [81] & Sim & State, Image & Imitation from Observation & Online & No & Physics-based Control, CoinRun Game \\ \hline [84] & Sim & State, Image & Generative Adversarial Imitation from Observation & Online & No & Physics-based Control, Robotic Locomotion \\ \hline [90] & Sim & State & Imitation from Observation & Online & No & Robotic Locomotion \\ \hline [91] & Sim & State & IRL from Observations & Online & No & Robotic Locomotion \\ \hline [97] & Sim & State & Cross-domain IL from Observations & Offline & No & Physics-based Control, Robotic Locomotion \\ \hline [9] & Sim & Image & Imitation from Observation & Online & No & Atari Games \\ \hline [97] & Sim & State, Image & IRL from Observations & Online & No & Atari Games, Robotic Locomotion \\ \hline [98] & Sim, Real & Image & Conditional IL from Observations & Offline & No & Autonomous Driving \\ \hline [99] & Sim & State & Importance Weighting IL, & Online & No & Robotic Locomotion \\ \hline [100] & Sim & State & BC from Noisy Demonstrations & Offline & No & Robotic Locomotion \\ \hline [101] & Sim & State, Image & Weighted GAIL & Online & No & Atari Games, Robotic Locomotion \\ 
\hline [102] & Sim & State & Weighted BC & Offline & No & Robotic Locomotion \\ \hline [103] & Sim & State & BC & Offline & No & \begin{tabular}{c} MiniGrid Environments, Robotic Manipulation, \\ Chess Game-endings \\ \hline [104] & Sim & State & GAIL with Modified Objective & Online & No & Robotic Locomotion \\ \hline [105] & Sim & State & GAIL & Online & No & Physics-based Control \\ \hline [106] & Sim, Real & Image & Cross-emboiment IRL & Online & No & Robotic Manipulation \\ \hline [107] & Sim & State & IL via Optimal Transport & Online & No & \begin{tabular}{c} Physics-based Control, Robotic Locomotion, \\ 2D Maze Navigation \\ \end{tabular} \\ \hline [108] & Sim & State & BC & Offline & No & Robotic Manipulation, Physics-based Control \\ \hline \hline \end{tabular} \end{table} TABLE I: Summary of existing research on imitation learning (Continued) [2]. However, this assumption is too restrictive when it comes to learning from demonstrations in a variety of cases [2]. Firstly, it can be difficult to obtain large numbers of high-quality demonstrations from human experts [109, 110]. In many real-world tasks, this would be impossible for humans due to the amount of time and effort required. Additionally, humans are prone to making mistakes for various reasons, such as the presence of distractions, or limited observability of the environment [99, 100]. Secondly, it is necessary to leverage the scale and diversity of crowd-sourced datasets to learn robust effective IL policies [111]. However, a crowd-sourced dataset will inevitably have a wide range of behavior optimality since it is collected from users with varying levels of expertise. The naive solution to imperfect demonstrations would be to discard the non-optimal ones. However, this screening process is often impractical since it requires significant human effort [100]. Therefore, researchers have been increasingly interested in developing methods that can learn from imperfect demonstrations. Wu et al. [99] present two general approaches to address imperfect demonstrations by utilizing both confidence-scored and unlabeled data: two-step importance weighting IL (2IWIL) and generative adversarial IL with imperfect demonstration and confidence (IC-GAIL). Both approaches assume that a fraction of demonstrations are annotated with confidence scores (i.e. the probability that a given trajectory is optimal). 2IWIL is a two-step approach that first uses a semi-supervised classifier to generate confidence scores for the unlabeled demonstrations, and then performs standard GAIL with reweighted distribution [102]. To avoid error accumulation in two steps, IC-GAIL forgoes learning a classifier and performs occupancy measure matching with unlabeled demonstrations. Sasaki et al. [100] propose an offline BC algorithm to learn from noisy demonstrations, obtained from a noisy expert, without any screening or annotations associated with the non-optimal demonstrations. The key idea is to leverage the learned policy to reweight the samples in the next iteration of weighted BC. The noisy expert action distribution is assumed to be a weighted mixture of two distributions: the action distribution of an optimal expert and a non-optimal one. The goal is to change the weights so that the noisy expert action distribution mode gets closer to the optimal expert action distribution mode. This is achieved by reusing the old policy (i.e. the policy optimized in the previous iteration) as the weights for action samples in the weighted BC objective. 
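The reweighting idea shared by the confidence-based methods above can be illustrated with a weighted BC objective: each (state, action) pair is down-weighted according to how likely it is to be non-optimal. Here the confidence scores are simply given; 2IWIL would estimate them with a semi-supervised classifier, and the iterative approach of Sasaki et al. would derive them from the previous policy. The mixture data and linear policy are toy placeholders.

```python
# Minimal sketch of confidence-weighted BC on a mixture of expert and non-expert data.
import numpy as np

rng = np.random.default_rng(4)
STATE_DIM, ACT_DIM, N = 4, 2, 400
W_true = np.ones((STATE_DIM, ACT_DIM))

states = rng.standard_normal((N, STATE_DIM))
is_expert = rng.random(N) < 0.5
actions = states @ W_true                                   # optimal actions
bad = ~is_expert                                            # non-optimal demonstrators
actions[bad] = states[bad] @ (-W_true) + 0.1 * rng.standard_normal((bad.sum(), ACT_DIM))

# Confidence scores: probability that a pair comes from the optimal expert
# (assumed given here; in practice they would be estimated).
conf = np.where(is_expert, 0.9, 0.1)

def weighted_bc(weights):
    # Weighted least squares = BC with per-sample importance weights.
    sw = np.sqrt(weights)[:, None]
    W, *_ = np.linalg.lstsq(states * sw, actions * sw, rcond=None)
    return W

W_plain = weighted_bc(np.ones(N))        # naive BC on everything
W_weighted = weighted_bc(conf)           # confidence-weighted BC

print("naive BC error:    ", np.linalg.norm(W_plain - W_true))
print("weighted BC error: ", np.linalg.norm(W_weighted - W_true))
```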
However, this approach only converges to the optimal policy when optimal demonstrations constitute the majority of the data. Wang et al. [101] investigate how to weight imperfect demonstrations in GAIL without requiring auxiliary information from an oracle. An automatic weight prediction method is proposed to assess the quality and significance of each demonstration for training. They demonstrate that the weight can be accurately estimated using both the discriminator and the agent policy in GAIL. In the training procedure, the weight estimation is conducted first to determine weight for each demonstration. Using weighted GAIL, the agent policy is then trained with weighted demonstrations. These two procedures interact alternately and are optimized as a whole. Kim et al. [102] aim to overcome the distributional shift problem caused by the lack of sufficient expert demonstrations by using supplementary imperfect demonstrations with unknown optimality levels. They regularize a distribution-matching objective of IL by a KL divergence between the agent distribution and a mixture of expert and imperfect distributions. An optimal state-action distribution of this regularized objective is obtained using a dual-program technique [112]. Given the optimal state-action distribution, the expert policy is extracted by performing weighted BC. Beliaev et al. [103] introduce IL by Estimating Expertise of Demonstrators (ILEED). It leverages information about demonstrators' identities to infer their expertise levels in an unsupervised manner. Each demonstrator is assigned a state-dependent expertise value, which indicates which demonstrators perform better in specific states, allowing them to combine their strengths in different states. ILED develops and optimizes a joint model over a learned policy and expertise levels. As a result, the model is able to learn from the optimal behavior of each demonstrator and filter out the suboptimal behavior. Expertise levels are modeled by the inner product of two embeddings: an state embedding, and a demonstrator embedding. Each dimension of the embeddings vector corre Fig. 7: For the path in (a), t-SNE projections [92] of trajectories using the proposed embedding (c) and raw pixels (d) are shown [9]. In b, an example frame of MONTEZUMA’S REVENGE is compared using four different domains: the Arcade Learning Environment, and three YouTube videos. Based on the embedding space, it can be seen that the four trajectories are well-aligned. sponds to a latent skill, with the state embedding having a weighting of how relevant that skill is in acting correctly at that state and the demonstrator embedding representing how adept the demonstrator is at that skill (Fig. 8). ### _Domain Discrepancies_ The majority of prior research assumes that the expert and the agent operate under the same state and action space [107]. This assumption makes it easier to manually specify one-to-one correspondences between the actions of the expert and the agent. However, this will restrict the application of these algorithms to simple scenarios in which expert demonstrations come from the agent domain. In recent years, there has been increased interest in IL under a more relaxed and realistic assumption: given demonstrations of a task in the expert domain, train the agent to learn to perform the task optimally in its own domain [107]. 
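A simplified reading of the expertise model described above for ILEED is sketched below: expertise at a state is a sigmoid of the inner product between a state embedding and a demonstrator embedding, and each observed action is modeled as coming from the shared policy with that probability and uniformly at random otherwise. The discrete toy data and this particular likelihood are assumptions for illustration only, not the authors' exact formulation.

```python
# Minimal sketch of jointly learning a shared policy and per-demonstrator expertise.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_STATES, N_ACTIONS, N_DEMONSTRATORS, EMB = 10, 4, 3, 8
policy_logits = nn.Parameter(torch.zeros(N_STATES, N_ACTIONS))
state_emb = nn.Parameter(torch.randn(N_STATES, EMB) * 0.1)
demo_emb = nn.Parameter(torch.randn(N_DEMONSTRATORS, EMB) * 0.1)
opt = torch.optim.Adam([policy_logits, state_emb, demo_emb], lr=0.05)

# Toy dataset: demonstrator 0 is near-optimal, demonstrator 2 is mostly random.
optimal_action = torch.arange(N_STATES) % N_ACTIONS
def sample_data(n=512):
    d = torch.randint(0, N_DEMONSTRATORS, (n,))
    s = torch.randint(0, N_STATES, (n,))
    p_opt = torch.tensor([0.95, 0.6, 0.25])[d]
    rand_a = torch.randint(0, N_ACTIONS, (n,))
    a = torch.where(torch.rand(n) < p_opt, optimal_action[s], rand_a)
    return s, a, d

for it in range(400):
    s, a, d = sample_data()
    rho = torch.sigmoid((state_emb[s] * demo_emb[d]).sum(dim=1))      # expertise
    pi = F.softmax(policy_logits[s], dim=1)
    # Observed action: shared policy with prob. rho, uniform noise otherwise.
    p_a = rho * pi.gather(1, a.unsqueeze(1)).squeeze(1) + (1 - rho) / N_ACTIONS
    loss = -torch.log(p_a + 1e-8).mean()
    opt.zero_grad(); loss.backward(); opt.step()

avg_rho = torch.sigmoid(state_emb @ demo_emb.t()).mean(dim=0)
print("inferred average expertise per demonstrator:", avg_rho.detach().numpy().round(2))
```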
This relaxed setting facilitates the collection of demonstrations by removing the requirement for in-domain expert demonstrations, improving the efficiency and scalability of IL. A variety of solutions have been proposed to address three main types of domain discrepancies: dynamics [90, 104], viewpoint [88, 105], and embodiment mismatch [106, 107, 113]. The transfer of knowledge between different domains in IL research often involves learning a mapping between the state-action spaces. Recent works [83, 88] utilize paired and time-aligned demonstrations from both domains (expert and agent) to learn a state mapping or encoding to a domain invariant feature space. Following this, they perform an RL step to learn the final policy on the given task. These studies are limited in their application due to the limited availability of paired demonstrations and the high cost of RL procedures [108]. To overcome these limitations, Kim et al. [108] propose a general framework to learn state and action maps from unpaired and unaligned demonstrations while having access to an online expert. In addition, they eliminate the need for an expensive RL step by leveraging the action map to perform zero-shot imitation. The work of [87] extends [108] to the imitation form observation setting and also eliminates the need for an online expert in [108]. All these methods rely on proxy tasks, which limits their applicability in real-world scenarios. Stadie et al. [105] propose an adversarial framework for viewpoint-agnostic imitation that uses a discriminator to distinguish data coming from different viewpoints and maximizes domain confusion without proxy tasks. Zakka et al. [106] adopt a goal-driven approach that focuses on imitating task progress rather than matching fine-grained structural details. Several of these approaches have already been discussed in previous sections. The following is a detailed discussion of a few of the most recent methods. Chae et al. [104] provide a framework for learning policies that can perform well under perturbed environment dynamics. The objective is to train a policy that is robust to continuous dynamics variation, using only a few samples from the continuum of environment dynamics. The sampled environments are used during both the demonstration collection phase and the policy interaction phase (Fig. 9). The problem is then formulated as the minimization of the weighted average of Jensen-Shannon divergences between the multiple expert policies and the agent policy. Cross-embodiment IRL (XIRL) [106] attempts to extract an agent-invariant definition of a task from videos of different agents executing the same task differently due to embodiment differences. XIRL uses temporal cycle consistency (TCC) to learn visual embeddings, which identify key moments in videos of varying lengths and cluster them to encode task progression. In order to learn an embodiment-invariant reward function, XIRL uses the distance from a single goal state in the TCC embedding space. This method can be applied to any number of embodiments or experts (regardless of their skill level) since it does not require the manual pairing of video frames between the expert and the learner. Fickinger et al. [107] examine how expert demonstrations can be used to train an imitator agent with a different embodiment without relying on explicit cross-domain latent space [106] or resorting to any form of proxy tasks [83, 87, 88]. 
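The reward construction used by embedding-based cross-embodiment methods such as XIRL, discussed above, reduces to a distance in a learned task-progress space. The sketch below assumes the self-supervised pre-training (e.g., with temporal cycle-consistency) has already produced an encoder; here an untrained placeholder network and random frames stand in for it, so only the reward computation itself is illustrated.

```python
# Minimal sketch of an embedding-space, goal-distance imitation reward.
import torch
import torch.nn as nn

FRAME_DIM, EMB_DIM = 128, 32
# Placeholder for an encoder obtained from self-supervised pre-training.
encoder = nn.Sequential(nn.Linear(FRAME_DIM, 64), nn.ReLU(), nn.Linear(64, EMB_DIM))

# Final frames of several demonstration videos (placeholders) define the goal.
demo_last_frames = torch.randn(5, FRAME_DIM)
with torch.no_grad():
    goal = encoder(demo_last_frames).mean(dim=0)

def reward(agent_frame):
    # Embodiment-invariant reward: progress toward the goal in embedding space.
    with torch.no_grad():
        return -torch.norm(encoder(agent_frame) - goal).item()

print("reward for a random agent frame:", reward(torch.randn(FRAME_DIM)))
```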
Instead, they use the Gromov-Wasserstein distance between state-action occupancies of the expert and the agent to find isometric transformations that preserve distance measures between the two domains. Given trajectories from the expert and the agent domain, pseudo-rewards are computed based on the degree to which distances from a state to its neighbors in the agent domain are preserved in the expert domain. Using Fig. 8: IL by estimating expertise of demonstrators [103]. Left: Skills associated with states are encoded by state embeddings. Middle: Demonstrators’ expertise, \(\rho\), at a particular state is determined by the state embedding and the demonstrator embedding. Right: By utilizing the expertise levels, the model improves the learned policy, thereby improving the estimation of the state/demonstrator embeddings and the expertise level. these pseudo-rewards, the policy is optimized using an RL algorithm. ## VII Opportunities and Future Work This survey paper provides a comprehensive overview of the field of IL, exploring its algorithms, categorizations, developments, and challenges. The paper starts by presenting a categorization of IL algorithms, identifying two general learning approaches, namely BC and IRL, and discussing their relative benefits and limitations. Additionally, the paper highlights the benefits of integrating adversarial training into IL and evaluates the current progress in the AIL field. The paper also introduces a novel technique called IFO that aims to learn from state-only demonstrations. Through the examination of various IL algorithms, we have gained valuable insights into their strengths and limitations and identified some of the key challenges and opportunities for future research. One of the significant challenges across all categories of IL approaches is the need to collect diverse and large-scale demonstrations, which is crucial for training a generalizable policy that can be applied in the real world [111]. However, this poses a challenge, as readily available demonstration resources such as online videos present additional difficulties such as the varying levels of expertise among the demonstrators. Another challenge in IL research is developing methods that enable agents to learn across domains with differences in dynamics, viewpoint, and embodiment. Overcoming these challenges is essential if we are to teach agents to learn from experts effectively and apply the insights from IL research to real-world scenarios. Therefore, future research should focus on developing algorithms that can learn from imperfect demonstrations, extract useful information, and enable cross-domain learning. Despite these challenges, the field of IL presents exciting opportunities for future research. As the field of AI continues to evolve and mature, we believe that IL will play a critical role in enabling agents to learn from demonstrations, adapt to new tasks and environments, and ultimately achieve more advanced levels of intelligence, paving the way for real-world applications of AI. ## Acknowledgments This research was partially supported by the Australian Research Council's Discovery Projects funding scheme (project DP190102181 and DP210101465).
2301.01758
An Ensemble Mobile-Cloud Computing Method for Affordable and Accurate Glucometer Readout
Despite essential efforts towards advanced wireless medical devices for regular monitoring of blood properties, many such devices are not available or not affordable for everyone in many countries. When using ordinary devices instead, patients have to log data into a mobile health-monitoring app manually. It causes several issues: (1) clients reportedly tend to enter unrealistic data; (2) typing values several times a day is bothersome and causes clients to leave the mobile app. Thus, there is a strong need to use now-ubiquitous smartphones, reducing error by capturing images from the screen of medical devices and extracting useful information automatically. Nevertheless, there are a few challenges in its development: (1) data scarcity has led to impractical methods with very low accuracy: to our knowledge, only small datasets are available in this case; (2) accuracy-availability tradeoff: one can execute a less accurate algorithm on a mobile phone to maintain higher availability, or alternatively deploy a more accurate and more compute-intensive algorithm on the cloud, at the cost of lower availability in poor/no connectivity situations. We present an ensemble learning algorithm, a mobile-cloud computing service architecture, and a simple compression technique to achieve higher availability and faster response time while providing higher accuracy by integrating cloud- and mobile-side predictions. Additionally, we propose an algorithm to generate synthetic training data, which facilitates utilizing deep learning models to improve accuracy. Our proposed method achieves three main objectives: (1) 92.1% and 97.7% accuracy on two different datasets, improving on previous methods by 40%, (2) reducing required bandwidth by 45x with 1% drop in accuracy, and (3) providing better availability compared to mobile-only, cloud-only, split computing, and early exit service models.
Navidreza Asadi, Maziar Goudarzi
2023-01-04T18:48:53Z
http://arxiv.org/abs/2301.01758v1
# An Ensemble Mobile-Cloud Computing Method for Affordable and Accurate Glucometer Readout ###### Abstract Despite essential efforts towards advanced wireless medical devices for regular monitoring of blood properties, many such devices are not available or not affordable for everyone in many countries. When using ordinary devices instead, patients have to log data into a mobile health-monitoring app manually. According to medical specialists, this causes several issues: (1) due to the direct human intervention, it is prone to errors, and clients reportedly tend to enter unrealistic data; (2) typing values several times a day is bothersome and causes clients to leave the mobile app. Thus, there is a strong need to use now-ubiquitous smartphones, reducing error by capturing images from the screen of medical devices and extracting useful information automatically. Nevertheless, there are a few challenges in its development: (1) data scarcity has led to impractical methods with very low accuracy: to our knowledge, only small datasets are available in this case; (2) accuracy-availability tradeoff: one can execute a less accurate algorithm on a mobile phone to maintain higher availability, or alternatively deploy a more accurate and more compute-intensive algorithm on the cloud, however, at the cost of lower availability in poor/no connectivity situations. We present an ensemble learning algorithm, a mobile-cloud computing service architecture, and a simple compression technique to achieve higher availability and faster response time while providing higher accuracy by integrating cloud- and mobile-side predictions. Additionally, we propose an algorithm to generate synthetic training data which facilitates utilizing deep learning models to improve accuracy. Our proposed method achieves three main objectives: (1) \(92.1\%\) and \(97.7\%\) accuracy on two different datasets, improving previous methods by \(\sim\)\(40\%\), (2) reducing required bandwidth by \(45\times\) with \(\sim\)\(1\%\) drop in accuracy, and (3) providing better availability compared to mobile-only, cloud-only, split computing, and early exit service models. Mobile Computing, Ensemble Learning, Data Generation, Deep Learning, Smart Health ## 1 Introduction Many mHealth/uHealth medical devices, especially those affordable in middle/low-income countries, show the measured quantity on a digital or seven-segment screen alongside other additional information such as date, time, diagrams and measurement units. In most commonly used mHealth services, particularly for diabetics, patients are required to manually type sensed values into their mobile app. As illustrated in Fig. 1(a), a client first reads a value from the medical device, opens the app, navigates to the logging interface, and eventually logs the information through typing. These steps should be repeated every time they log a measurement. ### _Motivation_ According to the reports [1, 2] as well as our own experience from our diabetes management application iDia [3], this procedure has a few drawbacks: (1) Manual logging multiple times a day is deterring; it is bothersome and time-consuming, and based on the feedback we have received, leads users to eventually lose their interest in using the app. (2) It is prone to human errors. More importantly, the observations show that patients are tempted to enter fake information that is more acceptable and closer to normal values. This can have considerable negative effects on the whole process of prevention, control, and treatment.
Currently, there are two better approaches: (1) Some medical devices are able to transmit their data to mobile devices via distant communication technologies (e.g., Bluetooth), facilitating the logging procedure (Fig. 1(b)). Nevertheless, they are far more expensive and in some cases less accurate [4]. More importantly, most of them can only interact with their own software applications and do not permit third-party apps to receive data. In addition, different versions of transmission technologies cause incompatibility between different mobile and medical devices. Despite a potentially bright future, these devices currently hold less than one percent of the market [5]. This number is even lower in developing countries. (2) An alternative which has grown interest in academia, uses image capturing and computing capabilities of mobile devices (i.e., smartphones); they are available to almost everybody and are able to perform light-weight computing tasks. Fig. 1(c) illustrates this alternative: the mobile phone is used to capture an image of the medical device, and then image processing is applied to recognize the sensed values automatically. In this approach (Fig. 2), each digit is considered as an object having a region of interest (RoI) and an ordered sequence of digits (i.e., RoIs) forms a _read_ string. A correct read means not only all digits are predicted correctly, but also in the same order as displayed on the device. In this paper we follow this methodology for its broader applicability especially in middle/low-income countries. ### _Challenges_ **Accuracy.** Although automated image-based reading may seem basic and easy, there are a few points that prove it a challenging problem in this specific case: (1) the current state-of-the-art [6], which has largely improved the previous algorithms, uses only two medical devices, yet achieves just **51.5%** accuracy, meaning it misreads almost half of the captured images. (2) as we discuss in Sections 2 and 3, the diversity of medical devices and a variety of structural and visual differences as well as noisy information on their screens (e.g., date and time) make it extremely difficult to read sensed value with business-as-usual image processing techniques. **Data Scarcity.** The accuracy of deep learning algorithms relies on either big volume of annotated data to be trained in a supervised manner or a model pre-trained on a related task. To the best of our knowledge, there exists neither a big dataset nor a related pre-trained model in our particular task of imaged-based reading. Generative models (e.g., GANs) also would not help there because they require similar data to train, which is not applicable in our case. Additionally, it might be really difficult (if not impossible) to generate automatic annotation for our usecase. **Resource Constraints.** Current state-of-the-art deep learning models are compute-intensive and need to be deployed on specialized cloud infrastructures. Thus, they introduce new challenges, including availability during poor network conditions, that is genuinely an expected issue in our target under-developed countries. Edge Computing [7], referring to every device near data source with some compute capacity (e.g., smartphones), is considered as a promising approach to improve availability and quality of service (QoS) by reducing delay. Mobile devices are usually resource-constrained, and therefore, can only run simpler deep neural network (DNN) models with much lower accuracy. 
Cloud Computing, however, is at the opposite side. ### _Main Contributions_ We propose practical solutions to address the challenges above. We present an ensemble mobile-cloud computing architecture to get the best of both worlds: higher availability on the mobile, as well as higher accuracy and enhanced performability (i.e., a measure of the level of performance/service-quality of the system) by integrating cloud and mobile modules (Fig. 3, described in §3). The following items are our main contributions: (1) A hybrid mobile-cloud service architecture, together with a compatible ensemble deep learning algorithm. This enables an accuracy-availability tradeoff based on network connectivity. We addressed the challenges of combining predictions of two separate models, e.g., differences and overlaps in the identified bounding boxes for each data element. (2) A simple yet effective compression method. Combined with our ensemble model, this provides higher accuracy despite little communicated data. (3) A high-fidelity data synthesizer algorithm, making it possible to utilize deep learning models. This has basically turned the challenge into an opportunity: the variety of glucometer models, data formats, units, fonts, etc. is a challenge to conventional methods, but we used it in our data synthesizer mechanism to produce enough reasonable data to train high-accuracy models that cover many varieties, including those not seen before. Our proposed method achieves \(92.1\%\) and \(97.7\%\) accuracy on two real-world datasets, and improves previously published results by more than \(40\%\). It reduces the required bandwidth by \(45\times\), and maintains higher availability compared to mobile-only, cloud-only, split computing, and early exit service models. Our proposal can be easily extended to other usecases with minor modifications. The rest of this paper is organized as follows: In Section 2, we review related work. In Section 3, we present our proposed method. Section 4 explains our dataset generation algorithm. We evaluate our methods and algorithms in Section 5 and conclude in Section 6. ## 2 Related Work We separate related work into two different parts. The first one presents algorithms for reading sensed values or detection and recognition of digits on digital or seven-segment screens. The second one summarizes the studies that attempt to deploy part or all of a deep learning model on the mobile. Fig. 1: Logging methods. Removing human interventions (red arrows) is preferred to avoid false logs. Fig. 2: Image-based method, showing Regions-of-Interest (RoIs) and the correct labels. The expected correct readout is ‘11.9 mmol/L’. ### _Image-Based Automated Reading_ Most of the proposed methods break the problem into multiple steps including image enhancement, localization of RoIs, detection and classification, and eventually ordering. We review related work within five criteria, as summarized in Table I. **Automated Localization.** Locating the RoIs is a crucial step and impacts the final accuracy. Many works try to simplify the problem while assuming the localization step is somehow already done, either manually by a client, or by fixing the device or using special markers [8, 9, 10, 11, 13, 15]. A few others [6, 12, 16], and our work, take a more holistic approach, applying automated localization as well. **Accuracy.** To our knowledge, no previous work achieves a reasonable readout accuracy. [6] with \(51.5\%\) has by far the best performance.
General OCR engines also are not helpful; our previous experiments along with the reports from [15, 16], and [6] confirm the state-of-the-art OCR engine, Tesseract's [14], poor performance (\(<\)10%) in this specific task. On the other hand, our method can reach over \(90\%\) accuracy on both datasets. **Robustness.** Different methods, especially those using conventional algorithms, are usually sensitive to different conditions such as skewness, various noises, camera perspective and angle, exposure, and illumination. Here, we define robustness as performing consistently in different conditions. That said, only a few works [6, 16] try to address it. We simulate all these variations in our data synthesizer. Additionally, the ensemble of two different models mitigates such errors. **Generalization.** There are a variety of medical devices, each having unique characteristics, such as different font styles (including various types of seven-segment and/or digital styles), background and foreground colors, screen size and shape, units, backlit, etc. (Fig. 4). Nevertheless, the previous studies, except [16], use one or very few specific devices, so that their proposed algorithms highly depend on the properties of those selected devices; hence, they may not be considered as a general solution. For example, [13] is designed for a particular medical device with a blue backlit screen, [6] considers only seven-segment screens, and [15] assumes the largest contour as the screen, and hardcodes the exact location of RoIs. In contrast, we cover a broad market, and illustrate this using an additional public dataset from Oxford [6]. **Response Time.** Since a larger portion of the previous studies use conventional and light-weight image processing methods, they can achieve reasonable response time. For instance, [13] is implemented on a Samsung Galaxy S i9000 mobile device, and can process 20 frames per second, or [12] is deployed on an N95 mobile device, and achieves five frames per second. The only exception is [15], which uses a deep convolutional neural network (CNN) for the digit classification step. It takes 10 seconds, which is unsatisfactory. Although we leverage CNN-based object detection models, we choose parameters so that the response is prepared in less than half a second. Besides, we design a simple compression technique when using the cloud-side engine in poor network conditions, and, therefore, reduce the end-to-end response time. Fig. 3: Our mobile-cloud architecture. ### _Deep Learning on Edge_ In practice, an edge device can be any computing machine (including smartphones) that generates data or is near a data generation source. Edge devices are usually resource-constrained, so it is challenging to deploy big deep learning models on edge. A concise comparison of different methods is provided in Table II. The efforts to overcome the limitations can be divided into three major directions. The first direction is designing light-weight DNN models or optimizing existing ones. Several successful works have studied light-weight models including [17, 18, 19, 20, 21], and [30]. In general, related work in this category leverages a combination of convolution blocks with fewer parameters (e.g., separable convolutions), quantization, pruning, and model distillation. These techniques
mainly focus on optimizing response time and memory footprint, and thereby sacrifice accuracy (Fig. 5(b)). The second direction distributes computation across edge and central cloud, vertically (Fig. 5(c)). Some techniques aim to reduce the required computation and bandwidth by dropping insignificant frames of the input stream at the edge, before sending them to the cloud, and depending on the nature of a task may follow different filtering policies [22, 23, 24]. Some distribute the inference model across edge and cloud (Split Computing) [25, 26, 27], and usually trade accuracy for better response time or bandwidth usage. These methods rely on the central cloud; hence, in the case of network outages, they become unavailable. Another recent direction determines one or more exit points within the neural model (including pre-processing steps). Exit points are usually designed so that at least one of them stays on the edge. Depending on the task and its requirements, the model can exit early, sacrificing accuracy to meet delay constraints [27, 28, 29]. Despite their appealing results, they still are immature and are evaluated on simple tasks such as classification (Fig. 5(d)). TABLE II: Related Work on Deep Learning at Edge: A summary; the compared criteria include independence from a central cloud entity and bandwidth usage optimization. In comparison, the strength of our proposed mobile-cloud architecture (Fig. 5(e)) is its ability to take advantage of both worlds: it improves accuracy, and in general performability as well as availability, thanks to the independent nature of our ensemble models. ## 3 Proposed Method Our service captures images from medical devices using the phone's camera (Fig. 3(1)), then performs a pre-processing step on the mobile (Fig. 3(2)). The mobile device concurrently sends the prepared image to the cloud (Fig. 3(3.1, 4)), while executing the light-weight inference engine locally (Fig. 3(3.2)). After receiving predictions of both sides, the mobile device runs our ensemble algorithm (Fig. 3(5)). It also performs a post-processing step, to correct the initial answer that is produced by the mobile, if required (Fig. 3(6)). If the user recognizes a misprediction, they can send the image and its corresponding true reading (Fig. 3(7)) to cloud storage for future analysis and training iterations (Fig. 3(8)). The mobile model provides availability, while the cloud model improves performability (refer to §5.5). Both models together improve accuracy. We reduce the problem to object detection, so that digits of the sensed value (and not other digits illustrating noisy information such as temperature, time and date) are objects of interest, together with a post-processing step to reorder objects and prepare the final response. DNN-based methods have shown remarkable results, but to get the most out of them, we need a well-annotated training dataset, which is not available for our problem.
We design a better workflow and a data generation algorithm to automatically synthesize thousands of training images with well-aligned annotations. Our data generation workflow is described in detail in §4. We use mobile smartphones to capture images from medical devices and to extract useful information. While inference on the mobile device provides low latency and high availability for users, it usually suffers from low accuracy due to resource constraints. On the other hand, inference on the cloud provides accuracy and performability, but high latency and unavailability in the case of a poor network connection are its big weaknesses. As depicted in Fig. 3, our proposed hybrid mobile-cloud serving architecture takes advantage of both mobile and cloud computing. Additionally, we designed a specialized ensemble model to further improve our desired performance metrics. Fig. 4: Examples of images captured from medical devices. Fig. 5: Deep Learning Serving Architectures. ### _Deep Learning Models_ We reduce the problem of image-based reading from the screens of medical devices to an object detection and post-processing task. Each digit and decimal point belonging to the sensed value occupies a region of interest (RoI) and has a corresponding class (Fig. 2-1). In the post-processing step, we convert a group of objects to a meaningful string (Fig. 2-2). For the object detection part, we design and train two convolutional models (CNNs) based on the single-shot detector (SSD) architecture [31]. In general, we have powerful resources on the cloud, but limitations on the mobile. Thus, we consider two proportionate backbones for our SSD architectures. One is highly optimized for smartphones and has fewer parameters; hence, lower accuracy. The other one is more accurate and consumes more resources, so it cannot be deployed on the mobile. We prefer SSD-based networks for both sides, because they achieve better end-to-end latency [32]. #### 3.1.1 Cloud-Side Model We employ ResNet-50 [33] as the backbone of our SSD model on the cloud side. It contains 16 convolutional blocks with shortcuts and one fully connected layer. It has more than \(25M\) parameters, and requires 4 billion multiplication-accumulation operations (MACs) per sample. We remove its last few layers, including the classification head, to use the remainder as a feature extractor backbone for the SSD architecture. Since the size of RoIs in input images varies, we utilize a feature pyramid network (FPN) [34]. It leverages the intrinsic pyramid structure of modern CNNs (i.e., ResNet) to generate multi-scale feature maps. FPN can improve the accuracy of the model, and is much more compute efficient than feeding an image with different sizes multiple times. #### 3.1.2 Mobile-Side Model Similar to the cloud side, we stack a CNN-based classification model as a feature extraction backbone for our mobile-side SSD architecture. However, we employed edge-friendly architectures both for the backbone and the detection head. Our mobile-side model is directly inspired by the current state-of-the-art detection architecture for mobile devices, the MobileDets [35]. The reason for using depth-wise separable convolutional layers instead of regular convolutional layers is their fewer parameters and MACs, in the hope that these metrics translate into shorter training and inference times. However, recent studies [36, 37] have shown that network parameters and MACs may not be good proxies for model inference throughput and latency, as they are not the only factors.
Therefore, MobileDets expands the neural architecture search space by adding regular convolutions as well. After multiple experiments on different backbone architectures, we found the architecture proposed by [35] for Mobile CPUs to be the most appropriate for our mobile-side model. Since mobile devices are resource-constrained, we do not leverage FPN. ### _Ensemble Algorithm_ As illustrated in Fig. 3, we design and deploy a light-weight engine at the mobile along with a more compute-intensive and accurate one on the cloud. We integrate the predictions of our two deep learning models through our ensemble algorithm (Algorithm 1). Having the regions of interest (RoIs), labels and confidence scores of the objects detected by both models, our ensemble algorithm first finds corresponding RoIs among mobile and cloud predictions (line 3). They must have identical labels (e.g., the class of digit '5') and their distance should not be more than a tolerable amount \(\epsilon\). The intuition behind this is that the RoI coordinates predicted by different object detection models may not exactly match each other. \(\epsilon\) controls the maximum distance between two corresponding predictions from the two different models. There may exist more than one object with the same label, so if we do not check this, it will lead to misrecognition of RoIs. However, a strict comparison is a bad idea because two quite different deep learning models may have small differences in RoI refinement after training. Hence, we add a tolerable distance to make it more suitable. Once a match is found, when both scores are higher than the specified thresholds, the ensemble algorithm adds them up as the final confidence score (line 7); otherwise, it picks the highest score (line 9). In the case of only one prediction for a particular object on either mobile or cloud, the algorithm simply adds it to our final results (lines 13, 16). The second elimination step happens in line 17, where all remaining objects are compared via a higher universal threshold. Eventually, our algorithm reorders the remaining objects by their placement within the image (line 18), and concatenates the sequence of corresponding labels to generate the output string (line 19); a simplified sketch of this merging step is shown below. ### _Post-Processing Algorithm_ To further remove redundant objects, and to improve accuracy even more, we add an additional post-processing algorithm right before line 18 in Algorithm 1. We apply a modification of Non-Max Suppression (NMS) [38] to be compatible with our problem. Here, our objective is to find and remove those RoIs that significantly overlap with each other. Such an overlap means that more than one object of interest has been detected in a region where there should be only one. Leveraging NMS reduces false positives (FP). Our modified NMS is defined in Algorithm 2. It executes iteratively. Having RoIs and their corresponding confidence scores and labels, it first sorts the RoIs by their confidence in descending order (line 7). It then removes those objects whose overlap with another object is high enough (\(>T_{nms}\)) while their confidence is lower (lines 13, 14). The overlap between two objects is calculated by the intersection over union (IoU) of their corresponding RoIs. Although this works well, using a single threshold for all labels introduces a problem in our specific task. The _unit_ label, representing the decimal point of sensed numbers, considerably overlaps with other objects, but it must not be removed. This also applies to the classes of '1' and '7'.
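As a rough illustration of the merging step in Algorithm 1 (a sketch, not the authors' implementation), the following Python snippet matches detections from the two models by label and center distance, combines their confidence scores, and orders the survivors left-to-right; the box format, \(\epsilon\), and all threshold values are illustrative assumptions. ```
import numpy as np

def center_dist(a, b):
    # Distance between the centers of two boxes given as [x1, y1, x2, y2].
    ca = np.array([(a[0] + a[2]) / 2, (a[1] + a[3]) / 2])
    cb = np.array([(b[0] + b[2]) / 2, (b[1] + b[3]) / 2])
    return np.linalg.norm(ca - cb)

def ensemble(mobile, cloud, eps=15.0, t_mobile=0.4, t_cloud=0.4, t_final=0.5):
    """mobile/cloud: lists of (box, label, score), label being a string such as '7' or '.'."""
    merged, used_cloud = [], set()
    for box_m, lab_m, s_m in mobile:
        match = None
        for k, (box_c, lab_c, s_c) in enumerate(cloud):
            # Corresponding RoIs must share the label and lie within eps of each other.
            if k not in used_cloud and lab_c == lab_m and center_dist(box_m, box_c) <= eps:
                match = k
                break
        if match is not None:
            box_c, lab_c, s_c = cloud[match]
            used_cloud.add(match)
            # Both models agree: sum the scores if both pass their thresholds,
            # otherwise keep the higher score.
            score = s_m + s_c if (s_m >= t_mobile and s_c >= t_cloud) else max(s_m, s_c)
            merged.append((box_m, lab_m, score))
        else:
            merged.append((box_m, lab_m, s_m))   # mobile-only detection
    for k, det in enumerate(cloud):
        if k not in used_cloud:
            merged.append(det)                   # cloud-only detection
    # Final elimination with a universal threshold, then left-to-right ordering.
    merged = [d for d in merged if d[2] >= t_final]
    merged.sort(key=lambda d: d[0][0])
    return merged, "".join(lab for _, lab, _ in merged)
```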
To avoid that, we consider the _unit_ label separately in our calculations within the NMS procedure (lines 2-6). ``` 1:\(R^{M}\leftarrow\emptyset\), \(\rho^{M}\leftarrow\emptyset\), \(L^{M}\leftarrow\emptyset\) 2:\(i\gets j\in\mathbb{N}\mid j\leq|L|,\ L_{j}=\)'.' \(\triangleright\) ('.' \(\equiv\) decimal point) 3:if\(i\neq 0\)then 4:\(L^{M}\gets L_{i}\), \(\rho^{M}\leftarrow\rho_{i}\), \(R^{M}\gets R_{i}\) 5:\(L\gets L-L_{i}\), \(\rho\leftarrow\rho-\rho_{i}\), \(R\gets R-R_{i}\) 6:endif 7:\((L,\rho,R)\leftarrow[(L_{i},\rho_{i},R_{i})\mid\forall(i,j):i<j\Leftrightarrow \rho_{i}\geq\rho_{j}]\) 8:while\(R\neq\emptyset\)do 9:\(R^{M}\gets R_{1}\cup R^{M}\), \(R\gets R-R_{1}\) 10:\(L^{M}\gets L_{1}\cup L^{M}\), \(L\gets L-L_{1}\) 11:\(\rho^{M}\leftarrow\rho_{1}\cup\rho^{M}\), \(\rho\leftarrow\rho-\rho_{1}\) 12:for\(i\gets 1\) to \(|R|\)do 13:if\(\text{IoU}(R_{i},R_{i}^{M})\geq T_{nms}\)then 14:\(R\gets R-R_{i}\), \(L\gets L-L_{i}\) 15:endif 16:endfor 17:endwhile 18:return\(R^{M}\cup R\), \(L^{M}\cup L\), \(\rho^{M}\cup\rho\) ``` **Algorithm 2** Post-Processing Algorithm ### _Bandwidth Optimization_ Our target usecase is in underdeveloped countries; thus, more often than not, some users may be located in areas or situations where they do not have access to the internet or their connection is poor, e.g., due to limited available bandwidth or network congestion problems. Therefore, we prefer providing the highest availability at the cost of a less accurate response, to increase performability in general. One can still get the mobile answer in zero-connection situations; nevertheless, the user will experience accuracy degradation. For poor network conditions, we present a simple image compression technique which reduces the bandwidth usage when sending captured images (Algorithm 3; sketched below). We downscale the image, transform it from RGB to HSV colorspace, and then only send the value (\(V\)) channel of the image to the cloud. On the cloud, \(H\) and \(S\) are filled with predefined constants and are up-scaled to the original dimension before performing inference. **Algorithm 3** Compression Algorithm ## 4 Dataset Generation ### _Data Preparation_ After analyzing the images captured from the displays of various glucometers, we discovered a number of common differentiating properties, summarized in Table III. We learned that a lot of problems stem from the environmental conditions. For example, as depicted in Fig. 4, some images have been rotated (a, b, d, f, g), taken from different viewpoints (a, b, d, e, f), have poor contrast between the sensed quantity and its background (a, e, f), and have flashlight reflection (h), distortion and/or blur (e, h). Some of the patterns from Table III are observable as well. For example, they may contain charts (d), have backlit (c, d, g), or irregular screen shapes (g, h). And almost all of them are different in background color, display font style and type, etc. This diversity in medical devices, together with environmental variations, introduces many difficulties for previous methods as they are based on feature engineering. Instead, we utilize such properties to synthesize a large training dataset with annotation. We gather 100 distinct images of medical devices and 150 open-source fonts (including seven-segment, dot-matrix, LCD, and LED styles) through the Internet. We then manually transform all images to obtain images with a standard point of view.
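Returning briefly to the bandwidth-optimization step above, a minimal sketch of the two sides of the compression pipeline is shown here; the scale factor, JPEG quality, and the constants used to refill \(H\) and \(S\) are illustrative assumptions rather than the paper's exact values. ```
import cv2
import numpy as np

def compress_for_upload(img_bgr, scale=0.5, jpeg_quality=70):
    """Mobile side: downscale, convert to HSV, keep only the V channel, JPEG-encode."""
    small = cv2.resize(img_bgr, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    v = cv2.cvtColor(small, cv2.COLOR_BGR2HSV)[:, :, 2]
    ok, payload = cv2.imencode(".jpg", v, [cv2.IMWRITE_JPEG_QUALITY, jpeg_quality])
    return payload.tobytes(), img_bgr.shape[:2]

def restore_on_cloud(payload, original_hw, h_const=0, s_const=0):
    """Cloud side: decode V, refill H and S with constants, upscale to the original size."""
    v = cv2.imdecode(np.frombuffer(payload, np.uint8), cv2.IMREAD_GRAYSCALE)
    hsv = np.dstack([np.full_like(v, h_const), np.full_like(v, s_const), v])
    bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
    return cv2.resize(bgr, (original_hw[1], original_hw[0]), interpolation=cv2.INTER_LINEAR)
```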
We also determine \(\sim\)\(20\) different point coordinates per image corresponding to the display screen corners and the different items that can be on it (e.g., sensed value, measurement unit, date, etc.). We eventually feed all this preliminary information into our _Data Synthesizer_ algorithm, which can synthesize an almost unlimited number of distinct medical-device images with different sensed values. ### _Data Synthesizer_ Our _Data Synthesizer_ (Algorithm 4) receives the data prepared from the last step (line 1), namely a number of transformed images (\(\mathcal{I}mages\)) along with their corresponding point coordinates (\(\mathcal{A}\)), a set of fonts (\(\mathcal{F}\)), a degree-of-freedom (\(\mathcal{D}\)) for every item on the display and their properties, and the maximum number of images to synthesize (\(\mathcal{N}\)). It then generates new images (line 4) and strict annotations (line 5) following Algorithm 4; a toy sketch of the idea is given after Table III below. Producing thousands of images from a small sample size, particularly to train deep learning models, is an important procedure, because any subtle noise or shift in the distribution of training data leads to the overfitting issue: our models may reach a reasonable accuracy on our synthesized training data, but perform poorly on real-world test data. Therefore, all artificially generated images must be quite similar to real images. On the other hand, to train the deep learning models, our training set should contain enough distinctive features. Also, our algorithm has to generate well-aligned annotations. ``` 1:procedureSynthesizer(\(Images^{Set},\mathcal{A},\mathcal{D},\mathcal{F},\mathcal{N}\)) 2:for\(t\gets 1\)to\(\mathcal{N}\)do 3:repeat 4:\((Img,R^{Screen})\leftarrow\text{Sample}_{Rand}(Images^{Set},\mathcal{A})\) 5:\((\text{Value},Items)\leftarrow\text{Generate}_{Rand}(Img,\mathcal{A}_{Img}, \mathcal{D}_{Img},\mathcal{F})\) 6:\(\forall\)item\(j\in Items\)\(\forall\)object\(k\in\)item\(j:\)Calculate\((R^{Items}_{j,k})\). 7:\(\forall\)object\(i\in\)Value\(:\)Calculate\((R^{Value}_{i})\). 8:until\(R^{Value}_{i}\cap R^{Items}_{j,k}=\emptyset\)and 9:\(\text{IoU}(R^{Items}_{i},R^{Items}_{j})<\epsilon,\ i\neq j\) 10:repeat 11:\(Img\leftarrow\text{Transform}(Img,\ Set)\) 12:\(\forall R_{i}\in R^{Value}:\)Calculate\((R_{i})\)\(\triangleright\)(Recalculates RoIs.) 13:until\(\forall R_{i}\in R^{Value}:\ R_{i}\cap(\bigcup R^{Screen})=\emptyset\) 14:store\(\{Img,R^{Value},\text{Value},R^{Items}\}\)\(\triangleright\)Value\(\equiv\)Label\(L_{Img}\) 15:endfor 16:endprocedure ``` **Algorithm 4** Data Synthesizer We use \(\mathcal{D}\) to make sure the synthesized images and their corresponding annotations are both similar to real captured images and follow the patterns described in Table III. We introduce controlled stochasticity to our synthesizing procedure. Randomness is a key enabler to increase the mutual distinction between training samples, to prevent overfitting and improve eventual accuracy. Additionally, our algorithm randomly selects or generates different font styles, sizes, colors, background lights, time, date, and different indicators (e.g., temperature or blood drop symbol) (line 5). The font styles, font sizes, positions, and colors of additional information are selected from a range randomly as well. It validates the placement of each item afterwards (line 8). There must not be any overlap between the sensed Value and other additional information on the screen (\(Items\)) such as time, date, temperature, etc. Fig. 6: (a) conventional computer vision workflow. (b) proposed workflow. Note that although some blocks are the same in both, their internal procedure might be completely different. \begin{table} \begin{tabular}{c l l} \hline \# & Property & Pattern \\ \hline 1 & Display & Different aspect ratios \\ 2 & & Different screen shapes \\ 3 & & LCD, LED, 7-segment, dot-matrix \\ 4 & & With or W/O backlit \\ 5 & & Regular or irregular screens \\ 6 & & Rounded or sharp corners \\ 7 & Value (Label) & Different measurement units (\(mg/dL\), \(mmol/L\)) \\ 8 & & Usually, the biggest item on display \\ 9 & & No specific position on different displays \\ 10 & Additional \(Items\) & Include date, time, various signs and symbols, measurement unit, diagrams, etc. \\ 11 & & No specific position on different displays \\ \hline \end{tabular} \end{table} TABLE III: Common differentiating properties and troublemakers in image-based readout from medical devices.
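As a toy illustration of the synthesis idea in Algorithm 4 (one template image, one font, no geometric or visual transforms; the font path, value range, and layout are hypothetical, not the paper's parameters), per-digit RoI annotations can be recorded while rendering the value: ```
import random
from PIL import Image, ImageDraw, ImageFont

def synthesize_sample(template_path, font_path, screen_xy=(60, 40)):
    img = Image.open(template_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype(font_path, size=random.randint(36, 64))

    # Random sensed value: integer, one-decimal, or two-decimal.
    value = round(random.uniform(0, 1000), random.choice([0, 0, 1, 1, 2]))
    text = str(int(value)) if float(value).is_integer() else str(value)

    # Render each character separately so the per-digit RoIs (the annotations) are exact.
    x, y = screen_xy
    boxes = []
    for ch in text:
        x0, y0, x1, y1 = draw.textbbox((x, y), ch, font=font)
        draw.text((x, y), ch, font=font, fill=(20, 20, 20))
        boxes.append({"label": ch, "box": (x0, y0, x1, y1)})
        x = x1 + 2
    return img, boxes
```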
Also, there should not be any significant (\(>\epsilon\)) intersection between two different \(Items\). Consequently, it calls a transformation procedure (line 10) that modifies each generated image together with its annotation. Our transformation procedure consists of two major sections: _geometrical_ and _visual_. In the geometrical part, we apply scaling, cropping, rotation, shearing, perspective transformation, and translation. In the visual part, we modify color, contrast, lightness, and sharpness. We insert noise, and arbitrarily drop some pixels to simulate light reflections or scratches on the display screen. The range and likelihood of each transformation depend on the generation set (\(Set\)). In general, we designate a broader range and higher likelihood in the training set than in the validation set. The intuition is that we aim to generalize at training time, while in validation, a set of data much more similar to real-world images is needed. Note that we never apply our _Data Synthesizer_ to the test set. In addition, to prevent any meaningful leakage, we make sure there is no similarity between the medical devices in the test set and those in the training/validation sets. After applying transformations, Value must still be within the \(Screen\) RoI (\(R^{Screen}\)). Otherwise, we apply the transformation again until the condition is met. A few synthesized samples, generated using only one image from \(\mathcal{I}mages\), are depicted in Fig. 7. Value in the synthesized images ranges from 0 to 1000 and follows a _discrete pseudo-iid_ distribution. Half of the numbers are integers, 35% are single-decimal (e.g., 12.5), and the remainder are double-decimal (e.g., 1.25). The generated measurement unit must be compatible with its synthetic Value. Backlit and display background colors are not uniformly random; rather, our algorithm selects colors that are more likely to appear in real-world devices with higher probability, but there is some probability of generating completely new colors in order to make the models robust against new devices or outliers. Each item \(\in Items\) may randomly appear or disappear on the \(Screen\). The \(\mathcal{I}mages\) are split into train (\(\mathcal{I}mages^{train}\)) and validation (\(\mathcal{I}mages^{val}\)) parts, with 90% and 10% shares, respectively. Each image comprises a different medical device. ## 5 Evaluation ### _Experimental Setup_ **Training Dataset.** We generated \(1M\) distinct well-annotated training samples using our _Data Synthesizer_.
The image size directly affects training throughput, inference latency, the size of detection models, and the space needed to store the dataset. Therefore, we down-scale each image to \(3\times 320^{2}\) and save it in JPEG format. Note that the input sizes of our CNN models are different. It occupies \(45GB\) of disk space and takes \(\sim\)\(11\) hours to generate and store \(1M\) samples. **Test Dataset.** We collected 300 images directly captured by our clients from various glucometer devices without any modifications except down-sizing to the desired scale. As Fig. 8 depicts, the class of '1' has the highest frequency of occurrence, and the class of '.' has the least among others. Additionally, we evaluate our method on another publicly available dataset from Cameralab at the University of Oxford (referred to as Oxford for simplicity from now on) [6]. **Training.** We trained both our mobile and cloud neural models on a single server. We use an in-house GPU server (Table VI) to train our models. We designed and trained our models via the TensorFlow framework. We reduce the time and resources needed for training the models by employing transfer learning from pre-trained weights on the COCO dataset [39]. It helps our models extract better low-level features. We trained each model for 500-600k epochs (\(\sim\)\(3.5\) days), and applied simple augmentations during training to prevent overfitting. **Inference Setup.** For the proof of concept, we use the same server as the cloud side. For the mobile side, we use a Galaxy Tab A (2019), a mediocre tablet (Table V). We leveraged TensorFlow Lite for Android to convert, optimize, and measure the performance of our mobile model. During inference, input images get resized to \(3\times 416^{2}\) and \(3\times 350^{2}\) for the cloud and mobile models, respectively. ### _Accuracy Metric_ We report the accuracy of our algorithm as formulated in (1). It is more illustrative in comparison with the previously reported formulas, namely Precision, Recall and F1-score for classification and localization. That said, it is the strictest as well, and is necessary due to the medical nature of the values being read. Given a prediction \(\hat{Y}\) and its ground-truth \(\Lambda\), let \[\hat{Y} :=\hat{y_{1}}\hat{y_{2}}\cdots\hat{y_{m}}\] \[\Lambda :=\lambda_{1}\lambda_{2}\cdots\lambda_{n}\] \(\hat{Y}\) is then considered as a _correct_ prediction if: \[m=n\quad\text{and}\quad\forall i:\hat{y_{i}}=\lambda_{i},\ i\in\mathbb{N},\ i\leq n \tag{1}\] A prediction is correct if (1) all ground-truth RoIs are correctly detected, (2) the labels are correctly predicted, (3) objects are in the same order as they are in the ground-truth, and (4) no additional object (e.g., from \(Items\)) is detected. For instance, assuming a ground-truth label _11.9_ (Fig. 2), none of 119, 1.19, 111, 1149, 1109, 11, 19, 1.9, etc. are correct predictions, which makes it more difficult than Precision or Recall. ### _Accuracy on Our Test Set_ We first evaluate our algorithms on the 300-image test set described above. While cloud- and mobile-only predictions fluctuate around \(89-90\%\) (Fig. 9), our ensemble model improves the accuracy by \(7.3\) percentage points. The reason is illustrated in Fig. 10.
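Stepping back to the strict metric of Eq. (1), the following short sketch makes it concrete: a prediction counts as correct only when the full predicted string equals the ground-truth string (same symbols, same order, nothing missing or extra). ```
def readout_accuracy(predictions, ground_truths):
    # Exact string match over the whole readout, per Eq. (1).
    correct = sum(1 for p, g in zip(predictions, ground_truths) if p == g)
    return correct / len(ground_truths)

# Example: for ground truth "11.9", reads such as "119", "1.19", or "11" all count as wrong.
print(readout_accuracy(["11.9", "119", "5.4"], ["11.9", "11.9", "5.4"]))  # 0.666...
```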
In each confusion matrix, every column represents the objects in a predicted class while each row represents the objects in their actual class. To make the matrices more informative, we added another row and column. The last column represents the objects in a true class that are ignored, and the last row shows the detected objects that do not belong to any class. To get a better perspective, each matrix is row-normalized (each row is scaled to sum to one). Our cloud model performs well on most classes except '.' and '1'. On the other side, our mobile model performs poorly on '0', '4' and '9', but predicts '.' and '1' much better than the cloud. The last matrix confirms that our ensemble algorithm performs better than the single models. It also reduces false positives and false negatives. The mispredicted images usually contain some special objects/symbols on their screen. For example, the device shown in Fig. 12(f) displays an arrow on the screen which is pretty similar to '7', and sometimes misleads the models. #### 5.3.1 Post-Processing Impact Our post-processing algorithm shows its positive effect on the prediction of test samples. It increases accuracy by \(0.5\) up to \(1\) on our test set (Fig. 9). The main reason for fewer false positive errors is applying post-processing after the ensemble step (Fig. 10). #### 5.3.2 Device-Aware Prediction (Knowledge Injection) The least accurate prediction in our proposed method belongs to the class '.' (decimal point) with \(90\%\) accuracy. The current state-of-the-art [6] only detects objects and simply adds a decimal point manually. This is because their dataset contains two devices that both use \(mmol/L\) as the measurement unit, meaning that all values are floating point. In contrast, our dataset is more general and comprises \(mg/dL\) as well. We can improve the accuracy of our algorithm by injecting one bit of prior knowledge about the measurement unit of a medical device (\(mg/dL\) or \(mmol/L\)). As shown in Fig. 9, it increases the accuracy by \(0.5\). Treating the displayed measurement unit on the screen as another object of interest is left for future work. A service is considered performable if it achieves \(L_{k}\) accuracy while being able to prepare the response within \(5s\); _availability_ can be achieved by responding in \(<\)\(500ms\), as a soft deadline. Here, \(L_{1}\) (\(\mathscr{A}\)) and \(L_{2}\) (\(\mathscr{V}\)) stand for \(>\)\(90\%\) and \(>\)\(85\%\) accuracy, respectively. We evaluate each service in three different connection qualities: excellent, poor (e.g., due to congestion or limited bandwidth) and zero (no connectivity). As summarized in Table IV, our model performs better in all three cases. To the best of our knowledge, no comparable Split Computing or Early Exit method currently significantly compresses its intermediate data while losing little to no accuracy (rows 4 and 6). ## 6 Conclusion We presented a mobile-cloud automated image-based glucometer reading system broadly applicable to various medical devices; our method uses the camera, the wireless communication, and the computing capabilities of the mobile phone to provide a low-cost alternative to expensive devices that are more available in developed countries. Our deep learning-based ensemble algorithm together with our mobile-cloud service architecture achieves higher availability and performability compared with mobile-only, cloud-only, split computing, and early exit rival models. We proposed a data generation algorithm to address the data scarcity problem, and synthesized one million well-annotated samples.
Note that the massive variety in glucometer devices and their screen outputs has conventionally posed challenges to the applicability of existing techniques, but we instead took advantage of it in our data generation techniques to produce more training samples from existing photos, and thus to achieve a more robust model. Our method is capable of proper readout even in imperfect conditions such as dark ambience, reflections on the screen, and blurry or out-of-focus photos. Our ensemble algorithm efficiently combines results obtained from two separate DNN models. Specifically, we take into account the slight shifts in bounding boxes identified by the two models, as well as the special case of the '.' symbol that, unlike other symbols, can significantly overlap with other detected objects. We evaluated the accuracy of our method on two different real-world test sets, one collected from our users, and one from Camera Lab [6]. Our method achieved \(97.7\%\) on our test set, and \(92.1\%\) on Oxford's, outperforming the current state-of-the-art. Our results showed that our algorithm is robust and generalizes well to other datasets.
2310.10114
Node classification in networks via simplicial interactions
In the node classification task, it is natural to presume that densely connected nodes tend to exhibit similar attributes. Given this, it is crucial to first define what constitutes a dense connection and to develop a reliable mathematical tool for assessing node cohesiveness. In this paper, we propose a probability-based objective function for semi-supervised node classification that takes advantage of higher-order networks' capabilities. The proposed function reflects the philosophy aligned with the intuition behind classifying within higher order networks, as it is designed to reduce the likelihood of nodes interconnected through higher-order networks bearing different labels. Additionally, we propose the Stochastic Block Tensor Model (SBTM) as a graph generation model designed specifically to address a significant limitation of the traditional stochastic block model, which does not adequately represent the distribution of higher-order structures in real networks. We evaluate the objective function using networks generated by the SBTM, which include both balanced and imbalanced scenarios. Furthermore, we present an approach that integrates the objective function with graph neural network (GNN)-based semi-supervised node classification methodologies, aiming for additional performance gains. Our results demonstrate that in challenging classification scenarios-characterized by a low probability of homo-connections, a high probability of hetero-connections, and limited prior node information-models based on the higher-order network outperform pairwise interaction-based models. Furthermore, experimental results suggest that integrating our proposed objective function with existing GNN-based node classification approaches enhances classification performance by efficiently learning higher-order structures distributed in the network.
Eunho Koo, Tongseok Lim
2023-10-16T06:48:17Z
http://arxiv.org/abs/2310.10114v2
# Node classification in networks via simplicial interactions ###### Abstract In the node classification task, it is intuitively understood that densely connected nodes tend to exhibit similar attributes. However, it is crucial to first define what constitutes a dense connection and to develop a reliable mathematical tool for assessing node cohesiveness. In this paper, we propose a probability-based objective function for semi-supervised node classification that takes advantage of higher-order networks' capabilities. The proposed function embodies the philosophy most aligned with the intuition behind classifying within higher-order networks, as it is designed to reduce the likelihood of nodes interconnected through higher-order networks bearing different labels. We evaluate the function using both balanced and imbalanced datasets generated by the Planted Partition Model (PPM), as well as a real-world political book dataset. According to the results, in challenging classification contexts characterized by low homo-connection probability, high hetero-connection probability, and limited prior information of nodes, higher-order networks outperform pairwise interactions in terms of objective function performance. Notably, the objective function exhibits elevated Recall and F1-score relative to Precision on the imbalanced dataset, indicating its potential applicability in many domains where avoiding false negatives is critical, even at the expense of some false positives. Node classification, semi-supervised, simplex, clique, node interaction, higher-order networks, hypergraph, probabilistic objective function. ## I Introduction Networks represented by graphs consist of nodes representing entities of the system, and edges depicting their interactions. Such graphical representations facilitate insights into the system's modular structure or its inherent communities [1, 2]. While traditional graph analysis methods only considered pairwise interaction between nodes, recent research, including studies in the social sciences [3, 4, 5] and biochemical systems [6], has experimentally demonstrated that networks in real systems often rely on interactions involving more than two nodes or agents. As a result, to analyze the attributes of a network, it is essential to illuminate the causal interactions of the network using higher-order networks (or hypergraphs) beyond pairwise relationships [7, 8, 9]. There are various approaches to address this point of view, and recent studies are elucidating the relationships between cliques (a subset of nodes such that every two distinct nodes in the clique are adjacent) that form higher-order networks using probabilistic modeling based on the Stochastic Block Model (SBM) [10, 11, 12, 13]. SBM is a generative model for random graphs that includes the following parameters: the number of nodes, the number of disjoint communities to which each node belongs, and the probability of edge connections between each community. The most basic and widely used form of SBM assumes that the number of nodes in each community and the probability of edge connections within the same community are the same, but various modified versions of SBM have also been studied [14, 15, 16, 17]. Studies related to community detection (or network clustering), on the other hand, have also been actively pursued [18, 19, 20, 21]. The goal of these studies is to divide the entire system's nodes into several communities, with nodes in each community being densely connected internally [22].
Research involving the higher-order network analysis has also advanced in this field, including the Bayesian framework [23], \(d\)-wise hypergraph SBM on the probability of a hyperedge [24], the sum-of-squares method of SBM [13], and spectral analysis based on the Planted Partition Model (PPM), a variant of SBM [25, 11]. We point out that many studies only consider the network's internal topology and do not take into account prior information. However, in many real-world networks, even if it is a very small proportion of the total number of nodes, the use of prior information is available, that is, we can utilize some known labels of nodes and the total number of labels (communities). It has been reported that with only limited prior information, prediction accuracy and robustness in real noisy networks can be significantly improved [26, 27, 28, 29, 30, 31], and various methods have been suggested, including discrete potential theory method [32], spin-glass model in statistical physics application [26], strategies integrating known cluster assignments for a fraction of nodes [33], and nonnegative matrix factorization model [29]. In this study, we propose a novel probability-based objective (loss) function for the semi-supervised node classification (community detection) task using higher- order networks. The loss function is motivated by the intuition that nodes densely interconnected with edges in a given network are likely to exhibit similar labels. It is intended to incentivize nodes in a hyperedge (a clique) to have the same label by imposing a penalty when nodes within the hyperedge have diverse labels. It is worth noting that the intuition is consistent with SBM's general assumption that nodes with the same label are more likely to be connected in a network. In conjunction with the objective function, we use discrete potential theory to initialize the node probability distribution, specifically the solution to an appropriate Dirichlet boundary value problem on graphs, which can be effectively solved using the concept of equilibrium measures [34]. We then generate balanced and imbalanced graphs using the PPM, a variant of the SBM, to evaluate the performance of the proposed objective function and our optimization formulation. We also test the proposed method on real-world political book data [22, 35]. It is worth noting that the proposed objective function is applicable in a variety of situations, such as with different SBM parameters such as the number of nodes, the number of labels (communities), and the connection probabilities within the same or different communities. In other words, regardless of whether the communities are balanced or imbalanced, or the number of communities, this study proposes a versatile approach that aims to improve classification performance by fully utilizing the structure of the higher-order networks. This paper is structured as follows. Some preliminary information is provided in Section II. The objective function is illustrated in detail in Section III. The experimental setup is described in Section IV. The result is evaluated in Section V. Finally, in Section VI, the conclusion and future work are presented. ## II Preliminaries In this section, we give some basic definitions and mathematical concepts used in this study. ### _Higher-order networks_ In many real-world systems, network interactions are not just pairwise, but involve the joint non-linear couplings of more than two nodes [36]. 
Here, we fix some terminology on higher-order networks that will be used throughout the paper for the undirected graphs. A (undirected) graph \(G=(V,E)\) consists of a set \(V=\{1,2,...,n\}\) of \(n\) nodes, and a set \(E\subset\{(i,j)\,|\,i,j\in V,\,i\neq j\}\) of edges. We have \((i,j)=(j,i)=\{i,j\}\) as \(G\) is undirected. We assume that a graph \(G\) is connected.1 Footnote 1: Because our proposed algorithm can be applied to each connected component of a graph, the assumption can be made without loss of generality. A hypergraph generalizes \(E\) as \(\mathcal{E}\subset 2^{V}\) where \(2^{V}\) denotes the power set of \(V\), and we denote the hypergraph as \(\mathcal{H}=(V,\mathcal{E})\). In this paper, we focus on the case where \(\mathcal{E}\) consists of the _simplices_ in \(G\): For \(k\in\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\), we say \(\sigma=\{n_{0},n_{1},...,n_{k}\}\) is a \(k\)-simplex (which is also called a \((k+1)\)-clique) if the vertices \(n_{i}\in V\) are distinct (i.e., \(|\sigma|=k+1\)) and for every \(0\leq i<j\leq k\), we have \((n_{i},n_{j})\in E\). Let \(K_{k}=E_{k-1}\) denote the set of all \(k\)-cliques, or \((k-1)\)-simplices, in \(G\). Note that \(E_{0}=V\), \(E_{1}=E\); a node is a 0-simplex, an edge a 1-simplex, a triangle a 2-simplex, a tetrahedron a 3-simplex, and so on. The set comprising all cliques of a graph \(G\), \[K(G):=\bigcup_{k=1}^{\omega(G)}K_{k}(G), \tag{1}\] is called the _clique complex_ of the graph \(G\). The clique number \(\omega(G)\) is the number of vertices in a largest clique of \(G\)[37]. In this paper, we will consider \(\mathcal{E}\) a subset of \(K(G)\) in terms of a hypergraph. **Example 1**.: _For a given \(V=\{1,2,3,4\}\) in Figure 1, consider_ \[\mathcal{E}=\{\{1\},\{2\},\{3\},\{4\},\{1,2\},\{1,3\},\{2,3\},\{3,4\},\{1,2,3 \}\}.\] _There are four \(0\)-simplices: \(K_{1}=\{\{1\},\{2\},\{3\},\{4\}\}\); four \(1\)-simplices: \(K_{2}=\{\{1,2\},\{1,3\},\{2,3\},\{3,4\}\}\); and one \(2\)-simplex: \(K_{3}=\{\{1,2,3\}\}\)._ ### _Node classification algorithm based on random walk on graphs_ In the semi-supervised node classification (partially labeled data classification) tasks, a classic and widely used algorithm based on a random walk on a graph is the following. Given a graph \(G=(V,E)\), a (unbiased) random walk moves from \(i\in V\) to \(j\in V\) with probability \(1/k\) if \((i,j)\in E\) (i.e., \(i\) and \(j\) are adjacent) and the degree of \(i\) (the number of nodes adjacent to \(i\)) is \(k\). For a node set \(V=\{1,2,...,n\}\) and a label-index set \(I=\{1,...,l\}\), we assume that each node corresponds to one label in \(I\) and we know the labels for only a small proportion of nodes relative to \(|V|\). For \(i\in I\) and \(y\in V\), let \(P_{i}(y)\) denote the probability that a random walk starting from an unlabeled node \(y\) will reach an \(i\)-labeled node before arriving at any other labeled node. If \(\operatorname*{argmax}_{i\in I}P_{i}(y)=k\), the algorithm concludes that the label of the unlabeled node \(y\) is \(k\). If a node \(y\) is already labeled as \(k\), we have \(P_{i}(y)=1\) if \(i=k\) and \(P_{i}(y)=0\) if \(i\neq k\). In this study, we refer to this algorithm as \(\mathsf{RW}\), which stands for Random Walk. Now the question is how to obtain \(P_{i}(y)\) for all \(y\in V\) and \(i\in I\). 
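Before turning to that question, note that the clique sets \(K_{k}\) of Example 1 can be enumerated directly; the following small networkx sketch is an illustration, not part of the paper's method. ```
import networkx as nx
from collections import defaultdict

# The graph of Example 1: nodes {1, 2, 3, 4} with edges forming one triangle plus a pendant edge.
G = nx.Graph([(1, 2), (1, 3), (2, 3), (3, 4)])

K = defaultdict(list)          # K[k] = list of k-cliques, i.e., (k-1)-simplices
for clique in nx.enumerate_all_cliques(G):
    K[len(clique)].append(sorted(clique))

print(K[1])  # 1-cliques (0-simplices): {1}, {2}, {3}, {4}
print(K[2])  # 2-cliques (1-simplices): {1,2}, {1,3}, {2,3}, {3,4}
print(K[3])  # 3-cliques (2-simplices): {1,2,3}
```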
Potential theory shows that \(P_{i}(y)\) can be obtained from the solution \(u\) of the following Dirichlet boundary value problem \[Lu(x)=0\ \ \text{if}\ \ x\in F=(E_{i}\cup H_{i})^{c}, \tag{2}\] \[u(x)=1\ \ \text{if}\ \ x\in E_{i},\] \[u(x)=0\ \ \text{if}\ \ x\in H_{i},\] where \(L=D-A\) is the graph Laplacian matrix where \(D,A\) are degree and adjacency matrix of a given graph \(G\), respectively [37], \(E_{i}\) is the set of \(i\)-labeled nodes, \(H_{i}\) is the set of labeled nodes excluding \(i\)-labeled nodes, and \(u\) is a function on \(V\), valued in \([0,1]\). Then it holds \(P_{i}(y)=u(y)\) for all \(y\in V\). Bendito, Carmona and Encinas [34] proposed an elegant solution to the Dirichlet problem (2) in terms of _equilibrium measures_. For any decomposition \(V=F\cup F^{c}\) where \(F\) and \(F^{c}\) are both non-empty, they showed there exists a unique measure (function)2 such that \(Lv(x)=1\) (and \(v(x)>0\)) for all \(x\in F\) and \(Lv(x)=0\) (and \(v(x)=0\)) for all \(x\in F^{c}\). The measure is called the equilibrium measure and denoted by \(v^{F}\). Now for \(V=F\cup F^{c}\) where \(F,F^{c}\) are the set of unlabeled and labeled nodes, they showed that the solution \(u\) of (2) can be represented by \[u(x)=\sum_{z\in E_{i}}\frac{v^{\{z\}\cup F}(x)-v^{F}(x)}{v^{\{z\}\cup F}(z)}, \quad x\in V. \tag{3}\] Because \(v^{F}\) can be obtained by solving a linear program, (3) provides an efficient way to solve the Dirichlet problem (2); see [34] for more details. We will use \(\mathsf{RW}\) as a baseline algorithm for the semi-supervised node classification. Note that \(\mathsf{RW}\) employs random walks and does not utilize higher-order interactions (HOI). However, \(\mathsf{RW}\) will be useful not only for comparing performance with our HOI-applied strategies, but also for providing a useful initialization method for training HOI algorithms. ### _Planted partition model (PPM)_ Initially conceptualized within social networking and bioinformatics, _Stochastic Block Model_ (SBM) [10] harnesses a probabilistic approach, giving a simple generative model for random graphs. For a node set \(V=\{1,...,n\}\), we consider \(l\) distinct labels or communities such that \(C_{i}\)'s are non-empty disjoint subsets of \(V\) for \(i\in I=\{1,...,l\}\). The connection probabilities between nodes in \(V\) by edges is identified by a \(l\times l\) edge-probability matrix \(P\) where \(P_{ij}\) indicates the probability that a node belonging to the \(i\)th label connects with a node in the \(j\)th label by an edge. In many cases, the diagonal elements of \(P\) are greater than off diagonal elements, implying that the connection probability within the same label is higher than between different labels. Most used SBM is the _planted partition model_ (PPM) that has a constant probability \(p\) on the diagonal \(P\) and another constant probability \(q\) (in general, less than \(p\)) off the diagonal. The essence of PPM is that connections are not formed by chance, and they inherently reflect block membership. Recent advancements [38] have expanded PPM's capabilities to include adaptive algorithms that evaluate optimal block configurations and detect various block interaction patterns. ## III Proposed model We now propose an objective function for node classification using higher-order networks. Given a graph \(G\), let \(V=\{1,...,n\}\) be the node set, \(I=\{1,...,l\}\) be the label index set, that is, the graph consists of \(n\) nodes, and each node has a label ranging from \(1\) to \(l\). 
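Since the \(\mathsf{RW}\) solution of (2)-(3) will serve as the initialization for the optimization developed below, it is worth noting that the Dirichlet problem can also be solved by a direct linear solve on the graph Laplacian restricted to the unlabeled nodes; the sketch below is such a harmonic-extension computation (mathematically equivalent for connected graphs), not the authors' equilibrium-measure implementation. ```
import numpy as np
import networkx as nx

def hitting_probabilities(G, labeled):
    """labeled: dict {node: label}. Returns {node: {label: P_label(node)}}."""
    nodes = list(G.nodes())
    idx = {v: i for i, v in enumerate(nodes)}
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    F = [v for v in nodes if v not in labeled]          # unlabeled (interior) nodes
    Fi = [idx[v] for v in F]
    Bi = [idx[v] for v in labeled]                      # labeled (boundary) nodes
    P = {v: {} for v in nodes}
    for lab in sorted(set(labeled.values())):
        b = np.zeros(len(nodes))
        for v, l in labeled.items():
            b[idx[v]] = 1.0 if l == lab else 0.0        # boundary values of u
        # Interior values solve L_FF u_F = -L_FB u_B (harmonic extension).
        u_F = np.linalg.solve(L[np.ix_(Fi, Fi)], -L[np.ix_(Fi, Bi)] @ b[Bi])
        for v, val in zip(F, u_F):
            P[v][lab] = float(val)
        for v in labeled:
            P[v][lab] = b[idx[v]]
    return P
``` Taking \(\operatorname{argmax}\) of these probabilities per node reproduces the \(\mathsf{RW}\) classification, and the full probability vectors provide the initialization used below.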
The probability distribution over the labels for node \(j\) is given by \((p_{1}^{j},p_{2}^{j},...,p_{l}^{j})\), where \(p_{i}^{j}\) denotes the probability that node \(j\) has label \(i\); thus \(\sum_{i=1}^{l}p_{i}^{j}=1\) for every \(j\in V\). We define \(K_{k}\) as the set of \((k-1)\)-simplices in the graph, e.g., \(K_{1},K_{2},K_{3}\) correspond to the sets of nodes, edges, and triangles, respectively. Let \(M=\omega(G)\) be the maximum possible value of \(k\), that is, the simplex composed of the most nodes in the graph is a \((M-1)\)-simplex with \(M\) nodes. Also, we define the permutation set with repetition, denoted by \(S_{k}\), as the set of ordered arrangements (repetition allowed) of \(k\) elements over \(I=\{1,...,l\}\); hence \(|S_{k}|=l^{k}\). Finally, we define an objective function for node classification as \[J=\sum_{k=2}^{M}w_{k}\sum_{(j_{1},...,j_{k})\in K_{k}}\sum_{(i_{1},...,i_{k})=\theta\in S_{k}}C_{\theta}\,p_{i_{1}}^{j_{1}}p_{i_{2}}^{j_{2}}\cdots p_{i_{k}}^{j_{k}} \tag{4}\] where \(w_{k}\geq 0\) are constants, \(C_{\theta}=\binom{k}{e_{1},e_{2},...,e_{l}}=\frac{k!}{e_{1}!\,e_{2}!\cdots e_{l}!}\) with \(\sum_{i=1}^{l}e_{i}=k\), and \(e_{i}\) is the number of occurrences of the label \(i\) in \((i_{1},i_{2},\dots,i_{k})=\theta\in S_{k}\) for each \(i\in I\). Notice that \(J\) is determined by the underlying graph \(G\) and is a function of the probabilities \(\{p_{i}^{j}\}_{i\in I,j\in V}\). We then solve the minimization problem \[\text{Minimize}\;\;J\;\;\text{over}\;\;\Delta^{n}=\Delta\times\Delta\times\cdots\times\Delta \tag{5}\] where \(\Delta\) is the probability simplex in \(\mathbb{R}^{l}\), such that \(p^{j}:=(p_{1}^{j},p_{2}^{j},...,p_{l}^{j})\in\Delta\). The idea behind the formulation of \(J\) is that, for a fixed \(k\) (that is, a fixed \((k-1)\)-simplex), a higher penalty is assigned via \(C_{\theta}\) to simplices with a greater diversity of labels and vice versa; we sum these penalties over all \((k-1)\)-simplices and finally take a weighted sum over \(K_{k}\). We therefore expect that a probability distribution \(\{p_{i}^{j}\}_{i\in I,j\in V}\) which minimizes \(J\) over \(\Delta^{n}\) yields a node labeling with the least diversity of labels within each simplex on average. This is consistent with our model assumption that the connection probability within the same label is higher than between different labels in forming the network. Finally, from a computational standpoint, solving the problem (5) may necessitate a suitable initialization of the values \(\{p_{i}^{j}\}\in\Delta^{n}\). We employ \(\mathsf{RW}\) and use its solution as our initial value, which is simple to compute through linear programs. This is the main idea of this paper. We use exponential weights \(w_{k}=\alpha^{k-1}\) with various base values \(\alpha>0\); the default value \(\alpha=1\) yields constant weights. It is worth noting that one can show \(\mathbb{E}|K_{k}|\leq 2^{k-1}|V|^{k}p^{\frac{k(k-1)}{2}}\) for each \(k\geq 2\) in the PPM (constant probability \(p\) on the diagonal of \(P\) and \(q\) off the diagonal with \(p>q\)). This explains why, in many real-world applications, higher-order simplicial structures become increasingly difficult to observe as \(k\) increases: the small probability \(p\) is raised to a power of order \(k^{2}\).
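To make the construction concrete, the following is a minimal Python sketch of evaluating \(J\) in (4) and of solving (5) with an off-the-shelf SLSQP routine (the optimization method described in Sec. IV-B) under the per-node simplex constraints. This is not the authors' implementation (see their repository referenced in Sec. V for that); the helper names and data layout are our own conventions.

```python
import numpy as np
from itertools import product
from math import factorial
from collections import Counter
from scipy.optimize import minimize

def objective_J(simplices_by_k, P, alpha=1.0):
    """Evaluate J in (4). simplices_by_k: {k: list of k-tuples of node indices} (the sets K_k,
    k >= 2); P: (n, l) array with P[j, i] = p_i^j; weights w_k = alpha**(k-1)."""
    J = 0.0
    n, l = P.shape
    for k, simplices in simplices_by_k.items():
        w_k = alpha ** (k - 1)
        for nodes in simplices:
            s = 0.0
            for theta in product(range(l), repeat=k):      # theta runs over S_k
                C = factorial(k)
                for e in Counter(theta).values():          # multinomial coefficient C_theta
                    C //= factorial(e)
                term = 1.0
                for node, label in zip(nodes, theta):
                    term *= P[node, label]
                s += C * term
            J += w_k * s
    return J

def classify_SI(simplices_by_k, P_init, prior, alpha=1.0):
    """Minimize J over the product of probability simplices, i.e. problem (5).
    P_init: (n, l) initialization (e.g. the RW solution); prior: {node: label} revealed labels."""
    n, l = P_init.shape
    free = [j for j in range(n) if j not in prior]

    def unpack(x):
        P = np.zeros((n, l))
        P[free] = x.reshape(len(free), l)
        for j, lab in prior.items():
            P[j, lab] = 1.0                                # labeled nodes are held fixed
        return P

    cons = [{"type": "eq", "fun": lambda x, t=t: x[t * l:(t + 1) * l].sum() - 1.0}
            for t in range(len(free))]                     # sum_i p_i^j = 1 for each free node
    res = minimize(lambda x: objective_J(simplices_by_k, unpack(x), alpha),
                   P_init[free].reshape(-1), method="SLSQP",
                   bounds=[(0.0, 1.0)] * (len(free) * l), constraints=cons)
    P = unpack(res.x)
    return P.argmax(axis=1), P
```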
Also, for a fixed element of \(K_{k}\), the cost of evaluating the corresponding term of the objective function is comparable to that of expanding the multinomial \((a_{1}+\cdots+a_{l})^{k}\), i.e., it involves \(O(l^{k})\) terms. **Example 2**.: _Let \(I=\{1,2\}\) and let \(K_{3}\) be the set of all \(2\)-simplices in a given graph. Fix a member \((j_{1},j_{2},j_{3})\in K_{3}\). The possible combinations for the binary labeling lead to \(2^{3}=8\) terms of the form \(p_{i_{1}}^{j_{1}}p_{i_{2}}^{j_{2}}p_{i_{3}}^{j_{3}}\) with \(i_{1},i_{2},i_{3}\in I\), each of which represents the probability that the nodes \(j_{1},j_{2},j_{3}\) have labels \(i_{1},i_{2},i_{3}\), respectively. We impose a penalty of multinomial order according to the distinct node labels within a given simplex. For a \(2\)-simplex, the objective for the two-label classification, i.e., the innermost summation in (4), consists of the following eight terms:_ \[\binom{3}{3,0}p_{1}^{j_{1}}p_{1}^{j_{2}}p_{1}^{j_{3}}+\binom{3}{2,1}p_{1}^{j_{1}}p_{1}^{j_{2}}p_{2}^{j_{3}}+\binom{3}{2,1}p_{1}^{j_{1}}p_{2}^{j_{2}}p_{1}^{j_{3}}\] \[+\binom{3}{2,1}p_{2}^{j_{1}}p_{1}^{j_{2}}p_{1}^{j_{3}}+\binom{3}{1,2}p_{1}^{j_{1}}p_{2}^{j_{2}}p_{2}^{j_{3}}+\binom{3}{1,2}p_{2}^{j_{1}}p_{1}^{j_{2}}p_{2}^{j_{3}}\] \[+\binom{3}{1,2}p_{2}^{j_{1}}p_{2}^{j_{2}}p_{1}^{j_{3}}+\binom{3}{0,3}p_{2}^{j_{1}}p_{2}^{j_{2}}p_{2}^{j_{3}},\] _where \(\binom{3}{3,0}=\binom{3}{0,3}=1\), \(\binom{3}{2,1}=\binom{3}{1,2}=3\), and \(p_{1}^{j}+p_{2}^{j}=1\) for every \(j\)._ _Now consider an edge-probability matrix \(P\) that has a constant probability \(p\) on the diagonal and \(q\) off the diagonal. Let \(|V|=3N\) and \(I=\{1,2,3\}\) such that the number of nodes corresponding to each of the three labels is \(N\) (i.e., \(|C_{1}|=|C_{2}|=|C_{3}|=N\)). Then the expected number of \(2\)-simplices (that is, \(\mathbb{E}|K_{3}|\)) in a random graph generated under \(P\) is given by_ \[\binom{3}{1}\binom{N}{3}p^{3}+2!\binom{3}{2}\binom{N}{1}\binom{N}{2}pq^{2}+\binom{3}{3}\binom{N}{1}\binom{N}{1}\binom{N}{1}q^{3},\] _where \(\binom{3}{1}\binom{N}{3}p^{3}\) is the expected number of \(2\)-simplices with three vertices in the same group \(C_{i}\), \(i=1,2,3\); \(2!\binom{3}{2}\binom{N}{1}\binom{N}{2}pq^{2}\) is the expected number of \(2\)-simplices with two vertices in the same group and one in another; and \(\binom{3}{3}\binom{N}{1}\binom{N}{1}\binom{N}{1}q^{3}\) is the expected number of \(2\)-simplices with three vertices in all different groups. The sum consists of \(2^{k-1}=4\) terms, with the second term counted twice. Each term is bounded by \(|I|^{k}(|V|/|I|)^{k}p^{\frac{k(k-1)}{2}}=|V|^{k}p^{\frac{k(k-1)}{2}}\) since \(p>q\); hence the sum is bounded by \(2^{k-1}|V|^{k}p^{\frac{k(k-1)}{2}}\)._

## IV Experimental setup

### _Data_

In this study, we generate graphs using the PPM in order to evaluate the proposed objective function on higher-order networks. First, a balanced graph is generated with \(|I|=3\) and \(|V|=150\), that is, with 50 nodes corresponding to each label. The diagonal constant probability \(p\) of the edge-probability matrix \(P\) is evaluated in the range \(0.1\leq p\leq 0.2\), while the off-diagonal constant probability \(q\) is evaluated in \(0.01\leq q\leq 0.025\). Furthermore, for semi-supervised node classification, the RW algorithm (as discussed in Section II-B) is employed to initialize the probability distribution over the nodes, and the prior information ratio (the proportion of nodes whose labels are revealed) is assessed in the range from 0.01 to 0.10.
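Before describing the imbalanced setting, here is a sketch of this balanced data-generation step, assuming networkx's stochastic block model; the specific \((p,q)\) and prior information ratio below are one representative choice from the stated ranges, not values singled out by the paper.

```python
# Generate a balanced PPM graph (|I| = 3, |V| = 150) and reveal a small random label set.
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
sizes = [50, 50, 50]                                  # 50 nodes per label
p, q = 0.15, 0.02                                     # on- and off-diagonal probabilities
probs = [[p if i == j else q for j in range(3)] for i in range(3)]
G = nx.stochastic_block_model(sizes, probs, seed=0)
true_labels = np.repeat([0, 1, 2], sizes)

ratio = 0.04                                          # prior information ratio
revealed = rng.choice(G.number_of_nodes(), size=int(ratio * G.number_of_nodes()), replace=False)
prior = {int(v): int(true_labels[v]) for v in revealed}
```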
Second, for an imbalanced graph with \(|I|=3\) and \(|V|=120\), the numbers of nodes corresponding to the three labels are set to 60, 40, and 20. The initialization method as well as the ranges of \(p\), \(q\), and the prior information ratio remain the same as in the balanced experiment. To further evaluate the method on real data, we utilize the co-purchase dataset of political books [22, 35]. The dataset captures the co-purchase records of 105 political books on Amazon during the 2004 U.S. presidential election period. Each book belongs to one of three labels: conservative (49 books), liberal (43 books), or neutral (13 books). Since edges represent frequent co-purchasing of books by the same buyers, this dataset encapsulates various higher-order networks. We note that the average connection probability between same-labeled nodes and between differently labeled nodes is 0.172 and 0.021, respectively. One node is chosen at random from each label and used as prior information, resulting in a prior information ratio of 0.029. The RW algorithm is again used for node initialization. The configuration of the dataset is presented in Figure 1.

Fig. 1: Configuration of the political book dataset. (a) illustrates the data as a graph, (b) presents the number of \(k\)-simplices, and (c) illustrates the connection probabilities between the three labels: C (49 conservative books), L (43 liberal books), and N (13 neutral books).

### _Optimization method_

Sequential Quadratic Programming (SQP) [39, 40] is an iterative method for constrained nonlinear optimization. The basic idea is to solve a sequence of quadratic programming subproblems that approximate the original nonlinear problem. Each iteration of SQP refines the approximation and moves closer to the solution of the nonlinear problem. Sequential Least Squares Programming (SLSQP) [41, 42], which is employed as our optimization method, is a variant of SQP; it utilizes an approximate quadratic form of the objective function and constraints, transforming the problem into a constrained least-squares problem. SLSQP internally employs a quasi-Newton method to approximate the quadratic form of the objective function. This approach enhances efficiency by using an approximation instead of calculating the actual Hessian matrix (the second derivative of the objective function). It is known that SLSQP requires \(O(n^{2})\) storage and \(O(n^{3})\) time in \(n\) dimensions [40].

### _Error metrics_

_Precision_, _Recall_, _F1-score_, and _Accuracy_ are employed as error metrics to evaluate the performance of the proposed objective function. In multi-class classification, the concepts of these error metrics remain fundamentally the same as in binary classification, but they are computed for each label individually, that is, one-vs-rest. Let TP, TN, FP, and FN be the numbers of true positives, true negatives, false positives, and false negatives, respectively. Then the error metrics are defined by \[\text{\emph{Precision}} =\frac{TP}{TP+FP},\] \[\text{\emph{Recall}} =\frac{TP}{TP+FN},\] \[\text{\emph{F1-score}} =\frac{2*\text{\emph{Recall}}*\text{\emph{Precision}}}{\text{\emph{Recall}}+\text{\emph{Precision}}},\] \[\text{\emph{Accuracy}} =\frac{TP+TN}{TP+TN+FP+FN}.\] In addition, the _Area Under Curve (AUC)_ is employed as an error metric. _AUC_ denotes the area under the curve that plots the true positive rate (defined by TP/(TP+FN), that is, _Recall_) against the false positive rate (defined by FP/(FP+TN)) at various threshold settings.
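The five metrics can be computed per label in the one-vs-rest fashion described above. A minimal sketch using scikit-learn follows; the macro-averaging over labels is our own aggregation choice for illustration, not necessarily the paper's, and the function name is hypothetical.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

def evaluate(y_true, y_pred, P):
    """y_true, y_pred: integer label arrays for the evaluated (e.g. unlabeled) nodes;
    P: (n, l) matrix of predicted probabilities, used for the one-vs-rest AUC."""
    return {
        "AUC": roc_auc_score(y_true, P, multi_class="ovr", average="macro"),
        "Precision": precision_score(y_true, y_pred, average="macro", zero_division=0),
        "Recall": recall_score(y_true, y_pred, average="macro", zero_division=0),
        "F1-score": f1_score(y_true, y_pred, average="macro"),
        "Accuracy": accuracy_score(y_true, y_pred),
    }
```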
_AUC_ values of 1 and 0.5 indicate perfect classification and performance no better than random guessing, respectively.

## V Results

In this section, we evaluate the node classification performance of the proposed higher-order-networks-based method (5) on balanced generated data (Section V-A), imbalanced generated data (Section V-B), and imbalanced real data (the political book data in Section V-C). We compare the performance of (5) (which we will call SI, standing for Simplicial Interactions) with the RW algorithm and with the following algorithm (PI), which uses an objective function involving only pairwise interactions \[\text{Minimize}\;\;J_{2}\;\;\text{over}\;\Delta^{n},\;\text{where} \tag{6}\] \[J_{2}=\sum_{(j_{1},j_{2})\in K_{2}}\sum_{(i_{1},i_{2})=\theta\in S_{2}}C_{\theta}\,p_{i_{1}}^{j_{1}}p_{i_{2}}^{j_{2}}.\] Problems (5) and (6) are both solved using RW for initialization, that is, the initial probability distribution \(\{p_{i}^{j}\}_{i\in I,j\in V}\) is obtained as the solution (3) to the Dirichlet problem (2). The evaluation is conducted over the ranges of \(p\), \(q\), and prior information ratio described in Section IV-A with respect to five error metrics, _AUC_, _Precision_, _Recall_, _F1-score_, and _Accuracy_, where \(p\) and \(q\) denote the on- and off-diagonal constant probabilities of the edge-probability matrix in the PPM, respectively. A summary and discussion of the results are presented in Section V-D. Our implementation is available at [https://github.com/kooeuwho/HOI_objective](https://github.com/kooeuwho/HOI_objective).

### _Balanced experiment_

In this experiment, the on- and off-diagonal constant probabilities \((p,q)\) of the edge-probability matrix \(P\) in the PPM take three values \(p=0.10,0.15,0.20\) and four values \(q=0.010,0.015,0.020,0.025\). The prior information ratio is tested at 0.01, 0.04, 0.07, and 0.10. Because the experiment selects the prior information (the nodes whose labels are revealed) randomly, we conduct 10 experiments for each combination of \(p\), \(q\), and prior information ratio, and evaluate the performance of the objective function based on the average of the 10 experiments with respect to the five error metrics. The overall averaged experimental results are presented in Figure 2. The results demonstrate that the performances of RW and PI are comparable. Thus, we primarily report the performance gain of SI relative to the mean performance of RW and PI with respect to the five error metrics in Figure 3. There is a trend that the gains are greater when \(p\) is lower, \(q\) is higher, and the prior information ratio is lower, implying that SI obtains additional performance gains when the network structure information and the amount of prior information are unclear and limited.

### _Imbalanced experiment_

The hyperparameter setting (\(p\), \(q\), and prior information ratio) is the same as in the balanced experiment. Figure 4 depicts the overall averaged experimental results in terms of the hyperparameters and five error metrics, and Figure 5 depicts the performance gains. The characteristic feature is that the experiments applying SI exhibit lower _Precision_ and higher _Recall_ than those using RW and PI. In essence, this means there are more false positives and fewer false negatives with SI.
Such characteristics are critical when selecting algorithms for medical tests where, while there may be misidentifications of disease presence (false positives), missing an actual case (false negatives) can have serious consequences. In imbalanced experiments, we find that experiments using SI outperform experiments using RW and PI in accurately identifying labels corresponding to a minority of nodes, rather than classifying a majority of nodes under specific labels that encompass many nodes. When analyzing the results of many experiments involving imbalanced data, several factors come into play, such as the significance of minority labels and the balance between _Precision_ and _Recall_. In these circumstances, the _F1-score_ is frequently regarded as a reliable error metric [43]. The average percentage performance gain in _F1-score_ for all hyperparameters in this experiment is 61.49 (Figure 5). ### _Political books_ As described in Section IV-A, the political book data set is imbalanced (49 conservative books, 43 liberal books, and 13 neutral books). For the semi-supervised experiment, one node is randomly chosen from each label, resulting in a prior information ratio of 0.029. The percentage performance gain of the SI compared to the mean performance of RW and PI with respect to _AUC_, _Precision_, _Recall_, _F1-score_, and _Accuracy_ is found to be 29.18, 0.69, 105.04, 52.46, and 50.95, respectively (Figure 6(a)). The trend of the gain being lower in _Precision_ and higher in _Recall_ and _F1-score_ is consistent with the findings in Section V-B. To fully investigate the fact that the political book data has 5-simplices as its maximal dimensional simplex, we evaluate the objective function's performance using up to \(K_{m}\) (the set of \((m-1)\)-simplices in the graph) for each \(m=2,3,4\), and \(6\). That is, we also consider the following optimization \[\text{Minimize }J_{m}\text{ over }\Delta^{n},\text{ where } \tag{7}\] \[J_{m}=\sum_{k=2}^{m}w_{k}\sum_{(j_{1},\dots,j_{k})\in K_{k}}\sum_{(i_{1}, \dots,i_{k})=\theta\in S_{k}}C_{\theta}P_{i_{1}}^{j_{1}}p_{i_{2}}^{j_{2}}\dots P _{i_{k}}^{j_{k}},\] and call it SI-m. Thus SI-2 = PI and SI-6 = SI. We omit SI-\(5\) because the difference between SI-\(5\) and SI-6 is expected to be small due to the scarcity of 5-simplices. In addition, for Fig. 2: Experimental result on balanced generated graphs. The generated graphs consist of three labels, with 50 nodes for each label, totaling 150 nodes. Row and column correspond to homo-connection probability \(p\) and hetero-connection probability \(q\). Blue, green, and red indicate the performance with respect to _AUC_ (first left plot in each panel), _Precision_ (second), _Recall_ (third), _F1-score_ (fourth), and _Accuracy_ (fifth). In each left panel, \(x\)-axis indicates the prior information ratio. Gray bars denote the number of \(k\)-simplices for each \(p\) and \(q\). \(w_{k}=\alpha^{k-1}\), we evaluate performance using \(\alpha\)-values of 1, 1.5, 2, and 2.5. In other words, we assess the performance of the objective function that assigns more weights as \(k\) increases. The overall experimental results are shown in Figure 6(b),(c). The _AUC_ performance gain of SI-\(3\), SI-\(4\), and SI over the mean performance of RW and PI is 30.94, 28.34, and 31.64, respectively. For _Precision_, 2.27, 3.53, and 3.12. For _Recall_, 110.16, 114.31, and 112.91. For _F1-score_ and _Accuracy_, 55.53, 58.00, 57.19, and 53.78, 56.61, 55.19, respectively. 
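In terms of the earlier sketches, the SI-\(m\) variants and the weight base \(\alpha\) evaluated here amount to a simple truncation of the simplex dictionary; the variable names below (`K_all`, `P_rw`, `prior`) are hypothetical placeholders for the clique complex, RW initialization, and revealed labels.

```python
# Truncated objectives SI-m in (7): keep only K_k with 2 <= k <= m (SI-2 = PI, SI-6 = SI here).
def truncate(simplices_by_k, m):
    return {k: s for k, s in simplices_by_k.items() if 2 <= k <= m}

# for m in (2, 3, 4, 6):                     # SI-2 (= PI), SI-3, SI-4, SI-6 (= SI)
#     for alpha in (1.0, 1.5, 2.0, 2.5):     # exponential weight base w_k = alpha**(k-1)
#         labels_m, P_m = classify_SI(truncate(K_all, m), P_rw, prior, alpha=alpha)
```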
It is found that the ratio of increase in performance gain diminishes considerably as \(k\) increases. Furthermore, the performance gains of _AUC_ for \(\alpha=2.5\) against for \(\alpha=1,1.5,\) and \(2\) is found to be 0.011, 0.004, and -0.002, respectively. For _Precision_, 0.019, 0.011, and 0.001. For _Recall_, 0.039, 0.022, and 0.002. For _F1-score_ and _Accuracy_, 0.029, 0.016, 0.001, and 0.026, 0.020, 0.003, respectively. This result demonstrates that placing more weight on higher-order networks can be beneficial in achieving additional performance gains. It also shows that the weight is one of the important hyperparameters in the proposed higher-order networks-based objective function. ### _Summary and discussion_ We examined the performance of the proposed objective function (4) in a variety of contexts, including balanced, imbalanced, and political books datasets. Several notable experimental results and discussions are listed below. First, it is evident that nodes corresponding to each label become more distinguishably separated in node classification tasks as the homo-connection probability (\(p\)) increases, the hetero-connection probability (\(q\)) decreases, and as the prior information ratio increases, facilitating community detection. However, the proposed function showed significant performance gains in the opposite scenario (lower \(p\), higher \(q\), and smaller prior information ratio) over all error metrics. This trend is consistent across both balanced and imbalanced experiments, implying the proposed objective function can be used in difficult classification settings, such as detecting overlapping communities. Second, in imbalanced datasets (covering both generated and real data sets), experiments using all higher-order networks (dubbed SI) showed a performance gain with lower _Precision_ but higher _Recall_ and _F1-score_ than the counterparts such as RW and PI. Because of this feature of the proposed objective function, it is applicable in many domains where precise classification of false negatives is critical, even if it comes at the cost of some false positives. Third, according to the results from the political book real data, as we applied the objective function up to \(K_{k}\) (the set of \((k-1)\)-simplices in the graph), there is an improving trend as \(k\) increases with respect to the five error metrics. However, this rate of growth showed diminishing returns. This implies that using all higher- order networks in the proposed objective function may be inefficient when compared to using up to a specific size of simplices. There is a need for research methodologies that select a small number of simplices while maintaining comparable node classification performance. Fourth, increasing the weight parameter (\(w_{k}\) in (4)) assigned to each \(K_{k}\) as \(k\) increases resulted in additional performance gain (Figure 6(b)). While this study used exponential weight parameters, further research into other types of weight parameters and their effects on performance is desired. Fifth, the proposed objective function includes a domain constraint: the condition \(\sum_{i=1}^{l}p_{i}^{j}=1\) must be satisfied, where \(p_{i}^{j}\) denotes the probability of node \(j\) having label \(i\). Although there are ongoing neural network-based studies for optimizing objective functions with constraints [44], it is expected that when there are many nodes and labels, the number of parameters required for neural networks will be substantial. 
Because this study used the SLSQP optimization method without GPU Fig. 3: Performance gains of the balanced experiments applying SI in comparison to the mean performance of the RW and PI. Orange, yellow, lime, purple, and violet indicate _AUC_, _Precision_, _Recall_, _F1-score_, and _Accuracy_, respectively. Left, middle, and right correspond to the gains with respect to all candidates of homo-connection probability, hetero-connection probability, and prior information ratio, respectively. acceleration, computational efficiency for large datasets with many nodes is limited. The development of neural network-based constrained optimization methodologies will be crucial for applying the proposed objective function to large real-world datasets. Finally, for initialization in our node classification task (5), we used random walk and equilibrium measure-based Dirichlet boundary value approach. Many previous probability-based node classification studies focused on pairwise interactions or random walks, but they frequently overlooked higher-order interactions (HOI). Our findings show that when combined with HOI, the objective function can outperform in various PPM parameters when compared to the objectives relying solely on pairwise interactions. As a result, we believe that using previously trained node distributions from probability-based research as initial node distributions can improve performance when applying HOI to our proposed objective function. ## VI Conclusion In this paper, we propose a probability-based objective function for semi-supervised node classification that takes advantage of simplicial interactions of varying order. Given Fig. 4: Experimental result on imbalanced generated graphs. The generated graphs consist of three labels, with 60,40, and 20, totaling 120 nodes. Row and column correspond to homo-connection probability \(p\) and hetero-connection probability \(q\). Blue, green, and red indicate the performance with respect to _AUC_ (first left plot in each panel), _Precision_ (second), _Recall_ (third), _F1-score_ (fourth), and _Accuracy_ (fifth). In each left panel, \(x\)-axis indicates the prior information ratio. Gray bars denote the number of \(k\)-simplices for each \(p\) and \(q\). that densely connected nodes are likely to have similar properties, our proposed loss function imposes a greater penalty when nodes connected via higher-order simplices have diversified labels. For a given number of labels \(l\), each node is equipped with an \(l\)-dimensional probability distribution. Using the Sequential Least Square Programming (SLSQP) optimization method, we seek the distribution across all nodes that minimizes the objective function under the constraint that the sum of node probabilities is one. Evaluations of our proposed function are carried out on balanced and imbalanced graphs generated using planted partition model (PPM), as well as on the political book real dataset. In challenging classification scenarios, where the probability of connections within the same label is low, the probability of connections between different labels is high, and there are fewer nodes with pre-known labels, incorporating higher-order networks into our proposed function outperforms results obtained by using only pairwise interactions or the random walk-based probabilistic method. Notably, while _Precision_ is moderate for imbalanced data, both _Recall_ and _F1-score_ are significantly improved with our approach. 
This suggests potential applications in contexts like medical tests where minimizing false negatives, even at the Fig. 5: Performance gains of the imbalanced experiments applying SI in comparison to the mean performance of the RW and PI. Orange, yellow, lime, purple, and violet indicate _AUC_, _Precision_, _Recall_, _F1-score_, and _Accuracy_, respectively. Left, middle, and right correspond to the gains with respect to all candidates of homo-connection probability, hetero-connection probability, and prior information ratio, respectively. Fig. 6: Experimental results on political book real dataset. (a) illustrates the performance change when applying up to \(k\)-simplices in the objective function as \(k\) increases. (b) depicts the Accuracy gain when using SI (= SI-6) over RW, PI, SI-3, and SI-4 with varying weight parameter. (c) compares performance based on the weight base and the size of the applied simplices. expense of some false positives, is imperative. This study was framed within the optimization of functions with constraints, presenting a limitation: we cannot currently conduct GPU-based experiments, making it difficult to apply to real datasets with a large number of nodes. The advancement of research in neural network-based constrained optimization will allow us to apply our proposed objective function to large-scale real-world datasets, paving the way for future work. ## Acknowledgments Eunho Koo gratefully acknowledges the support of the Korea Institute for Advanced Study (KIAS) under the individual grant AP086801. Tongseok Lim wishes to express gratitude to the Korea Institute of Advanced Study (KIAS) AI research group and the director Hyeon, Changbong for their hospitality and support during his stay at KIAS in 2023, where parts of this work were performed.
2304.12844
Comprehensive Characterization of a State-of-the-Art Apparatus for Cold Electromagnetic Dysprosium Dipoles
We developed a new advanced ultra-cold Dysprosium (Dy) apparatus, which incorporates a quantum gas microscope (QGM) with a resolution of a quarter micrometer. The QGM and the cooling and trapping regions are within the same vacuum glass vessel assuring simple atom transport between them. We demonstrate the essential experimental steps of laser and evaporative cooling, lattice loading, transporting and precise positioning of a cloud of the bosonic isotope 164 Dy at the QGM focal plane. Preliminary basic characterization of the QGM and future plans in enabling its full capacity are outlined. We also present a feasible platform for simulating complex spin models of quantum magnetism, such as XYZ model, by exploiting a set of closely spaced opposite parity levels in Dy with a large magnetic and electric dipole moment. We isolate a degenerate isospin-1/2 system, which possesses both magnetic and electric dipole-dipole coupling, containing Ising, exchange and spin-orbit terms. The last gives rise to a spin model with asymmetric tunable rates, dependable on the lattice geometry.
Gregor Anich, Rudolf Grimm, Emil Kirilov
2023-04-25T14:17:19Z
http://arxiv.org/abs/2304.12844v1
Comprehensive Characterization of a State-of-the-Art Apparatus for Cold Electromagnetic Dysprosium Dipoles ###### Abstract We developed a new advanced ultra-cold Dysprosium (Dy) apparatus, which incorporates a quantum gas microscope (QGM) with a resolution of a quarter micrometer. The QGM and the cooling and trapping regions are within the same vacuum glass vessel assuring simple atom transport between them. We demonstrate the essential experimental steps of laser and evaporative cooling, lattice loading, transporting and precise positioning of a cloud of the bosonic isotope \({}^{164}\)Dy at the QGM focal plane. Preliminary basic characterization of the QGM and future plans in enabling its full capacity are outlined. We also present a feasible platform for simulating complex spin models of quantum magnetism, such as XYZ model, by exploiting a set of closely spaced opposite parity levels in Dy with a large magnetic and electric dipole moment. We isolate a degenerate isospin-1/2 system, which possesses both magnetic and electric dipole-dipole coupling, containing Ising, exchange and spin-orbit terms. The last gives rise to a spin model with asymmetric tunable rates, dependable on the lattice geometry. ## I Introduction Low-temperature systems with dominant dipole-dipole interaction (DDI), which is both long range and anisotropic [1], have recently sparked an enormous interest because of intriguing prospects related to the simulation of condensed matter systems, quantum information and ultracold chemistry [2]. A simple example already containing remarkably complex physics is a system of free dipoles in a 2D geometry [3], with a direct resemblance to the behavior of a 2D electron gas in metal-oxide-semiconductor field-effect transistors [4]. At higher temperatures the system presents a unique opportunity to study the Berezinskii-Kosterlitz-Thouless (BKT) transition, expected to take place between a normal fluid and superfluid [3]. Another unique phase, coined microemulsion phase [4], should at low temperatures mediate the transition between a Fermi liquid and a Wigner crystal. Models of quantum magnetism are a paradigms of key importance in strongly correlated many-body physics [5]. The theoretical outset when one tries to recast those models with cold dipolar particles, occupying the sites of an optical lattice, lies in the projection of the DDI operator among a few isolated magnetic sublevels. This results in spin-spin coupling, which can simulate a plethora of lattice spin models utilized to describe quantum magnetism such as the generalized XXZ [6], generalized XYZ model [7; 8], or models of high-Tc superconductivity [9] such as the t-J model [10]. To give an instance, some of the above experimental platforms allow for precise tuning of the mutual weights of the spin exchange and Ising terms comprising the XXZ Hamiltonian. This is essential in order to map out the full phase diagram, and possibly observe the expected phase transition from paramagnetic spin fluid to Neel-ordered antiferromagnet. The possibility to quench the system across the different regions of its phase diagram should spontaneously break global symmetries and therefore enable the Kibble-Zurek mechanism [11; 12]. Furthermore of particular interest are magnetic systems with implemented competing interactions that cannot be simultaneously fulfilled, and therefore generate frustration [13]. 
Because of the anisotropy of the DDI itself or lattice geometry, which conspire with quantum fluctuations, classical order even at zero temperature is prevented. Such systems, coined quantum spin liquids [13; 14; 15], exemplify some of the most demanding problems in quantum magnetism [16; 17]. Adding disorder to the periodic lattice, either by projecting a disordered potential landscape or relying on random vacancies naturally driving disorder in the system, opens up an opportunity to investigate many-body localization, a phenomena largely unexplored [18]. Lastly, naturally embedded in the DDI is also spin-orbit coupling (SOC) with a novel many-body character [19]. The excitations generated by such SOC possess non-trivial topology and a Berry phase reminiscent to the one responsible for the unconventional quantum Hall effect in bilayer graphene [20]. Uncovering the above phenomena in the laboratory will elucidate their exotic properties, challenging their theoretical description and most probably will also lead to unexpected findings. Under the ultra-cold atomic physics umbrella, there exist a few experimental platforms to tackle the above physics. One of the most mature approaches is based on ultracold diatomic molecules, exploiting the existence of nearby rotational opposite parity states (OPS) in the ground electronic manifold (\({}^{1}\Sigma\)) of such linear rigid rotor molecules. The atomic constituents of these diatomic molecules are typically chosen from the alkali family due to the ability to prepare them at degeneracy [21; 22; 23; 24; 25; 26]. The OPS can subsequently be polarized with static electric fields or a microwave (MW) radiation, which essentially provides a system of electric dipoles oriented in the laboratory frame in the vicinity of few Debye (at full polarization 0.1 D for LiNa to \(\sim\)4 D for NaCs, LiCs). Additionally tunability of the dipole strength and even advanced tailoring of the dipolar potential itself [27; 28; 29] is possible. The only drawback is the extreme complexity and the difficulty to prepare samples at low enough entropy and therefore a filling fraction needed to enable percolation [30]. High-lying Rydberg states in atoms provide yet another example of a relatively young and versatile toolbox toward engineering of various spin models [31; 32; 33; 34]. The short lifetime of laser-accessible Rydberg states due to spontaneous decay and their susceptibility to MW blackbody state transfer have been remedied by utilizing high-lying circular Rydberg states, cryogenic environment and even spontaneous emission inhibiting structures [35]. Many proposals exist also with simple alkali atoms where the magnetic interaction appears as a super-exchange process which scales as \(t^{2}/U\) (\(t\) is the tunneling rate, \(U\) on-site interaction energy). The fact that the magnetism is generated through motion, demands temperatures in the pK range [36]. Atoms loaded in a \(p\)-band utilizing the same super-exchange mechanism can realize even more demanding models of magnetism, where apart from low temperatures needed, an additional predicament is the unit filling of the \(p\)-band [7]. Atoms with a high magnetic dipole moment offer another path for the same objective [37]. The simplicity of working with an atom is highly auspicious although there is an added complexity in order to achieve tuning of the dipole strength and change the sign of the DDI [38] needed to tune the rates in the particular spin model. 
The magnetic moment is also weaker than its electric counterpart, reaching an equivalent of 0.1 D for the most magnetic atom of Dy. The above systems are also conceptually different, especially if multiple Zeeman quantum states are allowed to participate in the DDI. Namely the induced electric dipole scaling with the applied field compared to its magnetic counterpart is different, given that the last one is always on. Some unique to the magnetic moment experiments were performed on dipolar relaxation [39] and resonant demagnetization in dipolar condensates [40]. In this article, we describe in detail a new cold-atom experimental apparatus based on Dy. Our main research goal is to simulate spin models of quantum magnetism. To achieve these goals we are set up to pursue a familiar strategy, utilizing the high magnetic moment available in the ground state of the Dy atom. Additionally we offer an alternative platform, which is attractive for its simplicity and diversity and offers some further means to approach an even more exotic spin systems, currently out of reach in the ultra-cold atoms community or demanding great experimental complexity. The platform is based on a set of closely-spaced OPS in Dy, that can be efficiently populated by a Stimulated Raman Adiabatic Passage (STIRAP) technique, and subsequently manipulated with a MW radiation. The ensuing toolbox, borrows from both exhaustive magnetic DDI (mDDI) and electric DDI (eDDI), including even the embedded spin-orbit interaction constituent. This last component allows for the simulation of the generalized XYZ spin model, with independently tunable and lattice-dependent rates. Experimentally, the apparatus contains a few unique and uncommon design features and engineering choices. A main attribute is a high-resolution QGM, capable of resolving a quarter micron features, comparable to the site spacing in short wavelength optical lattices. Another unique feature is the compactness of the apparatus, containing the laser cooling region and the QGM within the same glass vacuum vessel. Generally, the bulk of the article describes prior measurements and optimization procedures using the \({}^{164}\)Dy isotope, as a prerequisite for pursuing the above goals. The manuscript is organized as follows: First, as a main motivation, we present two approaches capable of simulating spin models of quantum magnetism and discuss their respective advantages. The second approach based on the OPS has been given the major attention in Sec. II due to its unique features. We then describe in detail the apparatus itself (Sec. III), focusing on its main constituents: Vacuum system, magnetic field coils, laser systems and QGM. Section IV describes the steps nec Figure 1: Relevant optical transitions used in the experiment. Two options for STIRAP to the OPS are shown. The inset shows a zoom into the OPS with a possibility to engineer a degenerate isospin-1/2 system coupled both with eDDI and mDDI. The relevant magnetic sublevels of the OPS are shown at no field (black), Zeeman shifted (blue) and dressed by the linearly polarized MW (red). The twin dressed state \(|9,-9\rangle\) is not shown. The degenerate isospin-1/2 system consists of the unperturbed state \(|1\rangle\) and dressed state \(|2\rangle\). essary to achieve quantum degeneracy and prepare a 2D sample of Dy atoms in the focal plane of the QGM: Transverse cooling and Zeeman slower, Magneto-optical trap, Evaporative cooling and transport and Characterization of optical lattices. 
In Section V we discuss the first preliminary measurements with the QGM. Finally, in Sec. VI we discuss perspectives and conclude. ## II Survey of experimental possibilities We would like to begin by underlining the flexibility of the current setup. We are set to pursue a well established approach to simulate quantum spin models using the ground state magnetic moment of Dy of 10 \(\mu_{B}\)[37; 41]. This approach has both scientific and technical advantages. The long lifetime of the dipoles and the possibility to study, on the same ground, spin-spin interaction and tunneling is an asset of this approach. Adversely to generate significant mDDI one may need lattices with spacing in the UV range, for which our apparatus is suited. In the ground state of Dy one can isolate a region in the UV in the range 350-385nm where the dynamic polarizability (DP) is negative and large (-1000 to -500 a.u.). Furthermore it is free of resonances and heating should be minimal. This approach may bring new challenges, such as possible ionization when the atoms are cycled during the imaging process in such a deep UV lattice. The unknown DP of the upper state, excited by the imaging photon, in the UV and most importantly the feasibility to directly resolve UV-lattice spacing with our QGM. The last problem may be remedied utilizing super-resolution techniques [42; 43], an approach pursued by other groups [44]. A different, more exotic experimental platform inspired by spectroscopic findings in [45], can be pursued in parallel with the above scheme based on ground state mDDI. The proposal capitalizes on the exceptional existence in Dy electronic structure of nearby OPS with long lifetime and a large reduced transition electric and magnetic dipole moments of 8 D and 13 \(\mu_{B}\) respectively. The states with angular momentum J=10 ([\(Xe\)]4\(f^{10}\)6\(s\)6\(p\)) and J=9 ([\(Xe\)]4\(f^{10}\)5\(d\)6\(s\)) are spaced 35GHz apart and could be mixed by a MW or DC electric field (Fig. 1). We assume that if two Dy atoms prepared in the OPS end up in the same lattice site a rapid inelastic loss will occur which will cause the atoms to leave the trap. Therefore for models utilizing the OPS we have to assume a hard-core constraint based on the quantum Zeno effect, meaning no more than one particle per well can reside, similar to chemically reactive molecules [46]. We envision a scenario where two MW dressed states composed by magnetic sublevels from the above OPS can be brought to degeneracy and subsequently coupled by both magnetic and eDDI (Fig. 1 inset). This same scenario was proposed for symmetric top molecules, where the OPS consists of neighboring opposite parity rotational levels, where the linear Zeeman effect, available to us, is substituted with a linear Stark effect [8]. The last feature is possible provided that such an exotic molecule has a non-zero projection of the rotational angular momenta on the symmetry axis of the molecule. In Ref. [8], the DDI interaction was also solely a eDDI. In our scheme, we induce level crossing between neighboring two m-sublevels for the \(J=10\) state (stretched \(m_{J}=-10\) and \(m_{J}=-9\) states for example, but not limited to this pair of states) at some magnetic field, utilizing a linearly polarized MW and the linear Zeeman shift. In the formed degenerate isospin-1/2 basis the eDDI coupling is possible due to the MW dressing, handing some opposite parity portion to one of the states (see for details Appendix A). 
The fraction of the opposite parity component can be varied with MW Rabi frequency and detuning, by borrowing probability amplitude from the J=9 state. One can then write such a long-range interacting system as an extended XYZ spin model, with tunable and lattice dependent rates. Similar platforms have been theoretically proposed only with complex laser-dressing schemes of Rydberg atoms [31], diatomic molecules with \({}^{2}\Sigma_{1/2}\) electronic ground state [29] or with symmetric top molecules [8; 47] (such as methyl fluoride CH\({}_{3}\)F or hydroxyl radical OH), still to be brought to quantum degeneracy in order to realize their full scientific and technological potential [48; 49]. In general the XYZ spin model in 2D is very computationally intensive to simulate for large systems. In the limiting case of 1D the predicted phase diagram is already extremely rich [7] containing a gapless floating phase non-existent for the symmetric XXZ spin model. The transitions to this phase are also intriguing, including a BKT transition to antiferromagnetic (AFM) phase on one side and a commensurate-incommensurate I-IC [50] transition to a spin-flip phase on the other. The width of this phase is proportional exactly to the asymmetry in the X and Y rates [7]. In 2D, a classical Monte Carlo simulation of few lattice sites of the XYZ model with added Dzyaloshinskii-Moriya interaction reveals a rich interplay between magnetic orders and spin spiral orders [51]. Experimentally, populating the OPS can be performed with STIRAP or Raman pulses. We have a chosen a \(\Lambda\)-level structure incorporating the excited state whose [\(Xe\)]4\(f^{10}\)5\(d\)6\(s\)8,1\({}^{\circ}\) character and J=8 angular momentum provides a decent transitional dipole moment of 0.3 a.u. to the J=9 OPS member (Fig. 1). We can draw conclusions about the DP of the J=10 state of the OPS from a nearby state with the same character at 15972cm\({}^{-1}\) above the ground state [52]. At 1064nm the DP was calculated to be \(\sim\)140 a.u., which is comfortably close to the measured ground state value of 184a.u.[53]. Additionally, the expected large vectorial and tensorial parts of the DP could be exploited, to achieve \({}^{\prime\prime}\)magic wavelength\({}^{\prime\prime}\) conditions for the ground state and OPS [54]. To investigate the above phases we have developed a sensitive diagnostic tool, namely a QGM, capable of single atom spin- and site-resolved in situ detection and therefore measurement of multi-point spatial correlations [55]. The optical resolution of 0.23\(\mu\)m and similar depth of focus of the QGM, makes the apparatus particularly tuned to a single or double layer 2D structures. ## III Experimental setup In this section we give a detailed overview over the main experimental tools. In section III.1 we discuss the general anatomy of the experiment. We then describe in great detail our vacuum system and magnetic field coils (Sec. III.2), laser systems (Sec. III.3), QGM design (Sec. III.4) and optical lattices implementation (Sec. III.6). ### General experimental structure The main apparatus consists of an effusive oven, immediately followed by a transverse cooling (TC) section, Zeeman slower (ZS) and a single glass cell, dedicated to the main cooling and trapping region and containing the QGM itself. There are two laser cooling systems, two STIRAP laser systems, and three high power lasers for dipole traps and generation of the optical lattices. 
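Although the remaining sections focus on the apparatus itself, the \(\Lambda\)-scheme STIRAP transfer to the OPS that these laser systems will drive can be illustrated with a minimal numerical sketch. The parameters below are purely illustrative (not the experimental Rabi frequencies); the point is that, with counterintuitively ordered Gaussian pulses, the population follows the dark state into the target state with negligible transient population in the lossy excited state.

```python
# Three-level STIRAP in the rotating-wave approximation, states ordered [ground, excited, target].
import numpy as np
from scipy.integrate import solve_ivp

Omega0 = 2 * np.pi * 10.0        # peak Rabi frequencies (arbitrary inverse-time units)
Delta = 0.0                      # one-photon detuning; two-photon resonance assumed
tau, T = 20.0, 15.0              # pulse width and Stokes-to-pump delay

def Omega_P(t):                  # pump pulse (ground <-> excited), arrives second
    return Omega0 * np.exp(-((t - T / 2) / tau) ** 2)

def Omega_S(t):                  # Stokes pulse (excited <-> target), arrives first
    return Omega0 * np.exp(-((t + T / 2) / tau) ** 2)

def schrodinger(t, c):
    H = 0.5 * np.array([[0.0, Omega_P(t), 0.0],
                        [Omega_P(t), 2 * Delta, Omega_S(t)],
                        [0.0, Omega_S(t), 0.0]], dtype=complex)
    return -1j * H @ c

c0 = np.array([1, 0, 0], dtype=complex)          # all population starts in the ground state
sol = solve_ivp(schrodinger, (-80.0, 80.0), c0, max_step=0.05)
pops = np.abs(sol.y) ** 2
print("final populations |g>, |e>, |target>:", np.round(pops[:, -1], 3))
```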
All the lasers that need to be frequency stabilized are referenced to an ultra-low expansion (ULE) cavity. All the necessary electronics consisting of digital and analog cards, direct digital synthesizers (DDS), servo electronics (PIDs) for laser frequency, power and coils current stabilization are home made and controlled by a digital card (NI 6533 Digital I/O Card). Three cameras (labeled 1 to 3) are used to take images at different Dy cloud locations in the glass cell. In the central region of the cell, where laser cooling and trapping takes place initially, we use absorption imaging both horizontally and vertically with camera 1 (Andor 5.5 Zyla sCMOS). To precisely position the sample above the QGM we perform both absorption and fluorescence imaging horizontally with camera 2, identical to camera 1. The fluorescence ce collected by the QGM, which oriented vertically (Figs. 2 and 3), is imaged on a camera 3 (Andor-iXon-897-EXF). The software, written on Visual C++ [56], writes the commands on the NI card (operating at 1MHz) and communicates with the data acquisition computer. The software for setting the cameras properties, acquire and present the atomic images and data analysis, is self-written in Python. ### Vacuum system and magnetic field coils The vacuum system is separated in three sections (Fig. 2). The first section contains an oven (1WEZ 40-10-22-K-HL-C-0, MBE Komponenten) used generally for molecular beam epitaxy. It has a motorized shutter, an independent hot lip temperature control and a water-cooling jacket positioned around a tantalum crucible, containing pellets of Dy (about 15g). The front of the crucible (100mm length, \(\diameter 12\)mm) has a long thin tube (40mm length, \(\diameter 2.5\)mm) to collimate the Dy beam. During operation the oven is held at 1100\({}^{\circ}\)C and the shutter is opened only during the magneto-optical trap (MOT) loading stage. At this ratio of the length to diameter of the tube one theoretically expects a \(\pm 4.2^{\circ}\) atomic cone. The oven is followed by a TC section formed by a standard vacuum cross, with four anti-reflection coated (AR) glass ports (Vaqtec). In this section we use a NexTorr D100-5 combined pump with getter (St-172, 100 l/s for H\({}_{2}\)) and embedded ion pump (6 l/s for Ar). Given that the Dy deposited on the walls also serves as an additional getter the pressure in this section is less than 1\(\times 10^{-11}\) mbar, as indicated by the vanishing ion pump current. We don't incorporate additional ion gauges to monitor the pressure to decrease complexity and cost. Before entering the ZS section, the Dy beam crosses the orifice of an all-metal pneumatic gate valve (VAT GmbH, 48132 Figure 2: View of the main vacuum apparatus, consisting of a Dy atomic-beam oven, transverse cooling section, ZS and main glass cell. Regarding the magnetic field coils, only the bottom combination under the glass cell (two sets of gradient and offset coils for both MOT and QGM regions) is shown. CE44-0002). The valve provides necessary detachment of the oven from the main apparatus during a Dy reloading procedure, and allows for an emergency shut down if the pressure in that region exceeds an alarming value. The next vacuum section is the ZS, which consists of two stainless steel (alloy 316L) cylindrical pieces with an internal conical surface, formed by electrical discharge machining. The pieces are welded together in one piece with total length of 560 mm with small and large diameters of 4 and 14 mm, respectively. 
This conical shape matches, but doesn't restrict, the atomic beam shape leaving the exit aperture of the effusive oven and arriving at the trapping MOT position. On the outside, the volume remaining between the conical surface and an enclosing cylinder serves as a water cooling section. A simple geometric constraint forces the water flow to travel along the length of the cylinder twice before exiting, and therefore assures proper cooling along the full length. The third vacuum section is a fused-silica glass cell (Precision Glassblowing, 8 horizontal ports \(\diameter 50.8\)mm, top and bottom \(\diameter 101.6\)mm, Fig. 3a) in the shape of an octagon with one side port used as entrance for the atoms and the rest for laser cooling, dipole traps and optical lattices. All windows are corrugated with surface relief microstructures providing AR coating with reflection coefficient \(<\)0.25% in the range 250-1600nm and at angles of incidence 0-60\({}^{\circ}\) (produced by Telaztec). The standard glass to metal transition used in the vacuum industry to bridge between glass vessels and a ConFlat (CF) vacuum seal standard, for this size cell, would be impractically long. Therefore, we designed a custom adapter flange, which contains on one side the CF seal and connects to the glass on the other side with an indium seal. The indium seal is made out of round cold-bonded indium wire (\(\diameter\) 1mm) pressed between the adapter flange and a flat glass lip at the entry of the sell (Fig. 3a). Even with the current design the distance from the end of the ZS and MOT center is 23 cm, substantially longer compared to standard setups [57, 58]. The vacuum here is provided by a getter-ion pump NexTorr D500-5. The pump is positioned about 25 cm from the glass cell at a 45\({}^{\circ}\) angle to the cell entrance, in order to avoid thermally stressing the cell during the getter activation at 600\({}^{\circ}\)C. Nevertheless due to the relative large tube (CF63) used, the pumping speed for hydrogen at the MOT center is almost completely utilized. The pressure in the cell is also \(<1\times 10^{-11}\) mbar. During the initial bake-out the cell was heated to 120\({}^{\circ}\)C for two weeks by wrapping a heat tape around an aluminum enclosure concentric with it. A simple thermal simulation assured us that the cell does not experience large thermal gradients during the NexTorr activation. The glass cell, made out of pure glass with no bolting structures, is not suited for positioning in-vacuum components. We solve this problem by positioning two semicircles out of machinable ceramic (macor, Corning Inc.) with devoted cuts to accommodate the MOT light and the positioning of the desired in-vacuum elements (Figs. Figure 3: Anatomy of the main glass cell illustrating a) the positioning of laser cooling region (1), ZS mirror (2) and QGM (3). Two spring loaded (4) Macor semi-circles (5), provide a stable base for positioning and securing of components (see text). The connection to the rest of the vacuum chamber is accomplished via an indium seal (6) and a custom adapter flange (7). b) A cross section of the QGM, consisting of a beveled front plano-spherical lens (8) and a second plano-aspheric lens (9). The two lens are held by a Macor enclosure. 3 and 4). The two semi-circles were carefully placed in the cell with a vacuum suction tip attached to a precision 3D linear micrometer stage. Once put down on the bottom glass surface they are spring-loaded and seize to the side surfaces. 
The adhesion to the bottom glass and seizing to the side surfaces creates a stable base and allows positioning of the QGM and a ZS in-vacuum mirror into a devoted slots. In order to compensate ambient magnetic fields and to apply small (\(<\)2G) uniform offsets, a pairs of large compensation coils are symmetrically installed around the glass cell. The coils have the shape of rectangular cuboid (\(xyz\) dimensions \(944\times 1174\times 674\)mm, coordinates defined on Fig. 4 as a reference for the whole manuscript), and are driven by a home-made bi-polar current source based on high-current paralleled operational amplifiers. The fields obtainable with these coils are \((0.8,1.5,2.15)\) G for the \(xyz\) directions. We additionally employ a set of offset/gradient coils (Fig. 2), concentric with the \(z\)-axis of symmetry of the MOT and QGM regions respectively. The offset coils for both the MOT and QGM regions are in Helmholtz configuration. The holding water-cooled structure resembles two circular loops (with centers 34mm apart as the distance between QGM and MOT axis), touching internally at a point. Additionally we have an option to apply a uniform magnetic field in the \(y\)-direction (Fig. 4) with an independent set of coils centered around the QGM \(y\)-symmetry axis. The obtainable fields and fields gradients of those coils, driven by identical bi-polar supplies as for the compensation coils, are \((B_{0}^{z},dB_{gr}^{z}/dz)\)=(18G, 3 G/cm) and \(B_{0}^{y}\)=6G. To prevent induced eddy currents in all metallic structures forming continuous loops nearby the experiment, such as aluminum structures holding the coils, they are sliced and the removed material is substituted with a non-conductive plastic joint to maintain the structural stability. ### Laser systems The laser systems used in the experiment are depicted in Fig. 5. We separate them in three different groups based on the technology employed. The first group is based on diode laser technology. We use two home-made, amplified and frequency-doubled diode laser systems for ZS, TC, imaging and for the first STIRAP branch. To provide the light for the ZS, TC and imaging we utilize the 421-nm transition \(4f^{10}6s^{2}(J=8)\leftrightarrow 4f^{10}6s6p(J=9)\) in Dy (natural linewidth \(\Gamma_{421}\)=2\(\pi\times\)32.2MHz [59], saturation intensity is \(I_{421}=56.3\)mW/cm\({}^{2}\)). The laser system consists of a cat-eye extended cavity diode laser (ECDL, [60]) operating at 842nm (Eagleyard AR coated diode, 50mW), tapered amplifier (TA) (Eagleyard, 2-W chip) and a bow-tie frequency doubling stage based on LBO crystal (Raicol Crystals). The ECDL delivers 35mW to the TA which amplifies it to 1.4-W useful output before the frequency doubler. We utilize type-I critical phase-matching and on resonance attain 600mW of blue light at 421nm. We observe a slight deterioration over the course of a year, possibly related to the coating of the crystal, although the doubler is in a sealed box. The doubling cavity is stabilized relative to the ECDL with a polarization-based Hansch-Couillaud technique [61]. The laser system for the first STIRAP branch addresses the \(4f^{10}6s^{2}(J=8)\leftrightarrow 4f^{10}6s6p(J=8)\) transition at 417nm [45] and is identical to the one described above, except because of the low power requirement, a low power 1W TA chip (Eagleyard) is employed. We obtain 40mW at 417-nm after the second harmonic generation. 
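As a quick consistency check of the quoted numbers (standard two-level formulas, not taken from the paper), the saturation intensity \(I_{\rm sat}=\pi hc\Gamma/3\lambda^{3}\) reproduces the 56.3 mW/cm\(^2\) figure for the 421-nm line, and the Doppler temperatures \(T_{D}=\hbar\Gamma/2k_{B}\) of the broad 421-nm line and of the narrow 626-nm line (linewidth quoted in the next paragraph) come out near 0.8 mK and 3.3 \(\mu\)K, respectively.

```python
import numpy as np

h, hbar, c, kB = 6.62607e-34, 1.05457e-34, 2.99792458e8, 1.380649e-23

def doppler_temperature(gamma):                    # T_D = hbar * Gamma / (2 kB)
    return hbar * gamma / (2 * kB)

def saturation_intensity(wavelength, gamma):       # I_sat = pi * h * c * Gamma / (3 lambda^3)
    return np.pi * h * c * gamma / (3 * wavelength ** 3)

G421 = 2 * np.pi * 32.2e6
G626 = 2 * np.pi * 136e3
print(f"421 nm: I_sat = {saturation_intensity(421e-9, G421) / 10:.1f} mW/cm^2, "
      f"T_D = {doppler_temperature(G421) * 1e6:.0f} uK")    # ~56 mW/cm^2, ~770 uK
print(f"626 nm: T_D = {doppler_temperature(G626) * 1e6:.1f} uK")  # ~3.3 uK
```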
Both of the ECDLs of the above identical systems are frequency locked using a Pound-Drever-Hall (PDH, [62]) method to a home-built optical resonator positioned in a vacuum vessel (\(10^{-9}\)mbar). The resonator, which serves as a reference for all stabilized lasers, is based on a binocular ULE glass spacer Figure 4: Cross section of the main cell illustrating the two spring loaded macor platforms forming a base for the ZS mirror and QGM. The main tweezer beams performing the trapping, evaporation and transport are shown. Their crossing point moves from the MOT region (center of the cell) to the point on the symmetry z-axis of the QGM and 2.6mm above its focal plane. Lattice beams both at 1064nm (currently 1053nm) and 532nm crossing above the QGM are labeled as X’Y’ beams. The two green interfering 532-nm beams, propagate in the \(x-z\) plane and cross above the QGM, forming a long wavelength lattice (called FA, see text) of 10\(\mu\)m. (Hellma, 1.5GHz free spectral range) and optically contacted partial reflectors (Lens Optics, superpolished) which are coated (LaserOptik GmbH) in the bands of interest to a finesse of \(\sim\)30,000 (Fig. 5, binocular ULE). The detuning relative to the ULE, and the sidebands necessary for the PDH method are generated by a single fibered electro-optic modulator (f-EOM) preceded by a non-linear transmission line (NLTL). The NLTL creates a sawtooth wave from a single frequency, which the f-EOM afterwards converts to a single sideband, bridging the gap between the atomic line and the ULE resonance. The PDH single frequency (10MHz) signal is added to the NLTL sawtooth wave by a broadband mixer. Eventually a highly tunable (300-1700MHz) single sideband, attributed with a PDH modulation is achieved, allowing for convenient tuning over the full free spectral range of the ULE cavity [63]. The second group of laser systems are commercial fiber lasers used as dipole traps, optical lattices, narrow-line MOT light and the second STIRAP branch. The central laser system is a 50-W 1053-nm fiber laser. Most of the power (90%) is devoted to a pair of optical tweezers used for trapping, evaporation and transport of the atomic cloud (detailed in Section IV) and additionally for 3D optical lattice beams (Fig. 4). The remaining 5W are frequency summed with a 5-W, 1544-nm single frequency laser to obtain the necessary 626-nm light, operating the narrow line MOT (natural linewidth \(\Gamma_{626}=2\pi\times 136\)kHz [64]). The 1053-nm (IPG Photonics) and 1544-nm (Keopsys, 10W) amplifiers are both seeded by 5-mW single-frequency fiber lasers (Koheras Adjusc by NKT Photonics). We use a periodically poled crystal lithium niobate crystal(PPLN, Covesion), phase matched at 177.4\({}^{\circ}\)C in a single-pass configuration and derive 1.5W of 626-nm light, which is then distributed to the MOT vertical and horizontal beams. The 626-nm light is frequency stabilized to our ULE cavity with PDH technique and the correction signal is fed to the fast piezo actuator regulating the cavity of the 1544-nm fiber seeder. The 1544-nm seeder additionally powers another fiber amplifier at 10-W (Keopsys amplifier, 10-W) which is frequency doubled to 772-nm in PPLN (Covesion) at 141.8\({}^{\circ}\)C. We obtain 0.7-W useful post-fiber laser power which then serves as a vertical dipole trap for the atoms at the QGM region, by propagating along its symmetry axis. The QGM is AR coated in the range 700-900-nm. 
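As an aside on the PDH lock described above, the expected error-signal shape for this cavity (1.5-GHz free spectral range, finesse near 30,000, 10-MHz modulation) can be sketched with the textbook reflection coefficient of a lossless symmetric two-mirror cavity; the mirror reflectivity below is an assumption chosen only to reproduce the stated finesse.

```python
import numpy as np

FSR = 1.5e9            # free spectral range (Hz), from the text
Omega = 10e6           # PDH modulation frequency (Hz), from the text
r = 0.99995            # assumed mirror amplitude reflectivity (finesse ~ 30,000)

def F(dnu):            # cavity reflection coefficient vs. detuning from resonance
    phi = 2j * np.pi * dnu / FSR
    return r * (np.exp(phi) - 1.0) / (1.0 - r ** 2 * np.exp(phi))

dnu = np.linspace(-3 * Omega, 3 * Omega, 4001)
err = -np.imag(F(dnu) * np.conj(F(dnu + Omega)) - np.conj(F(dnu)) * F(dnu - Omega))
# 'err' (arbitrary units) is steep and antisymmetric across the carrier resonance,
# with smaller antisymmetric features displaced by +/- Omega from the sidebands.
```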
The last fiber laser is a 40-mW laser at 1572nm (Koheras adjustic, NKT Photonics), needed for the second STIRAP branch. The laser is inherently narrow (\(<\)1kHz) and is simply stabilized with low bandwidth servo loop to the ULE cavity using the PDH technique. The last laser group is a single laser system of a solid-state type, based on a 18-W laser at 532nm (Coherent Verdi), which is needed for our horizontal short-spacing optical lattice (\(x^{\prime}\)-\(y^{\prime}\) on Fig. 4). It also powers a vertical large spacing of 10\(\mu\)m (see Sec. IV.4) lattice, obtained by interfering two free traveling beams at \(\pm\)1.5\({}^{\circ}\) with respect to the horizontal plane. The interference pattern at the crossing point is coincident with the QGM symmetry axis and additionally overlapped with the QGM focal plane. ### Quantum gas microscope Our QGM is an in-vacuum compact design made only out of two lenses, a Weierstrass sphere serving as a front plano-covex lens [65] and a plano-aspheric lens (Fig. 3b). The design was inspired by [66], and adapted to the blue region, specifically for the blue line at 421-nm of Dy. The Weierstrass sphere enhances the numerical aperture (NA) by a factor of \(n^{2}\) (where \(n\) is the refraction index of the sphere) therefore reducing the need for extreme aspheric surface of the second lens. As a result of that the aspheric surface could be manufactured with magneto-rheological finish [67], achieving the lowest possible irregularity. The optical performance and tolerances of the QGM were optimized with optical design software (Zemax 2013). The two lenses were produced by Sill Optics, with both curved surfaces having a PV maximum irregularity of 0.15\(\mu\)m and iRMS \(<\)0.05\(\mu\)m. The enclosure of the QGM is made out of two cylindrical concentric macor pieces. Each one has a very fine, 0.15mm pitch tap in order to regulate the distance between the two lenses. This degree of freedom was necessary given the lens thickness tolerances and especially the uncertainty of the complex front lens coating (a 30 interchanging layers of Ta\({}_{2}\)O\({}_{5}\) and SiO\({}_{2}\), approximated in the simulation by the total thickness of 2.2 \(\mu\)m of Ta\({}_{2}\)O\({}_{5}\) and 2.9 \(\mu\)m SiO\({}_{2}\)). The achieved parameters are NA of 0.92, working distance of 160\(\mu\)m (longer working distances makes the QGM unfitting for our single cell design), field of view \(\pm\)40\(\mu\)m and a depth Figure 5: Diagram of the laser systems used in the experiment. Abbreviations not defined in the main text: FS- motorized fiber splitter, F- fiber launch, PhCrFi- photonic crystal fiber, d-aom - double pass AOM, SH- mechanical shutter, dit-aom-dithering AOM of focus \(\pm 125\mu\)m. The QGM was tested in air by shifting the test wavelength by \(\sim 1\)-nm to shorter wavelengths, where the performance and distances are the same as for the atomic Dy 421-nm transition when the QGM is positioned in vacuum. We use shear interferometry [68] on the re-collimated light that crosses the QGM twice after a retro-reflection from a silver mirror positioned in the QGM focal plane. We observe parallel interference fringes with no significant deformation until an aperture of \(\diameter=23\)mm corresponding to the desired NA. The light was then focused on a beam-profiler (Thorlabs, BC106N-VIS) with a single lens providing a magnification of 160. The Strehl ratio was deduced across the full field of view, confirming a diffraction limited performance. 
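To put the quoted numerical aperture in perspective, a back-of-the-envelope estimate (ours, using the textbook Abbe and Rayleigh criteria rather than the Zemax model) of the diffraction-limited spot size at the 421-nm imaging wavelength gives a few hundred nanometres, consistent with the sub-micrometre resolution reported in Section V.

```python
# Rough diffraction-limit estimate for the QGM at the 421-nm imaging transition.
wavelength_nm = 421.0
NA = 0.92

abbe_nm = wavelength_nm / (2.0 * NA)          # Abbe criterion
rayleigh_nm = 0.61 * wavelength_nm / NA       # Rayleigh criterion

print(f"Abbe limit    : {abbe_nm:.0f} nm")    # ~229 nm
print(f"Rayleigh limit: {rayleigh_nm:.0f} nm")# ~279 nm
```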
### Optical transportation setup To be able to reach quantum degeneracy and transport the atoms into the focal plane of the QGM we developed an optical setup capable of moving in 3D a pair of two crossed high power optical dipole traps (we refer to them as tweezers) at 1053-nm crossing at 14\({}^{\circ}\) (Fig. 4). The setup needs to be able to efficiently load the Dy cloud from the compressed MOT (CMOT), transport it while performing evaporative cooling, and eventually position it onto the focal plane and symmetry axis of the QGM. We utilize a large dynamic aspheric lens, which is a 20mm wide slice, cut symmetrically around the diameter from a large commercial asphere (Thorlabs AL100200-C, \(f\)=200mm, Fig. 4). This aspheric lens is positioned by an aluminum frame onto an air-bearing stage (ABS) (Aeroflex, ABL1000-050-E2-CMS1-PL2-TAS), capable of transporting the two-tweezer intersection point, over a maximum distance of 50mm. The intersection point coincides with the asphere focal point during the full transport, due to the parallelism of the tweezers to the lens symmetry axis, before refracting from it. The customized lens cut removes a lot of unnecessary weight and size from the asphere, although we need carefully positioned counter-weights assuring that the center of mass of the full structure is within the stage requirements. Such an arrangement allows us to precisely transport the atoms in the focus of the tweezer pair across the 35mm distance from MOT to QGM regions along the \(x\)-direction and additionally along \(y\) and \(z\) (coordinate system on Fig. 4). The ABS is slightly magnetic and therefore the holding structure is designed to keep it far from the atoms. The tweezer light is transported onto the experimental table with two photonic crystal fibers (E324-443-500, NKT Photonics), which are water-cooled at the entrance. We monitor the ratio of the transmitted to input power and a home-made microprocessor triggers an alarm if this ratio deteriorates under 80%. The tweezer beams are power regulated with two AOMs after the fiber exit, because we actively steer them horizontally at 100kHz to dynamically broaden the beams at the focus position. Technically we send an arccosine-shaped signal from an arbitrary waveform generator to a fast voltage-controlled oscillator (VCO, AA Optoelectronic, 1\(\mu\)s full range scanning time). To set the desired aspect ratio of the tweezers beams, they cross two expanding telescopes made out of a set of cylindrical lenses in order to increase the beam size only along the \(z\)-direction. The ambient aspect ratio, in the absence of the beam steering is 2.5 (\(\omega_{z}=19\mu\)m and \(\omega_{y}=50\mu\)m). The two telescopes are positioned right before the dynamic asphere where the tweezers are parallel to each other and 50mm apart. The front lenses in each telescope is attached to a miniature precision linear stage equipped with an optical linear encoder (Q-521.130, PI). This allows us to move the tweezers vertically from their original position during the MOT loading stage, at 2.6mm above the QGM front surface, and lower them to its focal plane at 160 \(\mu\)m above. The final necessary degree of freedom along the \(y\)-direction is not automated and therefore optimized once. The ABS rests on a precisely polished smooth granite block, which is channeled to move along the \(y\)-direction with two pairs of ceramic ball bearings pressing on its \(y-z\) opposite walls. 
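Returning to the 100-kHz horizontal steering of the tweezer beams mentioned above, the small numerical sketch below (ours; the 60-μm scan range is an assumed illustration value, only the 19-μm static waist is taken from the text) computes the time-averaged intensity profile of a Gaussian beam whose centre follows an arccosine-shaped waveform, which is one way to visualize how the dithering broadens the effective trap at the focus.

```python
import numpy as np

w = 19e-6      # static 1/e^2 waist along the dithered direction [m] (value from the text)
scan = 60e-6   # assumed peak-to-peak scan of the beam centre [m] (illustrative only)

u = np.linspace(-1.0, 1.0, 4000, endpoint=False)         # normalised time within one sweep
x_c = (scan / 2) * (2.0 / np.pi * np.arccos(-u) - 1.0)    # arccosine-shaped centre trajectory
x = np.linspace(-80e-6, 80e-6, 400)

# Time-averaged intensity of the scanned Gaussian beam (arbitrary units).
profile = np.exp(-2.0 * (x[:, None] - x_c[None, :]) ** 2 / w ** 2).mean(axis=1)
profile /= profile.max()

inside = x[profile > np.exp(-2)]
print(f"time-averaged 1/e^2 full width ~ {(inside.max() - inside.min()) * 1e6:.0f} um "
      f"(static beam: {2 * w * 1e6:.0f} um)")
```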
The motion along this final degree of freedom is accomplished by moving the granite block with two micrometers, pressing against its two opposite \(x-z\) walls. By countering the micrometers against each other we freeze the \(y\) degree of freedom and prevent the potential displacement of the ABS in the \(x-y\) plane driven by inertial forces, present while it is activated. ### Optical lattices implementation Eventually the atoms have to be loaded in a 2D plane coincident with the focal plane of the QGM and pinned by the lattice geometry of choice in that plane. This is both for technical reasons, given the tight depth of focus of the QGM, and fundamental, given the scientific goals of the experiment. As a first step toward preparing a 2D sample we load the cloud into a single plane of a vertical lattice with spacing of \(10\mu\)m, which we refer to as a fixed accordion (FA). The FA is formed by the interference of two beams at 532-nm derived from a 18W Verdi laser (Fig. 5), propagating in the \(x-z\) plane and intersecting above the QGM at an angle of 3\({}^{\circ}\) (Fig. 4). The two beams are derived from a single beam incident on a custom coated flat glass substrate positioned at 45\({}^{\circ}\) relative to the \(z\)-axis with a partial reflector polarization-independent coating on the front surface of 38% and a high reflective coating on the back surface. The high parallelism and choice of reflectivity of the substrate ensures two identical equal-power and highly parallel beams exiting horizontally (along -\(x\)) and 5mm apart (defined by the substrate thickness of 6.35mm). The two beams are derived from the first reflection and the first transmission of the original beam. About 25% of the incoming power is lost in the beam remaining after the second exit from the substrate. The two beams are then focused onto the focal plane of the QGM with a f=100mm aspheric lens (Asphericon, AFL25-100-U-U-355-L) and form the interference pattern along the \(z\)-direction. The interference pattern has a Gaussian envelope with diameter \(\diameter\)150\(\mu\)m along \(y\)-direction and \(\diameter\)80\(\mu\)m along the interference pattern in the \(z\)-direction. The substrate can be moved vertically with sub-micron precision with a slip-stick actuator (PI, N-470 PiezoMike) to adjust the middle fringe of the interference pattern onto the QGM focal plane. Currently the laser is not frequency stabilized and the substrate is not actively temperature stabilized, nevertheless the interference pattern is sufficiently stable for the initial characterization of the QGM of this work. Stability and positioning below the depth of focus of the QGM will only be achievable once the atomic 2D plane is referenced to the QGM itself (detailed in Section VI). Furthermore for assuring that the \(x-y\) trap frequencies are not severely elevated when increasing the FA power, we will need to expand the horizontal diameter to \(\sim 1\)mm, which demands higher laser power than currently available. To improve the chances of this method we image the interference pattern on a 1D charge-coupled device (CCD) camera, and extract an error signal of the drift in position of the central interference fringe. In future we will feedback this signal back to an actuator, that corrects the FA position. This improvement still does not guarantee a long term stability, and does not cancel vibrations of the interference fringe relative to the QGM. 
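The 10-μm period of the FA follows directly from the two-beam interference formula; as a cross-check (our arithmetic, not from the text):

```python
import math

# Fringe spacing of two beams of wavelength lambda crossing at a small full angle theta:
# d = lambda / (2 * sin(theta / 2)).
wavelength = 532e-9              # m
full_angle = math.radians(3.0)   # crossing angle quoted in the text

spacing = wavelength / (2.0 * math.sin(full_angle / 2.0))
print(f"FA fringe spacing: {spacing * 1e6:.1f} um")   # ~10.2 um, consistent with the quoted 10 um
```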
A better strategy, currently being pursued, is to avoid vibrations and long-term drifts by passively locking the position of the atomic 2D cloud to the QGM focal plane. For this purpose we reflect a 1053-nm vertical lattice beam from the front face of the QGM (\(R>99.5\%\) in the range 1030-1120-nm for \(0-5^{\circ}\) angle of incidence). We discuss in Section VI our future plans to populate a single plane of such a lattice. The horizontal \(x^{\prime}y^{\prime}\) lattices are rotated by 23\({}^{\circ}\) relative to the \(x-y\) coordinate system, due to the available space, and consist of two (for both \(x^{\prime}\) and \(y^{\prime}\) directions) super-lattice beams of co-propagating 532-nm and 1053-nm beams (Fig. 4). Eventually the 1053-nm beams, currently used for testing, will be replaced with 1064-nm beams. The retro-reflective optics for each of the super-lattice beams consists of two polarizing beamsplitters with a motorized half-waveplate in between, followed by a mirror. Each of the polarization elements, the mirrors and the lenses for shaping the lattice beams are chosen to perform equivalently at both the 532-nm and 1064-nm wavelengths. With this arrangement, rotating the waveplate extinguishes the retro-reflection and the lattice beams become regular dipole traps. This construction simplifies trap alignment procedures and also allows the lattice beams to be used as dipole traps during the evaporation process and converted into lattices later in the experimental cycle (the minimum switching time, limited by our servo motors, is 100ms).

## IV Experimental procedures and system characterization

### Transverse cooling and Zeeman slower

We operate the Dy oven with the crucible and hot-lip temperatures at 1000\({}^{\circ}\)C and 1100\({}^{\circ}\)C, respectively (the Dy melting point is 1407\({}^{\circ}\)C), to produce a beam with a most probable longitudinal velocity of about 450m/s [57; 69; 70; 68; 67; 71]. Immediately after the oven, before entering the Zeeman slower (Fig. 6), we perform TC (via the four glass ports following the oven in Fig. 2) to decrease the angular spread of the beam [72]. We apply cooling in 2D, utilizing an orthogonal retro-reflected beam configuration with 30mW in each dimension (elliptical shape with \(\diameter\) 26\(\times\)5mm, with the long axis along the atomic beam). We experimentally optimize the polarization of the individual beams and find the optimum detuning to be 0.5\(\Gamma_{421}\), as expected for low saturation. The increase in atom number in the 3D MOT due to the TC is shown in Fig. 7a. It is clear that the atom number does not saturate with laser power within the available laser intensity. We achieve a factor of 7 increase in atom number in the MOT (described below) at the maximum available power. Next the atoms enter the ZS region. Here the atoms experience a resonant 421-nm slowing beam, focused on the oven exit aperture in order to match the diverging atomic beam. We operate at a large detuning of \(\sim 20\Gamma_{421}\) and send about 150mW of 421-nm light, which is reflected from the in-vacuum aluminum-coated mirror (custom made with pure aluminum deposited on glass without a protective layer, Figs. 3 and 4).

Figure 6: ZS coils and main set of coils. Each set consists of gradient and offset coils in Helmholtz configuration, concentric with the MOT and QGM \(z\)-axis of symmetry, respectively, but sharing a common water-cooled sink. Presented are also the measured individual fields of each ZS coil, the total on-axis field and the desired theoretical field (dashed line).
We observe a vague spot on the mirror due to the deposited Dy and assume that about 20% of the reflected light is lost, based on the experience of other laboratories utilizing an in-vacuum mirror for a Dy ZS. The ZS is designed with a safety factor of \(\eta=a_{zs}/a_{max}\approx 0.4\) (here \(a_{zs}\) is the true acceleration and \(a_{max}=5.8\times 10^{5}\)m/s\({}^{2}\) is the maximum possible acceleration). The length is chosen to achieve a targeted final velocity of \(v_{f}=7\)m/s to match the MOT capture velocity. Given the large distance from the end of the ZS to the MOT center, we may in the future adopt a better strategy, allowing a higher final velocity to reduce the transverse expansion and then slowing the atoms to the MOT trapping velocity right before the MOT [73]. We first tested the ZS performance before assembly by mapping the magnetic field of each individual coil (\(B_{i}(z)\), where \(i\) is the coil index and \(z\) is the longitudinal coordinate) for a current of 1A. Subsequently a minimization procedure over this basis is used to find the optimal individual currents \(I_{i}\), realizing a total experimentally optimized magnetic field \(B_{ZS}(z)=\sum_{i}I_{i}B_{i}(z)\) matching the ideal theoretical field \(B_{th}=B_{b}+B_{0}\sqrt{1-z/z_{0}}\). Here \(B_{b}\) and \(B_{0}\) are the bias and the overall height of the field profile and \(z_{0}\) is the ZS length (Fig. 6). Further fine tuning of the coil currents was done experimentally by directly maximizing the number of atoms in the MOT.

### Magneto-optical trap

Exiting the ZS, the atoms are captured in a 626-nm narrow-line six-beam MOT [70; 57; 71]. We increase the MOT capture velocity by creating a comb structure of sidebands spaced by 110 kHz. We use a resonant EOM (Qubig PM6-VIS) to generate an effective linewidth of 7MHz. During the MOT loading process (Fig. 7b), which takes 2s, the laser detuning is 38\(\Gamma_{626}\). We gain a factor of 2 in MOT loading rate (\(2.85\times 10^{7}\)s\({}^{-1}\)) by utilizing the above spectral broadening. Following the loading we enter the CMOT stage, where we reduce the temperature of the sample by first extinguishing the spectral broadening in 2ms and successively ramping down the magnetic field gradient \(1.5\to 1\)G/cm and the total laser intensity from \(625\to 0.02I_{s}\) (saturation intensity \(I_{s}=72\mu\)W/cm\({}^{2}\), MOT beams \(\diameter 45\)mm) in 50ms. In parallel we linearly ramp down the laser detuning toward the resonance, \(\Delta_{626}=-38\to-10\Gamma_{626}\). This is not to be confused with the local detuning \(\Delta_{\rm loc}\) relative to the \(m_{J}=-8\to m_{J}=-9\) transition, which remains constant at \(-\Gamma_{626}/2(\sqrt{(\eta-2)s-1})\), where \(\eta=\hbar k\Gamma_{626}/2m_{Dy}g\), \(m_{Dy}\) is the mass of Dy, \(g\) is the gravitational acceleration and \(k\) is the wavenumber of the 626-nm transition [74]. At higher final laser intensities we achieve a higher atom number in the CMOT of \(2\times 10^{8}\), but at a higher temperature. We further hold the sample for another 300ms to reach a temperature of 5.6\(\mu\)K, which is above the limiting Doppler cooling temperature of \(T_{min}=\eta\hbar\Gamma_{626}/(2k_{B}(\eta-2))\simeq 3.4\mu\)K. The 300ms hold time exceeds the time (\(\sim\)50ms) for the sample temperature to reach a plateau, but it is necessary for establishing local density equilibrium and is therefore needed for the optimum dipole trap loading which follows.
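Two of the numbers in this subsection can be recomputed from the quoted formulas. The sketch below (ours, assuming \({}^{164}\)Dy and the values stated in the text) estimates the deceleration length implied by the safety factor \(\eta\approx 0.4\) and the 450 m/s \(\to\) 7 m/s velocity change, and evaluates the narrow-line Doppler limit \(T_{min}\).

```python
import math

hbar, kB, g = 1.054571817e-34, 1.380649e-23, 9.81
m_dy = 163.929 * 1.66053906660e-27        # assuming 164Dy [kg]

# Zeeman slower: length implied by a_zs = eta * a_max, v_i ~ 450 m/s, v_f = 7 m/s.
a_max, eta, v_i, v_f = 5.8e5, 0.4, 450.0, 7.0
length = (v_i**2 - v_f**2) / (2.0 * eta * a_max)
print(f"implied ZS deceleration length ~ {length:.2f} m")

# Narrow-line MOT Doppler limit: T_min = eta' hbar Gamma / (2 kB (eta' - 2)),
# with eta' = hbar k Gamma / (2 m g) as defined in the text.
gamma_626 = 2.0 * math.pi * 136e3
k_626 = 2.0 * math.pi / 626e-9
eta_p = hbar * k_626 * gamma_626 / (2.0 * m_dy * g)
t_min = eta_p * hbar * gamma_626 / (2.0 * kB * (eta_p - 2.0))
print(f"eta' ~ {eta_p:.0f},  T_min ~ {t_min * 1e6:.1f} uK")   # ~3.3 uK, close to the quoted 3.4 uK
```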
To characterize the atom losses in the MOT we take two decay curves at a fixed detuning \(\Delta_{626}=-10\Gamma_{626}\) and saturation parameter \(s=I/I_{s}\) (\(I\) is the laser intensity) for two different final values of \(s\)=0.02 and \(s\)=0.5 (Fig. 7c). We make a leap forward and mention, that we achieve the best phase-space density (PSD) in the dipole trap loading, following the CMOT stage, at \(s\)=0.02\(I_{s}\) Figure 7: MOT characterization. a) Increase of the atom number in the CMOT as a function of total laser power of the TC. b) MOT loading vs time. The atom number is measured after the CMOT stage at \(s=0.02\). c) Atom number vs time in a CMOT for (blue) \(s\)=0.5 and (red) \(s\)=0.02. The PSD is calculated as PSD=\(N(\hbar\bar{\omega}/k_{B}T)^{3}\), where \(N\) is the atom number, \(\bar{\omega}\) is the geometric average of the trap frequencies along orthogonal directions and \(T\) is the temperature. Only for these extremely low \(s\) parameter values we observe a steep initial decay, until a plateau with a very slow decay (\(>\)20s) is reached (Fig. 7c, \(s=0.02\)). This initial decay is caused most probably by diffusion and loss of atoms from the trap, unable to hold the complete cloud, but nevertheless in the next stage, when supplemented with a dipole trap performs best. The background pressure is \(\simeq 1\times 10^{-11}\)mbar and therefore we disregard the slow one-body decay due to collisions with background atoms and attribute this decay to one-body scattering processes due to the 626-nm light. For larger saturation value of \(s\)=0.5 the observed initial loss can be attributed to two-body loss due to light-assisted collisions [74] (Fig. 7c, blue trace). We extract a two-body loss rate from the initial slope of \(\beta=(1/N\bar{n})dN/dt=4\times 10^{-11}\)cm\({}^{3}\)/s, where \(\bar{n}=N/(8\pi^{3/2}\sigma_{x}^{2}\sigma_{z})\) is the average density and \(\sigma_{x,z}\) are the cloud size along \(x\) and \(z\) assuming \(\sigma_{x}=\sigma_{y}\). The obtained loss rate is close to than the predicted one from a theoretical model of light-induced collisions described in [74]. We vary the two most important parameters to achieve the best loading conditions for the dipole trap, namely detuning \(\Delta_{626}\) and saturation parameter \(s\). For the temperature dependence on the saturation parameter we observe a linear slope of \(T/\sqrt{s}=28\mu K\) for \(s\in(0.5,10)\). We do not observe any temperature dependence on \(\Delta_{626}\) for red detuned values \(>\)2\(\pi\times 0.3\)MHz but a large dependence of the spin polarization. To quantify the degree of spin polarization we perform a Stern-Gerlach experiment after the MOT light is switched off, where we briefly turn on a gradient field before imaging. Given the size and the temperature of the MOT we cannot resolve the individual spin components but can use the vertical cloud size \(\sigma_{z}^{\rm sg}\), normalized \(\sigma_{z}^{n}=\sigma_{z}^{\rm sg}/\sigma_{z}^{0}\) to the one without gradient pulse \(\sigma_{z}^{0}\), as a merit for the polarization purity. At detunings \(\Delta_{626}\geq 2\pi\times 1.4\)MHz the normalized size is \(\sigma_{z}^{n}\to 1\). This is consistent with the behavior observed in other experiments [70; 71; 74], explained by the sagging of the cloud under the quadrupole center in order to provide equilibrium between gravity and the radiative forces, subsequently leading to optical pumping due to the predominant \(\sigma^{-}\) transition. 
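The two-body loss coefficient quoted above is extracted from the initial slope of the decay curve together with the average density formula given in the text; a minimal sketch of that extraction is shown below (the decay data and cloud sizes are placeholders, not the measured values).

```python
import numpy as np

def beta_from_initial_slope(t, N, sigma_x, sigma_z):
    """beta = -(1 / (N * n_bar)) dN/dt with n_bar = N / (8 pi^(3/2) sigma_x^2 sigma_z).
    Sizes in cm give n_bar in cm^-3 and beta in cm^3/s."""
    dNdt = (N[1] - N[0]) / (t[1] - t[0])                       # initial slope
    n_bar = N[0] / (8.0 * np.pi**1.5 * sigma_x**2 * sigma_z)   # average density
    return -dNdt / (N[0] * n_bar)

# Placeholder decay curve and cloud sizes, for illustration only.
t = np.array([0.0, 0.1, 0.2, 0.5])            # s
N = np.array([1.0e8, 8.5e7, 7.6e7, 6.2e7])    # atoms
beta = beta_from_initial_slope(t, N, sigma_x=0.05, sigma_z=0.03)
print(f"beta ~ {beta:.1e} cm^3/s")
```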
We choose the limiting value of \(\Delta_{626}=2\pi\times 1.4\)MHz as a compromise between spin purity and cloud size, given that the horizontal size \(\sigma_{x}\) grows linearly with the detuning and compromises the dipole trap loading efficiency later on. The spin polarization is crucial for the subsequent evaporative cooling in the dipole trap. Here, in passing, we comment on the importance of spin polarization for the evaporative cooling efficiency described in Sec. IV.3. Out of scientific curiosity, we performed evaporative cooling starting with an unpolarized sample (using a detuning \(\Delta_{626}=2\pi\times 0.5\)MHz), which was possible only in a very narrow \(<\)10mG magnetic field region around zero magnetic field. Apart from the technical difficulty, we eventually barely achieved the onset of a Bose-Einstein condensate (BEC), with a tenfold lower atom number. The sample eventually becomes polarized during the evaporation ramp, but in that process dipolar relaxation heats the sample and impairs the evaporative cooling efficiency.

### Tweezer loading, evaporative cooling and transport

In order to load the tweezers, they are already turned on during the MOT stage with a power of 11W per beam. We increase the aspect ratio of the tweezer beams by a factor of 10 during the loading from the CMOT. This dynamic mode-matching to the post-laser-cooled cloud size improves our loading efficiency from the CMOT by a factor of 2.5, to a rate of \(3.7\times 10^{7}\)s\({}^{-1}\) and an absolute number of trapped atoms of \(1.3\times 10^{7}\) (Fig. 8).

Figure 8: Tweezer traps loading vs time.

After the tweezers loading, we extinguish the modulation and linearly ramp the powers of the two tweezers to 2W each in 500ms. At this point we have PSD=0.007 and \(2.5\times 10^{6}\) atoms at 9\(\mu\)K. Next we initiate the first of three evaporation ramps and, simultaneously, the horizontal transport. As shown in Fig. 9 (magenta solid trace), in 1s we traverse the horizontal gap between the MOT and QGM regions and exponentially lower the power of the tweezers to \(P\)=0.5W (atom number \(9\times 10^{5}\), \(T\)=2\(\mu\)K, PSD=0.03). Until this point we evaporate at the same magnetic field of \(B_{0}=0.8\)G as the one set during the CMOT stage after extinguishing the CMOT gradient field. We set this field with our large offset coils, assuring a good spatial uniformity during the transport. Once the cloud is transported above the QGM we also engage the local offset coils, concentric with the QGM (Fig. 4), which allows us to adjust the \(z\)-field in the foothills of the Feshbach resonance at 2.75G (Fig. 10). The field is switched to the new value of 2.4G in 5ms and we initiate the vertical transport (Fig. 9, blue dashed trace). The same Feshbach region is mapped with the magnetic field pointing along the \(y\)-direction. Rotating the magnetic field in a precise manner, while maintaining a particular contact scattering length, is important for future plans to achieve a 2D sample by effectively canceling the DDI and the contact interaction (see VI). During the vertical transport, which takes 2.5s, we execute the second evaporation ramp, ending at \(P\)=0.12W. We see the onset of a BEC at the end of evaporation 2 at \(T\)=200nK (Fig. 11a and b). The last ramp is executed in 1s and delivers a BEC with a condensate fraction of 70% (Fig. 11c and d).
During the last ramp we have the option to reshape the cloud by including in the evaporation either a vertical 772-nm beam (\(\diameter\) 150\(\mu\)m), or two orthogonal horizontal 1053-nm dipole traps (\(\diameter\)100\(\mu\)m each) that coincide with the X\({}^{\prime}\)Y\({}^{\prime}\) coordinate system (Fig. 4). The horizontal traps can be turned in real time into lattice beams later in the experimental sequence (detailed below). The automatic tuning of the aspect ratio of the tweezers completes the full control over all the trap frequencies. The 772-nm dipole trap propagates in the positive \(z\)-direction through the QGM, instead of the more natural choice of propagating the beam in the negative \(z\)-direction. This is because an undesired lattice, formed by the imperfect AR coating of the front face of the QGM, prevents proper evaporation.

### Optical lattices characterization

The inset of Fig. 12 displays the interference pattern of the FA with 10\(\mu\)m spacing, imaged by a CCD. We load the atomic cloud into a single fringe by simultaneously raising the FA power and the 772-nm vertical beam power in 500ms, such that we can barely hold the cloud against gravity. Meanwhile we linearly ramp the tweezers power down to zero. Figure 12 shows an exponential decay of the atom number with a time constant of 2.6s in a FA with \(\omega_{z}=2\pi\times 1\)kHz trap frequency along the \(z\)-direction. The atom number is deduced by utilizing fluorescence on the 421-nm atomic transition for 1\(\mu\)s by the QGM. The polarization of each of the FA beams is in the \(x\)-\(z\) plane. The lifetime is reasonable for future experiments, and we do not observe a detectable shift from the 421-nm resonance while holding the atoms in the FA during imaging.

Figure 9: Horizontal and vertical transport with time. The accompanying MOT, CMOT and tweezers loading regions, together with the three evaporation ramps, are bounded by the vertical dashed lines (the state of the cloud on the pictures refers to the end of each period). Top row pictures are absorption images of the Dy sample after a) 2.5s MOT + 300ms CMOT, b) tweezers 100ms after CMOT at aspect ratio 2.5 and \(P\)=4W, c) tweezers after horizontal transport and first evaporation down to \(P\)=2W, d) tweezers after vertical transport and evaporations 2 and 3 down to \(P\)=0.07W. Onset of a BEC appears after evaporation stage 2. Images a) and b) are taken with Camera 1 along \(x\) at the MOT region, and images c) and d) with Camera 2 at the QGM region along \(y^{\prime}\) (see text).

Figure 10: Atom-loss spectrum taken at an initial temperature of 200nK in the low magnetic field range of interest. The arrow points at the region where the main evaporation and measurements in the manuscript are performed. The hold time at every point is 500ms. Normalization is done relative to the atom number at zero holding time.

Figure 11: Phase transition to a BEC. Optical density profiles a) and c) and 1D atomic densities b) and d) revealing a bimodal structure. Both bi-quadratic and thermal Gaussian fits are displayed in b) and d) to describe the BEC fraction and the thermal component, respectively. Atom number, temperature and BEC fraction are a), b) 4.5\(\times\)10\({}^{5}\) atoms, 200nK, 15% and c), d) 1.5\(\times\)10\({}^{5}\) atoms, 40nK, 70%. The effective pixel size in the object plane is 4\(\mu\)m. The time-of-flight is 25ms. The imaging is performed in absorption with Camera 2 along the \(y^{\prime}\)-axis.
The expansion after releasing from the FA is anisotropic and in the vertical waist follows the expected linear time dependence of \(R(t)=\sqrt{3.5\hbar\omega_{0}/m_{Dy}t}\), where \(\omega_{0}\) is the trap frequency along the tight vertical \(z\)-direction and \(t\) is the time of flight [75]. To calibrate the short lattices at 1053nm and 532nm, as described in Sec. III.6, we use a standard method, where we pulse the lattice power (\(<\)0.5\(\mu\)s rise, fall times) over a short pulse and vary its time duration [76]. The BEC plane wave is then projected onto Bloch states, which evolve during the time of the pulse, and subsequently are projected back onto plane waves. The actual momentum peaks of this matter wave interference pattern, for both horizontal 532nm and vertical 1053-nm lattices is shown as an example in Fig. 13a,b. We detect oscillations between the population in the 0 and \(\pm 2\hbar k_{1053}\) momentum states (\(k\) is the wave-vector for the 1053-nm lattice), with a frequency corresponding to the energy difference between the zeroth and second energy bands. An example of such oscillation in the vertical 1053 lattice at \(\sim 30E_{R}\) (\(E_{R}=h^{2}/(2m_{Dy}\lambda_{1053}^{2})\), is the photon recoil energy, \(h\) is the Plank constant and \(\lambda_{1053}\) is the wavelength at 1053nm) is shown on (Fig. 13c). We have calibrated similarly our 532-nm lattices. The decay of the oscillation is possibly due to the inhomogeneity of our lattice beams and the present both contact interaction with scattering length of about 100\(a_{0}\) and DDI. The lifetime of a sample adiabatically loaded into the vertical lattice, at powers barely holding the atoms against gravity, reveals a dual exponential decay with the longer lifetime \(>\)20s (Fig. 13d). ## V Fluorescence imaging with the QGM Typically QGMs with alkali atoms [55] are supplemented with a cooling mechanism that keeps the atoms pinned in the lattice as they fluoresce. Currently we are attempting another approach given the large mass of Dy and the broad 421-nm imaging transition. Our detection efficiency is approximately 20%. This includes the QGM solid angle of 30%, the total transmission of the QGM (\(R\)\(<\)0.3% per surface) and two narrow transmission filters centered at 420-nm (Semrock, \(T\)\(>\)95%) placed in front of Camera 3. Furthermore the fluorescence is guided by two dichroic mirrors (Semrock, \(R\)\(>\)97%) in reflection to the camera, which has quantum efficiency of 83%. Given this Figure 12: Lifetime of the Dy cloud loaded in the 532-nm FA with a trap frequency \(\omega_{z}=2\pi\times 1\)kHz. The inset shows the FA interference pattern. Figure 13: Characterization of optical lattices. Population in the \(0\hbar k\) and \(\pm 2\hbar k\) momentum states for both vertical 1053-nm lattice a) with 10ms time-of-flight, and a horizontal \(x^{\prime}\) lattice beam at 532nm b) with 4ms time-of-flight. The boundary visible on panel b) represents the QGM front surface. c) A lattice depth measurement showing oscillations between \(0\hbar k\) and \(\pm 2\hbar k\) momentum states after sudden loading in a lattice. The inset shows the 0-2 band gap at at E\({}_{R}\). d) Lifetime of the full cloud (BEC and thermal fraction) loaded in the vertical retro-reflected from the QGM lattice at 1053-nm. significant collection efficiency we are currently pursuing a fast illumination of 1-2\(\mu\)s during which we detect 20-40 photons/atom, or \(\sim\) 2-4 photons/pixel/atom given our current magnification of 160 [77]. 
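The 20-40 detected photons per atom quoted above are consistent with saturated two-level scattering on the 421-nm transition combined with the ~20% detection efficiency; the estimate below is ours and assumes a saturation parameter of order 100 (strong saturation), so the scattering rate is close to \(\Gamma_{421}/2\).

```python
import math

gamma_421 = 2.0 * math.pi * 32.2e6   # natural linewidth [1/s]
s = 100.0                            # assumed saturation parameter (strongly saturated)
detection_efficiency = 0.20          # solid angle x transmission x QE, as quoted

scatter_rate = (gamma_421 / 2.0) * s / (1.0 + s)   # photons scattered per second per atom
for t_exposure in (1e-6, 2e-6):
    detected = scatter_rate * t_exposure * detection_efficiency
    print(f"{t_exposure * 1e6:.0f} us exposure -> ~{detected:.0f} detected photons/atom")
```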
Given the large mass of Dy for this short time the atom will barely diffuse across a single lattice site and therefore one can even turn off the lattices. For this preliminary work lattices and traps are always kept on during the fluorescence detection through the QGM. For the above strategy to succeed we first need to eliminate the linear net radiative force on the atoms by the excitation beam. This beam is resonant with the 421-nm Dy transition, with an intensity \(>\)100\(I_{421}\), and its focus at the atom position is \(\diameter 100\mu\)m. The tight focus avoids clipping from the front face of the QGM. This beam is practically co-propagating with the \(y^{\prime}\) lattice, making a 3\({}^{\circ}\) angle with it, and will therefore tilt the lattice potential along this direction. As an example, the needed lattice depth for 532-nm lattice to contain the atoms against this tilt, if retro-reflection was not used, is \(V_{min}=\hbar k_{421}\Gamma_{421}/2k_{532}\sim 1.9\)mK. Here \(k_{421}\) and \(k_{532}\) are the wave-vectors of the excitation and lattice light at 421nm and 532nm respectively. This is rather substantial given the power available for our lattices. The small angle of 3\({}^{\circ}\) allows us to deviate the excitation beam from the lattice light after the cloud and retro-reflected it back onto it with an orthogonal polarization and an identical beam parameters. To assure a good overlap with the original illuminating beams, we utilize fluorescence imaging transversely to the QGM using Camera 2. We first block the retro-reflected beam and provide a single excitation pulse of 50\(\mu\)s. The fluorescence image resembles a cone aligned in the direction of the excitation beam, formed by atoms being accelerated along its \(k_{421}\) wave-vector (Fig. 14a). We then carefully align the retro-reflected beam until we see no offset in the original position of the cloud (Fig. 14b). The leftover heating during the excitation is due to diffusion. The expected heating rate due to momentum kicks of absorption and emission is \(dT/dt=\Gamma_{421}\hbar^{2}k_{421}^{2}/2k_{B}m_{\rm Dy}=67\mu\)K/\(\mu\)s. Once the addressing 421-nm beam is balanced we adjust the sample with the tweezers at a specific height above the QGM, in the vicinity of its focal plane and extinguish tweezer 1, while keeping tweezer 2. Simultaneously we raise the 1053-nm lattice beam along \(x^{\prime}\) in a trap mode (Fig. 4). The two beams are derived from the same laser but diffracted trough AOMs driven at different frequency to avoid interference. The purpose of this first positioning procedure is to create a lattice with a crude spacing, in order to be imaged by the QGM handily. Therefore we implemented an option to drive the AOMs at the same frequency, derived from a single DDS and therefore create interference. The switching is automated and can be triggered at any time during the sequence. The horizontal lattice formed this way has a spacing of 2\(\mu\)m and allows for a rough positioning of the cloud above the QGM. A fluorescence image of the crude lattice through the QGM is shown on Fig. 15a after 1\(\mu\)s illumination time. As another diagnostics on the QGM and the overall system, we evaporatively cool the Dy cloud into a droplet state by evaporating down to a trap configuration with anisotropic frequencies (\(f_{x}\), \(f_{y}\), \(f_{z}\))=(11, 76, 160)Hz at scattering length \(\sim 90a_{0}\) (at 1.9G) with a relatively high atom number of 1.5-2\(\times 10^{5}\). 
The magnetic field is oriented along the \(z\)-axis. At these parameters one expects condensation into a droplet-array ground state, as supported by a simulation shown in Fig. 15b, which is based on an extended Gross-Pitaevskii equation (eGPE) [78; 79; 80; 81; 82; 83]. An in situ 1\(\mu\)s fluorescence image through the QGM reveals the expected droplet formation (Fig. 15c), which remains robust over the magnetic field range of \(1.7-2.1\)G [84]. These patterns, imaged with the QGM, prove that the resolution is within 1\(\mu\)m. The number of photons collected during the short exposure time of 1\(\mu\)s validates our estimate of the optical losses in the imaging system and of the numerical aperture of the QGM, and assures single-atom sensitivity. To demonstrate a resolution down to 1/4\(\mu\)m, one must ensure a stability of the optical lattices relative to the QGM within a region of several hundred nm around its focal plane. Leaving this region will blur the image and smear the individual atomic images. The oscillator length of the 2D sample should also be within the QGM depth of focus.

Figure 14: Compensation of excitation beam radiative force effects. a) Side fluorescence imaging of the Dy cloud showing an accelerated Dy cloud due to the unbalanced excitation beam. b) The same as a) with the excitation beam balanced by counter-propagating it with orthogonal polarization. The black line in a) and b) shows the surface of the QGM. Underneath the cloud, the mirror image from the top QGM surface is visible.

## VI Conclusions and perspectives

We presented a potential experimental platform capable of simulating spin models of quantum magnetism based on highly magnetic Dy atoms pinned on an optical lattice. Additionally, Dy atoms prepared in a specific set of degenerate magnetic sublevels, members of a highly electrically coupled opposite-parity doublet, constitute a peculiar pseudospin-1/2 system that permits both the eDDI and the mDDI to operate within it. Due to the degeneracy, the spin-orbit coupling term intrinsic to the DDI is activated in addition to the conventional Ising and spin-exchange (state-swapping) DDI terms primarily considered. Unlike those terms, the spin-orbit coupling changes the spin angular momentum by \(\pm 2\) and therefore imprints a phase factor of \(\exp(\pm 2i\phi_{ij})\) associated with a change of the orbital angular momentum within an atom pair. Once the DDI is projected onto the above pseudospin-1/2 basis and rewritten in terms of a conventional spin-1/2 operator set \(\hat{S}_{x,y,z}\), one arrives at a generalized XYZ spin model where the asymmetry of the rates governing the \(\hat{S}_{x}\hat{S}_{x}\) and \(\hat{S}_{y}\hat{S}_{y}\) interactions is assured by the spin-orbit DDI. Furthermore, these rates depend on the lattice geometry through the azimuthal phase \(\phi_{ij}\). Quantum simulators such as this one present an irreplaceable tool for exploring tunable low-symmetry models that cannot be approached analytically or numerically. Furthermore, we described a new apparatus and its basic operation, tailored to achieve the above goals. The apparatus consists of many atypical, out-of-the-mainstream alternatives to conventional cold-atom experimental approaches, which we hope this work validates. Specifically, utilizing a large glass cell with an indium seal eliminates the necessity of a long glass-to-metal transition. Having a single cell serving as the main laser-cooling vessel and containing the QGM facilitates the atomic transport.
The glass cell nano-coating provides lossless access for any laser wavelength from the mid-UVC region to the NIR even at large angle of incidence. The QGM simplistic miniature design, allows its positioning in-vacuum, and therefore possesses a large NA, field of view and reasonable working distance, while eliminating the need to include glass vacuum ports into the optical design. The last approach suffers from the glass warping due to the atmospheric pressure and the viewport manufacturing process itself. In near future we plan to complement the vertical 1053-nm lattice, currently retro-reflected from the QGM front face with a secondary retro-reflected beam at 1095-nm. The formed beat-note lattice [85] should form the same long wavelength potential envelope as we currently have with the 532-nm FA (with step 13\(\mu\)m), with the difference that this potential will be referenced to the QGM. Preliminary simulations show that one can load a single plane of the short wavelength lattices by allowing a careful adiabatic increase of the beat-note potential and additionally entering a regime where the contact interaction and the DDI compensate each other [86]. The beat-note lattice could be further used to load two nearby 2D layers allowing the engineering of bilayer dipolar systems [87; 88; 89]. ###### Acknowledgements. We are highly indebted to R.N. Bisset for providing us with a eGPE Matlab simulation of the droplet formation and beat-note lattice loading. We thank M. Fattori for discussion regarding the beat-note lattice and its implementation and M. Lepers for providing us with details on the OPS. We thank C. Ravensbergen for providing the ZS design. We thank M. Kreyer, A. Canali and C. Baroni for useful suggestions on this script. We acknowledge a generous financial support by a ESQ discovery grant (306532) and by the Julian Schwinger foundation (JSF-22-05-0010). ## Appendix A Detailed preparation of a degenerate isospin-1/2 system with mDDI and eDDI As shown on the inset of Fig. 1, we depict the rightmost magnetic sublevels of the OPS manifold, as \(|1\rangle=|J=10,m=-10\rangle\) (we drop the J and m notation from here on), \(|\alpha\rangle=|10,-9\rangle\) and \(|\beta\rangle=|9,-9\rangle\). We restrict ourselves to Dy atoms in the \(x-y\) plane, with applied magnetic field \(\vec{B}=B\hat{z}\) and a MW \(\vec{E_{m}}=E_{m}e^{i\omega t}\hat{z}\) with linear polarization directed along the \(\hat{z}\) direction. The MW \({}^{\prime\prime}\)dresses\({}^{\prime\prime}\) the bare states \(|\alpha\rangle\) and \(|\beta\rangle\)[90; 91]. For this geometry and choice of states, the state \(|1\rangle\) has no \({}^{\prime\prime}\)partner\({}^{\prime\prime}\) in the \(J=9\) to be coupled by the MW. Moreover the g-factors of both \(J=9\) and \(J=10\) states are almost identical (\(g_{10}=1.3\), \(g_{9}=1.32\)) therefore we can consider the MW detuning \(\delta\) as unaffected by the Zeeman shift. Then for certain magnetic field \(B\), and MW detuning \(\delta\) and Rabi frequency \(\Omega\) we can induce degeneracy between the bare state \(|1\rangle\) and the MW dressed state \(|2\rangle=\sin\theta|\alpha\rangle+\cos\theta\exp(i\omega t)|\beta\rangle\) (where \(\cos 2\theta=-\delta/\sqrt{\delta^{2}+\Omega^{2}}\)). The crossing will occur at magnetic field \(g\mu_{B}B=\hbar/2(-\delta+\sqrt{\delta^{2}+\Omega^{2}})\) with the induced electric dipole moment in the \(|2\rangle\) state of \(\langle 2|d|2\rangle=(1/\sqrt{210})\langle 10||d|9\rangle(1/\sqrt{1+(\delta/ \Omega)^{2}})\). 
We now focus on the degenerate two level system (\(|1\rangle\), \(|2\rangle\)) in the limit of small \(\theta\), coupled by both mDDI and eDDI. In the geometry discussed the relevant parts of the electric and magnetic DDI are \(\hat{H}_{0}^{2}=(1/R_{ij}^{3})((\hat{d}_{1}^{\dagger}\hat{d}_{-1}^{\dagger}+ \hat{d}_{-1}^{\dagger}\hat{d}_{1}^{\dagger})/2+\hat{d}_{0}^{\dagger}\hat{d}_{0 }^{\dagger})\) and \(\hat{H}_{+2,-2}^{2}=(-3/2R_{ij}^{3})(\hat{d}_{1}^{\dagger}\hat{d}_{1}^{\dagger} \hat{d}_{1}^{\dagger}-2i\hat{d}_{0ij}+\hat{d}_{-1}^{\dagger}\hat{d}_{1}^{ \dagger})\). Figure 15: Fluorescence imaging with the QGM. a) A 2\(\mu\)m optical lattice for initial QGM alignment imprinted on the atoms. b) eGPE simulation of a Dy cloud with the experimental parameters (see text) c) in situ image of a Dy cloud condensed in a droplet state taken with the QGM. \(h.c.\)). Here \(\phi_{ij}\) is the azimuthal angle of the vector connecting the atoms, positioned at lattice sites \(i,j\), \(R_{ij}\) is the distance between them, and the polar angle is automatically substituted with \(\pi/2\); superscripts refer to the rank of the spherical tensor and the subscripts to its components. Like in proposals based on magnetic atoms [41] or polar molecules [28; 92] both DDI are projected in the \(|1\rangle\),\(|2\rangle\) basis one generates a spin model of the type \(\hat{H}_{spin}=\Sigma_{i,j}(A_{i,j}\hat{S}_{x}^{i}\hat{S}_{x}^{j}+B_{i,j}\hat{S }_{y}^{i}\hat{S}_{y}^{j}+C_{i,j}\hat{S}_{z}^{i}\hat{S}_{z}^{j}+D_{i,j}\{\hat{S}_ {x}^{i},\hat{S}_{y}^{j}\})+\Sigma_{i}H_{i}\hat{S}_{z}^{i}\) (\(\{,\}\) is a anti-commutator). Here spin operators are defined in the standard way: \(\hat{S}_{z}^{i}=1/2(|1\rangle_{i}\langle 1|_{i}-|2\rangle)_{i}\langle 2|_{i}\), \(\hat{S}_{x}^{i}=1/2(|1\rangle_{i}\langle 2|_{i}+|2\rangle)_{i}\langle 1|_{i}\) and \(\hat{S}_{y}^{i}=1/2i(|1\rangle_{i}\langle 2|_{i}-|2\rangle)_{i}\langle 1|_{i}\)). We have omitted terms proportional to \(\hat{n}_{i}\hat{n}_{j}\), which are constant for pinned dipoles (\(\hat{n}_{i}\), the molecular density is a substitute for the identity operator and is present in \(H_{i}\)). One unique feature of the current system is the asymmetry between the \(A_{i,j}\) and \(B_{i,j}\) exchange rates coming from the added/subtracted contribution respectively of the spin-orbit part of the DDI \(\propto\cos(2\phi_{ij})\) to the otherwise symmetric \(A_{i,j}\) and \(B_{i,j}\) rates. The spin-orbit coupling adds also the anti-commutator term with a lattice index dependent rate \(D_{i,j}\propto\sin(2\phi_{ij})\), which vanishes in 1D or on 2D square lattice. Since we envision a regime where both magnetic and electric DDI are of the same order \((|d_{e}|\tan(\theta)c^{2}/g_{L}\mu_{B}\sqrt{J(J+1)(2J+1)})^{2}\propto 1\) (neglecting Wigner 3-j coefficients ratios which are very similar in magnitude among the states of interest) one needs only a small admixture (\(\theta_{0}\sim\pi/180\) rad). In this limit the corresponding rates can be expressed \((A_{i,j};B_{i,j};C_{i,j};D_{i,j})=1/r_{i,j}^{3}(D_{m}+D_{e}\theta^{2}+3D_{m} \cos(2\phi_{ij});D_{m}+D_{e}\theta^{2}-3D_{m}\cos(2\phi_{ij});P_{m}+Q_{e} \theta^{2};D_{m}\sin(2\phi_{ij}))\) and the effective magnetic field \(H_{i}=\Sigma_{j}(R_{m}+S_{e}\theta^{2})(1/r_{i,j}^{3})\hat{n}_{j}\), where we use the indexes \(e,m\) to refer to terms generated by the magnetic and electric DDI respectively and \(\hat{n}_{j}=\Sigma_{\alpha=|1\rangle,|2\rangle}\hat{a}_{j\alpha}^{\dagger} \hat{a}_{j\alpha}\) is the molecular density operator. 
One can vary the above rates by changing the mixing angle \(\theta\) around \(\theta_{0}\), which depends only on the ratio \(\Omega/\delta\), and then adjusting the B-field to reach degeneracy again. On a typical 532-nm optical lattice the above rates can be tuned in the vicinity of 50Hz (the static mDDI rate for two atoms prepared in \(|1\rangle\) is 150Hz). Preparing neighboring degenerate states with lower angular momentum projection provides solely mDDI rates in the vicinity of 300Hz. Moreover, for such a choice of states, unlike the example discussed above, there will also be a spin-orbit eDDI. Therefore overall rates in the vicinity of 700Hz are achievable with mixing angles of \(\theta\sim 3\theta_{0}\). We expect a maximally mixed eigenvector in the degenerate basis (with \(\theta\sim\theta_{0}\)) to have a lifetime exceeding 250ms, limited by the theoretically estimated lifetime of the \(|J=9\rangle\) state.
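As a final illustration of the spin model derived above, the snippet below evaluates the geometric structure of the coupling rates \((A_{i,j};B_{i,j};C_{i,j};D_{i,j})\) for a few bond directions, using placeholder prefactors \(D_{m}\), \(D_{e}\), \(P\), \(Q\) and mixing angle \(\theta\) (only the \(\cos(2\phi_{ij})\) asymmetry and the \(\sin(2\phi_{ij})\) cross term, which vanishes along the axes of a square lattice, are meaningful here, not the absolute rates).

```python
import math

def xyz_rates(r, phi, D_m=1.0, D_e=1.0, P=1.0, Q=1.0, theta=math.pi / 180):
    """(A, B, C, D) couplings for a bond of length r and azimuthal angle phi,
    following the rate expressions of Appendix A with placeholder prefactors."""
    common = D_m + D_e * theta**2
    A = (common + 3.0 * D_m * math.cos(2.0 * phi)) / r**3
    B = (common - 3.0 * D_m * math.cos(2.0 * phi)) / r**3
    C = (P + Q * theta**2) / r**3
    D = (D_m * math.sin(2.0 * phi)) / r**3
    return A, B, C, D

# Square-lattice axes (0 and 90 deg) versus a diagonal bond (45 deg).
for phi_deg in (0.0, 45.0, 90.0):
    A, B, C, D = xyz_rates(r=1.0, phi=math.radians(phi_deg))
    print(f"phi = {phi_deg:4.0f} deg : A={A:+.3f}  B={B:+.3f}  C={C:+.3f}  D={D:+.3f}")
```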
2302.07679
On graph-based reentrancy-free semantic parsing
We propose a novel graph-based approach for semantic parsing that resolves two problems observed in the literature: (1) seq2seq models fail on compositional generalization tasks; (2) previous work using phrase structure parsers cannot cover all the semantic parses observed in treebanks. We prove that both MAP inference and latent tag anchoring (required for weakly-supervised learning) are NP-hard problems. We propose two optimization algorithms based on constraint smoothing and conditional gradient to approximately solve these inference problems. Experimentally, our approach delivers state-of-the-art results on Geoquery, Scan and Clevr, both for i.i.d. splits and for splits that test for compositional generalization.
Alban Petit, Caio Corro
2023-02-15T14:14:09Z
http://arxiv.org/abs/2302.07679v1
# On graph-based reentrancy-free semantic parsing ###### Abstract We propose a novel graph-based approach for semantic parsing that resolves two problems observed in the literature: (1) seq2seq models fail on compositional generalization tasks; (2) previous work using phrase structure parsers cannot cover all the semantic parses observed in treebanks. We prove that both MAP inference and latent tag anchoring (required for weakly-supervised learning) are NP-hard problems. We propose two optimization algorithms based on constraint smoothing and conditional gradient to approximately solve these inference problems. Experimentally, our approach delivers state-of-the-art results on GeoQuery, Scan and Clevr, both for i.i.d. splits and for splits that test for compositional generalization. + Footnote †: This work has been accepted for publication in TACL. This version is a pre-MIT Press publication version. ## 1 Introduction Semantic parsing aims to transform a natural language utterance into a structured representation that can be easily manipulated by a software (for example to query a database). As such, it is a central task in human-computer interfaces. Andreas et al. (2013) first proposed to rely on machine translation models for semantic parsing, where the target representation is linearized and treated as a foreign language. Due to recent advances in deep learning and especially in sequence-to-sequence (seq2seq) with attention architectures for machine translation Bahdanau et al. (2015), it is appealing to use the same architectures for standard structured prediction problems Vinyals et al. (2015). This approach is indeed common in semantic parsing Jia and Liang (2016); Dong and Lapata (2016); Wang et al. (2020), _inter alia_. Unfortunately, there are well known limitations to seq2seq architectures for semantic parsing. First, at test time, the decoding algorithm is typically based on beam search as the model is autoregressive and does not make any independence assumption. In case of prediction failure, it is therefore unknown if this is due to errors in the weighting function or to the optimal solution failing out of the beam. Secondly, they are known to fail when compositional generalization is required Lake and Baroni (2018); Finegan-Dollak et al. (2018); Keysers et al. (2020). In order to bypass these problems, Herzig and Berant (2021) proposed to represent the semantic content associated with an utterance as a phrase structure, _i.e._ using the same representation usually associated with syntactic constituents. As such, their semantic parser is based on standard span-based decoding algorithms Hall et al. (2014); Stern et al. (2017); Corro (2020) with additional well-formedness constraints from the semantic formalism. Given a weighting function, MAP inference is a polynomial time problem that can be solved via a variant of the CYK algorithm Kasami (1965); Younger (1967); Cocke (1970). Experimentally, Herzig and Berant (2021) show that their approach outperforms seq2seq models in terms of compositional generalization, therefore effectively bypassing the two major problems of these architectures. The complexity of MAP inference for phrase structure parsing is directly impacted by the search space considered Kallmeyer (2010). Importantly, (ill-nested) discontinuous phrase structure parsing is known to be NP-hard, even with a bounded block-degree Satta (1992). 
Herzig and Berant (2021) explore two restricted inference algorithms, both of which have a cubic time complexity with respect to the input length. The first one only considers continuous phrase structures, _i.e._ derived trees that could have been generated by a context-free grammar, and the second one also considers a specific type of discontinuities, see Corro (2020, Section 3.6). Both algorithms fail to cover the full set of phrase structures observed in semantic treebanks, see Figure 1. In this work, we propose to reduce semantic parsing without reentrancy (_i.e._ a given predicate or entity cannot be used as an argument for two different predicates) to a bi-lexical dependency parsing problem. As such, we tackle the same semantic content as aforementioned previous work but using a different mathematical representation (Rambow, 2010). We identify two main benefits to our approach: (1) as we allow crossing arcs, _i.e._ "non-projective graphs", all datasets are guaranteed to be fully covered and (2) it allows us to rely on optimization methods to tackle inference intractability of our novel graph-based formulation of the problem. More specifically, in our setting we need to jointly assign predicates/entities to words that convey a semantic content and to identify arguments of predicates via bi-lexical dependencies. We show that MAP inference in this setting is equivalent to the maximum generalized spanning arborescence problem (Myung et al., 1995) with supplementary constraints to ensure well-formedness with respect to the semantic formalism. Although this problem is NP-hard, we propose an optimization algorithm that solves a linear relaxation of the problem and can deliver an optimality certificate. Our contributions can be summarized as follows: * We propose a novel graph-based approach for semantic parsing without reentrancy; * We prove the NP-hardness of MAP inference and latent anchoring inference; * We propose a novel integer linear programming formulation for this problem together with an approximate solver based on conditional gradient and constraint smoothing; * We tackle the training problem using variational approximations of objective functions, including the weakly-supervised scenario; * We evaluate our approach on GeoQuery, Scan and Clevr and observe that it outperforms baselines on both i.i.d. splits and splits that test for compositional generalization. Code to reproduce the experiments is available online.1 Footnote 1: [https://github.com/alban-petit/semantic-dependency-parser](https://github.com/alban-petit/semantic-dependency-parser) ## 2 Graph-based semantic parsing We propose to reduce semantic parsing to parsing the abstract syntax tree (AST) associated to a semantic program. We focus on semantic programs Figure 1: Example of a semantic phrase structure from GeoQuery. This structure is outside of the search space of the parser of Herzig and Berant (2021) as the constituent in red is discontinuous and also has a discontinuous parent (in red+green). Figure 2: **(top)** The semantic program corresponding to the sentence “What rivers do not run through Tennessee?” in the GeoQuery dataset. **(middle)** The associated AST. **(bottom)** Two examples illustrating the intuition of our model: we jointly assign predicates/entities and identify argument dependencies. As such, the resulting structure is strongly related to a syntactic dependency parse, but where the dependency structure do not cover all words. 
whose ASTs do not have any reentrancy, _i.e._ a single predicate or entity cannot be the argument of two different predicates. Moreover, we assume that each predicate or entity is anchored on exactly one word of the sentence and each word can be the anchor of at most one predicate or entity. As such, the semantic parsing problem can be reduced to assigning predicates and entities to words and identifying arguments via dependency relations, see Figure 2. In order to formalize our approach to the semantic parsing problem, we will use concepts from graph theory. We therefore first introduce the vocabulary and notions that will be useful in the rest of this article. Notably, the notions of cluster and generalized arborescence will be used to formalize our prediction problem. **Notations and definitions.** Let \(G=\langle V,A\rangle\) be a directed graph with vertices \(V\) and arcs \(A\subseteq V\times V\). An arc in \(A\) from a vertex \(u\in V\) to a vertex \(v\in V\) is denoted either \(a\in A\) or \(u\to v\in A\). For any subset of vertices \(U\subseteq V\), we denote \(\sigma^{+}_{G}(U)\) (resp. \(\sigma^{-}_{G}(U)\)) the set of arcs leaving one vertex of \(U\) and entering one vertex of \(V\setminus U\) (resp. leaving one vertex of \(V\setminus U\) and entering one vertex of \(U\)) in the graph \(G\). Let \(B\subseteq A\) be a subset of arcs. We denote \(V[B]\) the cover set of \(B\), _i.e._ the set of vertices that appear as an extremity of at least one arc in \(B\). A graph \(G=\langle V,A\rangle\) is an arborescence2 rooted at \(u\in V\) if and only if (iff) it contains \(|V|-1\) arcs and there is a directed path from \(u\) to each vertex in \(V\). In the rest of this work, we will assume that the root is always vertex \(0\in V\). Let \(B\subseteq A\) be a set of arcs such that \(G^{\prime}=\langle V[B],B\rangle\) is an arborescence. Then \(G^{\prime}\) is a spanning arborescence of \(G\) iff \(V[B]=V\). Footnote 2: In the NLP community, arborescences are often called (directed) trees. We stick with the term arborescence as it is more standard in the graph theory literature, see for example Schrijver (2003). Using the term tree introduces a confusion between two unrelated algorithms, Kruskal’s maximum spanning tree algorithm Kruskal (1956) that operates on undirected graphs and Edmond’s maximum spanning arborescence algorithm Edmonds (1967) that operates on directed graphs. Moreover, this prevents any confusion between the graph object called arborescence and the semantic structure called AST. Let \(\pi=\{V_{0},...,V_{n}\}\) be a partition of \(V\) containing \(n+1\) clusters. \(G^{\prime}\) is a generalized not-necessarily-spanning arborescence (resp. generalized spanning arborescence) on the partition \(\pi\) of \(G\) iff \(G^{\prime}\) is an arborescence and \(V[B]\) contains at most one vertex per cluster in \(\pi\) (resp. contains exactly one). Let \(W\subseteq V\) be a set of vertices. Contracting \(W\) consists in replacing in \(G\) the set \(W\) by a new vertex \(w\notin V\), replacing all the arcs \(u\to v\in\sigma^{-}(W)\) by an arc \(u\to w\) and all the arcs \(u\to v\in\sigma^{+}(W)\) by an arc \(w\to v\). Given a graph with partition \(\pi\), the contracted graph is the graph where each cluster in \(\pi\) has been contracted. While contracting a graph may introduce parallel arcs, it is not an issue in practice, even for weighted graphs. ### Semantic grammar and AST. 
The semantic programs we focus on take the form of a functional language, _i.e._ a representation where each predicate is a function that takes other predicates or entities as arguments. The semantic language is typed in the same sense than in "typed programming languages". For example, in Geo Figure 3: **(a)** Example of a sentence and its associated AST (solid arcs) from the GeoQuery dataset. The dashed edges indicate predicates and entities anchors (note that this information is not available in the dataset). **(b)** The corresponding generalized valency-constrained not-necessarily-spanning arborescence (red arcs). The root is the isolated top left vertex. Adding \(\emptyset\) tags and dotted orange arcs produces a generalized spanning arborescence. Query, the predicate capital_2 expects an argument of type city and returns an object of type state. In the datasets we use, the typing system disambiguates the position of arguments in a function: for a given function, either all arguments are of the same type or the order of arguments is unimportant -- an example of both is the predicate intersection_river in GeoQuery that takes two arguments of type river, but the result of the execution is unchanged if the arguments are swapped.3 Footnote 3: There are a few corner cases like exclude_river, for which we simply assume arguments are in the same order as they appear in the input sentence. Formally, we define the set of valid semantic programs as the set of programs that can be produced with a semantic grammar \(\mathcal{G}=\langle E,T,f_{\textsc{TYPE}},f_{\textsc{ARGS}}\rangle\) where: * \(E\) is the set of predicates and entities, which we will refer to as the set of tags -- w.l.o.g. we assume that Root\(\notin E\) where Root is a special tag used for parsing; * \(T\) is the set of types; * \(f_{\textsc{TYPE}}:E\to T\) is a typing function that assigns a type to each tag; * \(f_{\textsc{ARGS}}:E\times T\rightarrow\mathbb{N}\) is a valency function that assigns the numbers of expected arguments of a given type to each tag. A tag \(e\in E\) is an entity iff \(\forall t\in T:f_{\textsc{ARGS}}(e,t)=0\). Otherwise, \(e\) is a predicate. A semantic program in a functional language can be equivalently represented as an AST, a graph where instances of predicates and entities are represented as vertices and where arcs identify arguments of predicates. Formally, an AST is a labeled graph \(G=\langle V,A,l\rangle\) where function \(l:V\to E\) assigns a tag to each vertex and arcs identify the arguments of tags, see Figure 2. An AST \(G\) is well-formed with respect to the grammar \(\mathcal{G}\) iff \(G\) is an arborescence and the valency and type constraints are satisfied, _i.e._\(\forall u\in V,t\in T\): \[f_{\textsc{ARGS}}(l(u),t)=|\sigma_{G}^{+}(\{u\},t)|\] where: \[\sigma_{G}^{+}(\{u\},t)=\left\{\begin{array}{l}u\to v\in\sigma_{G}^{+}( \{u\})\\ \text{s.t. }f_{\textsc{TYPE}}(l(v))=t\end{array}\right\}.\] ### Problem reduction and complexity In our setting, semantic parsing is a joint sentence tagging and dependency parsing problem (Bohnet and Nivre, 2012; Li et al., 2011; Corro et al., 2017): each content word (_i.e._ words that convey a semantic meaning) must be tagged with a predicate or an entity, and dependencies between content words identify arguments of predicates, see Figure 2. 
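Before describing the reduction further, the grammar and the well-formedness condition above can be made concrete with a short sketch. The toy grammar fragment below is purely illustrative (it is not taken from the datasets), and the check assumes its input arcs already form an arborescence:

```
from collections import Counter

# Illustrative grammar fragment in the style of GeoQuery (not the real grammar).
F_TYPE = {"answer": "root", "capital_2": "state", "cityid": "city"}
F_ARGS = {"answer": {"state": 1}, "capital_2": {"city": 1}, "cityid": {}}

def is_well_formed(label, arcs):
    """AST well-formedness: for every vertex u and type t,
    f_ARGS(l(u), t) == |sigma+_G({u}, t)|.
    `label` maps vertices to tags, `arcs` is a list of (u, v) pairs
    assumed to form an arborescence."""
    for u, tag in label.items():
        out_types = Counter(F_TYPE[label[v]] for (s, v) in arcs if s == u)
        required = {t: n for t, n in F_ARGS[tag].items() if n > 0}
        if required != dict(out_types):
            return False
    return True

# Example: answer(capital_2(cityid)) is well formed under the toy grammar.
label = {0: "answer", 1: "capital_2", 2: "cityid"}
print(is_well_formed(label, [(0, 1), (1, 2)]))  # True
```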
However, our semantic parsing setting differs from standard syntactic analysis in two ways: (1) the resulting structure is not-necessarily-spanning, there are words (_e.g._ function words) that must not be tagged and that do not have any incident dependency -- and those words are not known in advance, they must be identified jointly with the rest of the structure; (2) the dependency structure is highly constrained by the typing mechanism, that is the predicted structure must be a valid AST. Nevertheless, similarly to aforementioned works, our parser is graph-based, that is for a given input we build a (complete) directed graph and decoding is reduced to computing a constrained subgraph of maximum weight. Given a sentence \(\mathbf{w}=w_{1}...w_{n}\) with \(n\) words and a grammar \(\mathcal{G}\), we construct a clustered labeled graph \(G=\langle V,A,\pi,\bar{l}\rangle\) as follows. The partition \(\pi=\{V_{0},...,V_{n}\}\) contains \(n+1\) clusters, where \(V_{0}\) is a root cluster and each cluster \(V_{i}\), \(i\neq 0\), is associated to word \(w_{i}\). The root cluster \(V_{0}=\{0\}\) contains a single vertex that will be used as the root and every other cluster contains \(|E|\) vertices. The extended labeling function \(\bar{l}:V\to E\cup\{\textsc{Root}\}\) assigns a tag in \(E\) to each vertex \(v\in V\setminus\{0\}\) and Root to vertex \(0\). Distinct vertices in a cluster \(V_{i}\) cannot have the same label, _i.e._\(\forall u,v\in V_{i}:u\neq v\implies\bar{l}(u)\neq\bar{l}(v)\). Let \(B\subseteq A\) be a subset of arcs. The graph \(G^{\prime}=\langle V[B],B\rangle\) defines a 0-rooted generalized valency-constrained not-necessarily-spanning arborescence iff it is a generalized arborescence of \(G\), there is exactly one arc leaving 0 and the arborescence rooted at the destination of that arc is a valid AST with respect to the grammar \(\mathcal{G}\). As such, there is a one-to-one correspondence between ASTs anchored on the sentence \(\mathbf{w}\) and generalized valency-constrained not-necessarily-spanning arborescences in the graph \(G\), see Figure 2(b). For any sentence \(\mathbf{w}\), our aim is to find the AST that most likely corresponds to it. Thus, after building the graph \(G\) as explained above, the neural network described in Appendix B is used to pro duce a vector of weights \(\mathbf{\mu}\in\mathbb{R}^{|V|}\) associated to the set of vertices \(V\) and a vector of weights \(\mathbf{\phi}\in\mathbb{R}^{|A|}\) associated to the set of arcs \(A\). Given these weights, graph-based semantic parsing is reduced to an optimization problem called the maximum generalized valency-constrained not-necessarily-spanning arborescence (MGVCNNSA) in the graph \(G\). **Theorem 1**.: _The MGVCNNSA problem is NP-hard._ The proof is in Appendix A. ### Mathematical program Our graph-based approach to semantic parsing has allowed us to prove the intrinsic hardness of the problem. We follow previous work on graph-based parsing (Martins et al., 2009; Koo et al., 2010), _inter alia_, by proposing an integer linear programming (ILP) formulation in order to compute (approximate) solutions. Remember that in the joint tagging and dependency parsing interpretation of the semantic parsing problem, the resulting structure is not-necessarily-spanning, meaning that some words may not be tagged. In order to rely on well-known algorithms for computing spanning arborescences as a subroutine of our approximate solver, we first introduce the notion of extended graph. 
Given a graph \(G=\langle V,A,\pi,\bar{l}\rangle\), we construct an extended graph \(\overline{G}=\langle\overline{V},\overline{A},\overline{\pi},\bar{l}\rangle^{4}\) containing \(n\) additional vertices \(\{\overline{1},...,\overline{n}\}\) that are distributed along clusters, _i.e._ \(\overline{\pi}=\{V_{0},V_{1}\cup\{\overline{1}\},...,V_{n}\cup\{\overline{n}\}\}\), and arcs from the root to these extra vertices, _i.e._ \(\overline{A}=A\cup\{0\rightarrow\overline{i}\,|\,1\leq i\leq n\}\). Let \(B\subseteq A\) be a subset of arcs such that \(\langle V[B],B\rangle\) is a generalized not-necessarily-spanning arborescence on \(G\). Let \(\overline{B}\subseteq\overline{A}\) be a subset of arcs defined as \(\overline{B}=B\cup\{0\rightarrow\overline{i}\,|\,\sigma^{-}_{\langle V[B],B\rangle}(V_{i})=\emptyset\}\). Then, there is a one-to-one correspondence between generalized not-necessarily-spanning arborescences \(\langle V[B],B\rangle\) and generalized spanning arborescences \(\langle\overline{V[B]},\overline{B}\rangle\), see Figure 2(b). Let \(\mathbf{x}\in\{0,1\}^{|\overline{V}|}\) and \(\mathbf{y}\in\{0,1\}^{|\overline{A}|}\) be variable vectors indexed by vertices and arcs such that a vertex \(v\in V\) (resp. an arc \(a\in A\)) is selected iff \(x_{v}=1\) (resp. \(y_{a}=1\)). The set of 0-rooted generalized valency-constrained spanning arborescences on \(\overline{G}\) can be written as the set of variables \(\langle\mathbf{x},\mathbf{y}\rangle\) satisfying the following linear constraints. First, we restrict \(\mathbf{y}\) to structures that are spanning arborescences over \(\overline{G}\) where clusters have been contracted:

\[\sum_{a\in\sigma_{\overline{G}}^{-}(V_{0})}y_{a}=0 \tag{1}\]
\[\sum_{a\in\sigma_{\overline{G}}^{-}\left(\bigcup_{\overline{U}\in\overline{\pi}^{\prime}}\overline{U}\right)}y_{a}\geq 1\qquad\forall\overline{\pi}^{\prime}\subseteq\overline{\pi}\setminus\{V_{0}\} \tag{2}\]
\[\sum_{a\in\sigma_{\overline{G}}^{-}(\overline{V_{i}})}y_{a}=1\qquad\forall\overline{V_{i}}\in\overline{\pi}\setminus\{V_{0}\} \tag{3}\]

Constraints (2) ensure that the contracted graph is weakly connected. Constraints (3) force each cluster to have exactly one incoming arc. The set of vectors \(\mathbf{y}\) that satisfy these three constraints are exactly the set of \(0\)-rooted spanning arborescences on the contracted graph, see Schrijver (2003, Section 52.4) for an in-depth analysis of this polytope. The root vertex is always selected and other vertices are selected iff they have one incoming selected arc:

\[x_{0}=1 \tag{4}\]
\[x_{u}=\sum_{a\in\sigma_{\overline{G}}^{-}(\{u\})}y_{a}\qquad\forall u\in\overline{V}\setminus\{0\} \tag{5}\]

Note that constraints (1)-(3) do not force selected arcs to leave from a selected vertex as they operate at the cluster level. This property will be enforced via the valency constraints:

\[\sum_{u\in V\setminus\{0\}}y_{0\to u}=1 \tag{6}\]
\[\sum_{a\in\sigma_{G}^{+}(\{u\},t)}y_{a}=f_{\textsc{ARGS}}(\bar{l}(u),t)\times x_{u}\qquad\forall u\in V\setminus\{0\},\,t\in T \tag{7}\]

We denote \(\mathcal{C}^{\text{(sa)}}\) the set of pairs \(\langle\mathbf{x},\mathbf{y}\rangle\) satisfying constraints (1)-(5), \(\mathcal{C}^{\text{(val)}}\) the set of pairs satisfying constraints (6)-(7), and \(\mathcal{C}=\mathcal{C}^{\text{(sa)}}\cap\mathcal{C}^{\text{(val)}}\).
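For intuition, checking whether a candidate pair \(\langle\mathbf{x},\mathbf{y}\rangle\) satisfies the valency constraints (6)-(7) only requires counting selected outgoing arcs per type. The sketch below is illustrative (data structures and names are not from the released code), and it ignores the extra root-to-empty-vertex arcs of the extended graph:

```
from collections import Counter

def satisfies_valency(selected_vertices, selected_arcs, label, f_type, f_args, root=0):
    """Check constraints (6)-(7): exactly one selected arc leaves the root,
    and every selected vertex u has exactly f_ARGS(l(u), t) selected outgoing
    arcs towards vertices of type t (non-selected vertices have none).
    `selected_arcs` contains only arcs of the original graph G."""
    if sum(1 for (s, _) in selected_arcs if s == root) != 1:
        return False                                   # constraint (6)
    for u in set(label) - {root}:
        out_types = Counter(f_type[label[v]] for (s, v) in selected_arcs if s == u)
        required = f_args[label[u]] if u in selected_vertices else {}
        if {t: n for t, n in required.items() if n > 0} != dict(out_types):
            return False                               # constraint (7)
    return True
```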
Given vertex weights \(\mathbf{\mu}\in\mathbb{R}^{|\mathcal{V}|}\) and arc weights \(\mathbf{\phi}\in\mathbb{R}^{|\mathcal{A}|}\), computing the MGVCNNSA is equivalent to solving the following ILP: \[\max_{\mathbf{x},\mathbf{y}} \mathbf{\mu}^{\top}\mathbf{x}+\mathbf{\phi}^{\top}\mathbf{y}\] \[\text{s.t.} \langle\mathbf{x},\mathbf{y}\rangle\in\mathcal{C}^{\text{(sa)}}\text{ and }\langle\mathbf{x},\mathbf{y}\rangle\in\mathcal{C}^{\text{(val)}} \tag{11}\] Without constraint \(\langle\mathbf{x},\mathbf{y}\rangle\in\mathcal{C}^{\text{(val)}}\), the problem would be easy to solve. The set \(\mathcal{C}^{\text{(sa)}}\) is the set of spanning arborescences over the contracted graph, hence to maximize over this set we can simply: (1) contract the graph and assign to each arc in the contracted graph the weight of its corresponding arc plus the weight of its destination vertex in the original graph; (2) run the the maximum spanning arborescence algorithm (MSA, Edmonds, 1967; Tarjan, 1977) on the contracted graph, which has a \(\mathcal{O}(n^{2})\) time-complexity. This process is illustrated on Figure 5 (top). Note that the contracted graph may have parallel arcs, which is not an issue in practice as only the one of maximum weight can appear in a solution of the MSA. We have established that MAP inference in our semantic parsing framework is a NP-hard problem. We proposed an ILP formulation of the problem that would be easy to solve if some constraints were removed. This property suggests the use of an approximation algorithm that introduces the difficult constraints as penalties. As a similar setting arises from our weakly supervised loss function, the presentation of the approximation algorithm is deferred until Section 4. ## 3 Training objective functions ### Supervised training objective We define the likelihood of a pair \(\langle\mathbf{x},\mathbf{y}\rangle\in\mathcal{C}\) via the Boltzmann distribution: \[p_{\mathbf{\mu},\mathbf{\phi}}(\mathbf{x},\mathbf{y})=\exp(\mathbf{\mu}^{\top}\mathbf{x}+\mathbf{\phi}^{ \top}\mathbf{y}-c(\mathbf{\mu},\mathbf{\phi})),\] where \(c(\mathbf{\mu},\mathbf{\phi})\) is the log-partition function: \[c(\mathbf{\mu},\mathbf{\phi})=\log\sum_{\langle\mathbf{x}^{\prime},\mathbf{y}^{\prime}\rangle \in\mathcal{C}}\exp(\mathbf{\mu}^{\top}\mathbf{x}^{\prime}+\mathbf{\phi}^{\top}\mathbf{y}^{ \prime}).\] During training, we aim to maximize the log-likelihood of the training dataset. The log-likelihood of an observation \(\langle\mathbf{x},\mathbf{y}\rangle\) is defined as: \[\ell(\mathbf{\mu},\mathbf{\phi};\mathbf{x},\mathbf{y}) =\log p_{\mathbf{\mu},\mathbf{\phi}}(\mathbf{x},\mathbf{y})\] \[=\mathbf{\mu}^{\top}\mathbf{x}+\mathbf{\phi}^{\top}\mathbf{y}-c(\mathbf{\mu},\mathbf{\phi }).\] Unfortunately, computing the log-partition function is intractable as it requires summing over all feasible solutions. Instead, we rely on a surrogate lower-bound as an objective function. To this end, we derive an upper bound (because it is negated in \(\ell\)) to the second term: a sum of log-sum-exp functions that sums over each cluster of vertices independently and over incoming arcs in each cluster independently, which is tractable. This loss can be understood as a generalization of the head selection loss used in dependency parsing (Zhang et al., 2017). We now detail the derivation and prove that it is an upper bound to the log-partition function. 
Let \(\mathbf{U}\) be a matrix such that each row contains a pair \(\langle\mathbf{x},\mathbf{y}\rangle\in\mathcal{C}\) and \(\Delta^{|\mathcal{C}|}\) be the simplex of dimension \(|\mathcal{C}|-1\), _i.e._ the set of all stochastic vectors of dimension \(|\mathcal{C}|\). The log-partition function can then be rewritten using its variational formulation: \[c(\mathbf{\mu},\mathbf{\phi})=\max_{\mathbf{p}\in\Delta^{|\mathcal{C}|}}\mathbf{p}^{\top} \left(\mathbf{U}\begin{bmatrix}\mathbf{\mu}\\ \mathbf{\phi}\end{bmatrix}\right)+H[\mathbf{p}],\] where \(H[\mathbf{p}]=-\sum_{i}p_{i}\log p_{i}\) is the Shannon entropy. We refer the reader to Boyd and Vandenberghe (2004, Example 3.25), Wainwright and Jordan (2008, Section 3.6) and Beck (2017, Section 4.4.10). Note that this formulation remains impractical as \(\mathbf{p}\) has an exponential size. Let \(\mathcal{M}=\mathrm{conv}(\mathcal{C})\) be the marginal polytope, _i.e._ the convex hull of the feasible integer solutions, we can rewrite the above variational formulation as: \[=\max_{\mathbf{m}\in\mathcal{M}}\mathbf{m}^{\top}\begin{bmatrix}\mathbf{\mu}\\ \mathbf{\phi}\end{bmatrix}+H_{\mathcal{M}}[\mathbf{m}]\] where \(H_{\mathcal{M}}\) is a joint entropy function defined such that the equality holds. The maximization in this reformulation acts on the marginal probabilities of parts (vertices and arcs) and has therefore a polynomial number of variables. We refer the reader to Wainwright and Jordan (2008, 5.2.1) and Blondel et al. (2020, Section 7) for more details. Unfortunately, this optimization problem is hard to solve as \(\mathcal{M}\) cannot be caracterized in an explicit manner and \(H_{\mathcal{M}}\) is defined indirectly and lacks a polynomial closed form (Wainwright and Jordan, 2008, Section 3.7). However, we can derive an upper bound to the log-partition function by decomposing the entropy term \(H_{\mathcal{M}}\)(Cover, 1999, Property 4 on page 4.1, _i.e._\(H\) is an upper bound) and by using an outer approximation to the marginal polytope \(\mathcal{L}\supseteq\mathcal{M}\) (_i.e._ increasing the search space): \[\leq\max_{\mathbf{m}\in\mathcal{L}}\mathbf{m}^{\top}\begin{bmatrix}\mathbf{\mu}\\ \mathbf{\phi}\end{bmatrix}+H[\mathbf{m}]\] In particular, we observe that each pair \(\langle\mathbf{x},\mathbf{y}\rangle\in\mathcal{C}\) has exactly one vertex selected per cluster \(\overline{V_{i}}\in\overline{\pi}\) and one incoming arc selected per cluster \(\overline{V_{i}}\in\overline{\pi}\setminus\{V_{0}\}\). We denote \(\mathcal{C}^{\text{(one)}}\) the set of all the pairs \(\langle\mathbf{x},\mathbf{y}\rangle\) that satisfy these constraints. By using \(\mathcal{L}=\operatorname{conv}(\mathcal{C}^{\text{(one)}})\) as an outer approximation to the marginal polytope (see Figure 4) the optimization problem can be rewritten as a sum of independent problems. As each of these problems is the variational formulation of a log-sum-exp term, the upper bound on \(c(\mathbf{\mu},\mathbf{\phi})\) can be expressed as a sum of log-sum-exp functions, one over vertices in each cluster \(\overline{V_{i}}\in\overline{\pi}\setminus\{V_{0}\}\) and one over incoming arcs \(\sigma^{-}_{\overline{G}}(\overline{V_{i}})\) for each cluster \(\overline{V_{i}}\in\overline{\pi}\setminus\{V_{0}\}\). Although this type of approximation may not result in a Bayes consistent loss (Corro, 2023), it works well in practice. 
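In code, the surrogate is just a sum of log-sum-exp terms, one over the vertex scores of each cluster and one over the scores of the arcs entering each cluster. The sketch below assumes the scores of the extra "empty" vertex and of its root arc are already included in these arrays; variable names are illustrative:

```
import numpy as np
from scipy.special import logsumexp

def surrogate_log_partition(vertex_scores, incoming_arc_scores):
    """Tractable upper bound on the log-partition c(mu, phi).
    vertex_scores[i]: scores of the vertices of cluster V_i (i >= 1),
    incoming_arc_scores[i]: scores of the arcs entering cluster V_i."""
    return (sum(logsumexp(s) for s in vertex_scores)
            + sum(logsumexp(s) for s in incoming_arc_scores))

def surrogate_nll(mu, phi, x_gold, y_gold, vertex_scores, incoming_arc_scores):
    """Surrogate negative log-likelihood of a gold pair <x, y> (numpy arrays)."""
    return (-float(np.dot(mu, x_gold)) - float(np.dot(phi, y_gold))
            + surrogate_log_partition(vertex_scores, incoming_arc_scores))
```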
### Weakly-supervised training objective Unfortunately, training data often does not include gold pairs \(\langle\mathbf{x},\mathbf{y}\rangle\) but instead only the AST, without word anchors (or word alignment). This is the case for the three datasets we use in our experiments. We thus consider our training signal to be the set of all structures that induce the annotated AST, which we denote \(\mathcal{C}^{*}\). The weakly-supervised loss is defined as: \[\widetilde{\ell}(\mathbf{\mu},\mathbf{\phi};\mathcal{C}^{*})=\log\sum_{\langle\mathbf{x}, \mathbf{y}\rangle\in\mathcal{C}^{*}}p_{\mathbf{\mu},\mathbf{\phi}}(\mathbf{x},\mathbf{y}),\] _i.e._ we marginalize over all the structures that induce the gold AST. We can rewrite this loss as: \[=\left(\log\sum_{\langle\mathbf{x},\mathbf{y}\rangle\in\mathcal{C}^{*}} \exp(\mathbf{\mu}^{\top}\mathbf{x}+\mathbf{\phi}^{\top}\mathbf{y})\right)-c(\mathbf{\mu},\mathbf{\phi}).\] The two terms are intractable. We approximate the second term using the bound defined in Section 3.1. We now derive a tractable lower bound to the first term. Let \(q\) be a proposal distribution such that \(q(\mathbf{x},\mathbf{y})=0\) if \(\langle\mathbf{x},\mathbf{y}\rangle\notin\mathcal{C}^{*}\). We derive the following lower bound via Jensen's inequality: \[\log\sum_{\langle\mathbf{x},\mathbf{y}\rangle\in\mathcal{C}^{*}}\exp(\bm {\mu}^{\top}\mathbf{x}+\mathbf{\phi}^{\top}\mathbf{y})\] \[\qquad=\log\mathbb{E}_{q}\left[\frac{\exp(\mathbf{\mu}^{\top}\mathbf{x}+ \mathbf{\phi}^{\top}\mathbf{y}))}{q(\mathbf{x},\mathbf{y})}\right]\] \[\qquad\geq\mathbb{E}_{q}\left[\mathbf{\mu}^{\top}\mathbf{x}+\mathbf{\phi}^{ \top}\mathbf{y}\right]+H[q].\] This bound holds for any distribution \(q\) satisfying the aforementioned condition. We choose to maximize this lower bound using a distribution that gives a probability of one to a single structure, as in "hard" EM (Neal and Hinton, 1998, Section 6). For a given sentence \(\mathbf{w}\), let \(G=\langle V,A,\pi,\tilde{l}\rangle\) be a graph defined as in Section 2.2 and \(G^{\prime}=\langle V^{\prime},A^{\prime},l^{\prime}\rangle\) be an AST defined as in Section 2.1. We aim to find the GVCNNSA in \(G\) of maximum weight whose induced AST is exactly \(G^{\prime}\). This is equivalent to aligning each vertex in \(V^{\prime}\) with one vertex of \(V\setminus\{0\}\) s.t. there is at most one vertex per cluster of \(\pi\) appearing in the alignment and where the weight of an alignment is defined as: 1. for each vertex \(u^{\prime}\in V^{\prime}\), we add the weight of the vertex \(u\in V\) it is aligned to -- moreover, if \(u^{\prime}\) is the root of the AST we also add the weight of the arc \(0\to u\); 2. for each arc \(u^{\prime}\to v^{\prime}\in A^{\prime}\), we add the weight of the arc \(u\to v\) where \(u\in V\) (resp. \(v\in V\)) is the vertex \(u^{\prime}\) (resp. \(v^{\prime}\)) it is aligned with. Note that this latent anchoring inference consists in computing a (partial) alignment between vertices of \(G\) and \(G^{\prime}\), but the fact that we need to take into account arc weights forbids the use of the Kuhn-Munkres algorithm (Kuhn, 1955). **Theorem 2**.: _Computing the anchoring of maximum weight of an AST with a graph \(G\) is NP-hard._ The proof is in Appendix A. Therefore, we propose an optimization-based approach to compute the distribution \(q\). Note that Figure 4: Polyhedrons illustration. 
The solid lines represent the convex hull of feasible solutions of 1, denoted \(\mathcal{M}\), whose vertices are feasible integer solutions (black vertices). The dashed lines represent the convex hull of feasible solutions of the linear relaxation of 1, which has non integral vertices (in white). Finally, the dotted lines represent the polyhedron \(\mathcal{L}\) that is used to approximate \(c(\mathbf{\mu},\mathbf{\phi})\). All its vertices are integral, but some of them are not feasible solutions of 1. the problem has a constraint requiring each cluster \(V_{i}\in\pi\) to be aligned with at most one vertex \(v^{\prime}\in V^{\prime}\), _i.e._ each word in the sentence can be aligned with at most one vertex in the AST. If we remove this constraint, then the problem becomes tractable via dynamic programming. Indeed, we can recursively construct a table \(\textsc{Chart}[u^{\prime},u]\), \(u^{\prime}\in V^{\prime}\) and \(u\in V\), containing the score of aligning vertex \(u^{\prime}\) to vertex \(u\) plus the score of the best alignment of all the descendants of \(u^{\prime}\). To this end, we simply visit the vertices \(V^{\prime}\) of the AST in reverse topological order, see Algorithm 1. The best alignment can be retrieved via back-pointers. ``` functionDPalignment(\(G,G^{\prime}\)) for\(u^{\prime}\in V^{\prime}\) in reverse topological order do for\(u\in\{v\in V|\bar{l}(v)=l^{\prime}(u^{\prime})\}\)do\(\triangleright\) We can only map \(u^{\prime}\) to vertices \(u\) if they have the same tag. \(\textsc{Chart}[u^{\prime},u]\leftarrow\mu_{u}+\sum_{v^{\prime}\to v^{\prime} \in\sigma^{+}_{G^{\prime}}(\{u^{\prime}\})}\left(\max_{v\in V\textsc{Chart}[v ^{\prime},v]+\phi_{u\to v}}\right)\)return\(\max_{u\in V\textsc{Chart}[r^{\prime},u]}+\phi_{0\to u}\)\(\triangleright\) Where \(r^{\prime}\in A^{\prime}\) is the root of the AST. ``` **Algorithm 1** Unconstrained alignment of maximum weight between a graph \(G\) and an AST \(G^{\prime}\) Computing \(q\) is therefore equivalent to solving the following ILP: \[\textsc{(Ilp2)}\quad\max_{\mathbf{x},\mathbf{y}} \mathbf{\mu}^{\top}\mathbf{x}+\mathbf{\phi}^{\top}\mathbf{y}\] s.t. \[\langle\mathbf{x},\mathbf{y}\rangle\in\mathcal{C}^{\text{*(relax)}},\] \[\sum_{u\in V_{i}}x_{u}\leq 1\qquad\forall V_{i}\in\pi.\] The set \(\mathcal{C}^{\text{*(relax)}}\) is the set of feasible solutions of the dynamic program in Algorithm 1, whose convex hull can be described via linear constraints (Martin et al., 1990). ## 4 Efficient inference In this section, we propose an efficient way to solve the linear relaxations of MAP inference (Ilp1) and latent anchoring inference (Ilp2) via constraint smoothing and the conditional gradient method. We focus on problems of the following form: \[\max_{\mathbf{z}} f(\mathbf{z})\] s.t. \[\mathbf{z}\in\operatorname{conv}(\mathcal{C}^{\text{(easy)}})\] \[\mathbf{A}\mathbf{z}=\mathbf{b}\quad\text{or}\quad\mathbf{A}\mathbf{z}\leq\mathbf{b}\] where the vector \(\mathbf{z}\) is the concatenation of the vectors \(\mathbf{x}\) and \(\mathbf{y}\) defined previously and \(\operatorname{conv}\) denotes the convex hull of a set. We explained previously that if the set of constraints of form \(\mathbf{A}\mathbf{z}=\mathbf{b}\) for (Ilp1) or \(\mathbf{A}\mathbf{z}\leq\mathbf{b}\) for (Ilp2) was absent, the problem would be easy to solve under a linear objective function. 
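For (Ilp2), this easy problem is exactly the dynamic program of Algorithm 1, which translates almost line for line into Python. The sketch below omits back-pointers (needed to recover the actual alignment) and assumes the AST vertices are given in reverse topological order:

```
def dp_alignment(ast_vertices, ast_children, ast_tags, ast_root,
                 graph_vertices, graph_tags, mu, phi, root=0):
    """Unconstrained maximum-weight anchoring of an AST onto the graph.
    ast_vertices: reverse topological order, so children are processed
    before their parent; graph_vertices excludes the root vertex 0."""
    chart = {}
    for u_ast in ast_vertices:
        for u in graph_vertices:
            if graph_tags[u] != ast_tags[u_ast]:      # tags must match
                continue
            score = mu[u]
            for v_ast in ast_children[u_ast]:
                score += max(chart[(v_ast, v)] + phi[(u, v)]
                             for v in graph_vertices if (v_ast, v) in chart)
            chart[(u_ast, u)] = score
    # add the weight of the root arc 0 -> u for the anchoring of the AST root
    return max(chart[(ast_root, u)] + phi[(root, u)]
               for u in graph_vertices if (ast_root, u) in chart)
```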
In fact, there exists an efficient linear maximization oracle (LMO), _i.e._ a function that returns the optimal integral solution, for the set \(\operatorname{conv}(\mathcal{C}^{\text{(easy)}})\). This setting covers both (Ilp1) and (Ilp2) where we have \(\mathcal{C}^{\text{(easy)}}=\mathcal{C}^{\text{(sa)}}\) and \(\mathcal{C}^{\text{(easy)}}=\mathcal{C}^{\text{*(relax)}}\), respectively. An appealing approach in this setting is to introduce the problematic constraints as penalties in the objective: \[\max_{\mathbf{z}} f(\mathbf{z})-\delta_{S}(\mathbf{A}\mathbf{z})\] s.t. \[\mathbf{z}\in\operatorname{conv}(\mathcal{C}^{\text{(easy)}})\] where \(\delta_{S}\) is the indicator function of the set \(S\): \[\delta_{S}(\mathbf{A}\mathbf{z})=\begin{cases}0&\text{if }\mathbf{A}\mathbf{z}\in S,\\ +\infty&\text{otherwise.}\end{cases}\] In the equality case, we use \(S=\{\mathbf{b}\}\) and in the inequality case, we use \(S=\{\mathbf{u}|\mathbf{u}\leq\mathbf{b}\}\). ### Conditional gradient method Given a proper, smooth and differentiable function \(g\) and a nonempty, bounded, closed and convex set \(\operatorname{conv}(\mathcal{C}^{\text{(easy)}})\), the conditional gradient method (a.k.a. Frank-Wolfe, Frank and Wolfe, 1956; Levitin and Polyak, 1966; Lacoste-Julien and Jaggi, 2015) can be used to solve optimization problems of the following form: \[\max_{\mathbf{z}}\quad g(\mathbf{z})\quad\text{s.t.}\quad\mathbf{z}\in\operatorname{conv} (\mathcal{C}^{\text{(easy)}})\] Contrary to the projected gradient method, this approach does not require to compute projections onto the feasible set \(\operatorname{conv}(\mathcal{C}^{\text{(easy)}})\) which is, in most cases, computationally expensive. Instead, the conditional gradient method only relies on a LMO: \[\operatorname{lm}_{\mathcal{C}^{\text{(easy)}}}(\psi)\in\operatorname*{arg \,max}_{\mathbf{z}\in\operatorname{conv}(\mathcal{C}^{\text{(easy)}})}\mathbf{\psi}^{ \top}\mathbf{z}.\] The algorithm constructs a solution to the original problem as a convex combination of elements returned by the LMO. The pseudo-code is given in Algorithm 2. An interesting property of this method is that its step size range is bounded. This allows for simple linesearch techniques. ### Smoothing Unfortunately, the function \(g(\mathbf{z})=f(\mathbf{z})-\delta_{S}(\mathbf{A}\mathbf{z})\) is non-smooth due to the indicator function term, preventing the use of the conditional gradient method. We propose to rely on the framework proposed by Yurtsever et al. (2018) where the indicator function is replaced by a smooth approximation. The indicator function of the set \(S\) can be rewritten as: \[\delta_{S}(\mathbf{A}\mathbf{z})=\delta_{S}^{**}(\mathbf{A}\mathbf{z})=\sup_{\mathbf{u}}\mathbf{u}^{ \top}\mathbf{A}\mathbf{z}-\sigma_{S}(\mathbf{u}),\] where \(\delta_{S}^{**}\) denotes the Fenchel biconjugate of the indicator function and \(\sigma_{S}(\mathbf{u})=\sup_{\mathbf{t}\in S}\mathbf{u}^{\top}\mathbf{t}\) is the support function of \(S\). More details can be found in Beck (2017, Section 4.1 and 4.2). In order to smooth the indicator function, we add a \(\beta\)-parameterized convex regularizer \(-\frac{\beta}{2}\|\cdot\|_{2}^{2}\) to its Fenchel biconjugate: \[\delta_{S,\beta}^{**}(\mathbf{A}\mathbf{z})=\max_{\mathbf{u}}\mathbf{u}^{\top}\mathbf{A}\mathbf{z}- \sigma_{S}(\mathbf{u})-\frac{\beta}{2}\|\mathbf{u}\|_{2}^{2}\] where \(\beta>0\) controls the quality and the smoothness of the approximation (Nesterov, 2005). 
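Putting these pieces together for (Ilp1), the solver can be sketched as follows: the LMO contracts the clusters and calls an off-the-shelf maximum spanning arborescence routine, and the iterate moves towards the vertex returned by the LMO. This is an illustrative skeleton under stated assumptions, not the released solver: networkx is assumed purely for illustration, and the glue that splits the gradient vector into vertex/arc scores and re-encodes the returned arcs as a 0/1 vector \(\langle\mathbf{x},\mathbf{y}\rangle\) is omitted.

```
import numpy as np
import networkx as nx

def lmo_msa(psi_vertices, psi_arcs, clusters, arcs, root_cluster=0):
    """LMO over conv(C^(sa)): contract clusters, move vertex scores onto
    incoming arcs, keep the best parallel arc, run a maximum spanning
    arborescence, and return the selected original arcs."""
    cluster_of = {v: c for c, vs in clusters.items() for v in vs}
    G = nx.DiGraph()
    G.add_nodes_from(clusters)
    for (u, v) in arcs:
        cu, cv = cluster_of[u], cluster_of[v]
        if cu == cv or cv == root_cluster:
            continue
        w = psi_arcs[(u, v)] + psi_vertices[v]
        if not G.has_edge(cu, cv) or w > G[cu][cv]["weight"]:
            G.add_edge(cu, cv, weight=w, original=(u, v))
    msa = nx.maximum_spanning_arborescence(G, attr="weight")
    return [G[a][b]["original"] for a, b in msa.edges()]

def conditional_gradient(lmo, grad, z0, n_iters=100):
    """Generic Frank-Wolfe loop. `lmo` must return a 0/1 vector encoding the
    oracle's solution; `grad(z)` is the gradient of the penalized objective."""
    z = np.asarray(z0, dtype=float)
    for k in range(n_iters):
        s = np.asarray(lmo(grad(z)), dtype=float)
        gamma = 2.0 / (k + 2.0)          # default step size; see below for better rules
        z = (1.0 - gamma) * z + gamma * s
    return z
```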
**Equalities.** In the case where \(S=\{\mathbf{b}\}\), with a few computations that are detailed by Yurtsever Figure 5: Illustration of the approximate inference algorithm on the two-word sentence “List states”, where we assume the grammar has one entity state_all and one predicate loc_1 that takes exactly one entity as argument. The left graph is the extended graph for the sentence, including vertices and arcs weights (in black). If we ignore constraints (6)–(7), inference is reduced to computing the MSA on the contracted graph (solid arcs in the middle column). This may lead to solutions that do not satisfy constraints (6)–(7) on the expanded graph (top example). However, the gradient of the smoothed constraint (7) will induce penalties (in red) to vertex and arc scores that will encourage the loc_1 predicate to either be dropped from the solution or to have an outgoing arc to a state_all argument. Computing the MSA on the contracted graph with penalties results in a solution that satisfies constraints (6)–(7) (bottom example). et al. (2018), we obtain: \[\delta^{**}_{\{\mathbf{b}\},\beta}(\mathbf{A}\mathbf{z})=\frac{1}{2\beta}\|\mathbf{A}\mathbf{z}-\mathbf{b} \|_{2}^{2}.\] That is, we have a quadratic penalty term in the objective. Note that this term is similar to the term introduced in an augmented Lagrangian (Nocedal and Wright, 1999, Equation 17.36), and adds a penalty in the objective for vectors \(\mathbf{z}\) s.t. \(\mathbf{A}\mathbf{z}\neq\mathbf{b}\). **Inequalities.** In the case where \(S=\{\mathbf{u}|\mathbf{u}\leq\mathbf{b}\}\), similar computations lead to: \[\delta^{**}_{\mathbf{\leq b},\beta}(\mathbf{A}\mathbf{z})=\frac{1}{2\beta}\|[\mathbf{A}\mathbf{z} -\mathbf{b}]_{+}\|_{2}^{2}\] where \([.]_{+}\) denotes the Euclidian projection into the non-negative orthant (_i.e._ clipping negative values). Similarly to the equality case, this term introduces a penalty in the objective for vectors \(\mathbf{z}\) s.t. \(\mathbf{A}\mathbf{z}>\mathbf{b}\). This penalty function is also called the Courant-Beltrami penalty function. Figure 5 (bottom) illustrates how the gradient of the penalty term can "force" the LMO to return solutions that satisfy the smoothed constraints. ### Practical details **Smoothness.** In practice, we need to choose the smoothness parameter \(\beta\). We follow Yutsever et al. (2018) and use \(\beta^{(k)}=\frac{\beta^{(0)}}{\sqrt{k+1}}\) where \(k\) is the iteration number and \(\beta^{(0)}=1\). **Step size.** Another important choice in the algorithm is the step size \(\gamma\). We show that when the smoothed constraints are equalities, computing the optimal step size has a simple closed form solution if the function \(f\) is linear, which is the case for (1), _i.e._ MAP decoding. The step size problem formulation at iteration \(k\) is defined as: \[\operatorname*{arg\,max}_{\gamma\in[0,1]}f(\mathbf{z}^{(k)}+\gamma\mathbf{d})-\frac{ \|\mathbf{A}(\mathbf{z}^{(k)}+\gamma\mathbf{d})-\mathbf{b}\|^{2}}{2\beta}\] By assumption, \(f\) is linear and can be written as \(f(\mathbf{z})=\mathbf{\theta}^{\top}\mathbf{z}\). Ignoring the box constraints on \(\gamma\), by first order optimality conditions, we have: \[\gamma=\frac{-\beta\theta^{\top}\mathbf{d}+(\mathbf{A}\mathbf{d})^{\top}\mathbf{b}-(\mathbf{A}\bm {d})^{\top}(\mathbf{A}\mathbf{z}^{(k)})}{\|\mathbf{A}\mathbf{d}\|^{2}}\] We can then simply clip the result so that it satisfies the box constraints. Unfortunately, in the inequalities case, there is no simple closed form solution. 
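Both smoothed penalty terms, and the gradients consumed by the conditional gradient loop sketched above, are a few lines of numpy under these definitions (variable names are illustrative):

```
import numpy as np

def penalty_eq(A, b, z, beta):
    """Value and gradient of ||A z - b||^2 / (2 beta) (equality constraints)."""
    r = A @ z - b
    return (r @ r) / (2.0 * beta), (A.T @ r) / beta

def penalty_ineq(A, b, z, beta):
    """Value and gradient of ||[A z - b]_+||^2 / (2 beta) (inequality
    constraints, i.e. the Courant-Beltrami penalty)."""
    r = np.maximum(A @ z - b, 0.0)
    return (r @ r) / (2.0 * beta), (A.T @ r) / beta

def beta_schedule(k, beta0=1.0):
    """Smoothness schedule used in the text: beta^(k) = beta^(0) / sqrt(k+1)."""
    return beta0 / np.sqrt(k + 1.0)
```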
We approximate the step size using 10 iterations of the bisection algorithm for root finding. **Non-integral solutions.** As we solve the linear relaxation of the original ILPs, the optimal solutions may not be integral. Therefore, we use simple heuristics to construct a feasible solution to the original ILP in these cases. For MAP inference, we simply solve the ILP5 using CPLEX but introducing only variables that have a non-null value in the linear relaxation, leading to a very sparse problem which is fast to solve. For latent anchoring, we simply use the Kuhn-Munkres algorithm using the non-integral solution as assignment costs. Footnote 5: We use the multi-commodity flow formulation of Martins et al. (2009) instead of the cycle breaking constraints (2). ## 5 Experiments We compare our method to baseline systems both on i.i.d. splits (IID) and splits that test for compositional generalization, for three datasets. The neural network is described in Appendix B. **Datasets.** Scan (Lake and Baroni, 2018) contains natural language navigation commands. We use the variant of Herzig and Berant (2021) for semantic parsing. The IID split is the _simple_ split (Lake and Baroni, 2018). The compositional splits are _primitive right_ (Right) and _primitive around right_ (ARight) (Loula et al., 2018). GeoQuery (Zelle and Mooney, 1996) uses the FunQL formalism (Kate et al., 2005) and contains questions about US geography. The IID split is the standard split and compositional generalization is evaluated on two splits: Length where the examples are split by program length and Template (Finegan-Dollak et al., 2018) where they are split such that all semantic programs having the same AST are in the same split. Clevr (Johnson et al., 2017) contains synthetic questions over object relations in images. Closure (Bahdanau et al., 2019) introduces additional question templates that require compositional generalization. We use the original split as our IID split and the Closure split as a compositional split where the model is evaluated on Closure. **Baselines.** We compare our approach against the architecture proposed by Herzig and Berant (2021) (SpanBasedSP) as well as the seq2seq baselines they used. In Seq2Seq (Jia and Liang, 2016), the encoder is a bi-LSTM over pre-trained GloVe embeddings (Pennington et al., 2014) or ELMo (Peters et al., 2018) and the decoder is an attention-based LSTM (Bahdanau et al., 2015). BERT2Seq replaces the encoder with BERT-base. GRAMMAR is similar to Seq2Seq but the decoding is constrained by a grammar. BART (Lewis et al., 2020) is pre-trained as a denoising autoencoder. **Results.** We report the denotation accuracies in Table 1. Our approach outperforms all other methods. In particular, the seq2seq baselines suffer from a significant drop in accuracy on splits that require compositional generalization. While SpanBasedSP is able to generalize, our approach outperforms it. Note that we observed that the GeoQuery execution script used to compute denotation accuracy in previous work contains several bugs that overestimate the true accuracy. Therefore, we also report denotation accuracy with a corrected executor6 (see Appendix C) for fair comparison with future work. Footnote 6: [https://github.com/alban-petit/gequery-funql-executor](https://github.com/alban-petit/gequery-funql-executor) We also report exact match accuracy, with and without the heuristic to construct integral solutions from fractional ones. The exact match accuracy is always lower than or equal to the denotation accuracy.
This shows that our approach can sometimes provide the correct denotation even though the prediction is different from the gold semantic program. Importantly, while our approach outperforms baselines, its accuracy is still significantly worse on the split that requires generalization to longer programs. ## 6 Related work **Graph-based methods.** Graph-based methods have been popularized by syntactic dependency parsing McDonald et al. (2005) where MAP inference is realized via the maximum spanning arborescence algorithm Chu and Liu (1965); Edmonds (1967). A benefit of this algorithm is that it has an \(\mathcal{O}(n^{2})\) time-complexity Tarjan (1977), _i.e._ it is more efficient than algorithms exploring more restricted search spaces Eisner (1997); Gomez-Rodriguez et al. (2011); Pitler et al. (2012, 2013). In the case of semantic structures, Kuhlmann and Jonsson (2015) proposed an \(\mathcal{O}(n^{3})\) algorithm for maximum not-necessarily-spanning acyclic graphs with a noncrossing arc constraint. Without the noncrossing constraint, the problem is known to be NP-hard Grotschel et al. (1985). To bypass this computational complexity, Dozat and Manning (2018) proposed to handle each dependency as an independent binary classification problem, that is they do not enforce any constraint on the output structure. Note that, contrary to our work, these approaches allow for reentrancy but do not enforce well-formedness of the output with respect to the semantic grammar. Lyu and Titov (2018) use a similar approach for AMR parsing where tags are predicted first, followed by arc predictions and finally heuristics are used to ensure the output graph is valid. On the contrary, we do not use a pipeline and we focus on joint decoding where validity of the output is directly encoded in the search space.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Scan} & \multicolumn{3}{c}{GeoQuery} & \multicolumn{2}{c}{Clevr} \\ \cline{2-9} & IID & Right & ARight & IID & Template & Length & IID & Closure \\ \hline \multicolumn{9}{l}{**Baselines (denotation accuracy only)**} \\ \hline Seq2Seq & 99.9 & 11.6 & 0 & 78.5 & 46.0 & 24.3 & **100** & 59.5 \\ + ELMo & **100** & 54.9 & 41.6 & 79.3 & 50.0 & 25.7 & **100** & 64.2 \\ BERT2Seq & **100** & 77.7 & 95.3 & 81.1 & 49.6 & 26.1 & **100** & 56.4 \\ GRAMMAR & **100** & 0.0 & 4.2 & 72.1 & 54.0 & 24.6 & **100** & 51.3 \\ BART & **100** & 50.5 & **100** & 87.1 & 67.0 & 19.3 & **100** & 51.5 \\ SpanBasedSP & **100** & **100** & **100** & 86.1 & 82.2 & 63.6 & 96.7 & 98.8 \\ \hline \multicolumn{9}{l}{**Our approach**} \\ \hline Denotation accuracy & **100** & **100** & **100** & **92.9** & **89.9** & **74.9** & **100** & **99.6** \\ \(\downarrow\) Corrected executor & & & & 91.8 & 88.7 & 74.5 & \\ Exact match & **100** & **100** & **100** & 90.7 & 86.2 & 69.3 & **100** & **99.6** \\ \(\downarrow\) w/o CPLEX heuristic & **100** & **100** & **100** & 90.0 & 83.0 & 67.5 & **100** & 98.0 \\ \hline \hline \end{tabular} \end{table} Table 1: Denotation and exact match accuracy on the test sets. All the baseline results were taken from Herzig and Berant (2021). For our approach, we also report exact match accuracy, _i.e._ the percentage of sentences for which the prediction is identical to the gold program. The last line reports the exact match accuracy without the use of CPLEX to round non-integral solutions (Section 4.3).
Previous work in the literature has also considered reduction to graph-based methods for other problems, _e.g._ for discontinuous constituency parsing Fernandez-Gonzalez and Martins (2015); Corro et al. (2017), lexical segmentation Constant and Le Roux (2015) and machine translation Zaslavskiy et al. (2009), _inter alia_. **Compositional generalization.** Several authors observed that compositional generalization insufficiency is an important source of error for semantic parsers, especially ones based on seq2seq architectures Lake and Baroni (2018); Finegan-Dollak et al. (2018); Herzig and Berant (2019); Keysers et al. (2020). Wang et al. (2021) proposed a latent re-ordering step to improve compositional generalization, whereas Zheng and Lapata (2021) relied on latent predicate tagging in the encoder. There has also been an interest in using data augmentation methods to improve generalization Jia and Liang (2016); Andreas (2020); Akyurek et al. (2021); Qiu et al. (2022); Yang et al. (2022). Recently, Herzig and Berant (2021) showed that span-based parsers do not exhibit such problematic behavior. Unfortunately, these parsers fail to cover the set of semantic structures observed in English treebanks, and we hypothesize that this would be even worse for free word order languages. Our graph-based approach does not exhibit this downside. Previous work by Jambor and Bahdanau (2022) also considered graph-based methods for compositional generalization, but their approach predicts each part independently without any well-formedness or acyclicity constraint. ## 7 Conclusion In this work, we focused on graph-based semantic parsing for formalisms that do not allow reentrancy. We conducted a complexity study of two inference problems that appear in this setting. We proposed ILP formulations of these problems together with a solver for their linear relaxation based on the conditional gradient method. Experimentally, our approach outperforms comparable baselines. One downside of our semantic parser is speed (we parse approximately 5 sentences per second for GeoQuery). However, we hope this work will give a better understanding of the semantic parsing problem together with baseline for faster methods. Future research will investigate extensions for (1) ASTs that contain reentrancies and (2) prediction algorithms for the case where a single word can be the anchor of more than one predicate or entity. These two properties are crucial for semantic representations like Abstract Meaning Representation Banarescu et al. (2013). Moreover, even if our graph-based semantic parser provides better results than previous work on length generalization, this setting is still difficult. A more general research direction on neural architectures that generalize better to longer sentences is important. ## Acknowledgments We thank Francois Yvon and the anonymous reviewers for their comments and suggestions. We thank Jonathan Herzig and Jonathan Berant for fruitful discussions. This work benefited from computations done on the Saclay-IA platform and on the HPC resources of IDRIS under the allocation 2022-AD011013727 made by GENCI.
2304.03791
ASAS-SN Sky Patrol V2.0
The All-Sky Automated Survey for Supernovae (ASAS-SN) began observing in late-2011 and has been imaging the entire sky with nightly cadence since late 2017. A core goal of ASAS-SN is to release as much useful data as possible to the community. Working towards this goal, in 2017 the first ASAS-SN Sky Patrol was established as a tool for the community to obtain light curves from our data with no preselection of targets. Then, in 2020 we released static V-band photometry from 2013--2018 for 61 million sources. Here we describe the next generation ASAS-SN Sky Patrol, Version 2.0, which represents a major progression of this effort. Sky Patrol 2.0 provides continuously updated light curves for 111 million targets derived from numerous external catalogs of stars, galaxies, and solar system objects. We are generally able to serve photometry data within an hour of observation. Moreover, with a novel database architecture, the catalogs and light curves can be queried at unparalleled speed, returning thousands of light curves within seconds. Light curves can be accessed through a web interface (http://asas-sn.ifa.hawaii.edu/skypatrol/) or a Python client (https://asas-sn.ifa.hawaii.edu/documentation). The Python client can be used to retrieve up to 1 million light curves, generally limited only by bandwidth. This paper gives an updated overview of our survey, introduces the new Sky Patrol, and describes its system architecture. These results provide significant new capabilities to the community for pursuing multi-messenger and time-domain astronomy.
K. Hart, B. J. Shappee, D. Hey, C. S. Kochanek, K. Z. Stanek, L. Lim, S. Dobbs, M. Tucker, T. Jayasinghe, J. F. Beacom, T. Boright, T. Holoien, J. M. Joel Ong, J. L. Prieto, T. A. Thompson, D. Will
2023-04-07T18:00:07Z
http://arxiv.org/abs/2304.03791v1
# ASAAS-SN Sky Patrol V2.0 ###### Abstract The All-Sky Automated Survey for Supernovae (ASAS-SN) began observing in late-2011 and has been imaging the entire sky with nightly cadence since late 2017. A core goal of ASAS-SN is to release as much useful data as possible to the community. Working towards this goal, in 2017 the first ASAS-SN Sky Patrol was established as a tool for the community to obtain light curves from our data with no preselection of targets. Then, in 2020 we released static \(V\)-band photometry from 2013-2018 for \(\sim 61\) million sources. Here we describe the next generation ASAS-SN Sky Patrol, Version 2.0, which represents a major progression of this effort. Sky Patrol 2.0 provides continuously updated light curves for \(\sim 111\) million targets derived from numerous external catalogs of stars, galaxies, and solar system objects. We are generally able to serve photometry data within an hour of observation. Moreover, with a novel database architecture, the catalogs and light curves can be queried at unparalleled speed, returning thousands of light curves within seconds. Light curves can be accessed through a web interface ([http://asas-sn.ifa.hawaii.edu/skypatrol/](http://asas-sn.ifa.hawaii.edu/skypatrol/)) or a Python client ([https://asas-sn.ifa.hawaii.edu/documentation](https://asas-sn.ifa.hawaii.edu/documentation)). The Python client can be used to retrieve up to 1 million light curves, generally limited only by bandwidth. This paper gives an updated overview of our survey, introduces the new Sky Patrol, and describes its system architecture. These results provide significant new capabilities to the community for pursuing multi-messenger and time-domain astronomy. ## 1. Introduction The All-Sky Automated Survey for Supernovae (ASAS-SN; Shappee et al., 2014; Kochanek et al., 2017) began observing in 2011 with the mission of identifying bright transients across the whole sky with minimal observational bias. With its initial two-camera station at the Haleakala Observatory (Hawaii), ASAS-SN was able to image the visible sky roughly once every 5 days. By late 2013, our team had installed two additional cameras on the Haleakala station, by April 2014 we had installed another two-camera station at the Cerro Tololo International Observatory (CTIO, Chile), and by mid-2015 we had installed two more cameras at the CTIO station. These additional cameras increased our survey cadence and allowed us to image the entire sky every few nights. In late 2017, we added three additional 4-camera stations, one at McDonald Observatory (Texas), one at the South African Astrophysical Observatory (SAAO, South Africa), and an additional station at CTIO. This geographic redundancy in both the northern and southern hemispheres ensures that we are able to observe the entire sky on a nightly basis. Our stations are hosted by the Las Cumbres Observatory Global Telescope Network (LCOGT; Brown et al., 2013) and we have the ability to download and reduce images within minutes of their exposures. ASAS-SN units use cooled, back-illuminated FLI Pro-Line \(2K\times 2K\) CCD cameras with 14-cm aperture Nikon telephoto lenses. The units' field-of-view is 4.47 degrees on a side (20 degrees2) with a pixel size of 8.0 arcsec. Ideally, each observation epoch consists of three dithered 90 second exposures, though we are currently averaging 2.7 exposures per epoch due to scheduling and weather events. 
The two original units used \(V\)-band filters until late 2018, after which we switched to \(g\)-band filters. The three newer units have used \(g\)-band filters from the start. The limiting \(V\)-band and \(g\)-band magnitudes are \(m\sim 17.5\) and \(m\sim 18.5\), respectively. Footnote 2: affiliation: Department of Astronomy, The Ohio State University, 191 W. Woodruff Avenue, Columbus, OH 43210 Footnote 3: affiliation: Department of Astronomy, The Ohio State University, 140 West 18\({}^{th}\) Avenue, Columbus, OH 43210 Footnote 4: affiliation: Center for Cosmology and AstroParticle Physics, The Ohio State University, 191 W. Woodruff Avenue, Columbus, OH 43210 Footnote 5: affiliation: Department of Astronomy, University of California Berkeley, Berkeley CA 94720 Footnote 6: affiliation: NASA Hubble Fellow Footnote 7: affiliation: Department of Physics, The Ohio State University, 191 W Woodruff Ave, Columbus, OH 43210 Footnote 8: affiliation: Techohore LLC, 1150 El Cajon, CA, 92021 Footnote 9: affiliation: Carnegie Observatories, 813 Santa Barbara Street, Pasadena, CA 91101 Footnote 10: affiliation: The Astronomy Nucleus, Universidad Diego Portales, 441 Ejercito Libertor Avenue, Santiago, Chile In June 2017, we launched Sky Patrol V1.0; [https://asas-sn.osu.edu/](https://asas-sn.osu.edu/). The goal of Sky Patrol V1.0 was to allow users to request light curves from our image archive. Rather than pre-computing light curves for a set number of targets, Sky Patrol V1.0 provides uncensored light curves for any user-specified coordinates. Aperture photometry is performed in real time at the user's requested coordinates using local, nearby stars to calibrate the photometry. Though the flexibility of that tool is admirable, its need to compute light curves in real time restricts both the size and frequency of users' queries. To partially alleviate these limitations, we pre-computed static \(V\)-band light curves and released the ASAS-SN Catalog of Variable Stars ([https://asas-sn.osu.edu/variables](https://asas-sn.osu.edu/variables); Jayasinghe et al., 2018, 2019, 2019). We then expanded these catalogs to include 61.5 million stars across the sky used in the variable star search ([https://asas-sn.osu.edu/photometry](https://asas-sn.osu.edu/photometry); Jayasinghe et al., 2019). We have also created a citizen science program using the Zooniverse12 platform named Citizen ASAS-SN (Christy et al., 2021, 2022) where, as of Jan 1 2023, \(\sim 5,300\) volunteers have classified over 1,400,000 ASAS-SN \(g\)-band light curves through searches for unusual variable stars. Footnote 12: Zooniverse: [https://www.zooniverse.org/](https://www.zooniverse.org/) Finally, in September 2021, we further expanded Sky Patrol V1.0 to not only perform forced-aperture photometry on our reduced images but also to enable users to run aperture photometry on the coadded, image-subtracted data for each epoch with or without the flux of the source on the reference image added. This paper outlines Sky Patrol V2.0, which seeks to maintain the flexibility of its predecessor (which remains available) while massively improving on it in both speed and scale. V2.0 not only serves pre-computed light curves for a select list of \(\sim 111\) million targets, but it also continuously updates the light curves in real time. Like Sky Patrol V1.0, they are uncensored light curves with no deliberate delays in the updates. 
To do this, we have built a system on top of our automated image processing pipeline that performs photometry and updates our public database as new observations are obtained, calibrated, and reduced. In SS2, we describe our imaging pipeline and a number of science products resulting from our survey. In SS3, we discuss our collated target list, their catalogs of origin, and how they can be queried. In SS4, we introduce the Python and web interfaces to our dataset, and provide several usage examples.In SS5, we discuss the photometric properties and precision of this survey tool. In SS6, we discuss the design and metrics of our database. In SS7, we give a short summary and explore the applications of this tool in upcoming research. ## 2. Survey Overview To date, our high-cadence network of telescopes has produced millions of images. By repeatedly conducting observations in an all-sky fixed grid, we are able to leverage deeply stacked reference images to perform high-precision photometry and optical analysis for a number of different science cases. This section describes our imaging and photometry pipeline, as well as a variety of notable science products resulting from our survey's data. ### Photometry To achieve full coverage of the sky the ASAS-SN observations are scheduled on 2,824 tessellated fields. Each field matches the 20 degree\({}^{2}\) camera field-of-view and has at least a 0.5 degree overlap with adjacent fields. The fields are divided into 706 pointings, where each of the 4 cameras in an ASAS-SN unit observes a specific field. Figure 1 shows the our field map and the number of images taken for each field up to the time of writing. The data are analysed using a fully-automated pipeline based on the ISIS image-subtraction package (Alard and Lupton, 1998; Alard, 2000). Each photometric epoch (usually) combines three dithered 90-second image exposures with a 4.47\(\times\)4.47 degree field-of-view of the field that is subtracted from a reference image. Typically, within an hour of observation our pipeline reduces and co-adds the subtracted images together into observation epochs. We then perform photometry on each of these co-added subtractions and the reference image. The reference images are calibrated using isolated stars from the ATLAS Reference Catalog (Tonry et al., 2018) and by fitting a 2D polynomial to find the zero point as a function of XY position on each reference image. This step reduces any leftover zeropoint issues not removed by flat fielding.13 After calibration, we perform photometry on the reference image using IRAF apphot with a fixed aperture of 2 pixels, or approximately 16 arcsec, radius and a background annulus spanning 7-10 pixels in radius. We perform photometry on all targets from our input catalogs that fall within each field and are more than 0.2 degrees away from the image boundaries. We then use apphot on the same apertures to perform photometry on each subtracted image, generating a differential light curve to which the reference flux is added. Finally, the photometry for each target in a given co-add is appended to the corresponding light curves in our database. Footnote 13: This calibration scheme is different than the image-subtraction light curves available from Sky Patrol V1.0, which calibrate the reference only using stars near the source. This means that the light curves provided from Sky Patrol V1.0 and Sky Patrol V2.0 will not be identical. 
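The zero-point calibration step just described amounts to a least-squares surface fit; a minimal numpy sketch follows. The polynomial degree and all variable names are illustrative choices, not a description of the actual pipeline code:

```
import numpy as np

def fit_zeropoint_2d(x, y, dmag, deg=2):
    """Least-squares fit of a 2D polynomial zero point ZP(x, y) to the
    catalog-minus-instrumental magnitude offsets `dmag` of calibration
    stars at pixel positions (x, y). Returns a callable ZP(x, y)."""
    terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
    design = np.column_stack([x**i * y**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(design, dmag, rcond=None)
    return lambda xq, yq: sum(c * xq**i * yq**j
                              for c, (i, j) in zip(coeffs, terms))
```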
For each epoch in a given light curve, we record the Julian date, camera filter, FWHM, measured flux, flux error, magnitude, magnitude error, and 5 \(\sigma\) limiting magnitude. For any epoch where the source is not detected at 5 \(\sigma\), we still give the forced aperture flux but report the limiting magnitude instead of the magnitude and record the magnitude error as 99.99. Typically, we publish new photometric measurements within an hour of observation. On the timescale of a day, images are reviewed either manually or by additional quality checks for cirrus, clouds, or other image quality issues. Images are then determined to be either "good" or "bad." When this occurs, the corresponding photometry is then flagged as such in the Sky Patrol V2.0 database. We advise caution when using and interpreting photometry that has not been flagged as "good." Because we are constantly appending new observations to our light curves and periodically rebuilding reference images--which triggers our pipeline to re-run all photometry on previous co-adds--our light curves are consistently growing in length and improving in quality. ### Science Products The ASAS-SN survey was initially designed to create a local census of nearby, bright extra galactic transients and has been used to discover and study supernovae (e.g., Shappee et al., 2016; Holoien et al., 2016, 2017, 2017, 2017; Rochanek et al., 2017; Bose et al., 2019; Brown et al., 2019; Holoien et al., 2019; Shappee et al., 2019; Vallely et al., 2019; Neumann et al., 2022; Desai et al. in prep.), superluminous supernovae (e.g., Dong et al., 2016; Godoy-Rivera et al., 2017; Bose et al., 2018), rapidly evolving blue transients (e.g., Prentice et al., 2018), tidal disruption events (TDEs; e.g., Holoien et al., 2014, 2016, 2018, 2019c, 2020; Brown et al., 2016, 2018; Hinkle et al., 2020, 2021b, 2021a, 2020; ambiguous nuclear transients (ANTs; e.g., Neustadt et al., 2020; Hinkle et al., 2021; Holoien et al., 2022; Hinkle et al., 2022), active galactic nuclei variability (e.g., Yuk et al., 2022), orphan blazar flares (de Jaeger et al., 2022), large outbursts from active galactic nuclei (e.g., Shappee et al., 2014; Trakht-enprot et al., 2019; Hinkle et al., 2022; Neustadt et al., 2022), changing-look blazars (e.g., Mishra et al., 2021), and even a potential repeating partial TDE (e.g., Payne et al., 2021; Tucker et al., 2021; Payne et al., 2022, 2022). ASAS-SN has also been used to search for optical counterparts for multi-messenger events such as high-energy neutrinos observed by Icecube (e.g., Icecube Collaboration et al., 2017; Garrappa et al., 2019; IceCube Collaboration et al., 2018; Franckowiak et al., 2020; Necker et al., 2022) and LIGO/VIRGO gravitational wave events (e.g., Abbott et al., 2020; Shappee et al., 2019, 2019; c,d; de Jaeger et al., 2022). Moreover, the ASAS-SN data have been used in hundreds of publications to study Galactic and solar system objects. 
This includes large variable star studies (Jayasinghe et al., 2018, 2019, 2019; Aarwalak et al., 2019; Jayasinghe et al., 2019; O'Grady et al., 2020; Jayasinghe et al., 2021; Rowan et al., 2021; Christy et al., 2022, 2022), deriving period-luminosity relationships for \(\delta\) Scuti stars (Jayasinghe et al., 2020), fitting detached eclipsing binaries parameters (Way et al., 2022; Rowan et al., 2022, 2022), studying contact binaries (Jayasinghe et al., 2020), observing accreting white dwarf systems (e.g., Campbell et al., 2015; Littlefield et al., 2016), studying young stellar object variability (e.g., Holoien et al., 2014; Herczeg et al., 2016; Rodriguez et al., 2016; Sicilia-Aguilar et al., 2017; Gully-Santiago et al., 2017; Osborn et al., 2017; Rodriguez et al., 2017; Bredall et al., 2020). Furthermore, ASAS-SN was used to examine the long-term variability of Boyajian's Star (Simon et al., 2018) and has even recently identified a potential new "Boyajian's Star" analog (the source has been nicknamed Zachy's Star), exhibiting similar rapid dimming events (Way et al., 2019). ASAS-SN data have also been used to identify M-dwarf flares (e.g., Schmidt et al., 2014, 2016, 2019; Rodriguez Martinez et al., 2020; Zeldes et al., 2021), Novae and CVs (e.g., Kato et al., 2014, 2016, 2017; Li et al., 2017; Aydi et al., 2019, 2020, 2020; Kato et al., 2020; Kawash et al., 2021, 2021, 2022), X-ray binaries (e.g., Tucker et al., 2018), and R Coronae Borealis stars (e.g., Shields et al., 2019). ASAS-SN data have also been used to study stars and exoplanets by observing microlensing events (e.g., Dong et al., 2019; Wyrzykowski et al., 2020), determining asteroseismic distances for M giants (e.g., Auge et al., 2020), deriving gyrochronologic ages for exoplanet host stars (e.g., Gilbert et al., 2020), and vetting exoplanet candidates for background eclipsing binaries (e.g., Rodriguez et al., 2019). Finally, ASAS-SN data have even been useful for solar system studies through asteroid shape modeling (e.g., Hanus et al., 2020), discovering 2 new comets in outburst (Prieto et al., 2017; van Buitenen et al., 2018), and recovering the near-Earth asteroid 2019OK (Jacques et al., 2019). These studies demonstrate the wide utility of the ASAS-SN network and its dataset and why we have created a variety of tools which the community can benefit from. Figure 1.— Equatorial projection of ASAS-SN fields and number of images taken as of Feb 25, 2023. There are, on average, 2.7 images for each epoch. ## 3. Input Catalogs Unlike ASAS-SN Sky Patrol V1.0, which allowed users to request light curves at arbitrary points on the sky, Sky Patrol V2.0 by construction only serves pre-computed light curves for objects in our input catalogs. Our input catalogs consist of stellar sources, external catalog sources, and solar system sources. Objects in our stellar source table and catalog source tables have all been cross-matched to a precision of 2 arcseconds and given unique ASAS-SN Sky Patrol Identifiers (asas_sn_id14). Footnote 14: Not to be confused with ASAS-SN Identifiers given to objects discovered by our survey—e.g., ASASSN-15lh, ASASSN-14li, etc. ### Stellar Sources The stellar source table was constructed with the goal of providing an unbiased sample that maximizes the number of targets while maintaining the overall quality of our light curves. We used the ATLAS Reference Catalog (REFCAT2) to build the target list. 
Given the pixel size and magnitude sensitivity of our instruments, we chose to include objects with mean \(g<18.5\) mag and with r1 \(>\) 20 arcsec, where r1 is the radius at which the cumulative flux from surrounding sources exceeds that of the target star. Because REFCAT2 was compiled to include at least 99% of objects with \(m\leq 19\) mag and curated to exclude non-stellar objects, this stellar source table should be considered an exhaustive list of all stars observable by ASAS-SN that are not heavily crowded. Because REFCAT2 used Gaia Data Release 2 (Gaia DR2) for its astrometric solutions, we were able to directly match Gaia source IDs from the given RA and DEC coordinates. We then used the best-neighbour tables in the Gaia Archive to cross-match with AllWISE, SDSS, and 2MASS. Finally, we used the same Gaia source IDs to get TESS Input Catalog Version 8.0 (TIC) IDs. This means that users can query our stellar source table using either the original columns provided by REFCAT2 or by providing external catalog identifiers for any of these other cross-matched surveys. We note that some of our external source catalog tables contain stellar objects that do not occur in the stellar table. However, given the completeness of REFCAT2, these objects typically fall outside our sensitivity or suffer from significant crowding.

### Additional Catalog Sources

In addition to our stellar sources, Sky Patrol V2.0 provides light curves for a number of specialized catalogs from NASA's High Energy Astrophysics Science Archive Research Center (HEASARC), as shown in Table 1. With the explicit goal of aiding multi-messenger astronomy, we have included the entire source catalogs from Fermi LAT (Abdollahi et al., 2020), Chandra (Evans et al., 2010), and Swift (Nasa and Heasarc, 2018). For researchers interested in specific object and variable types we have included the AllWISE AGN catalog (Secrest et al., 2015), the Million Quasar Catalog (Milliquas; Flesch, 2021), the Bright M-Dwarf All Sky Catalog (Lepine and Gaidos, 2011), the AAVSO International Variable Star Index (Watson et al., 2006) and the Galaxy List for the Advanced Detector Era (GLADE; Dalya et al., 2018). Whereas our stellar sources were selected from REFCAT2 to ensure detections given our magnitude limits, these catalogs were not pruned based on flux. Photometry is performed at all target coordinates in these catalogs without bias. As with the stellar sources, null detections will be timestamped and reported in our light curves with a magnitude error of 99.99. Our interface allows the user to query sources using all of the original columns of the input catalogs, and we have maintained the original naming conventions of the catalogs' columns with few exceptions. Because many sources appear in more than one of these catalogs and we have already done the cross-matching, users can perform JOIN operations on these tables using asas_sn_ids. In Section 5, we provide several examples of such queries in ADQL. Finally, for a few of the catalogs, the ASAS-SN spatial resolution is better than the accuracy of the catalog's coordinates for some of its objects. In these cases, we make no effort to identify the corresponding optical counterpart and simply perform forced aperture photometry at the catalog's reported coordinates with our normal aperture size. Thus, users must be cautious when using the ASAS-SN Sky Patrol V2.0 photometry for poorly localized sources.
In these cases we recommend first searching for the corresponding optical counterpart's photometry in Sky Patrol V2.0, and if not present, to use the slower Sky Patrol V1.0 to compute forced aperture photometry at the exact coordinates desired.

### Solar System Sources

To provide photometry for asteroids and comets, we rely on updates from the Minor Planet Center Orbit Database (MPCORB Database) every 10 days. For each object in the most recent MPCORB Database relative to our observation, we calculate the position and run aperture photometry on all objects within that image. Unlike our other sources, we do not add flux from the reference images. Photometry on asteroids is run with the same aperture as our extra-solar sources, while comet photometry is handled differently. Comet photometry is computed using 2-8 pixel apertures and a 15-20 pixel annulus. This means that each comet's light curve will have 7 different magnitude values for each observation epoch. The goal of this regime is to provide researchers with a detailed picture of the growth and decay of gas and dust tails as these comets traverse our solar system.

## 4. Interface

We have designed a simple, yet powerful interface for users to query objects from our input catalog tables and to seamlessly retrieve their corresponding light curves. This interface can be accessed through our Python client or our web portal. While these have similar functionality, queries on the web portal will have stricter limits on the number of sources returned in a single query. The web portal limits users to 10,000 sources per query, whereas the Python client allows up to 1 million sources per query. The latest version of the Python tool and documentation can be found at [https://asas-sn.ifa.hawaii.edu/documentation](https://asas-sn.ifa.hawaii.edu/documentation). The web portal can be accessed at [https://asas-sn.ifa.hawaii.edu/skypatrol](https://asas-sn.ifa.hawaii.edu/skypatrol). ASAS-SN Sky Patrol light curves can be queried in a number of ways: cone searches, catalog IDs, random samples, and ADQL queries. Further statistical and plotting tools are also provided by the Python client. We detail the available methods of querying light curves below.

### Cone Search

The cone search is the most basic operation that we allow. Users can provide RA, DEC, and a search radius, and obtain catalog entries and light curves for all sources within that cone. Our system is unique in that there is no limit on the radius of a cone search and that the speed of the query will not be affected by the size of the cone. Users can also filter the desired sources in their cone search by catalog. Moreover, our ADQL function set includes utilities for running cone searches in conjunction with more complex queries.

```
>>> from pyasassn.client import SkyPatrolClient
>>> client = SkyPatrolClient()
>>> client.cone_search(ra_deg=27.0, dec_deg=12.1, radius=5.0, units="arcmin", catalog="aavsovsx")
      asas_sn_id     ra_deg    dec_deg          name
...          ...        ...        ...           ...
[594 rows x 4 columns]
```

### Catalog IDs

Searching by catalog ID is the simplest way to access our data. To do this, the user can provide target IDs for any catalog that we have cross-matched against and our interface will return light curves for all the sources in our data that match that list. Our stellar source table can be queried with TESS, GAIA DR2, AllWISE, SDSS, REFCAT2, and 2MASS identifiers.
Sources from the other catalogs can be queried directly by their survey identifiers (e.g., Swift, Chandra, or Fermi LAT).

```
>>> my_tic_ids = [6658326, 46783395, 1021890]
>>> client.query_list(my_tic_ids, catalog="stellar_main", id_col="tic_id")
      asas_sn_id     ra_deg    dec_deg     tic_id
...          ...        ...        ...        ...
```

This utility is meant for users that already have a source list from a specific survey and who hope to use our light curves to supplement their research. This is also the fastest way to query our database.

### Random Samples

This utility has been included for data scientists looking to train or test models on large unbiased samples of light curves. The random sample function allows users to pull arbitrary numbers of light curves along with their corresponding catalog data. The following example shows a random sample from the MORX catalog. This utility returns a new random sample each time. If users want to perform multiple experiments on the same random sample, they will need to save the returned asas_sn_ids.

```
>>> client.random_sample(10000, catalog="morx")
      asas_sn_id     ra_deg    dec_deg         name
...          ...        ...        ...          ...
[10000 rows x 4 columns]
```

### ADQL Queries

While many other survey utilities, such as the Gaia DR2 Archive and VizieR, run interfaces compliant with the International Virtual Observatory Alliance's (IVOA) Astronomical Data Query Language (ADQL) specification, we have chosen to use a custom grammar that both adds functionality and simplifies geometric queries15.

Footnote 15: The full query syntax can be found at [https://asas-sn.ifa.hawaii.edu/documentation/additional.html#syntax](https://asas-sn.ifa.hawaii.edu/documentation/additional.html#syntax)

In addition to the query functionality of traditional ADQL, we have included support for Common Table Expressions, **WINDOW** functions, correlated subqueries, and **UNIONS**. We have removed the common geometry functions such as **BOX**, **CIRCLE**, **AREA**, **POINT**, and **CONTAINS**. Instead of building predicates out of these functions, users can perform geometric searches either using the **DISTANCE** function or by specifying range conditions on RA and DEC. Below is an example cone search16.
Footnote 16: Additional query examples can be found at [https://asas-sn.ifa.hawaii.edu/documentation/additional.html#example](https://asas-sn.ifa.hawaii.edu/documentation/additional.html#example)

```
>>> query = """
    SELECT asas_sn_id, ra_deg, dec_deg
    FROM stellar_main
    WHERE DISTANCE(ra_deg, dec_deg, 270, -88) <= ARCMIN(30)
    """
```

\begin{table} \begin{tabular}{||l|l|c||} \hline Source Catalog & Type & \(n\) sources \\ \hline \hline ASAS-SN Stellar Source Table & Stellar & 98,602,587 \\ \hline Fermi LAT 10-Year Point Sources & Gamma Ray & 5,788 \\ \hline Chandra Sources v2.0 & X-Ray & 317,224 \\ \hline Swift Master Catalog & Optical/UV/X-Ray/Gamma Ray & 254,936 \\ \hline AllWISE AGN Catalog & Mid-IR/AGN & 1,354,775 \\ \hline Million Optical/Radio/X-Ray Associations Catalog (MORX) & Optical/Radio/X-Ray & 3,262,883 \\ \hline Million Quasars Catalog (MILLIQUAS) & QSO & 1,979,676 \\ \hline Bright M-Dwarf All Sky Catalog & Stellar & 8,927 \\ \hline AAVSO International Variable Star Index & Stellar & 1,437,528 \\ \hline Galaxy List for the Advanced Detector Era (GLADE) & Galaxy & 3,263,611 \\ \hline \end{tabular} \end{table}

Table 1. A breakdown of our input catalogs, with the numbers and types of sources included.

#### 4.5.1 Collections

Once the user has found a set of targets through any of the four catalog query functions, they can download their light curves. The cone search, catalog ID, random sample, and ADQL query functions all have a boolean download parameter. If set, the query function will return a **LightCurveCollection** object. This object provides the user with aggregate statistics on the collected light curves.

```
# Wide-angle cone search of targets near the pole
>>> lcs = client.cone_search('18:54:11.1', '-88:02:55.22', radius=2.0, units='deg', download=True, threads=8)
>>> lcs.stats()
      asas_sn_id   mean_mag   std_mag   epochs
...          ...        ...       ...      ...
[693 rows x 4 columns]
```

### Individual Light Curves

Once the user has downloaded a collection, they can view the individual light curves, as well as their metadata, periodograms, and plotted light curves. Moreover, because light curve data is held in memory as a pandas DataFrame, they can easily be saved to disk in csv format. Individual curves are retrieved from the collection using their asas_sn_id.

Figure 2.— Sample light curve generated by the Sky Patrol V2.0 Python client for the long-period variable star, BP Cha. The plot includes basic metadata as well as photometry for both g- and V-band filters.
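The snippet below is a minimal, self-contained sketch of that workflow. It assumes the collection can be indexed by asas_sn_id and exposes its photometry as a pandas DataFrame, as described above; the attribute and column names used here are assumptions for illustration rather than the client's documented API.

```
from pyasassn.client import SkyPatrolClient

client = SkyPatrolClient()
lcs = client.cone_search('18:54:11.1', '-88:02:55.22', radius=2.0,
                         units='deg', download=True, threads=8)

target_id = 123456789   # placeholder: use an asas_sn_id present in the collection
lc = lcs[target_id]     # assumed: collections are indexable by asas_sn_id
df = lc.data            # assumed: per-epoch photometry as a pandas DataFrame

# Keep epochs flagged "good" with real detections; non-detections carry a
# magnitude error of 99.99 and report the limiting magnitude (see Section 2).
good = df[(df["quality"] == "G") & (df["mag_err"] < 99.0)]
good.to_csv(f"sky_patrol_{target_id}.csv", index=False)
```

The quality-flag and error column names should be checked against the light-curve schema in the client documentation before use.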
## 5. Examples

For each source, we use only the data returned by the query, with simple cuts on the quality flags as described above. Fig. 4 shows the photometry of SV Vul, a bright (\(g\approx 8.2\) mag; Tonry et al., 2018) classical Cepheid variable. While there are clear outliers that are not accurately captured by the data quality flags, there is no issue in recovering the known periodicity of around 45 days despite its brightness. Fig. 5 demonstrates the exceptional baseline provided by ASAS-SN for an RR Lyrae variable star. Many RR Lyrae are known to undergo a quasi-periodic modulation of their variability over time, referred to as the Tseraskaya-Blazhko effect (Buchler and Kollath, 2011). Even in just the ASAS-SN \(g\)-band since 2017, this effect is easily visible. Finally, Fig. 6 presents a sample of 10 randomly selected eclipsing binaries found by Rowan et al. (2022).

## 6. Database Metrics

The construction of this database presented us with a unique set of problems. Traditional survey data releases have consisted of static data sets that lend themselves to indexing and partitioning schemes that allow for fast queries. However, Sky Patrol V2.0 is not a static data release. Our light curves are updated in near-real time as we gather new images from our stations. This has forced us to decouple our lookup tables from their corresponding light curves. The lookup tables are kept as in-memory distributed dataframes and the light curves are kept on disk in a document store. Each of these architectures has its own contribution to the speed of our database. Our in-memory lookup tables do not have pre-calculated indexes on any of their columns. This is a restriction of the software stack we are running. However, this means that we can load new catalogs on-the-fly with little penalty. Also, it means that each one of our queries results in a full table scan. While this would be untenable on disk-based storage, it is trivial for in-memory storage. The major benefit of this is that the speed of our queries is not dictated by the complexity or breadth of their filters, but rather by the number of results they return (i.e., bandwidth limitations). In other words, users can run cone searches with arbitrary radius with little to no penalty.
A major design goal of this architecture was to return all large queries to the lookup table--around 1 million targets--in under a minute, and all small queries--thousands of targets--in under 5 seconds. Fig. 7 shows the cone search lookup speed as a function of radius. The document store uses our unique internal identifiers as hash keys for each corresponding light curve. This means we can retrieve each light curve in \(\mathcal{O}(1)\) time, so that total retrieval time grows roughly linearly with the number of sources. The speed of these retrievals is only limited by bandwidth. Because the light curve documents contain far more data than their corresponding rows in the lookup tables, retrieval speeds will vary depending on the user's bandwidth and latency from our servers at the University of Hawaii. In testing, downloads to Ohio State University ran at a rate of 1,000 light curves per minute per core.

Figure 5.— Long period Blazhko modulation in the RR Lyrae variable CD Scl observed by ASAS-SN in \(g\)-band. The long baseline of g-band data allows for multiple cycles to be observed.

Figure 6.— A selection of several random eclipsing binaries with g-band photometry.

## 7. Conclusion

ASAS-SN Sky Patrol V2.0 represents a major step forward for survey data releases. While previous tools and data releases have given users the ability to pull live photometry data or run complex searches across a collection of input catalogs, few, if any, have managed to do both at the speed and scale of this project. While Sky Patrol V1.0 still allows users to compute small numbers of light curves anywhere on the sky, Sky Patrol V2.0 enables users to perform studies using millions of light curves that are continuously updated. It is our hope that researchers can leverage our input catalogs and light curves not only in their multi-messenger and time-domain applications, but also in service of standalone science applications. In regard to the latter, this system will serve as the foundation for the ASAS-SN team's upcoming "Patrol" projects. Each of these patrols will monitor incoming photometry data for certain classes of objects--such as active galactic nuclei, young stellar objects, cataclysmic variables, etc.--in order to detect and study anomalous events in real time. Finally, we are also investigating the possibility of allowing individual users to create custom patrols where the target list and triggering function would be created by community members and run on the ASAS-SN data stream in real time.

## Acknowledgements

We thank Las Cumbres Observatory and its staff for their continued support of ASAS-SN. ASAS-SN is funded in part by the Gordon and Betty Moore Foundation through grants GBMF5490 and GBMF10501 to the Ohio State University, and also funded in part by the Alfred P. Sloan Foundation grant G-2021-14192. Development of ASAS-SN has been supported by NSF grant AST-0908816, the Mt. Cuba Astronomical Foundation, the Center for Cosmology and AstroParticle Physics at the Ohio State University, the Chinese Academy of Sciences South America Center for Astronomy (CAS-SACA), and the Villum Foundation. We thank Roberto Assef, David Bersier, Laura Chomiuk, Xinyu Dai, Anna Franckowiak, JJ Hermes, Ondrej Pejcha, Sarah J. Schmidt, Jay Strader, and Maximilian Stritzinger for discussions during external catalog construction. We also thank Chris Ashall, Thomas de Jaeger, Aaron Do, Jason Hinkle, and Anna Payne for other useful discussions during development. KH and BJS are supported by NASA grant 80NSSC19K1717.
KH was also supported by the University of Hawaii Data Science Institute. BJS, CSK, and KZS are supported by NSF grant AST-1907570. BJS is also supported by NSF grants AST-1920392 and AST-1911074. CSK and KZS are supported by NSF grant AST-1814440. KZS is also supported by the 2022 Guggenheim Fellowship, and his stay at the UCSB KITP was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. Support for TJ was provided by NASA through the NASA Hubble Fellowship grant HF2-51509 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. KAA is supported by the Danish National Research Foundation (DNRF132). MAT acknowledges support from the DOE CSGF through grant DE-SC0019323. JFB was supported by NSF grant No. PHY-2012955. Support for JLP is provided in part by FONDECYT through the grant 1151445 and by the Ministry of Economy, Development, and Tourism's Millennium Science Initiative through grant IC120009, awarded to The Millennium Institute of Astrophysics, MAS. TAT acknowledges support from Scialog Scholar grant 24215 from the Research Corporation, a Simons Foundation Fellowship, and an IBM Einstein Fellowship from the Institute for Advanced Study, Princeton, while a portion of this work was completed. Parts of this research were supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013.

Figure 7.— Cone search lookup speed as a function of radius. Cone centers are sampled at random points across the sky.
2308.12549
Synchronize Feature Extracting and Matching: A Single Branch Framework for 3D Object Tracking
Siamese network has been a de facto benchmark framework for 3D LiDAR object tracking with a shared-parametric encoder extracting features from template and search region, respectively. This paradigm relies heavily on an additional matching network to model the cross-correlation/similarity of the template and search region. In this paper, we forsake the conventional Siamese paradigm and propose a novel single-branch framework, SyncTrack, synchronizing the feature extracting and matching to avoid forwarding encoder twice for template and search region as well as introducing extra parameters of matching network. The synchronization mechanism is based on the dynamic affinity of the Transformer, and an in-depth analysis of the relevance is provided theoretically. Moreover, based on the synchronization, we introduce a novel Attentive Points-Sampling strategy into the Transformer layers (APST), replacing the random/Farthest Points Sampling (FPS) method with sampling under the supervision of attentive relations between the template and search region. It implies connecting point-wise sampling with the feature learning, beneficial to aggregating more distinctive and geometric features for tracking with sparse points. Extensive experiments on two benchmark datasets (KITTI and NuScenes) show that SyncTrack achieves state-of-the-art performance in real-time tracking.
Teli Ma, Mengmeng Wang, Jimin Xiao, Huifeng Wu, Yong Liu
2023-08-24T04:28:08Z
http://arxiv.org/abs/2308.12549v1
# Synchronize Feature Extracting and Matching: A Single Branch Framework for 3D Object Tracking ###### Abstract Siamese network has been a de facto benchmark framework for 3D LiDAR object tracking with a shared-parametric encoder extracting features from template and search region, respectively. This paradigm relies heavily on an additional matching network to model the cross-correlation/similarity of the template and search region. In this paper, we forsake the conventional Siamese paradigm and propose a novel single-branch framework, **SyncTrack**, synchronizing the feature extracting and matching to avoid forwarding encoder twice for template and search region as well as introducing extra parameters of matching network. The synchronization mechanism is based on the dynamic affinity of the Transformer, and an in-depth analysis of the relevance is provided theoretically. Moreover, based on the synchronization, we introduce a novel Attentive Points-Sampling strategy into the Transformer layers (APST), replacing the random/Farthest Points Sampling (FPS) method with sampling under the supervision of attentive relations between the template and search region. It implies connecting point-wise sampling with the feature learning, beneficial to aggregating more distinctive and geometric features for tracking with sparse points. Extensive experiments on two benchmark datasets (KITTI and NuScenes) show that SyncTrack achieves state-of-the-art performance in real-time tracking. ## 1 Introduction With recent advances in autonomous driving, 3D vision tasks based on LiDAR are becoming increasingly popular in the visual community. Among these tasks, 3D LiDAR single object tracking (SOT) aims to track a specific target object in a 3D video with the knowledge of 3D bounding box in the initial frame. This task meets numerous challenges, such as LiDAR point cloud sparsity, occlusions, and fast motions. Most existing methods [33, 51, 19, 12, 34, 53, 20, 9, 40] of 3D SOT mainly adopt a Siamese-like backbone and incorporate an additional matching network to cope with the tracking challenges as shown in Fig. 1(a). Trackers based on the Siamese-like backbone separate feature extraction of template and search region, forwarding the two kinds of features with shared model parameters, respectively. Subsequently, an extra matching network is introduced to fuse Figure 1: The comparison of (a) Siamese network based trackers, (b) previous single-branch, two-stage framework M2Track [52] with (c) our SyncTrack, which is a single-branch and single-stage framework.
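As a rough conceptual sketch of this single-branch idea (not the authors' implementation; the module choices and dimensions below are assumptions for illustration), template and search-region point features can be concatenated and passed through one Transformer encoder, so that the joint self-attention simultaneously refines features and carries the template-search matching that a Siamese tracker would delegate to a separate matching network:

```
import torch
import torch.nn as nn

class SingleBranchJointAttention(nn.Module):
    """Toy sketch of synchronized feature extraction and matching (illustrative only)."""
    def __init__(self, feat_dim=128, num_heads=4, num_layers=3):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=num_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, template_feats, search_feats):
        # template_feats: (B, Nt, C) point features from the template
        # search_feats:   (B, Ns, C) point features from the search region
        tokens = torch.cat([template_feats, search_feats], dim=1)   # (B, Nt+Ns, C)
        # One forward pass: self-attention over the joint token set lets the
        # template-search cross terms act as the matching/correlation step.
        fused = self.encoder(tokens)
        return fused[:, template_feats.size(1):]                    # updated search features

# Example: 64 template points and 256 search-region points with 128-dim features.
model = SingleBranchJointAttention()
out = model(torch.randn(2, 64, 128), torch.randn(2, 256, 128))
print(out.shape)  # torch.Size([2, 256, 128])
```

The attentive points-sampling step described in the abstract would additionally use those cross-attention weights to decide which search-region points to keep; that part is not modeled in this toy sketch.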
2303.07550
Blockchain Based Spectrum Sharing
Spectrum sharing has long been considered as a method to improve spectrum resource utilization. The centralized geolocation database approach has been accepted globally for commercial applications. Recently, blockchain has been considered as a platform to support spectrum sharing in a distributed manner. Like other commodities, spectrum or the right of spectrum usage can be exchanged through blockchain. However, this leads to changes of the location where a particular frequency channel is utilized, which brings potential interference. In this paper, we present the interference-based consensus and smart contract for blockchain-based spectrum sharing. We will also introduce other aspects of blockchain, such as cross-chain transaction, cross-tier spectrum coordination and incentive mechanisms, that need to be studied for spectrum sharing applications.
Chen Sun, Shuo Wang
2023-03-14T00:42:40Z
http://arxiv.org/abs/2303.07550v1
# Blockchain based Spectrum Sharing

Chen SUN\({}^{\dagger}\), Shuo WANG\({}^{\dagger}\) \({}^{\dagger}\) Wireless Network Research Department Beijing Laboratory, R&D Center China, Sony (China) Limited Room 301, 3/F, Commercial Podium, Ping An International Financial Center, No.1 South Xinyuan Rd., Chaoyang District, Beijing 100027, P. R. China E-mail: [email protected]

**Abstract** Spectrum sharing has long been considered as a method to improve spectrum resource utilization. The centralized geolocation database approach has been accepted globally for commercial applications. Recently, blockchain has been considered as a platform to support spectrum sharing in a distributed manner. Like other commodities, spectrum or the right of spectrum usage can be exchanged through blockchain. However, this leads to changes of the location where a particular frequency channel is utilized, which brings potential interference. In this paper, we present the interference-based consensus and smart contract for blockchain-based spectrum sharing. We will also introduce other aspects of blockchain, such as cross-chain transaction, cross-tier spectrum coordination and incentive mechanisms, that need to be studied for spectrum sharing applications.

**Keyword** Cognitive radio system (CRS), dynamic spectrum access (DSA), coexistence, blockchain, consensus, smart contract

## 1 Introduction

With the increasing bandwidth demands of future wireless services, spectrum regulators face a great challenge in allocating finite spectrum resources. However, the licensed spectrum is often underutilized temporally and spatially. To tackle this problem, a novel paradigm called spectrum sharing is proposed, which gives secondary systems the opportunity to use unoccupied spectrum resources owned by a primary system. The existing spectrum management technologies such as TVWS and CBRS use a centralized geolocation database (GLDB) to control spectrum assignment for wireless devices and protect incumbent users [1][2]. Although the centralized solutions are effective for dynamic spectrum access, they require high computation capability for spectrum managers because all the interference calculation and spectrum assignment decisions are made by a central entity. Recently, blockchain-based decentralized approaches have been proposed for spectrum management, especially in China and Europe. Blockchain is a type of distributed ledger technology that organizes data into blocks, which are chained together in an append-only mode. A consensus mechanism is executed to validate transactions among peer nodes and keep the ledger on each node updated. It has the characteristics of decentralization, anonymity, and auditability. Some characteristics of spectrum sharing are consistent with the application scenario of blockchain. Firstly, spectrum usage is real-time and requires timely processing, which can be enabled by a localized spectrum ledger. Secondly, spectrum sharing involves multiple stakeholders that may not trust each other, which can be addressed by blockchain. Thirdly, with blockchain, spectrum sharing can be conducted among wireless devices directly without the need for intermediaries. However, spectrum trading has unique aspects compared with the trading of other commodities. The most important is that a spectrum transaction changes the location where the radio transmitter operates. This brings potential interference to those "innocent" wireless networks near the sellers and buyers.
In a tiered system, spectrum usage is bounded by the interference budget at the primary system. Trading might involve the consumption of interference budget rather than directly exchanging frequency bands. Another aspect is that the variations of wireless channels used for block propagation may affect the performance of blockchain systems compared with applications in a wired environment. This paper introduces various aspects of blockchain based spectrum sharing such as the interference-based consensus, block design, cross-chain transaction, cross-tier spectrum coordination and incentive mechanism. The paper is organized as follows. Section 2 presents the standardization activities of spectrum sharing and blockchain technology. The technical aspects of blockchain based spectrum sharing are given in Section 3. Finally, Section 4 concludes the paper.

## 2 Standardization Activities of Spectrum Sharing and Blockchain

### Standardization of Spectrum Sharing

The Television white space (TVWS) technology was the first to be standardized for spectrum sharing. Several IEEE working groups such as IEEE 802.11af, 802.22, 802.19.1 and P1900.6 have developed standards on different aspects of TVWS including system requirements, architecture, sensor information exchange and so forth [3][4]. In the United States, the Wireless Innovation Forum (WinnForum) has developed standards for spectrum sharing in the CBRS band (3550-3700 MHz). In Europe, ETSI has developed standards for mobile broadband services under licensed shared access (LSA) in the 2300-2400 MHz band.

### Standardization of Blockchain Technology in Communications

Multiple study groups in the International Telecommunication Union (ITU), such as FG DLT, SG13 and SG20, have been working on the standardization of blockchain technology, including the concepts, use cases and reference architecture of DLT-based applications and services. ETSI established an industry specification group (ISG) on permissioned distributed ledger (PDL) in 2018. The IEEE Standards Association has been making efforts on blockchain standardization with various activities such as DLT use in Internet of things (IoT) data management, connected automated vehicles, blockchain access control and blockchain interoperability. The China Communications Standards Association (CCSA) has been conducting blockchain-related study items and work items in multiple working groups. The technical report on blockchain and applications in next-generation wireless networks identified spectrum sharing as one of the important use cases of blockchain technology [5]. Then, the research on blockchain-based solutions for wireless network architecture further discussed the key issues and technical requirements of blockchain-based dynamic spectrum sharing [6].

## 3 Key Issues and Potential Solutions for Blockchain based Spectrum Sharing

### Consensus mechanism considering interference

One of the most important differences between spectrum sharing and other blockchain based data sharing is the impact of interference. In wireless networks, the validation speed needs to be improved to ensure quality of service (QoS). Our proposed approach in [7] is to select a few nodes in a validation area based on the interference relationship. The transaction is considered valid and will be executed only if all the nodes in the validation area approve it, showing that they accept the interference that will be incurred due to the transaction; a simplified sketch of this check is given below.
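This sketch illustrates the idea only; the path-loss model, thresholds, and function names are assumptions chosen for illustration, not the scheme specified in [7].

```
import math

def received_power_dbm(tx_power_dbm, distance_km, freq_mhz):
    # Free-space path-loss estimate of the interference a node would receive.
    fspl_db = 32.45 + 20 * math.log10(freq_mhz) + 20 * math.log10(max(distance_km, 1e-3))
    return tx_power_dbm - fspl_db

def validate_spectrum_transaction(nodes, buyer, tx_power_dbm, freq_mhz,
                                  zone_radius_km=5.0, interference_limit_dbm=-96.0):
    """Approve a spectrum trade only if every node in the validation zone
    accepts the interference the buyer's new transmitter would cause."""
    zone = [n for n in nodes
            if math.dist(n["pos_km"], buyer["pos_km"]) <= zone_radius_km]
    for node in zone:
        d = math.dist(node["pos_km"], buyer["pos_km"])
        if received_power_dbm(tx_power_dbm, d, freq_mhz) > node.get(
                "interference_limit_dbm", interference_limit_dbm):
            return False   # one objecting node in the zone blocks the trade
    return True            # all affected nodes approve; the trade can be recorded on the ledger

# Toy example: two nearby nodes evaluate a buyer transmitting 30 dBm at 3.6 GHz.
nodes = [{"id": 1, "pos_km": (0.0, 0.0)}, {"id": 2, "pos_km": (3.0, 4.0)}]
buyer = {"id": 99, "pos_km": (1.0, 1.0)}
print(validate_spectrum_transaction(nodes, buyer, tx_power_dbm=30.0, freq_mhz=3600.0))
```

In this toy configuration the nearest node would receive more interference than it tolerates, so the transaction is rejected and would not be written to the chain.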
The small validation zone also saves consensus time compared with traditional schemes, such as Bitcoin, which propagate the transaction data to all nodes in the network.

### Incentive Mechanism

Consider a scenario where multiple mobile network operators (MNOs) and service providers (SPs) utilize a blockchain-based spectrum trading platform to manage spectrum usage and trading information. MNOs (as leaders) can sell spectrum to SPs to achieve economic benefits, while SPs (as followers) can provide better services for end users. How can a balance be reached among these stakeholders? Because SPs compete with one another and each selfishly maximizes its own benefit, we model the spectrum trading process as a hierarchical Stackelberg game and propose a compensation-incentive mechanism to maximize the utility of the spectrum buyer and ensure fairness among competitive SPs [8].

### Cross-chain Transaction

The throughput of a single-chain structure limits the performance of large-scale spectrum trading. We propose a multi-blockchain architecture and a corresponding cross-chain mechanism to improve the speed of spectrum trading [9]. As mentioned before, spectrum transactions cause changes in the interference relationships among incumbents and wireless devices. Therefore, it is necessary to find a trusted and reliable intermediary to ensure the security of cross-chain spectrum trading. We further combine the notary and relay-chain mechanisms in this architecture, where coexistence managers play the role of the notary and the selected Citizens Broadband Radio Service Devices (CBSDs) maintain a decision blockchain. The decision blockchain is used to share information between different groups and can improve the scalability of the notary mechanism.

### Cross-tier Spectrum Coordination

Some spectrum sharing frameworks such as CBRS have multiple tiers of different priorities. To protect incumbent users, the aggregated interference from secondary users of all tiers should not exceed an interference threshold. Currently, this interference budget is allocated equally or in a fixed proportion among all secondary users. Blockchain technology provides a promising solution for coordination of spectrum usage among different tiers. Firstly, a spectrum blockchain is established for the devices sharing the same spectrum. The devices in the lower priority tier can access the spectrum only if they will not cause harmful interference to upper tiers and incumbent users. Through the spectrum blockchain, devices in a lower tier can request the devices in an upper tier to adjust their interference budget and ensure the aggregated interference to incumbent users does not exceed the protection threshold. This cross-tier spectrum coordination can increase the total number of lower-tier users that can access the shared spectrum.

## 4 Conclusion

In this article, the standardization efforts of spectrum sharing and blockchain technology by various standards developing organizations were summarized. Then, the key issues in blockchain based dynamic spectrum sharing were presented and solutions to these issues were proposed. Through the discussion, we see great potential for the application of blockchain technology in dynamic spectrum sharing, which will pave the way for efficient and fair utilization of valuable spectrum resources for various wireless networks such as mobile public networks, Wi-Fi, V2X networks and private networks.
2301.09304
Engineering meter-scale porous media flow experiments for quantitative studies of geological carbon sequestration
This technical note describes the FluidFlower concept, a new laboratory infrastructure for geological carbon storage research. The highly controlled and adjustable system produces a strikingly visual physical ground truth of studied processes for model validation, comparison, and forecasting, including detailed physical studies of the behavior and storage mechanisms of carbon dioxide and its derivative forms in relevant geological settings for subsurface carbon storage. The design, instrumentation, structural aspects and methodology are described. Furthermore, we share engineering insights on construction, operation, fluid considerations, and fluid resetting in the porous media. The new infrastructure enables researchers to study variability between repeated CO2 injections, making the FluidFlower concept a suitable tool for sensitivity studies on a range of determining carbon storage parameters in varying geological formations.
Kristoffer Eikehaug, Malin Haugen, Olav Folkvord, Benyamine Benali, Emil Bang Larsen, Alina Tinkova, Atle Rotevatn, Jan Martin Nordbotten, Martin A. Ferno
2023-01-23T07:41:12Z
http://arxiv.org/abs/2301.09304v1
# Engineering meter-scale porous media flow experiments for quantitative studies of geological carbon sequestration

###### Abstract

This technical note describes the FluidFlower concept, a new laboratory infrastructure for geological carbon storage research. The highly controlled and adjustable system produces a strikingly visual physical ground truth of studied processes for model validation, comparison, and forecasting, including detailed physical studies of the behavior and storage mechanisms of carbon dioxide and its derivative forms in relevant geological settings for subsurface carbon storage. The design, instrumentation, structural aspects and methodology are described. Furthermore, we share engineering insights on construction, operation, fluid considerations, and fluid resetting in the porous media. The new infrastructure enables researchers to study variability between repeated CO\({}_{2}\) injections, making the FluidFlower concept a suitable tool for sensitivity studies on a range of determining carbon storage parameters in varying geological formations.

## 1 Introduction

We aim to develop a laboratory research infrastructure dedicated to geological carbon storage (GCS) with methodology for repeatable, meter-scale experiments with sufficient precision to allow investigation of any isolated parameter. The two-phase flow of CO\({}_{2}\)-rich gas and water-rich fluids within complex geological structures, combined with the development of density-driven convective mixing, is difficult to adequately resolve by numerical simulation (Nordbotten et al., 2012; Flemisch et al., this issue). As such, there is a strong need for accurate and reproducible experimental data against which numerical simulation tools can be verified. To this end, this technical note details the construction and operation of a laboratory-scale GCS research infrastructure, which we term "FluidFlower" (see **Figure 1**), with the following set of characteristics:

1. Meter-scale, quasi-2D experimental systems containing unconsolidated sands that can be arranged to replicate realistic geological structures such as domes, pinch-outs and faults.
2. Operational conditions mimicking real GCS operations can be achieved by rate-controlled injection and localized pressure monitoring.
3. Multiphase flow characteristics such as free gas and CO\({}_{2}\) dissolution and concomitant density-driven fingers can clearly be identified visually and quantified by image analysis tools (Nordbotten et al, this issue).
4. Studied geometries can be flushed and reset to the initial state, allowing for studying reproducibility of experimental results (Fernø et al, this issue) as well as variations of operational conditions.

State-of-the-art simulators and experimental methodology build on a century of global oil and gas exploration and production. While this provides a solid scientific and technological foundation, there remain aspects unique to GCS that require further development. CO\({}_{2}\) is physically and chemically very different from crude oil and gaseous organic hydrocarbons: CO\({}_{2}\) is highly soluble in water, and its derivative dissolved inorganic carbon (DIC) species both acidify and increase the density of the water. The increase in density allows gravitationally induced convective mixing, an essential sequestration mechanism and a topic of much study over the past decades (Pau et al., 2010; Riaz et al., 2006; Elenius et al., 2012; Erfani et al., 2022).
The density change depends on total DIC concentration, in turn depending on dissolution rates, reactivity, convection and diffusion rate, and latent or induced flow in the reservoir. The acidification allows a range of reactions to take place in different geochemical environments, leading to effects such as mineralization or grain dissolution, potentially altering the physical properties of reservoirs. The technical note is structured as follows: Chapter 2 presents key features of the FluidFlower concept; Chapter 3 describes the FluidFlower laboratory infrastructure, with technical aspects and considerations for the flow cell; Chapter 4 details experimental operation and capabilities; and Chapter 5 provides the rationale for fluids used. Appendix A details the room-scale flow cell, and its operation and laboratory setup for the CO\({}_{2}\) experiments in the 2022 FluidFlower forecasting study (Fernø et al., this issue); Appendix B describes the tabletop flow cells used for methodology development, rapid prototyping and iteration, and smaller meter-scale experiments (Haugen et al., this issue; Saló-Salgado et al., this issue; Keilegavlen et al., this issue; Eikehaug et al., in press).

Figure 1: FluidFlower flow cell varieties used in the forecasting study. The porous media are built with unconsolidated sands within a vertical, quasi two-dimensional, optically transparent flow cell filled with water. A camera is located on the front side to monitor and record system changes with time-lapsed imaging. Other instrumentation and most operations such as fluid injections occur at the rear side. To achieve the scale of the experiments, operation occurs at standard sea-level pressure and temperature, still preserving the governing porous media physics, relevant displacement processes and trapping mechanisms for subsurface CO\({}_{2}\) storage. The FluidFlower concept serves a dual purpose of a research infrastructure for high-quality experimental data, and a vehicle for public outreach and dissemination. The room-scale (right) shown during CO\({}_{2}\) injection in a faulted geological geometry motivated by typical North Sea reservoirs. Tabletop-version (left) shown containing an idealized folded geometry. The FluidFlower concept produces quantitative meter-scale experimental data that may be scaled to field-scale (Kovscek et al 2023, this issue) to provide new insight relevant for subsurface GCS. Virtual models for illustration.

## 2 Key FluidFlower features

Here we describe four essential features of the FluidFlower concept that enable meter-scale porous media flow experiments for quantitative studies of geological carbon sequestration.

### Physical repeatability

Repeated CO\({}_{2}\) injection experiments performed on the same porous media geometry are a key capability of the FluidFlower concept. Cycling of fluids allows for investigation of isolated experimental parameters and identification of stochastic elements without the uncertainties and workload associated with rebuilding the geometry for each new experiment. The process of resetting fluids between repeated experiments is designed to keep colors and chemical conditions constant within an experiment series for increased reproducibility (Forecasting study cycling shown in Appendix A and more general example in Appendix B). This technique allows complete tabletop fluid resetting in a few hours and room-scale resetting in a few days.
### Porous media

The porous media are constructed by depositing unconsolidated material with known properties to quantify observed processes for model verification, comparison, and forecasting. Sand grains should remain within a known and comparatively narrow size distribution, ideally with minimal shape variation to avoid unwanted packing effects. The sands should also be chemically inert unless grain dissolution or similar processes are the intended subject of study. The depositing process is designed to mimic natural underwater sedimentation. The above considerations, the construction of porous media geometries, and the tools used are further detailed in Haugen et al. (this issue). Settling of unconsolidated sands may be traced using the open-source software DarSIA (Both et al., 2023; Nordbotten et al., this issue).

### Seeing fluid phases

Differentiating between fluid phases is essential to the viability of the method. Both water and gas are normally transparent and provide little optical contrast to one another. For GCS applications, water and CO\({}_{2}\) diffuse into one another, and mixes of two or more fluids are of particular interest. Aqueous concentration of CO\({}_{2}\) and DIC is determined by dissolved gas capacity in the water, distance from gas phase CO\({}_{2}\), diffusion rate, convective mixing, time, and chemical environment. Dissolution of CO\({}_{2}\) into water is an important GCS storage mechanism, and its physical effects in a reservoir have been a topic of interest for some time in the simulation community. To visualize this, we utilized the spontaneous reaction between CO\({}_{2}\) and water producing carbonic acid (detailed in chapter 5).

### Data collection

Time-lapse images of the flow cells are captured at intervals synchronized to the experiment time steps, acquired by high-resolution cameras with constant settings to improve reproducibility. Imperfections in the camera optics cause spatial distortion in images, and composite grid images of levelling laser lines provide a reference for correction of such lens aberrations. Images captured (by RGB sensors) never fully represent (full spectrum) true colors, and a standard color palette is included in all images as an image processing reference. Temperature, point and ambient pressure, and all fluid injections are measured and logged (detailed in chapter 4).

## 3 Infrastructure

Flow cells are functionally similar to flattened aquariums, where vertical transparent plates separate observers and instrumentation from sands and fluids. Key flow cell structural considerations are detailed below.

#### Flow cell depths

Flow cell depth is a compromise between observational and operational aspects, limiting sand packing boundary and three-dimensional flow effects. The distance between the flowing gas and viewing window should be small so that diffusion allows for early detection of displacement processes in the third dimension, while allowing sufficient depth for manipulation of sands using the sand manipulation tools described in Haugen et al. (this issue). The hydrophilic wall surfaces encourage gas flow within the sands rather than along the walls to minimize the effect of artificial grain structuring along the boundary. For grain sizes typically less than 2 mm associated with unconsolidated sands (e.g., Freeze et al, 1979), a depth on the order of 8-10 times the maximum grain size is preferred to mitigate poor packing conditions and preferential flow along the walls (Chapuis, R.P., 2012).
#### Internal forces and hydraulic deformation

Water exerts significant outward static pressure on the flow cell walls. The hydraulic load scales with the squared height of water in the flow cell, increasing with hydrostatic pressure and surface area (approximately eight tons for the room-scale version and a fraction of a ton for the tabletop version). Furthermore, the settling of the unconsolidated sands will lead to additional lateral forces. The large span of the walls argues for the accommodation of measurable and safe elastic deformation, rather than striving for absolute rigidity, to avoid potential brittle failure. Hence, transparent plastics have been the material of choice. The perforations of the rear panel weaken the structural integrity of the plate with unknown localization of material stresses, and a safety margin has been applied to all structural dimensions. The room-scale FluidFlower is curved to further increase rigidity, with the front in a state of compression and the rear in a tensile state (see **Figure A1** in Appendix A).

#### Viewing window reflections

The reflective surface of the viewing window causes artefacts in captured images. Placing the camera at the focal point of a curved flow cell, where any horizontal straight line drawn between the camera and the flow cell viewing window is orthogonal to the latter (see **Figure A5** in Appendix A), minimizes reflections compared to imaging of a flat flow cell such as the tabletop version. With maximum overlap between direct and reflected camera field of view, lamps and other objects not placed directly between the camera and flow cell remain hidden from the camera. This allows illumination with high incident angle and a compact laboratory footprint. Curvature also increases structural rigidity and allows a larger window span without requiring impractically thick walls to withstand internal forces. Film studio standard high-frequency 'flicker-free' LED lamps are located outside of the camera field of view.

#### Construction materials and rear plate perforations

Materials in contact with the internal volume should have minimum influence on porous media chemistry and flow behavior. The materials must be resistant to the corrosive nature of the salts and fluctuating pH in the system to allow study of the contained system rather than the external. Perforating the rear plate confines the instrumentation and tubing required to manipulate and measure the fluids of interest to the rear side. Hence, the front side remains free of disturbing technical elements. Fluid resetting for repeated experiments occurs through a series of perforations along the lower flow cell boundary where water may be injected to replace the fluids in the system. The rigid flow cell boundary is provided by a double-flange frame for viewing window support. The chassis-like substructures are built to accommodate instrumentation, mobility, and ease of operation. They connect to hinges on the flow cell and are constructed to transfer their load to the chassis bottom, where adjustable legs are installed, as well as transport wheels for the room-scale version.

## 4 Experimental operation and capabilities

This chapter provides a general overview of the technical instrumentation that enables the injection and quantitative monitoring of CO\({}_{2}\) flow, trapping and dissolution in the geological geometries in the flow cells.

#### Fluid flow systems

There are separate control systems for the aqueous and gaseous phase injections (illustrated in **Figure 2**).
Water flow is operated with computer-controlled double-piston pumps (e.g., Chandler Engineering Quizix Precision) that connect to the system via gas traps that double as particle traps. Valve manifolds connect gas traps to flow cell ports for controlled injection/production from specific ports, or with a total rate distributed between multiple ports. Manual and/or computer-controlled pneumatic valves may be used. Gas flow is regulated by computer-controlled mass flow controllers (MFC, e.g., Bronkhorst El-Flow Prestige), with gas supplied from pressurized gas canisters. Standard pressure regulators connect the gas bottles to the MFCs via a flow restriction needle valve that reduces pressure fluctuations originating from the spring-loaded pressure regulator mechanism. MFC performance must be tested and tuned prior to all experiments to keep fluctuations within the listed instrument uncertainty. Aqueous and gaseous operations should be fully scripted to limit the potential for human error.

#### Logging volumetric displacement of water

A constant water level is maintained due to the passive overflow function of the always-open perforations positioned on the top of the flow cell (cf. **Figure 2**). Water is injected at a constant rate into the free water above the porous media geometry to keep the overflow port water-filled to eliminate surface tension effects. This approach keeps fluctuations in the hydrostatic pressure to a minimum. By logging overflow rates via interval mass measurement, rates of volumetric displacement of water (cf. Appendix A) become detectable and may be coupled to gas phase CO\({}_{2}\) volume and its dissolution rate in the system.

Figure 2: Conceptual fluid systems: Water (left) and gas (right). Water from supply (A) to computer-controlled cylinder pump (B), and further through a gas trap (C) before it is directed to the system perforations by valve manifolds (D). Displaced water (overflow) (E) exits an open perforation and is collected by a waste canister (F) sitting on a mass logging scale (G). Gas flows from gas canister (H) with pressure regulator (I), and further through a flow restriction needle valve (K) and a computer-controlled mass flow controller (L) connected to the system perforations or the atmosphere via a three-way valve (M).

#### Pressure and temperature

Pressure transducers connect to chosen perforations for point logging. Typically, fluctuations of interest in the systems occur in the sub-mbar regime and require sensors of adequate precision for any meaningful measurements. Working at sea-level pressure implies that atmospheric fluctuations represent a significant uncertainty if left unaccounted for. Not only may the system response disappear in atmospheric noise, but flow experiments using gas phase CO\({}_{2}\) are highly dependent on absolute pressure and temperature for both density and dissolution. Temperature is kept as constant as possible during experiment series, yet a gradient has been observed along the height of the flow cells in cold-floor labs. Point logging of temperature is collected by dual-purpose pressure transducers (e.g., ESI Technology GS54200-USB).

#### Degassing of the aqueous phase

Aqueous solutions should be degassed by a vacuum pump (e.g., Edwards RV5) prior to injection to minimize the influence of atmospheric gases dissolved in the water on measured variables.
Atmospheric gases in the water affect the CO\({}_{2}\) dissolution rates, and Henry's law (see chapter 5) implies that when CO\({}_{2}\) dissolves into atmosphere-saturated water, non-negligible quantities of nitrogen and oxygen are expelled from the water. This presents significant challenges for quantitative analysis of time-lapsed image series and resetting of the porous media if unattended. The influence of varying degassing is discussed further in Haugen et al. (this issue).

## 5 FluidFlower fluids

Differentiating between fluid phases is essential to the viability of the FluidFlower concept. The CO\({}_{2}\) is injected as a dry gas and will partially partition into the formation water (an aqueous pH-indicator mix); this is of particular interest for GCS applications as it is one of the trapping mechanisms which makes the injected CO\({}_{2}\) less mobile with time. The equilibrium concentration of dissolved gas in water is proportional to the system pressure and inversely related to temperature, and varies between types of gases and combinations thereof. This relation is given by Henry's law, with the species-specific Henry solubility \(H^{cp}\) translating to the aqueous equilibrium concentration \(c_{aq}\) for any type of gas, such that \(c_{aq}\) is proportional to the partial pressure of the gas species. To visualize dissolution of CO\({}_{2}\) into water, we utilized the spontaneous reaction between CO\({}_{2}\) and water producing carbonic acid. The acidification makes compounds sensitive to pH changes in the neutral to mildly acidic regime an accessible means of aqueous CO\({}_{2}\) detection. Common pH indicators are medium-size organic compounds that undergo a configurational change when a proton is added or subtracted from the molecule. The configuration change in turn causes a change in the wavelengths of light absorbed, observed as a visible and reversible change of color. Pure water has a theoretical pH of 7.0, with [H\({}_{3}\)O\({}^{+}\)] = [OH\({}^{-}\)] = \(10^{-7}\) M, but with no buffering capacity it is extremely sensitive to impurities. Atmospheric CO\({}_{2}\) diffuses into the water, and acidification of freshly deionized water can be measured immediately after air exposure. Hence, pure water in equilibrium with atmospheric gases typically has a pH of approximately 5.8, compared with a pure CO\({}_{2}\) atmosphere (emulating conditions inside the flow cells) at approximately pH 4. Several pH indicator compounds in this range exist, typically with a transition from a high-absorption color at higher pH towards a lower intensity color (lower pH), complicating precise determination of CO\({}_{2}\) concentrations and making the visual appeal mediocre at best. While simple image processing may capture aqueous CO\({}_{2}\) contours, the signal strength remains limited. pH indicators typically have a transition range of \(\Delta\)pH 1-2, regardless of the specific pH transition range. With pH being a logarithmic measure of acid or base concentrations centered around 7, transition ranges closer to 7 distinguish a narrower range of concentrations. We have opted to work under the assumption that widening the pH range of the water phase with a dilute strong base has a measurable yet limited effect on CO\({}_{2}\) dissolution and the overall system behavior. This allows more intense contrast colors and distinct transition ranges (cf. Figure 3), leading to improved signal strength and the possibility of distinguishing multiple concentration levels of CO\({}_{2}\). Bromothymol blue (BTB, transition pH 6.0-7.6) and methyl red (MRe, transition pH 4.4-6.2) have been used extensively in our experiments.
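As a back-of-the-envelope check on the pH levels quoted above (roughly 5.8 for air-equilibrated water and roughly 4 under a pure CO\({}_{2}\) atmosphere), Henry's law can be combined with the first dissociation of dissolved CO\({}_{2}\); the constants in the sketch below are approximate room-temperature literature values assumed for illustration, not quantities measured in this work.

```
import math

# Approximate literature constants near 25 C (assumed for illustration only):
H_CP_CO2 = 3.4e-2   # Henry solubility of CO2, mol / (L * atm)
KA1      = 4.5e-7   # apparent first dissociation constant of dissolved CO2 (carbonic acid)

def ph_under_co2(p_co2_atm):
    """Equilibrium pH of otherwise pure water under a given CO2 partial pressure."""
    c_co2 = H_CP_CO2 * p_co2_atm     # Henry's law: c_aq = H_cp * p
    h_plus = math.sqrt(KA1 * c_co2)  # weak-acid approximation for the first dissociation
    return -math.log10(h_plus)

print(f"air (pCO2 ~ 4e-4 atm): pH ~ {ph_under_co2(4e-4):.1f}")   # ~5.6
print(f"pure CO2 atmosphere  : pH ~ {ph_under_co2(1.0):.1f}")    # ~3.9
```

These rough numbers land close to the values quoted above, and the same Henry's-law step is what makes degassing relevant: air-saturated water already carries dissolved N\({}_{2}\) and O\({}_{2}\) that is partially expelled once CO\({}_{2}\) dissolves.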
In acidic environments, however, the protonated form of methyl red has a relatively low solubility in water, and high concentrations result in precipitation at pH 6 and below, like that observed in the CO\({}_{2}\) injections (Ferng et al., this issue; Haugen et al., this issue; Salo-Salgado et al., this issue).

## 6 Concluding remarks

The FluidFlower concept, as presented in this technical note, represents a new laboratory infrastructure for experimental research to add to the knowledge base on which decisions regarding GCS are made. Details and engineering insights for constructing and operating these highly controlled and adjustable systems are presented for flow cells of different complexity. Both designs have been proven viable and reliable after several experiment series lasting up to a year or more. The combination of these physical flow cells, together with the in-house developed open-source software DarSIA (used to analyze high-resolution time-lapse images), makes it possible to plan and perform a variety of porous media fluid flow experiments on the meter scale with quantification of key parameters. This provides a unique opportunity to obtain experimental data for validation and calibration of numerical simulation models. The geological geometries that can be modelled by the FluidFlower are representative of large-scale structures (e.g., large-scale faults and folds) and stratigraphic layering (reservoir units, seal units, etc.) observed in subsurface reservoir systems, and the FluidFlower concept allows for studying the impact of these geological geometries on CO\({}_{2}\) trapping and flow dynamics. Subsurface CO\({}_{2}\) trapping mechanisms that can be studied with the FluidFlower concept include: _structural and stratigraphic trapping_ under sealing sand layers; _residual trapping_, seen in regions with intermediate water saturation, which is temporary because of rapid dissolution; _dissolution trapping_, observed almost instantaneously when the injected CO\({}_{2}\) dissolves into the water phase; and _convective mixing_, which occurs when the denser CO\({}_{2}\)-saturated water migrates downwards, generating gravitational fingers. Ultimately, our concept allows observation of spatio-temporal interactions of physical processes of multiphase, multi-component flow during CO\({}_{2}\) immobilization in a porous medium at the meter scale with high relevance for GCS applications. We encourage the porous media community to explore this experimental method.

## Acknowledgements

KE and MH are partly funded by Centre for Sustainable Subsurface Resources, Research Council of Norway (RCN) project no. 331841. MH is also funded by RCN project no. 280341. BB is funded from RCN project no. 324688. The authors would like to acknowledge civil engineering interns Ida Louise Mortensen, Mali Ones, Erlend Moen, and Johannes Salomonsen for their laboratory and workshop contributions to the Forecasting study and legacy experiments. Presented tabletop illustrations use images from experiments by Ingebrigt Lilleas Midthjell, Hakon Kvanli, Hakon Stavang, Maylin Elizabeth Ordonez Obando, and Janner Fernando Galarza Alava. Geology input from and room-scale illustration image from geometry developed in collaboration with Robert Leslie Gawthorpe and Casey William Nixon. Roald Langgen, Charles Thevananth Sebastiampilai, Thomas Husebo, Werner Olsen, Thomas Poulianitis, and Rachid Maad have contributed with technical solutions from parts machining to instrument and software prototyping.
Figure 3: Visible CO\({}_{2}\) concentration gradient in a mixture of BTB and MRe. A) Solution with no CO\({}_{2}\), pH 8. B) BTB transition, pH 7. C) Transition overlap, pH 6. D) MRe transition, pH 5. E) CO\({}_{2}\)-saturated solution, pH 4. F) Gaseous CO\({}_{2}\) phase.
2302.10305
Analyzing Multimodal Objectives Through the Lens of Generative Diffusion Guidance
Recent years have witnessed astonishing advances in the field of multimodal representation learning, with contrastive learning being the cornerstone for major breakthroughs. Latest works delivered further improvements by incorporating different objectives such as masked modeling and captioning into the frameworks, but our understanding on how these objectives facilitate learning remains vastly incomplete. In this paper, we leverage the fact that classifier-guided diffusion models generate images that reflect the semantic signals provided by the classifier to study the characteristics of multimodal learning objectives. Specifically, we compare contrastive, matching and captioning loss in terms of their semantic signals, and introduce a simple baseline that not only supports our analyses but also improves the quality of generative guidance in a straightforward manner.
Chaerin Kong, Nojun Kwak
2023-02-10T11:17:20Z
http://arxiv.org/abs/2302.10305v1
# Analyzing Multimodal Objectives Through the Lens of Generative Diffusion Guidance ###### Abstract Recent years have witnessed astonishing advances in the field of multimodal representation learning, with contrastive learning being the cornerstone for major breakthroughs. Latest works delivered further improvements by incorporating different objectives such as masked modeling and captioning into the frameworks, but our understanding on _how these objectives facilitate learning_ remains vastly incomplete. In this paper, we leverage the fact that classifier-guided diffusion models generate images that reflect the semantic signals provided by the classifier to study the characteristics of multimodal learning objectives. Specifically, we compare contrastive, matching and captioning loss in terms of their semantic signals, and introduce a simple baseline that not only supports our analyses but also improves the quality of generative guidance in a straightforward manner. ## 1 Introduction Vision-Language Pretraining (VLP) has attracted great attention from the community for its wide and robust applications in different downstream tasks. The seminal work of CLIP (Radford et al., 2021) employs the image-text contrastive objective to successfully embed images and text descriptions in a common feature space, inspiring numerous subsequent works that explore different objectives (Li et al., 2022; Yu et al., 2022; Yang et al., 2022; Jang et al., 2023) and architectures (Li et al., 2021; Jang et al., 2022; Wang et al., 2022). Recently, cross-modal generative models (Ramesh et al., 2021; Saharia et al., 2022; Rombach et al., 2022; Kong et al., 2022) are also gaining wide popularity thanks to the powerful capacity of diffusion models (Sohl-Dickstein et al., 2015; Ho et al., 2020) and readily available guidance of vision-language foundation models. These models aim to synthesize or edit images so that the outputs are both realistic and condition-aligned. Conditional diffusion models embody the conditioning information in two ways: classifier guidance (Dhariwal and Nichol, 2021) and classifier-free guidance (Ho and Salimans, 2022). While classifier-guided models (Nichol et al., 2021; Dhariwal and Nichol, 2021; Kong et al., 2023) typically leverage multimodal embeddings of a joint embedding network (_e.g.,_ CLIP) using cosine similarities, it is unclear whether this approach is optimal. For example, recent works (Zhong et al., 2022; Li et al., 2022b) have pointed out that global representations of CLIP learned by the contrastive objective are not suitable for handling fine-grained correspondences between image and text as they represent the image and the text _as a whole_. This indicates that features learned from sequence level contrastive learning may have certain blind spots that can be made visually apparent when directly applied to the diffusion process as the guidance signal. In this paper, we aim to obtain a better understanding of different multimodal learning objectives (_e.g., image-text contrastive, image-text matching, image captioning_) by analyzing their semantic signals as generative guidance. That is, we utilize the classifier-guided diffusion process to study the characteristics of different objectives by carefully inspecting the samples they produce, and further present a straightforward modification to the previous method that both supports our findings and improves the generation quality. We note that the aim of this paper is not to present a high-performing generative model. 
Rather, we are simply _leveraging_ the diffusion process to visually analyze different objectives and hypothesize about the properties of the accordingly learned representations. ## 2 Multimodal Objective as Generative Guidance ### Classifier-guided Diffusion Dhariwal and Nichol (2021) introduced classifier guidance as a means to steer the generative diffusion process towards the conditioning class. This requires a _noise-aware_ classifier whose gradient can be used to guide the diffusion process. Formally, denoting the predicted parameters of timestep \(t\) as \(\mu_{\theta}(x_{t})\), \(\Sigma_{\theta}(x_{t})\), the next diffusion sampling step becomes \[x_{t-1}\sim\mathcal{N}(\mu_{\theta}(x_{t})+s\Sigma_{\theta}(x_{t})\nabla_{x_{ t}}\log p_{\phi}(y|x_{t}),\Sigma_{\theta}(x_{t})), \tag{1}\] where \(s\) is the step size, \(y\) indicates the class label and \(p_{\phi}\) refers to the classifier. This formulation was altered by Nichol et al. (2021) to suit text-to-image generation as follows: \[x_{t-1}\sim\mathcal{N}(\mu_{\theta}(x_{t})+s\Sigma_{\theta}(x_{t})\nabla_{x_{ t}}\langle f(x_{t}),g(c)\rangle,\Sigma_{\theta}(x_{t})), \tag{2}\] where \(\langle\cdot,\cdot\rangle\) indicates the inner product and \(f\), \(g\), \(c\) are the image encoder, text encoder, and the text condition, respectively. Overall, the classifier guidance encourages the model to generate samples that are well-aligned with the condition according to a predefined metric. Obviously, the choice of this metric affects the final output sample, revealing how each metric (objective) connects the two modalities (_image and text, in our case_) at multiple levels. ### Pretrained Models As our goal is to study the semantic signals encoded in different objectives, we employ a pretrained diffusion backbone and a pretrained vision-language guidance model for our analysis. **Generative Model** For the image generator, we use a 256\(\times\)256 unconditional diffusion model pre-trained on ImageNet1 (Russakovsky et al., 2015). We note that this is _not_ a state-of-the-art text-to-image diffusion model. The unconditional nature of this model renders it well-suited for our purpose, as it solely relies on the classifier signal for condition-aware synthesis, making it possible to analyze the encoded semantic information in a disentangled manner. Employing an excessively powerful generator can similarly obfuscate our analysis, as its generative capacity can compensate for the weakness in the guidance signal and mask its blind spots. Footnote 1: [https://github.com/openai/guided-diffusion](https://github.com/openai/guided-diffusion) **Guidance Model** Among many available candidates, we choose BLIP (Li et al., 2022a) as our main guidance model2. This model is trained on 129M image-text pairs simultaneously optimizing for three objectives: image-text contrastive (**ITC**), image-text matching (**ITM**) and image captioning (**CAP**). The fact that a single model can evaluate these three scores makes it an excellent guidance model for our analysis, as we can safely minimize the compounding effects coming from using different models trained with different datasets, architectures and optimization schemes. We adopt the idea from Avrahami et al. (2022) to first predict the denoised version for guidance signal computation. Footnote 2: [https://github.com/salesforce/BLIP](https://github.com/salesforce/BLIP) ### Objectives and Benchmarks We analyze three commonly used multimodal objectives: **ITC, ITM** and **CAP**.
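To make the mechanics of Eqs. (1) and (2) concrete, a minimal PyTorch-style sketch of a single guided sampling step is given below. This is a sketch under stated assumptions: `diffusion` and `guidance_loss` are hypothetical callables standing in for the pretrained backbone and a BLIP-derived loss, not APIs from any released codebase.

```python
import torch

def guided_sampling_step(x_t, t, diffusion, guidance_loss, step_size):
    """One classifier-guided ancestral sampling step in the spirit of Eq. (1).

    Assumptions: `diffusion(x_t, t)` returns the predicted Gaussian parameters
    (mu, sigma2) for x_{t-1} as tensors; `guidance_loss(x_t)` returns a scalar
    multimodal loss (ITC, ITM, CAP, ...). In the setup of this paper the loss
    would be evaluated on a denoised prediction, following Avrahami et al. (2022).
    """
    x_t = x_t.detach().requires_grad_(True)
    mu, sigma2 = diffusion(x_t, t)
    # Sign reversed: the negative loss gradient plays the role of grad log p(y | x_t).
    grad = torch.autograd.grad(-guidance_loss(x_t), x_t)[0]
    # Shift the predicted mean along the guidance signal, then sample x_{t-1}.
    mean = mu + step_size * sigma2 * grad
    return (mean + sigma2.sqrt() * torch.randn_like(x_t)).detach()
```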
We simply replace the classifier logit in Eq. (1) with the corresponding loss terms of BLIP (with the sign reversed). For details about the loss computation, please refer to Li et al. (2022a). For systematic evaluation, we mainly follow the benchmark of Saharia et al. (2022), namely DrawBench. We further add prompts from COCO (Lin et al., 2014) captions. Qualitative evaluations as well as quantitative comparisons based on a user study are performed to draw insights. ## 3 Analyses In this section, we first present our empirical findings and propose a straightforward modification called **SHIFT** that reflects the insights by employing the coarse-to-fine guidance of **CAP** and **ITC**. We acknowledge that using a more powerful model would relieve some of the issues demonstrated in this section, as the data scale can compensate for the blind spots, but we believe the findings we present here nevertheless hold true and will come into play in an increasingly challenging problem setting. ### Findings _1. While **ITC** focuses on the fine details of the salient object, **CAP** tends to reason about the global scene composition._ Looking at Figure 7, we can clearly see that the contrastive loss is more effective in forming fine details of the main object, while it often leaves out _less important_ objects or attributes in the given prompt. We hypothesize the former is due to the core dynamic of contrastive learning, which aims to learn relative distances by _distinguishing_ objects. At the same time, as the objective only compares fully abstracted representations (_i.e.,_ [cls] tokens) that dominate each entity as a whole, it fails to densely parse the scene. On the other hand, the captioning objective forces the model to understand the scene structure at a deeper level, making captioning-guided samples more faithful to complex texts. _2. **ITC** commonly fuses visual semantics together to forcefully form a global semantic._ Figure 7 also illustrates a more extreme case where **ITC** not only omits semantic components but arbitrarily mixes them. From these examples, we can diagnose that the contrastive objective does reflect semantic attributes (_e.g., blue_) but fails to _relate_ them to the correct object (_e.g., dog_). This can be another side effect of simplified distance learning. Captioning, in contrast, requires the model to reason about both objects and their relations, deepening the scene-level understanding. _3. Patch-token cross-attention plays a key role in fine-grained visual understanding._ We now widen the scope of our analysis by further looking at **ITM**. In contrast to **ITC**, which simply compares two [cls] tokens, **ITM** involves lower-level cross-attention between image patch tokens and text tokens (as in **CAP**) to output a matching score between 0 and 1. We discovered that this operation plays an important role in fine-grained visual understanding and representation robustness. To our surprise, **ITM**, more or less an auxiliary loss to polish multimodal representations, encodes strong semantic signals that involve dense scene understanding. Looking at the generated samples, cross-attention-based objectives (**ITM**, **CAP** and **SHIFT**) demonstrate a capacity for fine-grained visual reasoning, and the **ITM** signal successfully materializes objects with corresponding attributes and relations, though at a lower visual quality. _4. Dense supervision makes the representations more robust to noise perturbations._ The last two rows of Figure 7 show the impact of noise in the text prompt.
They depict that, as opposed to **ITC**, which generates random objects under a mild typo, the other losses render relatively consistent outputs. Figure 1: Text-to-image samples using each objective as the classifier-guidance. We only present preliminary examples for illustrative purposes. Please refer to the Appendix for more cases. Another example delivers a similar insight, where the phrase 'camel in the _dessert_', rather than 'camel in the _desert_', is likely a mistake by the text provider. These cases are very probable in the typical setting where massive noisy image-text data are crawled from the web, and we observe that dense supervision that involves low-level patch-token cross-attention shows better robustness against textual perturbations, as perturbed text inputs attend to not only themselves but also visual information to form a more robust representation. _5. **CAP** is a more indirect, if not more challenging, form of supervision than **ITC** or **ITM**._ Lastly, we inspect the optimization complexity of each objective by varying the number of diffusion sampling steps. As each diffusion step corresponds to an update using the loss gradient, we regard an objective that generates reasonable samples with fewer steps as having lower optimization complexity. Referring to Figure 2, we see that **ITC** and **ITM** clearly take fewer steps to output realistic samples compared to caption-based losses. This observation coincides with Radford et al. (2021) and Yu et al. (2022), where the former explicitly chose the contrastive loss for training efficiency and the latter has been reported to require far more resources to converge due to captioning. We conclude that as captioning demands a more semantic visual understanding, learning becomes trickier compared to simple distance learning. ### New Baseline: Guidance Shift Based on the above findings, we propose a simple yet effective baseline that takes advantage of both ends, _i.e.,_ contrastive learning and captioning. To leverage the strengths of both, we introduce guidance shift, where we start with the captioning loss and gradually shift to the contrastive loss for the generative guidance. Formally, our **SHIFT** loss can be written as: \[\mathcal{L}_{SHIFT}=t\mathcal{L}_{ITC}+(1-t)\mathcal{L}_{CAP}, \tag{3}\] where \(t\) is the normalized time step, progressing from 0 to 1. The idea is to first outline the overall structure with **CAP** and then refine the details with **ITC**. To study its effectiveness, we conduct a quantitative user study as well as qualitative evaluations presented in the Appendix. Fig. 3 delivers the results. Compared to the simple **ITC** baseline, **SHIFT** outperforms in both fidelity and alignment. Although **CAP**-only shows better condition alignment, **SHIFT** clearly outputs better quality samples, which is apparent from qualitative results as well. **BLEND**, a naive baseline that simply mixes **CAP** and **ITC** without gradual transition, performs significantly worse, as these two signals can often be conflicting and difficult to optimize simultaneously. ## 4 Conclusion In this paper, we have studied the semantic information encoded in different multimodal objectives by visually analyzing their properties as generative diffusion guidance. We hope it provides useful insights for ensuing works and sparks further advances in the field. Figure 3: Human evaluation for photo-realism and condition-alignment. Figure 2: Generated samples for each objective and the number of diffusion sampling steps.
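For illustration, a minimal sketch of the guidance-shift schedule of Eq. (3) is given below; `itc_loss` and `cap_loss` are hypothetical callables standing in for the BLIP-based losses, and the normalized time follows the convention stated above (0 at the start of sampling, 1 at the end).

```python
def shift_guidance_loss(x0_pred, text, itc_loss, cap_loss, step, num_steps):
    """Guidance shift of Eq. (3): start from the captioning loss (global layout)
    and gradually hand over to the contrastive loss (fine details).

    `itc_loss` and `cap_loss` are assumed to return scalar BLIP-style losses
    evaluated on the denoised prediction `x0_pred` for the prompt `text`.
    """
    t = step / max(num_steps - 1, 1)  # normalized time, runs from 0 to 1 over the sampling run
    return t * itc_loss(x0_pred, text) + (1.0 - t) * cap_loss(x0_pred, text)
```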
2301.12042
The gauge-invariant formulation of the local expansion rate driven by the local average density in an inhomogeneous universe
The Hubble tension casts a blight on the standard cosmology. As a possible solution to the problem, the local variation of the expansion rate has been proposed where the spatial averaging over a finite domain was introduced in order to restore the local Friedmannian behavior in an inhomogeneous cosmology. So far, however, the approaches are limited to the particular choices of the gauges, and it has been unclear whether the results are gauge-invariant. In this paper, we present the gauge-invariant formulation of the local expansion rate which is driven by the spatial average of the gauge-invariant inhomogeneous density. We show that the local cosmological parameters in the finite domain may change from the global parameters, and the relations between them are expressed by the gauge-invariant averaged density.
Masanori Tomonaga, Masumi Kasai, Toshifumi Futamase
2023-01-28T01:28:11Z
http://arxiv.org/abs/2301.12042v2
# The gauge-invariant formulation of the local expansion rate driven by the local average density in an inhomogeneous universe ###### Abstract The Hubble tension casts a blight on the standard cosmology. As a possible solution to the problem, the local variation of the expansion rate has been proposed where the spatial averaging over a finite domain was introduced in order to restore the local Friedmannian behavior in an inhomogeneous cosmology. So far, however, the approaches are limited to the particular choices of the gauges, and it has been unclear whether the results are gauge-invariant. In this paper, we present the gauge-invariant formulation of the local expansion rate which is driven by the spatial average of the gauge-invariant inhomogeneous density. We show that the local cosmological parameters in the finite domain may change from the global parameters, and the relations between them are expressed by the gauge-invariant averaged density. ## 1 Introduction The Hubble constant \(H_{0}\) is one of the most important cosmological parameters since it characterizes the global properties of our universe. The standard cosmology is based on the assumption of homogeneity and isotropy. Thus, the Hubble parameter \(H_{0}\) is regarded as a constant over at least the horizon scale, which is also a prediction of the inflationary scenario. However, recent observations suggest a non-negligible difference between the local and global (or recent and old) Hubble parameters [1, 2]. There have been a large number of studies which try to resolve the discrepancy [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]. We take the view that the difference between the local and global Hubble parameters is real and can be explained by the inhomogeneous distribution of matter. In fact, the observation of the K-band luminosity density seems to suggest that the region within several hundred Mpc around us has low density, with density contrast \(\delta_{K}\sim-0.5\) compared with the globally averaged density [19]. Furthermore, there is some indication from weak lensing observations that the voids are actually of low density. Thus, it will be meaningful to pursue the implications of cosmological inhomogeneity. The homogeneous and isotropic universe (which we here call the Friedmann universe) appears as the result of some kind of averaging procedure, since the actual universe is very inhomogeneous. There are various ways of averaging an inhomogeneous universe (such as light-cone averaging, which is directly related to observational quantities). In this paper, we consider only scalar perturbations at linear order and spatial averaging [20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35]. Our purpose is not to study the averaging itself in general inhomogeneous spacetimes, but rather the gauge dependence of the relationship between the locally averaged and globally averaged spacetimes in the linearly perturbed universe, using spatial averaging.
By adopting the spatial averaging defined below, we were able to derive a locally averaged Friedmann universe and obtained the following relation between the locally averaged Hubble parameter and the globally averaged Hubble parameter within the framework of general relativistic perturbation theory [36, 37] \[H_{D0}=H_{0}\left(1-\frac{1}{3}f(t_{0})\langle\Delta\rangle_{Dt_{0}}\right), \tag{1}\] where \(H_{D0}\) is the averaged Hubble parameter at the present time \(t_{0}\) over a finite domain \(D\), and \(H_{0}\) is the global, or horizon-scale, Hubble parameter, and \(\langle\Delta\rangle_{Dt_{0}}\) is the present density contrast averaged over the domain \(D\). \(f(t)=d\log\Delta/d\log a\) is the growth function of the density contrast. However, the treatment was carried out in the comoving synchronous and Newtonian gauges, and there is some question as to whether the averaging and the result are gauge invariant or not. In order to answer the question, we study the spatial averaging in the framework of the gauge-invariant cosmological perturbation theory, and find that the local Hubble parameter (and the local cosmological parameters) can be described using gauge-invariant physical quantities averaged over the local region \(D\). ## 2 Gauge-invariant linear perturbation theory In this section, we briefly summarize the gauge-invariant perturbation theory [38, 39]. We assume a flat background with a dust fluid. Then the background metric is \[ds^{2}=-dt^{2}+a^{2}(t)\delta_{ij}dx^{i}dx^{j} \tag{2}\] and the energy momentum tensor is \[T^{\mu\nu}=\rho_{b}(t)\,u^{\mu}u^{\nu},\quad u^{\mu}=(1,0,0,0)\,. \tag{3}\] From the Einstein equations, we obtain the following Friedmann equation \[\left(\frac{\dot{a}}{a}\right)^{2}=\frac{8\pi G}{3}\rho_{b}+\frac{\Lambda}{3} \tag{4}\] and from \(T^{\mu\nu}_{\quad\quad;\nu}=0\), we obtain the energy conservation equation \[\dot{\rho}_{b}+3\frac{\dot{a}}{a}\rho_{b}=0\,. \tag{5}\] Next, we write the metric and the energy-momentum tensor in the perturbed universe as follows: \[ds^{2} =g_{\mu\nu}dx^{\mu}dx^{\nu} \tag{6}\] \[g_{00} =-(1+2A)\] (7) \[g_{0i} =-B_{,i}\] (8) \[g_{ij} =a^{2}(t)\left(\delta_{ij}+2E_{,ij}+2F\delta_{ij}\right)\] (9) \[T^{\mu\nu} =\rho\,u^{\mu}u^{\nu}\] (10) \[\rho =\rho_{b}\left(1+\delta\right)\] (11) \[u^{\mu} =\left(u^{0},u^{i}\right)=\left(1-A,a^{-2}\delta^{ij}v_{,j}\right), \tag{12}\] where we consider only scalar perturbations; the scalar perturbation variables \(A,B,E,F,\delta\) and \(v\) are arbitrary functions of \(t\) and \(x^{i}\), and are assumed to be small quantities. Now consider the scalar type infinitesimal gauge transformation \[\bar{t} =t+\alpha\,, \tag{13}\] \[\bar{x}^{i} =x^{i}+\delta^{ij}\beta_{,j}\, \tag{14}\] where \(\alpha\) and \(\beta\) are arbitrary functions of \(t\) and \(x^{i}\), which are regarded as being as small as the perturbation variables. The perturbed quantities transform as \[\bar{A} =A-\dot{\alpha}\,, \tag{15}\] \[\bar{B} =B-\alpha+a^{2}\dot{\beta}\,,\] (16) \[\bar{E} =E-\beta\,,\] (17) \[\bar{F} =F-\frac{\dot{a}}{a}\alpha\,,\] (18) \[\bar{\delta} =\delta+3\frac{\dot{a}}{a}\alpha\,,\] (19) \[\bar{v} =v+a^{2}\dot{\beta}\,. \tag{20}\] Then, the following gauge invariant quantities are defined in the usual manner. \[\Phi \equiv A-\left(B+a^{2}\dot{E}\right)^{\cdot}\,, \tag{21}\] \[\Psi \equiv F-\frac{\dot{a}}{a}\left(B+a^{2}\dot{E}\right)\,,\] (22) \[\varDelta \equiv\delta-3\frac{\dot{a}}{a}\left(v-B\right)\,,\] (23) \[V \equiv v+a^{2}\dot{E}\,.
\tag{24}\] Using these quantities, we can obtain the first-order equations in terms of the gauge invariant quantities of linearized Einstein equation as follows: \[-\frac{1}{a^{2}}\nabla^{2}\Psi =4\pi G\rho_{b}\varDelta \tag{25}\] \[\frac{\dot{a}}{a}\Phi-\dot{\Psi} =-4\pi G\rho_{b}V\] (26) \[\Psi+\Phi =0. \tag{27}\] Using (27), the equations (25) and (26) are re-written as \[\frac{1}{a^{2}}\nabla^{2}\Phi =4\pi G\rho_{b}\varDelta, \tag{28}\] \[\dot{\Phi}+\frac{\dot{a}}{a}\Phi =-4\pi G\rho_{b}V. \tag{29}\] From \(T^{\mu\nu}_{\ \ ;\nu}=0\), we obtain \[\dot{\varDelta}+\frac{1}{a^{2}}\nabla^{2}V =0, \tag{30}\] \[\dot{V}+\Phi =0. \tag{31}\] Differentiating (30) with respect to \(t\) and using (28) and (31), we obtain \[\ddot{\varDelta}+2\frac{\dot{a}}{a}\dot{\varDelta}-4\pi G\rho_{b}\varDelta=0. \tag{32}\] The solution of the second-order differential equation (32) generally has two independent modes as follows: \[\varDelta(t,x^{i})=\mathcal{D}_{+}(t)Q_{+}(x^{i})+\mathcal{D}_{-}(t)Q_{-}(x^{i })\,, \tag{33}\] where \[\mathcal{D}_{+}(t) =H\int^{t}\frac{dt^{\prime}}{\left(aH\right)^{2}}\,, \tag{34}\] \[\mathcal{D}_{-}(t) =H=\frac{\dot{a}}{a}\,, \tag{35}\] \(Q_{+}(x)\) and \(Q_{-}(x)\) represent the spatially dependent part of the growing and decaying mode of the density contrast, respectively. In summary, from (4) and (28) multiplied by \(2/3\), we obtain the following equation \[\left(\frac{\dot{a}}{a}\right)^{2}+\frac{2}{3}\frac{1}{a^{2}}\nabla^{2}\Phi= \frac{8\pi G}{3}\left(\rho_{b}+\rho_{b}\Delta\right)+\frac{\Lambda}{3} \tag{36}\] as the perturbed version of the Friedmann equation, and from (5) including \(\rho_{b}\Delta\), \[\frac{\partial}{\partial t}\left(\rho_{b}+\rho_{b}\Delta\right)+3\frac{\dot{a }}{a}\left(\rho_{b}+\rho_{b}\Delta\right)-\rho_{b}\dot{\Delta}=0 \tag{37}\] as the perturbed version of the energy conservation equation. ## 3 Spatial averaging over a local domain in the perturbed universe In the previous section, we have employed the standard assumption that the inhomogeneous matter density \(\rho\) can be decomposed into the homogeneous background part \(\rho_{b}(t)\) and the small perturbed part \(\delta\). In the actual inhomogeneous universe, however, we need to extract the homogeneous part through the averaging procedure. We define the spatial volume \(V_{D}\) of a finite small domain \(D\) in the \(t=\) const. hypersurface \(\Sigma_{t}\) as \[V_{D}\equiv\int_{D}\sqrt{\det(g_{ij})}\,d^{3}x\,. \tag{38}\] \(D\) is sufficiently smaller than the horizon scale but more than the scale at which the picture of the Hubble expansion is valid, e.g. more than several 10 Mpc. Using the metric described in the previous section, \(\Sigma_{t}\) is specified by the normal vector \[n^{\mu}=\left(1-A,\frac{1}{a^{2}}\delta^{ij}B_{,j}\right)\,. \tag{39}\] Contrary to those in [36] and [37], no gauge-fixing is made in (38) in order to specify the \(t=\) const. hypersurface \(\Sigma_{t}\). Fixing the gauge \(A=B=v=0\) reproduces the results in [36], and another gauge \(B=E=0\) leads to those in [37]. The spatial average of a scalar quantity \(Q(t,x^{i})\) over the domain \(D\) is in general \[\langle Q\rangle\equiv\frac{1}{V_{D}}\int_{D}Q\sqrt{\det(g_{ij})}\,d^{3}x\,. \tag{40}\] Therefore, the average density in this domain is \[\langle\rho\rangle\equiv\frac{1}{V_{D}}\int_{D}\rho\sqrt{\det(g_{ij})}\,d^{3} x\,. 
\tag{41}\] Since we can observe only a finite nearby region of the entire space, it is likely that the average density \(\langle\rho\rangle\) in the nearby region does not always coincide with the background density \(\rho_{b}\). Spatially averaging (37) over a local domain \(D\), we obtain \[\left\langle\frac{\partial\varrho}{\partial t}\right\rangle+3\frac{\dot{a}}{a} \langle\varrho\rangle-\rho_{b}\langle\dot{\varDelta}\rangle=0\,, \tag{42}\] where we have defined the gauge-invariant inhomogeneous density \[\varrho\equiv\rho_{b}+\rho_{b}\varDelta \tag{43}\] in order to distinguish it from \(\rho=\rho_{b}+\rho_{b}\delta\). Note that the time derivative does not commute with the spatial averaging in general. In fact, for a physical quantity \(Q\) we have \[\left\langle\frac{\partial Q}{\partial t}\right\rangle-\frac{d}{dt}\left\langle Q \right\rangle=\left\langle\frac{1}{2}g^{ij}\dot{g}_{ij}\right\rangle\langle Q \rangle-\left\langle\frac{1}{2}g^{ij}\dot{g}_{ij}Q\right\rangle\,. \tag{44}\] However, if we consider the case \(Q\rightarrow\varrho\) up to the linear order, we obtain \[\left\langle\frac{\partial\varrho}{\partial t}\right\rangle- \frac{d}{dt}\left\langle\varrho\right\rangle =\left\langle\frac{1}{2}g^{ij}\dot{g}_{ij}\right\rangle\langle \rho_{b}+\rho_{b}\varDelta\rangle-\left\langle\frac{1}{2}g^{ij}\dot{g}_{ij}( \rho_{b}+\rho_{b}\varDelta)\right\rangle \tag{45}\] \[=\left\langle 3\frac{\dot{a}}{a}\right\rangle\langle\rho_{b} \varDelta\rangle-\left\langle 3\frac{\dot{a}}{a}\rho_{b}\varDelta\right\rangle\] \[=0.\] Therefore, using the relation (45), it is straightforward to show from (42) that the following equation holds up to the linear order of the perturbations: \[\frac{d}{dt}\left\langle\varrho\right\rangle+3\frac{\dot{a}_{D}}{a_{D}}\left \langle\varrho\right\rangle=0\,, \tag{46}\] where \[\frac{\dot{a}_{D}}{a_{D}}\equiv\frac{\dot{a}}{a}-\frac{1}{3}\langle\dot{\varDelta}\rangle \tag{47}\] can be regarded as the local expansion rate driven by the local average density \(\langle\varrho\rangle\). In order to express (36) in terms of \(a_{D}\), we rewrite it as \[\left(\frac{\dot{a}}{a}\right)^{2}-\frac{2}{3}\frac{\dot{a}}{a}\dot{\varDelta} +\frac{2}{3}\frac{1}{a^{2}}\left(\nabla^{2}\varPhi+a^{2}\frac{\dot{a}}{a} \dot{\varDelta}\right)=\frac{8\pi G}{3}\left(\rho_{b}+\rho_{b}\varDelta\right) +\frac{\Lambda}{3}. \tag{48}\] Spatially averaging (48) and substituting (47), we obtain \[\left(\frac{\dot{a}_{D}}{a_{D}}\right)^{2}+\frac{K_{\rm eff}}{a_{D}^{2}}= \frac{8\pi G}{3}\langle\varrho\rangle+\frac{\Lambda}{3}\,, \tag{49}\] where \[K_{\rm eff} \equiv\frac{2}{3}\left\langle\nabla^{2}\Phi+a^{2}\frac{\dot{a}}{a} \dot{\varDelta}\right\rangle \tag{50}\] \[=\frac{2}{3}\left\langle-a^{2}\dot{H}\Delta+a^{2}H\dot{\Delta}\right\rangle\] (51) \[=\frac{2}{3}a^{2}H^{2}\left(\frac{\mathcal{D}_{+}}{H}\right)^{ \cdot}\left\langle Q_{+}(x_{i})\right\rangle\] (52) \[=\frac{2}{3}\langle Q_{+}(x_{i})\rangle \tag{53}\] is a constant which can be regarded as the effective curvature constant on the local domain in the averaged sense. Although (48) looks similar to that obtained in [36], we emphasize the following advantages of our analysis in this paper over that in [36]. 1. [36]'s result is heavily dependent on the solution \(\delta\propto a\) in the Einstein-de Sitter background. In particular, all of the averaged quantities are defined and calculated directly using the growing mode solution in the Einstein-de Sitter background described in eqs. (14-17) of [36]. So it is unclear whether it holds in any other background.
In this paper, we explicitly showed that this averaged picture holds in backgrounds other than the Einstein-de Sitter background, in particular even in a \(\Lambda\neq 0\) background. 2. If the decaying mode of \(\delta\) is not ignored, the treatment of [36] does not work. However, our discussion has no problem even if we include the decaying mode. 3. It was unclear whether [36]'s result is valid in gauges other than the comoving synchronous gauge. Therefore, we explicitly showed that the averaged density, expansion rate, and (effective) curvature constant in an inhomogeneous universe can all be described using the spatial average of gauge-invariant variables. ## 4 The cosmological parameters in the nearby regions expressed by the gauge-invariant variables We define the global Hubble parameter as \[H_{0}\equiv\frac{\dot{a}}{a}\Big{|}_{t_{0}} \tag{54}\] and the global density parameters as \[\Omega_{m}\equiv\frac{8\pi G\rho_{b}(t_{0})}{3H_{0}^{2}} \tag{55}\] \[\Omega_{\Lambda}\equiv\frac{\Lambda}{3H_{0}^{2}}\,, \tag{56}\] where \(\Omega_{m}+\Omega_{\Lambda}=1\) since we have assumed the flat background. These global parameters are supposed to be determined by the very large-scale and distant observations such as the cosmic microwave background. On the other hand, the cosmological parameters which are obtained from the observations in the local nearby regions are certainly determined by the local average density \(\langle\varrho\rangle\), rather than by the background density \(\rho_{b}\). We define the local Hubble parameter as \[\tilde{H}_{0}\equiv\frac{\dot{a}_{D}}{a_{D}}\Big{|}_{t_{0}}=H_{0}\left(1-\frac {1}{3}f(t_{0})\langle\Delta\rangle_{t_{0}}\right)\,, \tag{57}\] where \[f(t)\equiv\frac{d\ln{\cal D}_{+}}{d\ln a} \tag{58}\] is the growth function of the gauge-invariant density perturbation \(\Delta\), and the local density parameters as \[\tilde{\Omega}_{m}\equiv\frac{8\pi G\langle\varrho\rangle}{3\tilde{H}_{0}^{2 }}=\Omega_{m}\left\{1+\left(1+\frac{2}{3}f(t_{0})\right)\langle\Delta\rangle_ {t_{0}}\right\} \tag{59}\] and \[\tilde{\Omega}_{\Lambda}\equiv\frac{\Lambda}{3\tilde{H}_{0}^{2}}=\Omega_{ \Lambda}\left(1+\frac{2}{3}f(t_{0})\langle\Delta\rangle_{t_{0}}\right)\,, \tag{60}\] which are valid up to the linear order in the gauge-invariant variable \(\Delta\). The local cosmological parameters coincide with the global ones if and only if \(\langle\Delta\rangle=0\). Otherwise, the local parameters may change. Let us show a simple estimation in the case \(\Lambda=0\), where \(f(t)=1\). If the local nearby region is, say, 30% under-dense, namely \(\langle\Delta\rangle_{t_{0}}=-0.3\), the local Hubble parameter \(\tilde{H}_{0}\) can be 10% larger than the global \(H_{0}\). ## 5 Conclusion and Discussion Motivated by the Hubble tension, there have been many studies on its possible resolutions. One of them is the local variation of the cosmological parameters due to the inhomogeneous matter distribution. We have also studied the inhomogeneous universe by spatial averaging and obtained an interesting result on the relation between the local and global Hubble parameters which might explain the Hubble tension. However, the question of the gauge invariance of the result has not been fully understood. In this paper we address this question. We employ the gauge-invariant linear cosmological perturbation theory to show that the relationship between the local and global cosmological parameters can be described in terms of gauge-invariant physical quantities averaged over the local region.
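As a quick numerical illustration of Eqs. (57)-(60), a minimal sketch is given below; the fiducial numbers are assumptions chosen purely for illustration, not values advocated in this paper.

```python
# Fiducial inputs (assumed for illustration only):
f0 = 1.0           # growth rate f(t0); equal to 1 for the Lambda = 0 case
delta_D = -0.3     # locally averaged gauge-invariant density contrast <Delta>_{t0}
H0 = 70.0          # global Hubble parameter [km/s/Mpc]
Om, OL = 1.0, 0.0  # global density parameters of the flat background

H_local = H0 * (1.0 - f0 * delta_D / 3.0)                 # Eq. (57): 77.0, i.e. ~10% larger
Om_local = Om * (1.0 + (1.0 + 2.0 * f0 / 3.0) * delta_D)  # Eq. (59): 0.5
OL_local = OL * (1.0 + 2.0 * f0 * delta_D / 3.0)          # Eq. (60): 0.0

print(H_local, Om_local, OL_local)
```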
It is of some interest to develop this treatment to the second order, since the density contrast reported by the observation of the K-band luminosity density is of the order \(-0.5\). Although we gave an argument based on an order-of-magnitude discussion of the cosmological Poisson equation, it is clearly not sufficient. Another direction of this study is to consider a possible interpretation, in terms of the inhomogeneity, of the observed m-z relation of Type Ia supernovae and the CMB power spectrum. We hope to study this possibility in the future. ## Acknowledgment This work is supported by a Grant-in-Aid for Scientific Research from JSPS (No. 20K03937) for TF.
2306.16283
Schwinger poles of the three-gluon vertex: symmetry and dynamics
The implementation of the Schwinger mechanism endows gluons with a nonperturbative mass through the formation of special massless poles in the fundamental QCD vertices; due to their longitudinal character, these poles do not cause divergences in on-shell amplitudes, but induce detectable effects in the Green's functions of the theory. Particularly important in this theoretical setup is the three-gluon vertex, whose pole content extends beyond the minimal structure required for the generation of a gluon mass. In the present work we analyze these additional pole patterns by means of two distinct, but ultimately equivalent, methods: the Slavnov-Taylor identity satisfied by the three-gluon vertex, and the nonlinear Schwinger-Dyson equation that governs the dynamical evolution of this vertex. Our analysis reveals that the Slavnov-Taylor identity imposes strict model-independent constraints on the associated residues, preventing them from vanishing. Approximate versions of these constraints are subsequently recovered from the Schwinger-Dyson equation, once the elements responsible for the activation of the Schwinger mechanism have been duly incorporated. The excellent coincidence between the two approaches exposes a profound connection between symmetry and dynamics, and serves as a nontrivial self-consistency test of this particular mass generating scenario.
A. C. Aguilar, M. N. Ferreira, B. M. Oliveira, J. Papavassiliou, L. R. Santos
2023-06-28T15:04:10Z
http://arxiv.org/abs/2306.16283v1
# Schwinger poles of the three-gluon vertex: symmetry and dynamics ###### Abstract The implementation of the Schwinger mechanism endows gluons with a nonperturbative mass through the formation of special massless poles in the fundamental QCD vertices; due to their longitudinal character, these poles do not cause divergences in on-shell amplitudes, but induce detectable effects in the Green's functions of the theory. Particularly important in this theoretical setup is the three-gluon vertex, whose pole content extends beyond the minimal structure required for the generation of a gluon mass. In the present work we analyze these additional pole patterns by means of two distinct, but ultimately equivalent, methods: the Slavnov-Taylor identity satisfied by the three-gluon vertex, and the nonlinear Schwinger-Dyson equation that governs the dynamical evolution of this vertex. Our analysis reveals that the Slavnov-Taylor identity imposes strict model-independent constraints on the associated residues, preventing them from vanishing. Approximate versions of these constraints are subsequently recovered from the Schwinger-Dyson equation, once the elements responsible for the activation of the Schwinger mechanism have been duly incorporated. The excellent coincidence between the two approaches exposes a profound connection between symmetry and dynamics, and serves as a nontrivial self-consistency test of this particular mass generating scenario. ## I Introduction The emergence of a gluon mass [1; 2; 3; 4; 5; 6; 7; 8; 9] through the action of the Schwinger mechanism [10; 11] represents a prime example of how mass may emanate from interaction [12]. Indeed, the most appealing attribute of this mechanism is that it arises entirely from the underlying dynamics, without the slightest modification of the fundamental Lagrangian that defines the theory, and, most importantly, leaving the local gauge symmetry intact [13; 14]. The cornerstone of the Schwinger mechanism is the nonperturbative formation of colored composite excitations with vanishing mass in the vertices of the theory [15; 16; 17; 18; 19; 20; 21], and especially in the three-gluon vertex, \(\mathrm{I\!\Gamma}_{\alpha\mu\nu}(q,r,p)\)[1; 9; 22; 23; 24]; for a variety of different approaches, see [25; 26; 27; 28; 29; 30; 31; 32; 33; 34]. A special subset of these massless poles is transmitted to the gluon propagator, \(\Delta(q)\), through the coupled dynamical equations of motion, _i.e._, Schwinger-Dyson equations (SDEs) [13; 14; 35; 36; 37; 38; 39; 40; 41; 42], triggering finally its saturation at the origin, \(\Delta^{-1}(0)=m^{2}>0\)[22; 23; 24; 43; 44]. Due to the special dynamical details governing their formation, the massless poles of the three-gluon vertex are _longitudinally coupled_[15; 16; 17; 18; 19; 20; 21], _i.e._, they correspond to tensorial structures of the general form \(q_{\alpha}/q^{2}\), \(r_{\mu}/r^{2}\), and \(p_{\nu}/p^{2}\). As a result, they are not directly detectable in on-shell amplitudes, nor in lattice simulations of the corresponding correlation functions [45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62]; nonetheless, their effects are. Thus, in addition to causing the infrared saturation of the gluon propagator, the form factor \(\mathbb{C}(q)\) associated with the pole induces a smoking-gun modification ("displacement") to the Ward identity of the three-gluon vertex [14; 63; 64; 65].
Most importantly, the nonvanishing of \(\mathbb{C}(q)\) has been unequivocally confirmed in [64], through the suitable combination of key inputs obtained from lattice QCD [57; 59; 66; 67]. This encouraging result motivates the further detailed scrutiny of the key features that the Schwinger mechanism induces in the three-gluon vertex. The main purpose of the present work is to carry out an extensive study of the full pole content of this vertex, determine the structure and role of the main components, and expose the delicate interplay between symmetry and dynamics that prompts their appearance. In that sense, our analysis provides a nontrivial confirmation of the internal consistency of this rather elaborate mass generating approach. The dynamics of the pole formation are encoded in the nonlinear SDE that controls the evolution of \(\mathds{I}\Gamma_{\alpha\mu\nu}(q,r,p)\). In their primordial manifestation, the massless poles arise as bound states of a particular kernel appearing in the skeleton expansion of this SDE [43; 22; 23; 24; 44; 22]; they are simple, of the type \(1/q^{2}\), \(1/r^{2}\), and \(1/p^{2}\). When inserted into the SDE for \(\Delta(q)\), only the pole in the direction of \(q\) is relevant for the generation of the gluon mass, which is expressed as an integral over the residue of this pole. However, due to the nonlinear nature of the vertex SDE, these "primary" poles give rise to additional "secondary" structures, corresponding to mixed double poles, of the general type \(1/q^{2}r^{2},1/q^{2}p^{2},1/r^{2}p^{2}\). In the Landau gauge, these poles are inert as far as mass generation is concerned; however, their presence is instrumental for the self-consistency of the entire approach, and in particular for preserving the fundamental relations that arise from the Becchi-Rouet-Stora-Tyutin (BRST) symmetry [68; 69] of the gauge-fixed Yang-Mills Lagrangian. Indeed, the emergence of mixed poles finds its most compelling justification when the Slavnov-Taylor identity (STI) [70; 71] of the three-gluon vertex [72; 73; 74; 75] is invoked. In its abelianized version, with the contributions of the ghost sector switched off, this STI states that \(q^{\alpha}\mathds{I}\Gamma_{\alpha\mu\nu}(q,r,p)=P_{\mu\nu}(p)\Delta^{-1}(p)- P_{\mu\nu}(r)\Delta^{-1}(r)\), where \(P_{\mu\nu}(q)=g_{\mu\nu}-q_{\mu}q_{\nu}/q^{2}\) is the standard projection operator. Let us now assume that the gluon propagator is infrared finite, _i.e._, \(\Delta^{-1}(0)=m^{2}\). Then, in the limit \(p^{2}\to 0\) or \(r^{2}\to 0\), the r.h.s. of the STI displays longitudinally coupled massless poles, \(p_{\mu}p_{\nu}/p^{2}\) and \(r_{\mu}r_{\nu}/r^{2}\), whose residue is \(m^{2}\). Consequently, self-consistency requires that, in the same kinematic limits, the l.h.s. should exhibit the exact same pole structure, _i.e._, \(\mathds{I}\Gamma_{\alpha\mu\nu}(q,r,p)\) must contain mixed poles, of the type \(q_{\alpha}p_{\mu}p_{\nu}/q^{2}p^{2}\) and \(q_{\alpha}r_{\mu}r_{\nu}/q^{2}r^{2}\), precisely as predicted by the vertex SDE. The exact matching of pole contributions on both sides of the STI (with the ghost contributions duly restored) gives rise to a nontrivial relation, which expresses the form factors associated with the mixed poles in terms of components that appear on the r.h.s. of the STI. Quite interestingly, an approximate form of this special relation may be recovered from a truncated version of the vertex SDE. 
Moreover, an analogous construction reveals that the presence of a genuine triple mixed pole, of the type \(1/q^{2}p^{2}r^{2}\) is _excluded_ by both the STI and the SDE, being effectively reduced to a divergence weaker than a double mixed pole. These two exercises are especially illuminating, exposing a powerful synergy between symmetry and dynamics: whereas the STI (BRST symmetry) imposes relations that are valid regardless of the dynamical details, the SDE (nonlinear dynamics) reproduces them thanks to the distinct pole content induced by the Schwinger mechanism. The article is organized as follows. In Sec. II we summarize the most salient features of the Schwinger mechanism in QCD, commenting on some of its most recent advances. Then, in Sec. III we discuss in detail the pole structure induced to the three-gluon vertex when the Schwinger mechanism is activated, and in particular the appearance of mixed double and triple poles. In Sec. IV we construct a tensor basis for the pole part of the vertex, which makes its Bose symmetry and longitudinal nature manifest, and will be used throughout this work. In Sec. V we consider the STI satisfied by the three-gluon vertex, and derive a crucial relation for a special kinematic limit of the form factor associated with the mixed double poles, denominated "residue function". Then, in Sec. VI, we turn to the SDE of the three-gluon vertex, and derive, under certain simplifying assumptions, an approximate version of the aforementioned relation for the residue function. In continuation, in Sec. VII we compute the residue function using as inputs all the components entering in that relation. Then, in Sec. VIII, we demonstrate that both the STI and the detailed dynamics reduce substantially the strength of the triple mixed pole. Finally, in Sec. IX we present our discussion and conclusions. ## II Schwinger mechanism in QCD: general concepts In this section we present a brief overview of the implementation of the Schwinger mechanism in the context of a Yang-Mills theory; for further details, the reader is referred to two recent review articles [13; 14]. The natural starting point of the discussion is the gluon propagator, \(\Delta^{ab}_{\mu\nu}(q)=-i\delta^{ab}\Delta_{\mu\nu}(q)\). In the _Landau gauge_ that we employ throughout, \(\Delta_{\mu\nu}(q)\) assumes the completely transverse form \[\Delta_{\mu\nu}(q)=\Delta(q)P_{\mu\nu}(q)\,,\qquad P_{\mu\nu}(q):=g_{\mu\nu}- q_{\mu}q_{\nu}/q^{2}\,. \tag{1}\] In the continuum, the momentum evolution of the function \(\Delta(q)\) is determined by the corresponding SDE (Minkowski space), \[\Delta^{-1}(q)=q^{2}+i\Pi(q)\,, \tag{2}\] where \(\Pi(q)\) is the scalar form factor of the gluon self-energy, \[\Pi_{\mu\nu}(q)=\Pi(q)P_{\mu\nu}(q)\,, \tag{3}\] depicted diagrammatically in Fig. 1. Note that the fully-dressed vertices, \(\mathds{I}\Gamma\), of the theory enter in the diagrams defining \(\Pi_{\mu\nu}(q)\). In addition, it is convenient to introduce the dimensionless vacuum polarization, \(\mathbf{\Pi}(q)\), defined as \(\Pi(q)=q^{2}\mathbf{\Pi}(q)\), such that \(\Delta^{-1}(q)=q^{2}[1+\mathbf{\Pi}(q)]\). The basic premise underpinning the Schwinger mechanism may be expressed as follows: if \(\mathbf{\Pi}(q)\) develops a pole with positive residue at \(q^{2}=0\) (_massless pole_), the gauge boson (gluon) acquires a mass, even if the symmetries of the theory do not admit a mass term at the level of the fundamental Lagrangian [10; 11]. 
In particular, the appearance of such a pole triggers the basic sequence (Euclidean space) \[\lim_{q\to 0}\mathbf{\Pi}(q)=m^{2}/q^{2}\ \Longrightarrow\ \lim_{q\to 0}\,\Delta^{-1}(q)=\lim_{q\to 0}\,(q^{2}+m^{2})\ \Longrightarrow\ \Delta^{-1}(0)=m^{2}\,, \tag{2.4}\] where the residue of the pole acts as the effective squared gluon mass, \(m^{2}\). The pivotal result captured by Eq. (2.4) invites the natural question of what makes \(\mathbf{\Pi}(q)\) exhibit massless poles in the first place. In the case of four-dimensional Yang-Mills theories, such as QCD, the answer to this question is that these poles are transmitted to \(\mathbf{\Pi}(q)\) by the fully-dressed vertices that appear in the diagrammatic expansion of the gluon self-energy [1; 9; 13; 14; 76], see Fig. 1. The poles of the vertices are produced dynamically, when elementary fields (_e.g._, two gluons, two ghosts, or three gluons) merge to create composite colored scalars with vanishing masses [15; 16; 17; 18; 19; 20; 21]. These processes are controlled by appropriate bound-state equations, analogous to the standard Bethe-Salpeter equations (BSEs) [77; 78]; they arise as special kinematic limits (\(q\to 0\)) of the SDEs governing the various vertices [22; 24; 43; 44]. The residues of the vertices are functions of the remaining kinematic variables; when convoluted with the rest of the components comprising the gluon SDE, they account for the final residue, \(m^{2}\), that one identifies as the squared gluon mass in Eq. (2.4) [22; 24; 43; 44]. Figure 1: The diagrammatic representation of the gluon self-energy. The fully-dressed three-gluon, ghost-gluon, and four-gluon vertices are depicted as red, blue, and green circles, respectively. The special analytic structure of these vertices induces the poles required for the activation of the Schwinger mechanism. To elucidate how a contribution to the total gluon mass emerges from diagram \((a_{1})\) in Fig. 1, consider the three-gluon vertex \(\hbox{$\rm I$\kern-1.8pt$\Gamma$}^{abc}_{\alpha\mu\nu}(q,r,p)=gf^{abc}\hbox{$ \rm I$\kern-1.8pt$\Gamma$}_{\alpha\mu\nu}(q,r,p)\), where \(g\) is the gauge coupling, \(f^{abc}\) the structure constants of the SU(3) gauge group, and \(q+r+p=0\). The formation of the poles in the three-gluon vertex may be described by separating \(\hbox{$\rm I$\kern-1.8pt$\Gamma$}_{\alpha\mu\nu}(q,r,p)\) in two distinct pieces, \[\hbox{$\rm I$\kern-1.8pt$\Gamma$}_{\alpha\mu\nu}(q,r,p)=\Gamma_{\alpha\mu\nu}( q,r,p)+V_{\alpha\mu\nu}(q,r,p)\,, \tag{2.5}\] where \(\Gamma_{\alpha\mu\nu}(q,r,p)\) represents the pole-free component, while \(V_{\alpha\mu\nu}(q,r,p)\), whose origin is purely non-perturbative, contains all pole-related contributions. As we will discuss in detail in the next sections, the composition of \(V_{\alpha\mu\nu}(q,r,p)\) is rather elaborate; however, for the purposes of creating a mass for the gluon propagator in the Landau gauge, only a minimal structure of \(V_{\alpha\mu\nu}(q,r,p)\) is required, namely1 Footnote 1: In previous works [43; 44; 63; 64; 79], \(V_{1}(q,r,p)\) has been denoted as \(C_{1}(q,r,p)\). \[V_{\alpha\mu\nu}(q,r,p)=\frac{q_{\alpha}}{q^{2}}g_{\mu\nu}V_{1}(q,r,p)+\cdots\,, \tag{2.6}\] where all omitted terms drop out when \(V_{\alpha\mu\nu}(q,r,p)\) is inserted in diagrams \((a_{1})\). 
A detailed analysis reveals that [79] \[V_{1}(0,r,-r)=0\,; \tag{2.7}\] therefore, the Taylor expansion of \(V_{1}(q,r,p)\) around \(q=0\) yields \[\lim_{q\to 0}V_{1}(q,r,p)=2(q\cdot r)\,\mathbb{C}(r)+\,{\cal O}(q^{2})\,, \qquad\qquad{\mathbb{C}}(r):=\left[\frac{\partial V_{1}(q,r,p)}{\partial p^{ 2}}\right]_{q=0}\,. \tag{2.8}\] With the aid of Eq. (2.8), and after the extraction of the appropriate tensorial structure, the integral associated with the diagram \((a_{1})\) yields \[m^{2}_{(a_{1})}=-\Delta^{-1}_{(a_{1})}(0)=3\lambda Z_{3}\int_{k}k^{2}\Delta^{ 2}(k){\mathbb{C}}(k)\,, \tag{2.9}\] with \[\lambda:=ig^{2}C_{\rm A}/2\,, \tag{2.10}\] where \(C_{\rm A}\) is the Casimir eigenvalue of the adjoint representation [\(N\) for SU(\(N\))]. In the above formula, \(Z_{3}\) stands for the renormalization constant of the three-gluon vertex, and we denote by \[\int_{k}:=\frac{1}{(2\pi)^{4}}\int\!\!{\rm d}^{4}k \tag{2.11}\] the integration over virtual momenta; the use of a symmetry-preserving regularization scheme is implicitly assumed. We next use standard rules (see, e.g., [63]) to rewrite Eq. (2.9) in Euclidean space; note, in particular, that \(m^{2}=\Delta^{-1}_{\mbox{\tiny E}}(0)\). Then, using hyperspherical coordinates, we obtain \[m^{2}_{(a_{1})}=\frac{3\alpha_{s}C_{\rm A}Z_{3}}{8\pi}\int_{0}^{\infty}\!\!\!dy \,y^{2}\Delta_{\mbox{\tiny E}}^{2}(y)\,|\mathbb{C}_{\mbox{\tiny E}}(y)|\, \tag{2.12}\] with \(\alpha_{s}:=g^{2}/(4\pi)\) and \(y:=k_{\mbox{\tiny E}}^{2}\). Evidently, \(m^{2}\) depends on the renormalization point, \(\mu\); in particular, \(m=348\) MeV for \(\mu=4.3\) GeV [34; 67]2. Footnote 2: A renormalization-group-invariant gluonic mass scale of about half the proton mass has been obtained from the process-independent QCD effective charge [80; 81]. We emphasize that \(\mathbb{C}_{\mbox{\tiny E}}(q)\), in addition to providing the gluon mass through Eq. (2.12) and its two-loop extension, plays a central role in this entire construction due to its dual nature. In particular: (_i_) \(\mathbb{C}_{\mbox{\tiny E}}(r)\) is the _BS amplitude_ describing the formation of gluon-gluon _colored_ composite bound states; (_ii_) \(\mathbb{C}_{\mbox{\tiny E}}(r)\) leads to a characteristic displacement of the WI satisfied by the pole-free part of the three-gluon vertex; for that reason, \(\mathbb{C}_{\mbox{\tiny E}}(r)\) is called "_displacement function_". This predicted displacement has been confirmed by combining judiciously the results of several lattice simulations [64; 65]; as shown in Fig. 2, the result for \(\mathbb{C}_{\mbox{\tiny E}}(r)\) is clearly nonvanishing. ## III Schwinger poles of the three-gluon vertex In this section we elaborate on the pole content of the three-gluon vertex, which arises as a consequence of the activation of the Schwinger mechanism. Our analysis relies on the bound-state interpretation of the poles associated with the Schwinger mechanism (see _e.g._, [43; 44; 63; 64; 79]), making extensive use of the diagrammatic structure of the SDE of the three-gluon vertex. The dynamics of \(\mathrm{I\hskip-2.845276ptI}_{\alpha\mu\nu}(q,r,p)\) are determined by the SDE shown in panel \((A)\) of Fig. 3. Following the standard way of writing the SDE of a vertex, a particular gluon leg of \(\mathrm{I\hskip-2.845276ptI}_{\alpha\mu\nu}(q,r,p)\) is singled out (in this case the leg carrying momentum \(q\)), and is connected to the various multiparticle kernels through all elementary vertices of the theory.
The remaining two legs (with momenta \(r\) and \(p\)) are attached to the multiparticle kernels through fully-dressed vertices. Note that the full SDE is Bose-symmetric, albeit not manifestly so3; in order to expose its Bose symmetry, the detailed skeleton expansion of the kernels must be taken into account. Footnote 3: Within the \(n\)PI effective action formalism, the resulting SDE for the three-gluon vertex is manifestly Bose-symmetric with respect to all of its three legs [82; 83; 84; 85; 86; 87]. The seed of the Schwinger mechanism may be traced inside the four-particle kernel appearing in the top panel of Fig. 3. It is triggered by the emergence of a colored scalar excitation, formed as a bound state of a pair of gluons, as shown pictorially in the bottom panel of Fig. 3; note that the propagator of the composite scalar is given by \(i\delta^{ab}/q^{2}\). The resulting scalar-gluon-gluon interaction is described by the tensor denoted by \(B_{\mu\nu}(q,r,p)\) in the bottom panel of Fig. 3. The dynamics of \(B_{\mu\nu}(q,r,p)\) is determined by solving the linear homogeneous BSE, which arises as the limit \(q\to 0\) of the SDE for \(\mathrm{I\hskip-2.845276ptI}_{\alpha\mu\nu}(q,r,p)\) is taken. The nontrivial solution that one obtains corresponds to the "BS amplitude" for the formation of a massless scalar out of two gluons. As explained in detail in [13], the BS amplitude coincides, up to an overall scaling factor, with the displacement function \(\mathbb{C}(q)\). When the upper part of the four-gluon kernel (legs with \(k+q\) and \(-k\)) is connected to the external gluon (with momentum \(q\)) in order to form the three-gluon vertex, as shown in the bottom panel of Fig. 3, the part that contains the composite scalar gives rise to the transition amplitude \(I_{\alpha}(q)\), defined in Fig. 4. Lorentz invariance imposes that \(I_{\alpha}(q)=I(q)\,q_{\alpha}\), where \(I(q)\) is a scalar function, whose role and properties have been discussed in detail in [13; 24]; note, in particular, the exact relation \(m^{2}=g^{2}I^{2}(0)\). As a consequence, the massless poles are longitudinally coupled_[15, 17, 18, 19, 20], giving rise to tensorial structures of the general form \(q_{\alpha}/q^{2}\), \(r_{\mu}/r^{2}\), and \(p_{\nu}/p^{2}\). Therefore, the pole part, \(V_{\alpha\mu\nu}(q,r,p)\), satisfies the important relation \[P^{\alpha}_{\alpha^{\prime}}(q)P^{\mu}_{\mu^{\prime}}(r)P^{\nu}_{\nu^{\prime}}( p)V_{\alpha\mu\nu}(q,r,p)=0\,. \tag{3.1}\] Note, in addition, that when \(V_{\alpha\mu\nu}(q,r,p)\) is contracted by two transverse projectors, only the poles in the uncontracted channel survive, _e.g.4_, Footnote 4: In the language of Eq. (4.4), \(P^{\mu}_{\mu^{\prime}}(r)P^{\nu}_{\nu^{\prime}}(p)V_{\alpha\mu\nu}(q,r,p)= \frac{q_{\alpha}}{q^{2}}P^{\mu}_{\mu^{\prime}}(r)P^{\nu}_{\nu^{\prime}}(p)\,[ V_{1}g_{\mu\nu}+V_{2}p_{\mu}r_{\nu}]\). \[P^{\mu}_{\mu^{\prime}}(r)P^{\nu}_{\nu^{\prime}}(p)V_{\alpha\mu\nu}(q,r,p)=\text{ only poles in }q^{2}\,. \tag{3.2}\] The nonlinear nature of the SDE makes \(V_{\alpha\mu\nu}(q,r,p)\) contain mixed poles, of the type \(q_{\alpha}r_{\mu}/q^{2}r^{2}\), etc. In the Landau gauge, these additional terms do not affect the gluon mass, Figure 3: Top: skeleton expansion of the three-gluon vertex. The gray ellipses denote multi-particle kernels, which are one-particle irreducible with respect to the \(q\)-channel. 
Bottom: decomposition of the kernel of diagram (\(c_{1}\)) into a term with no poles in the \(q\) channel, denoted by (\(d_{1}\)) (blue ellipse), and a term that contains a massless bound state, with propagator \(i/q^{2}\), denoted by (\(d_{2}\)). For the purpose of clearer visualization of the various structures, the components of the four-gluon kernel are separated from the corresponding vertex graphs by dotted horizontal lines. which only depends on the residue of the single pole that coincides with the external momentum of the gluon SDE (\(q\) in the conventions of Fig. 1). Nonetheless, this type of pole is crucial for maintaining gauge invariance, by balancing properly the STI satisfied by \(\Gamma_{\alpha\mu\nu}(q,r,p)\). In order to appreciate how such terms arise, we make the following two key observations. (_i_) To begin with, the part of the vertex with no poles in the channel \(q\), contains poles in the other two (\(r\) and \(p\)). This is because the kernel associated with this part (blue ellipse in Fig. 3) contains fully dressed vertices, as indicated schematically in the top panel of Fig. 5, for the case of the "one-gluon exchange" approximation. Denoting by \(\mathrm{I\hskip-2.0ptI}_{\!\!A}:=\mathrm{I\hskip-2.0ptI}\Gamma(p,k+q,r-k)\) and \(\mathrm{I\hskip-2.0ptI}_{\!\!B}:=\mathrm{I\hskip-2.0ptI}\Gamma(r,k-r,-k)\), as indicated in Fig. 5, the contribution to the (\(d_{1}\)) of Fig. 3 may be schematically written as \[(d_{1})\sim\int_{\!\!k}\!\!\Gamma^{(0)}\,\Delta\,\mathrm{I\hskip-2.0ptI}_{\!\!A }\,\Delta\,\mathrm{I\hskip-2.0ptI}_{\!\!B}\,\Delta\sim\int_{\!\!k}\!\!\Gamma^{( 0)}\,\Delta\,(\mathrm{I\hskip-2.0ptI}_{\!\!A}+V_{\!\!A})\,\Delta\,(\mathrm{I \hskip-2.0ptI}_{\!\!B}+V_{\!\!B})\,\Delta\,, \tag{3.3}\] where \[\Gamma^{(0)}_{\alpha\mu\nu}(q,r,p)=(q-r)_{\nu}g_{\alpha\mu}+(r-p)_{\alpha}g_{ \mu\nu}+(p-q)_{\mu}g_{\nu\alpha}\,, \tag{3.4}\] is the tree-level expression of the three-gluon vertex. Then, using Eq. (2.5), and noting that the vertices \(V_{\!\!A}\) and \(V_{\!\!B}\) furnish poles only in the external momenta \(p\), and \(r\), respectively, since poles in all other directions are annihilated Figure 4: Definition of the scalar-gluon transition amplitude, \(I_{\alpha}(q)\). The green circles represent the so-called “proper vertex functions” or “bound state wave functions” [16]. In particular, the \(B_{\mu\nu}\), first introduced in Fig. 3, describes the effective interaction between a composite scalar and two gluons, while \(B\) and \(B_{\mu\nu\rho}\) describe the interaction of a composite scalar with a ghost-antighost pair and three gluons, respectively. by the Landau gauge propagators [see Eq. (3.2)], one obtains \[(d_{1})\,\sim\,\underbrace{\int_{k}\!\!\Gamma^{(0)}\!\!\Delta\,\Gamma_{\!\!A}\, \Delta\,\Gamma_{\!\!B}\,\Delta}_{\text{no pole}}+\,\underbrace{\int_{k}\!\! \Gamma^{(0)}\!\!\Delta\,V_{\!\!A}\,\Delta\,\Gamma_{\!\!B}\,\Delta}_{\text{ pole in }p^{2}}+\,\underbrace{\int_{k}\!\!\Gamma^{(0)}\!\!\Delta\, \Gamma_{\!\!A}\,\Delta\,V_{\!\!B}\,\Delta}_{\text{pole in }r^{2}}+\, \underbrace{\int_{k}\!\!\Gamma^{(0)}\!\!\Delta\,V_{\!\!A}\,\Delta\,V_{\!\!B}\, \Delta}_{\text{poles in }r^{2},\,p^{2}}. \tag{3.5}\] (_ii_) Furthermore, the same kernel appears in the part of the vertex describing the pole in the \(q\)-channel. Thus, as indicated in the bottom panel of Fig. 5, one obtains contributions of the type \[(d_{2})\sim\int\!\!V\,\Delta\,\mathrm{I\!I}_{\!\!A}\,\Delta\,\mathrm{I\!I}_{\! 
\!B}\,\Delta\sim\int\!\!V\,\Delta\,(\Gamma_{\!\!A}+V_{\!\!A})\,\Delta\,( \Gamma_{\!\!B}+V_{\!\!B})\,\Delta\,, \tag{3.6}\] giving rise to \[(d_{2})\,\sim\,\underbrace{\int\!\!V\Delta\,\Gamma_{\!\!A}\,\Delta\,\Gamma_{\! \!B}\,\Delta}_{\text{pole in }q^{2}}+\,\underbrace{\int\!\!V\!\Delta\,V_{\!\!A}\,\Delta\, \Gamma_{\!\!B}\,\Delta}_{\text{poles in }q^{2},\,r^{2}}+\,\underbrace{\int\!\!V\! \Delta\,V_{\!\!A}\,\Delta\,V_{\!\!B}\,\Delta}_{\text{poles in }q^{2},\,r^{2}}+\, \underbrace{\int\!\!V\!\Delta\,V_{\!\!A}\,\Delta\,V_{\!\!B}\,\Delta}_{\text{ poles in }q^{2},\,r^{2},\,p^{2}}\,. \tag{3.7}\] The main conclusion of the analysis presented in this section is summarized in Fig. 6, where Eq. (2.5) is represented pictorially. In particular, the component \(V_{\alpha\mu\nu}(q,r,p)\) is comprised by single poles, mixed double poles, and and mixed triple pole, depending on the number of gluon-scalar transition amplitudes (grey circles) contained in them. The three types of effective amplitudes, \(T_{\mu\nu}(q,r,p)\), \(T_{\mu}(q,r,p)\), and \(T(q,r,p)\) (white circles) are completely pole-free; see also Eq. (4.9). Figure 5: Top: one-gluon exchange form of the blue kernel introduced in Fig. 3, and its subsequent decomposition into pole-free part (yellow vertices) and terms containing poles in \(r\) and \(p\). Bottom: decomposition of the scalar-gluon-gluon interaction, \(B_{\mu\nu}(q,r,p)\), defined in of Fig. 3, into pole-free and pole terms. We end this section with two final comments. First, as we will demonstrate in Sec. VIII, the triple mixed pole is not genuine; its strength is reduced due to requirements imposed by the self-consistency of the vertex STI, or, at the diagrammatic level, by virtue of Eq. (8). Second, the validity of Eq. (11), which, in the bound-state formulation of the Schwinger mechanism arises naturally, guarantees that the lattice "observables" of the general form \[L(q,r,p)=\frac{\lambda_{\alpha\mu\nu}(q,r,p)P_{\alpha\alpha^{\prime}}(q)P_{ \mu\mu^{\prime}}(r)P_{\nu\nu^{\prime}}(p)\mathds{I}\Gamma^{\alpha^{\prime}\mu ^{\prime}\nu^{\prime}}(q,r,p)}{\lambda_{\alpha\mu\nu}(q,r,p)P_{\alpha\alpha^{ \prime}}(q)P_{\mu\mu^{\prime}}(r)P_{\nu\nu^{\prime}}(p)\Gamma_{0}^{\alpha^{ \prime}\mu^{\prime}\nu^{\prime}}(q,r,p)}\,, \tag{12}\] where the \(\lambda_{\alpha\mu\nu}(q,r,p)\) are appropriate projectors, are completely pole-free. Indeed, all lattice results obtained thus far show no trace of pole divergences [49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 88]. ## IV A purely longitudinal basis In this section we introduce an appropriate basis for describing the special component \(V^{\alpha\mu\nu}(q,r,p)\), which, due to the condition Eq. (11), is strictly longitudinal. As is well known, the most general Lorentz decomposition of the three-gluon vertex is comprised of 14 independent tensors. However, the strict longitudinality condition of Eq. (11) imposes 4 constraints on the form factors of \(V^{\alpha\mu\nu}(q,r,p)\). As a result, \(V^{\alpha\mu\nu}(q,r,p)\) can be decomposed in a basis comprised of 10 tensors, denoted by \(v_{i}^{\alpha\mu\nu}(q,r,p)\), accompanied Figure 6: The general structure of the three-gluon vertex after the activation of the Schwinger mechanism. Note, in particular, that the term \(V_{\alpha\mu\nu}(q,r,p)\) contains single poles, such as \(q_{\alpha}/q^{2}\), as well as mixed poles of the forms \(q_{\alpha}r_{\mu}/q^{2}r^{2}\) and \(q_{\alpha}r_{\mu}p_{\nu}/q^{2}r^{2}p^{2}\). 
The term “(perms)” denotes the permutations of the external legs that lead to a Bose-symmetric \(V_{\alpha\mu\nu}(q,r,p)\). by the associated form factors, denoted by \(\mathbb{V}_{i}(q,r,p)\), _i.e._, \[V^{\alpha\mu\nu}(q,r,p)= \sum_{i=1}^{10}\mathbb{V}_{i}(q,r,p)\,v_{i}^{\alpha\mu\nu}(q,r,p)\,, \tag{4.1}\] where \[\begin{split} v_{1}^{\alpha\mu\nu}&=q^{\alpha}g^{ \mu\nu}\,,\qquad\qquad\quad v_{2}^{\alpha\mu\nu}=q^{\alpha}p^{\mu}r^{\nu}\,, \qquad\qquad\quad v_{3}^{\alpha\mu\nu}=r^{\mu}g^{\nu\alpha}\,,\\ v_{4}^{\alpha\mu\nu}&=r^{\mu}q^{\nu}p^{\alpha}\,, \qquad\qquad\quad v_{5}^{\alpha\mu\nu}=p^{\nu}g^{\alpha\mu}\,,\qquad\qquad \quad v_{6}^{\alpha\mu\nu}=p^{\nu}r^{\alpha}q^{\mu}\,,\\ v_{7}^{\alpha\mu\nu}&=q^{\alpha}r^{\mu}(q-r)^{\nu} \,,\qquad\quad v_{8}^{\alpha\mu\nu}=r^{\mu}p^{\nu}(r-p)^{\alpha}\,,\qquad\quad v _{9}^{\alpha\mu\nu}=p^{\nu}q^{\alpha}(p-q)^{\mu}\,,\\ v_{10}^{\alpha\mu\nu}&=q^{\alpha}r^{\mu}p^{\nu}\,. \end{split} \tag{4.2}\] Note that these tensors form three distinct groups, depending on the number of momenta to which they are longitudinal: the \(v_{i}^{\alpha\mu\nu}\) with \(i=1,\ldots,6\) are longitudinal to a single momentum, those with \(i=7,\ldots,9\) to two, while \(v_{10}^{\alpha\mu\nu}\) is longitudinal to all three momenta. Now, following the bound state interpretation, each form factor \(\mathbb{V}_{i}\) can have massless poles in each of the channels to which the corresponding tensor, \(v_{i}^{\alpha\mu\nu}\), is longitudinal. In particular, exhibiting the poles explicitly, we have \[\mathbb{V}_{1} =\frac{V_{1}}{q^{2}}\,;\qquad\mathbb{V}_{3}=\frac{V_{3}}{r^{2}} \,;\qquad\mathbb{V}_{5}=\frac{V_{5}}{p^{2}}\,;\qquad\mathbb{V}_{7}=\frac{V_{ 7}}{q^{2}r^{2}}\,;\qquad\mathbb{V}_{9}=\frac{V_{9}}{p^{2}q^{2}}\,;\] \[\mathbb{V}_{2} =\frac{V_{2}}{q^{2}}\,;\qquad\mathbb{V}_{4}=\frac{V_{4}}{r^{2}} \,;\qquad\mathbb{V}_{6}=\frac{V_{6}}{p^{2}}\,;\qquad\mathbb{V}_{8}=\frac{V_{ 8}}{r^{2}p^{2}}\,;\qquad\mathbb{V}_{10}=\frac{V_{10}}{q^{2}r^{2}p^{2}}\,, \tag{4.3}\] where the \(V_{i}\equiv V_{i}(q,r,p)\) are regular functions, which, in the appropriate limits, capture the corresponding pole residues. With the above definitions, \(V_{\alpha\mu\nu}(q,r,p)\) can be recast in the form \[V_{\alpha\mu\nu}(q,r,p) =\frac{q_{\alpha}}{q^{2}}\,(g_{\mu\nu}V_{1}+p_{\mu}r_{\nu}V_{2})\ +\ \frac{r_{\mu}}{r^{2}}\,(g_{\alpha\nu}V_{3}+q_{\nu}p_{\alpha}V_{4})\ +\ \frac{p_{\nu}}{p^{2}}\,(g_{\alpha\mu}V_{5}+r_{\alpha}q_{\mu}V_{6})\] \[+\frac{q_{\alpha}r_{\mu}}{q^{2}r^{2}}(q-r)_{\nu}V_{7}\ +\ \frac{r_{\mu}p_{\nu}}{r^{2}p^{2}}(r-p)_{\alpha}V_{8}\,+\frac{p_{\nu}q_{ \alpha}}{p^{2}q^{2}}(p-q)_{\mu}V_{9}\ +\ \frac{q_{\alpha}r_{\mu}p_{\nu}}{q^{2}r^{2}p^{2}}\,V_{10}\,. \tag{4.4}\] It is clear from the diagrammatic representation of Fig. 6, that \(V_{\alpha\mu\nu}(q,r,p)\) is Bose-symmetric. Consequently, and given that the color factor \(f^{abc}\) has been factored out, we have that \[V_{\alpha\mu\nu}(q,r,p)=-V_{\mu\alpha\nu}(r,q,p)=-V_{\nu\mu\alpha}(p,r,q)\,. \tag{4.5}\] Then, from Eqs. (4.4) and (4.5) follows that the form factors \(V_{i}(q,r,p)\) satisfy the following symmetry relations \[V_{1,2}(q,r,p) = -V_{1,2}(q,p,r)\,,\qquad\quad V_{7}(q,r,p)\,=V_{7}(r,q,p)\,,\] \[V_{3,4}(q,r,p) = -V_{3,4}(p,r,q)\,,\qquad\quad V_{8}(q,r,p)\,=V_{8}(q,p,r)\,,\] \[V_{5,6}(q,r,p) = -V_{5,6}(r,q,p)\,,\qquad\quad V_{9}(q,r,p)\,=V_{9}(p,r,q)\,, \tag{4.6}\] with \(V_{10}(q,r,p)\) being totally anti-symmetric. 
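As a quick cross-check of the decomposition just introduced (an illustration added here, not part of the original analysis), the strict longitudinality condition of Eq. (3.1) can be verified numerically: every term of Eq. (4.4) carries at least one overall factor of \(q^{\alpha}\), \(r^{\mu}\), or \(p^{\nu}\), and is therefore annihilated by the corresponding transverse projector. In the short Python sketch below the momenta are random and off-shell, and the numbers assigned to the form factors \(V_{i}\) are arbitrary placeholders, since only the tensor structure matters for Eq. (3.1).

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])          # Minkowski metric, signature (+,-,-,-)
rng = np.random.default_rng(0)

def sq(k):                                     # k^2 = k_mu k^mu
    return k @ g @ k

def lower(k):                                  # k_mu = g_{mu nu} k^nu
    return g @ k

def P(k):                                      # transverse projector P_{mu nu}(k)
    kl = lower(k)
    return g - np.outer(kl, kl) / sq(k)

q = rng.normal(size=4)                         # random off-shell momenta with q + r + p = 0
r = rng.normal(size=4)
p = -q - r
q2, r2, p2 = sq(q), sq(r), sq(p)

V = rng.normal(size=11)                        # arbitrary placeholder numbers for V_1 ... V_10

# V^{alpha mu nu}(q, r, p) of Eq. (4.4), written with all Lorentz indices contravariant
V_amn = (
      np.einsum('a,mn->amn', q / q2, V[1] * g + V[2] * np.outer(p, r))
    + np.einsum('m,an->amn', r / r2, V[3] * g + V[4] * np.outer(p, q))
    + np.einsum('n,am->amn', p / p2, V[5] * g + V[6] * np.outer(r, q))
    + V[7]  * np.einsum('a,m,n->amn', q, r, q - r) / (q2 * r2)
    + V[8]  * np.einsum('m,n,a->amn', r, p, r - p) / (r2 * p2)
    + V[9]  * np.einsum('n,a,m->amn', p, q, p - q) / (p2 * q2)
    + V[10] * np.einsum('a,m,n->amn', q, r, p) / (q2 * r2 * p2)
)

# Eq. (3.1): the triple transverse projection annihilates the pole part
projected = np.einsum('Aa,Mm,Nn,amn->AMN', P(q), P(r), P(p), V_amn)
print(np.allclose(projected, 0.0))             # True: the basis is strictly longitudinal
```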
In addition, some form factors are related to each other by the cyclic permutations of their arguments, namely \[V_{3,4}(q,r,p) = V_{1,2}(r,p,q)\,,\qquad\quad V_{8}(q,r,p)\,=V_{7}(r,p,q)\,,\] \[V_{5,6}(q,r,p) = V_{1,2}(p,q,r)\,,\qquad\quad V_{9}(q,r,p)\,=V_{7}(p,q,r)\,. \tag{4.7}\] In the limit \(q\to 0\), we obtain from Eq. (4.6) the relations \[V_{1}(0,r,-r) = V_{3}(r,0,-r)=V_{5}(r,-r,0)=0\,,\] \[V_{2}(0,r,-r) = V_{4}(r,0,-r)=V_{6}(r,-r,0)=0\,,\] \[V_{10}(0,r,-r) = V_{10}(r,0,-r)=V_{10}(r,-r,0)=0\,. \tag{4.8}\] The relations derived above will be employed in the analysis presented in the following sections. Finally, it is instructive to make contact between the form of the vertex \(V_{\alpha\mu\nu}(q,p,r)\) given in Eq. (4.4) and the pictorial representation of the same vertex, depicted in Fig. 6. In particular, the effective amplitudes \(T_{\mu\nu}\), \(T_{\mu}\), and \(T\) may be expressed in terms of the form factors \(V_{i}\) through the direct matching of the various tensorial structures, namely \[I(q)T_{\mu\nu}(q,r,p)= -i\left[g_{\mu\nu}V_{1}(q,r,p)+p_{\mu}r_{\nu}V_{2}(q,r,p)\right]\,,\] \[I(q)I(r)T_{\nu}(q,r,p)= (r-q)_{\nu}V_{7}(q,r,p)\,,\] \[I(q)I(r)I(p)T(q,r,p)= iV_{10}(q,r,p)\,. \tag{4.9}\] We conclude this discussion with some remarks regarding the basis given by Eq. (4.2). The 14 tensors required for the full description of \(\mathrm{I\kern-1.8ptI}^{\alpha\mu\nu}(q,r,p)\) may be obtained by supplementing the \(v_{i}^{\alpha\mu\nu}\) of Eq. (4.2) with 4 totally transverse tensors, say \(\overline{t}_{i}^{\,\alpha\mu\nu}\), such that \(q_{\alpha}\overline{t}_{i}^{\alpha\mu\nu}=r_{\mu}\overline{t}_{i}^{\alpha\mu \nu}=p_{\nu}\overline{t}_{i}^{\alpha\mu\nu}=0\,.\) For example, one could use the \(t_{i}^{\alpha\mu\nu}\) given in Eq. (3.6) of [90], corresponding to the transverse part of the Ball-Chiu (BC) basis [73]. However, the resulting basis, \(v_{i}^{\alpha\mu\nu}\cup t_{j}^{\alpha\mu\nu}\), introduces spurious divergences in certain form factors of the pole-free part, thus being unsuitable for many applications. Furthermore, as explained in [90], the BC basis is inconvenient for the description of \(V^{\alpha\mu\nu}(q,r,p)\), because the 10 non-transverse tensors (the \(\ell_{i}^{\alpha\mu\nu}\) in Eq. (3.4) of [90]) are not longitudinal5, in the sense that \(P_{\alpha^{\prime}\alpha}(q)P_{\mu^{\prime}\mu}(r)P_{\nu^{\prime}\nu}(p)\ell_{i }^{\alpha\mu\nu}\neq 0\). As a result, in the BC basis, the transverse components of \(V^{\alpha\mu\nu}(q,r,p)\) acquire poles as well, which combine in complicated ways with the non-transverse ones to yield a strictly longitudinally coupled \(V^{\alpha\mu\nu}(q,r,p)\). The basis of [91] appears to suffer from the same shortcoming. Thus, it is preferable to decompose the pole-free and pole parts in different bases, such as the BC for \(\Gamma^{\alpha\mu\nu}(q,r,p)\) and Eq. (4.2) for \(V^{\alpha\mu\nu}(q,r,p)\). Footnote 5: In [73], the \(\ell_{i}^{\alpha\mu\nu}\) span the part of the vertex that saturates the STI, which was denominated “longitudinal”, in contradistinction to the “transverse” (automatically conserved) component. The confusion caused by the fact that the \(\ell_{i}^{\alpha\mu\nu}\) are not longitudinal, in the sense explained above, may be avoided by using the term “non-transverse” instead. ## V Mixed poles from the Slavnov-Taylor identity In this section, we turn our attention to the STI satisfied by the full vertex \({\rm I\hskip-2.845276ptI}_{\alpha\mu\nu}(q,r,p)\). 
As we will show in detail, when the gluon propagator is finite at the origin (massive), the STI imposes an extended pole content on the three-gluon vertex. Specifically, the only way to achieve self-consistency is by introducing mixed poles in \(V_{\alpha\mu\nu}(q,r,p)\); the form factors associated with these poles must satisfy strict constraints, which preclude their vanishing. We emphasize that the central assumption underlying this analysis is that both the BRST symmetry6 and the associated STIs remain intact when the gluon acquires a mass through the action of the Schwinger mechanism. This assumption is strongly corroborated by the STI-driven extraction of the \(\mathbb{C}(q)\) using lattice inputs [63; 64; 14; 65]; for a variety of related discussions and approaches, see [94; 95; 96; 97; 98; 27; 99; 98], and references therein. Footnote 6: We employ the _standard_ BRST symmetry of QCD, to be distinguished from the modified BRST symmetry of the refined Gribov-Zwanziger action [92; 93]. 15 ### Abelian STI with a hard mass To fix the ideas, let us consider first the simplified situation where the three-gluon vertex satisfies the Abelian STI given by \[q^{\alpha}{\rm I\hskip-2.845276ptI}_{\alpha\mu\nu}(q,r,p)=P_{\mu\nu}(p)\Delta^ {-1}(p)-P_{\mu\nu}(r)\Delta^{-1}(r)\,. \tag{5.1}\] Moreover, let the gluon propagator be given by the tree-level form, _i.e._, \(\Delta^{-1}(q)\to q^{2}-m^{2}\), corresponding to a simple massive propagator in Minkowski space. Then, after substitution, the STI becomes \[q^{\alpha}\mathrm{I\hskip-2.845276ptI}_{\alpha\mu\nu}(q,r,p)=g_{\mu\nu}(p^{2}-r^ {2})+r_{\mu}r_{\nu}-p_{\mu}p_{\nu}+m^{2}\left(\frac{p_{\mu}p_{\nu}}{p^{2}}- \frac{r_{\mu}r_{\nu}}{r^{2}}\right)\,. \tag{5.2}\] Evidently, the form factors associated with the tensor structures \(p_{\mu}p_{\nu}\) and \(-r_{\mu}r_{\nu}\) on the r.h.s. of Eq. (5.2) contain poles in \(p^{2}\) and \(r^{2}\), respectively, whose residue is \(m^{2}\). In fact, these tensor structures are longitudinal to the uncontracted legs of the vertex, _i.e._, those carrying momenta \(r\) and \(p\). Hence, the self-consistency of Eq. (5.2) requires that \(\mathrm{I\hskip-2.845276ptI}_{\alpha\mu\nu}(q,r,p)\) should contain longitudinally coupled poles of the form \(r_{\mu}/r^{2}\) and \(p_{\nu}/p^{2}\). Evidently, from the cyclic permutations of Eq. (5.2) (equivalently, from Bose symmetry), \(\mathrm{I\hskip-2.845276ptI}_{\alpha\mu\nu}(q,r,p)\) must also contain massless poles longitudinally coupled to \(q_{\alpha}\), _i.e._, of the form \(q_{\alpha}/q^{2}\). Thus, the STI implies that \(\mathrm{I\hskip-2.845276ptI}_{\alpha\mu\nu}(q,r,p)\) must assume the special form of Eq. (2.5), with a nonzero pole part, \(V_{\alpha\mu\nu}(q,r,p)\). At this point, let us assume that the pole-free part of the vertex, \(\Gamma_{\alpha\mu\nu}(q,r,p)\), reduces to the tree-level form given in Eq. (3.4), _i.e._, \(\Gamma_{\alpha\mu\nu}(q,r,p)\to\Gamma_{\alpha\mu\nu}^{(0)}(q,r,p)\), such that, from Eq. (2.5), \(\mathrm{I\hskip-2.845276ptI}_{\alpha\mu\nu}(q,r,p)=\Gamma_{\alpha\mu\nu}^{(0)} (q,r,p)+V_{\alpha\mu\nu}(q,r,p)\). Then, Eq. (5.2) can be recast into an STI for \(V_{\alpha\mu\nu}(q,r,p)\), namely \[q^{\alpha}V_{\alpha\mu\nu}(q,r,p)=m^{2}\left(\frac{p_{\mu}p_{\nu}}{p^{2}}- \frac{r_{\mu}r_{\nu}}{r^{2}}\right)\,, \tag{5.3}\] together with its cyclic permutations. Next, assume that \(V_{\alpha\mu\nu}(q,r,p)\) satisfies the longitudinality condition of Eq. (3.1), as required by both the Schwinger mechanism and lattice QCD. 
Expanding out the transverse projectors in that equation, and using Eq. (5.3) and its permutations, one straightforwardly obtains \[V_{\alpha\mu\nu}(q,r,p)=\frac{m^{2}}{2}\left[\frac{q_{\alpha}r_{\mu}}{q^{2}r^ {2}}(q-r)_{\nu}+\frac{r_{\mu}p_{\nu}}{r^{2}p^{2}}(r-p)_{\alpha}+\frac{p_{\nu} q_{\alpha}}{p^{2}q^{2}}(p-q)_{\mu}\right]\,, \tag{5.4}\] which shows that \(V_{\alpha\mu\nu}(q,r,p)\) must contain mixed double poles, with residues proportional to the gluon mass. Note that this result amounts to the constant mass limit of the _Ansatz_ given in [24], constructed therein for a momentum-dependent mass, \(m^{2}(q)\). Furthermore, the combination \(\Gamma_{\alpha\mu\nu}^{(0)}(q,r,p)+V_{\alpha\mu\nu}(q,r,p)\), with \(V_{\alpha\mu\nu}(q,r,p)\) given by Eq. (5.4) reproduces the effective three-gluon vertex of Cornwall [99; 100]. Lastly, comparing Eqs. (4.4) and (5.4) we read off the expressions for the form factors \[V_{7}(q,r,p)=V_{8}(q,r,p)=V_{9}(q,r,p)=\frac{m^{2}}{2}\,, \tag{5.5}\] with all other \(V_{i}\) vanishing in this simple case. If the form of the pole-free part is not known, as is generally the case, the complete momentum dependence of \(V_{\alpha\mu\nu}(q,r,p)\) cannot be determined. Nevertheless, the values of the form factors \(V_{i}(q,r,p)\) of Eq. (4.4) at zero momenta can be obtained unequivocally from the STI. In particular, in the toy model of Eq. (5.2) \[V_{9}(q)=m^{2}/2\,, \tag{5.6}\] independently of the exact form of \(\Gamma_{\alpha\mu\nu}\), where we use Eq. (4.6) and define \[V_{9}(q):=V_{9}(q,-q,0)=V_{9}(0,q,-q)\,. \tag{5.7}\] Evidently, the same result holds for \(V_{8}(q,-q,0)=V_{8}(q,-q,0)=V_{7}(q,0,-q)=V_{7}(0,q,-q)\). ### General case: mixed poles and the residue function Having fixed the general ideas, we now turn to the full form of the STI, and demonstrate how to obtain from it expressions for the \(V_{i}\) when one of the momenta vanishes. The STI is given by [72] \[q^{\alpha}\mathds{I}\Gamma_{\alpha\mu\nu}(q,r,p)=F(q)\left[\Delta^{-1}(p)P^{ \alpha}_{\nu}(p)H_{\alpha\mu}(p,q,r)-\Delta^{-1}(r)P^{\alpha}_{\mu}(r)H_{\alpha \nu}(r,q,p)\right]\,; \tag{5.8}\] the cases \(r^{\mu}\mathds{I}\Gamma_{\alpha\mu\nu}(q,r,p)\) and \(p^{\nu}\mathds{I}\Gamma_{\alpha\mu\nu}(q,r,p)\) are obtained from Eq. (5.8) through permutations of the appropriate momenta and indices. In the above equation, \(F(q)\) is the ghost dressing function, defined in terms of the ghost propagator \(D^{ab}(q)=i\delta^{ab}D(q)\) by \(D(q)=F(q)/q^{2}\), while \(H_{\mu\nu}(r,q,p)\) represents the ghost-gluon scattering kernel, with \(r\), \(q\), \(p\) denoting the momenta of the anti-ghost, ghost, and gluon, respectively. The most general Lorentz structure of \(H_{\mu\nu}(r,q,p)\) is given by [101] \[H_{\mu\nu}(r,q,p)=g_{\nu\mu}A_{1}+r_{\mu}r_{\nu}A_{2}+p_{\mu}p_{\nu}\mathbb{A} _{3}+p_{\mu}r_{\nu}A_{4}+r_{\mu}p_{\nu}\mathbb{A}_{5}\,, \tag{5.9}\] where \(A_{i}\equiv A_{i}(r,q,p)\) and \(\mathbb{A}_{i}\equiv\mathbb{A}_{i}(r,q,p)\); the use of distinct notation for the third and fifth form factors will become clear in what follows. At tree level, \(A_{1}^{(0)}=1\), while all other form factors vanish. Note that since in Eq. (5.8) the \(H_{\mu\nu}(r,q,p)\) is contracted by transverse projectors, only the form factors \(A_{1}\), \(A_{4}\) and \(\mathbb{A}_{3}\) contribute to the STI. At this point, it is crucial to recognize that the Schwinger mechanism induces poles not only to the vertex \(\mathds{\Gamma}_{\alpha\mu\nu}(q,r,p)\) but also to the ghost-gluon kernel \(H_{\mu\nu}(r,q,p)\). 
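Before turning to the pole content of \(H_{\mu\nu}(r,q,p)\), the simplified result of the previous subsection can be checked explicitly. The following sketch (an illustrative addition, using an arbitrary numerical value for \(m^{2}\) and random off-shell momenta) verifies that the pole part of Eq. (5.4) satisfies both the toy STI of Eq. (5.3) and the longitudinality condition of Eq. (3.1).

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])          # Minkowski metric
rng = np.random.default_rng(1)

sq    = lambda k: k @ g @ k                    # k^2
lower = lambda k: g @ k                        # k_mu

def P(k):                                      # transverse projector P_{mu nu}(k)
    kl = lower(k)
    return g - np.outer(kl, kl) / sq(k)

m2 = 0.12                                      # arbitrary toy value for the gluon mass squared
q = rng.normal(size=4)
r = rng.normal(size=4)
p = -q - r

# pole part of Eq. (5.4): purely mixed double poles with residues proportional to m^2/2
V = (m2 / 2) * (
      np.einsum('a,m,n->amn', q, r, q - r) / (sq(q) * sq(r))
    + np.einsum('m,n,a->amn', r, p, r - p) / (sq(r) * sq(p))
    + np.einsum('n,a,m->amn', p, q, p - q) / (sq(p) * sq(q))
)

# Abelian STI, Eq. (5.3): q_alpha V^{alpha mu nu} = m^2 ( p^mu p^nu / p^2 - r^mu r^nu / r^2 )
lhs = np.einsum('a,amn->mn', lower(q), V)
rhs = m2 * (np.outer(p, p) / sq(p) - np.outer(r, r) / sq(r))
print(np.allclose(lhs, rhs))                   # True

# longitudinality condition, Eq. (3.1): the triple transverse projection annihilates V
print(np.allclose(np.einsum('Aa,Mm,Nn,amn->AMN', P(q), P(r), P(p), V), 0.0))   # True
```

With this check in hand, we return to the poles that the Schwinger mechanism induces in the ghost-gluon kernel \(H_{\mu\nu}(r,q,p)\).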
In particular, the poles are longitudinally coupled, carrying the momentum and Lorentz index of the incoming gluon leg. Therefore, they are contained in the form factors \(\mathbb{A}_{3,5}(r,q,p)\), which assume the general form \[\mathbb{A}_{3,5}(r,q,p)=\frac{A_{3,5}^{\mathbf{p}}(r,q,p)}{p^{2}}+A_{3,5}(r,q, p)\,, \tag{5.10}\] where \(A_{3,5}(r,q,p)\) denotes the pole-free part. To determine the residues of the poles required by the STI, we begin by decomposing both sides of Eq. (5.8) in the same basis and equating coefficients of independent tensor structures. Since the tensors appearing in the STI have two free Lorentz indices and two independent momenta, they can all be decomposed in the same basis employed for \(H_{\mu\nu}(r,q,p)\) in Eq. (5.9). In particular, the contracted pole-free part may be written as \[q^{\alpha}\Gamma_{\alpha\mu\nu}(q,r,p)=S_{1}g_{\mu\nu}+S_{2}r_{\mu}r_{\nu}+S_ {3}p_{\mu}p_{\nu}+S_{4}p_{\mu}r_{\nu}+S_{5}r_{\mu}p_{\nu}\,, \tag{5.11}\] with \(S_{i}\equiv S_{i}(q,r,p)\). At tree level, \(S_{1}^{(0)}=p^{2}-r^{2}\), \(S_{2}^{(0)}=1\), \(S_{3}^{(0)}=-1\), and \(S_{4}^{(0)}=S_{5}^{(0)}=0\). Note that, from Bose symmetry, \(S_{1}\), \(S_{4}\) and \(S_{5}\) must be anti-symmetric under the exchange of \(r\leftrightarrow p\), such that \[S_{1}(0,r,-r)=S_{4}(0,r,-r)=S_{5}(0,r,-r)=0\,. \tag{5.12}\] Then, since \(\Gamma_{\alpha\mu\nu}(q,r,p)\) is pole-free, its contraction with \(q^{\alpha}\) vanishes when \(q=0\), such that Eqs. (5.11) and (5.12) imply also that \[S_{2}(0,r,-r)=-S_{3}(0,r,-r)\,. \tag{5.13}\] As for the pole part, after contracting Eq. (4.4) with \(q^{\alpha}\), we obtain \[q^{\alpha}V_{\alpha\mu\nu}(q,r,p)=V_{1}g_{\mu\nu}+\frac{\overline{V}_{2}}{r^{ 2}}r_{\mu}r_{\nu}+\frac{\overline{V}_{3}}{p^{2}}p_{\mu}p_{\nu}+V_{2}p_{\mu}r_{ \nu}+\frac{\overline{V}_{5}}{r^{2}p^{2}}r_{\mu}p_{\nu}\,, \tag{5.14}\] with the \(\overline{V}_{i}\equiv\overline{V}_{i}(q,r,p)\) given by \[\overline{V}_{2}= -V_{3}-(p\cdot q)V_{4}-2V_{7}\,,\qquad\overline{V}_{3}=-V_{5}-(r \cdot q)V_{6}+2V_{9}\,,\] \[\overline{V}_{5}= V_{10}+(p^{2}-r^{2})V_{8}-p^{2}\left[V_{3}+(q\cdot p)V_{4}+V_{7} \right]-r^{2}\left[V_{5}+(q\cdot r)V_{6}-V_{9}\right]\,. \tag{5.15}\] As in the previous subsection, we now isolate the tensor structures \(r_{\mu}r_{\nu}\) and \(p_{\mu}p_{\nu}\) on both sides of Eq. (5.8); equating their coefficients yields \[S_{2}= \,\frac{1}{r^{2}}\left\{F(q)\left\{\Delta^{-1}(r)\left[A_{1}(r,q,p) +(p\cdot r)A_{4}(r,q,p)\right]+r^{2}\Delta^{-1}(p)\mathbb{A}_{3}(p,q,r)\right\} \!-\!\overline{V}_{2}\right\}\,,\] \[S_{3}= \,-\frac{1}{p^{2}}\left\{F(q)\left\{p^{2}\Delta^{-1}(r)\mathbb{A} _{3}(r,q,p)+\Delta^{-1}(p)\left[A_{1}(p,q,r)+(p\cdot r)A_{4}(p,q,r)\right] \right\}+\overline{V}_{3}\right\}\,. \tag{5.16}\] Since \(S_{3}\) is pole-free by definition, in the limit \(p\to 0\), the term \(1/p^{2}\) must be canceled by the content of the curly bracket in Eq. (5.16). Thus, what appears to be a pole in \(p^{2}\) must be converted into an evitable singularity. The condition for this to occur is given by \[V_{9}(q)=\frac{F(q)}{2}\left[m^{2}A_{1}(q)-\Delta^{-1}(q)A_{3}^{\mathsf{P}}(q) \right]\,, \tag{5.17}\] where \(m^{2}=-\Delta^{-1}(0)\), in Minkowski space, and we used Eq. (5.10) and defined \[A_{1}(q):=A_{1}(0,q,-q)\,,\qquad A_{3}^{\mathsf{P}}(q):=A_{3}^{\mathsf{P}}(q,- q,0)\,. \tag{5.18}\] Note that setting the ghost-sector Green's functions in Eq. (5.17) to their tree level expressions in Eq. (5.17), _i.e._, \(F\to 1\), \(A_{1}\to 1\) and \(\mathbb{A}_{3}\to 0\), leads to Eq. (5.6). 
Similarly, the requirement that the \(S_{2}\) of Eq. (5.16) be pole-free at \(r=0\) yields a relation identical to Eq. (5.17), but with \(V_{9}(q)\) substituted by \(V_{7}(0,q,-q)\). This last result follows also from Bose symmetry, according to Eq. (4.6). For the same reason, Eq. (5.17) also holds with the left-hand side substituted by any one of \(V_{7}(0,q,-q)\), \(V_{8}(q,-q,0)\), and \(V_{8}(q,0,-q)\). Returning to Eq. (5.17), it is clear that the only way for \(V_{9}\) to vanish identically for all \(q\) (_i.e._, for \(V_{\alpha\mu\nu}\) not to contain the associated mixed pole) is for the r.h.s. to also vanish; however, at least when \(q=0\), this cannot happen in the Landau gauge. Indeed, as was demonstrated in [102], in this gauge, \[A_{1}(0)=\widetilde{Z}_{1}\,,\qquad A_{3}^{\mathsf{P}}(0)=0\,, \tag{5.19}\] where \(\widetilde{Z}_{1}\) is the renormalization constant of the ghost-gluon vertex, which is _finite_ by virtue of Taylor's theorem [70]. Hence, at the origin, Eq. (5.17) reduces to \[V_{9}(0)=\frac{1}{2}\widetilde{Z}_{1}F(0)m^{2}\,. \tag{5.20}\] Consequently, just as in the toy model of Eq. (5.2), the self-consistency of the full STI in the presence of an infrared finite gluon propagator requires the appearance of a pole associated with the form factor \(V_{9}\). The function \(V_{9}(q)\), associated with the mixed pole \(1/q^{2}p^{2}\) will be particularly important in the analysis that follows. To understand its nature, consider a function of two variables, \(x\) and \(y\) of the form \(f(x,y)=g(x,y)/xy\), with \(g(x,0)\neq 0\) and \(g(0,y)\neq 0\), such that \(f(x,y)\) has simple poles as \(x\to 0\) and \(y\to 0\). In particular, if we take \(y\to 0\), the residue of this pole is a function of \(x\), given by \(r(x)=g(x,0)/x\). In fact, if we subsequently take \(x\to 0\), \(g(0,0)\neq 0\) is the residue of the function \(r(x)\). Evidently, in this analogy, \(g(x,0)\) plays the role of \(V_{9}(q)\); in what follows, we will refer to \(V_{9}(q)\) as the _"residue function"_. We conclude this discussion by pointing out that the tensor structures \(g_{\mu\nu}\), \(r_{\nu}p_{\mu}\) and \(r_{\mu}p_{\nu}\) of Eq. (5.8), associated with the pole-free form factors \(S_{1,4,5}\), lead to constraints on the behavior of the remaining form factors, \(V_{i}\), which are absent from the simplified result of Eq. (5.4). These additional relations, however, constrain certain derivatives of the \(V_{i}\), rather than the values of the form factors themselves. Indeed, the constraint obtained from \(g_{\mu\nu}\) is equivalent to the so-called "Ward Identity displacement", which has been analyzed in detail in recent works [63; 13; 64; 14]. On the other hand, the \(r_{\mu}p_{\nu}\) structure leads to a constraint on the form factor \(V_{10}\), which amounts to a drastic reduction of the triple pole associated with it; the detailed demonstration of this point is given in Section VIII. ## VI Residue function from the Schwinger-Dyson equation The special relation given in Eq. (5.17) implies that, in the presence of an infrared finite gluon propagator, the appearance of mixed poles in the three-gluon vertex is an inevitable requirement of the STI. In this section we explore this same relation from the point of view of the SDE satisfied by the three-gluon vertex. Specifically, we will show that, when the dynamical structures imposed by the activation of the Schwinger mechanism are duly taken into account, a truncated form of the vertex SDE leads to an approximate version of Eq. (5.17). 
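Before entering that derivation, the “residue function” terminology may be made concrete with the two-variable analogy described above. The \(g(x,y)\) used below is an arbitrary example chosen for illustration only (it is not related to any quantity computed in the paper); the point is simply that removing the \(y\)-pole leaves a function of \(x\), the analogue of \(V_{9}(q)\), whose own residue at \(x=0\) is \(g(0,0)\), the analogue of \(V_{9}(0)\).

```python
import sympy as sp

x, y = sp.symbols('x y')

# arbitrary example with g(x,0) != 0 and g(0,y) != 0, mimicking the numerator of the 1/(q^2 p^2) pole
g = 1 + x + y + x * y
f = g / (x * y)                        # simple poles as x -> 0 and as y -> 0

# residue of the y-pole: still a function of x, the analogue of the residue function V_9(q)
r_x = sp.simplify(sp.limit(y * f, y, 0))
print(r_x)                             # (x + 1)/x, i.e. g(x, 0)/x

# the remaining limit extracts g(0, 0), the analogue of V_9(0)
print(sp.limit(x * r_x, x, 0))         # 1, i.e. g(0, 0)
```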
### General considerations In order to obtain from the vertex SDE the relation satisfied by the residue function \(V_{9}(q)\) in Eq. (5.17), we follow the same procedure employed in its derivation from the STI: (_i_) we begin by contracting the vertex SDE by \(q_{\alpha}\); (_ii_) then, we isolate the tensor structure \(p_{\mu}p_{\nu}\) from the result, which yields \(\overline{V}_{3}/p^{2}+S_{3}\) [recall Eqs. (5.11) and (5.14)]; (_iii_) finally, we multiply by \(p^{2}\) and take the limit \(p=0\), where \(\overline{V}_{3}(q,r,p)\to 2V_{9}(q)\) [see Eq. (5.15)]. To streamline the application of this procedure, it is convenient to set up the vertex SDE with tree-level vertices in the leg carrying momentum \(p\), as in Fig. 7, rather than the version shown in Fig. 1. With this choice, the contraction of the SDE by \(q^{\alpha}\) triggers inside the diagrams the STIs for the fully dressed vertices, with an incoming \(q\)-leg. These STIs, in turn, simplify the identification of certain pole contributions stemming from the four-gluon vertex, as we will see shortly. We emphasize that the SDE given in Fig. 7 is truncated, by keeping only "one-loop dressed diagrams" containing gluons and the massless composite excitations associated with the Schwinger mechanism. Note, in particular, the absence of contributions originating from the ghost loop denoted by \((c_{2})\) in Fig. 3, and that the only representatives from graph \((c_{3})\) are diagrams \((e_{3})\) and \((e_{4})\). Given this truncation, we do not expect to reproduce Eq. (5.17) in its entirety; in particular, it is reasonable to expect that the term \(m^{2}\) will be approximated by its one-loop dressed gluonic expression, \(m^{2}_{(a_{1})}\), given in Eq. (2.9). As we will see in what follows, this is indeed what happens. Recalling the diagrammatic analysis presented in Sec. III, it is relatively straightforward to establish that diagrams \((e_{1})\), \((e_{3})\), and \((e_{4})\) contain poles in the \(q\)- and \(r\)-channels, but not in the \(p\)-channel, which is relevant for the derivation of Eq. (5.17). Instead, \((e_{2})\) possesses a pole in the \(p\)-channel, as may be deduced by means of two different (but ultimately equivalent) arguments, both related to the nature of the fully-dressed four-gluon vertex [103; 104; 105; 106; 107; 108; 109; 110; 111; 112; 113], \(\Gamma\!\Gamma^{abts}_{\alpha\mu\delta\tau}\). Figure 7: The diagrams contributing to the truncated SDE for three-gluon vertex that we employ; ghost and two-loop diagrams have been omitted. The swordfish diagrams \((e_{2,3,4})\) carry a symmetry factor of \(1/2\). The first argument is based on the observation that the special diagram \((d_{2})\) of Fig. 3 is part of \(\mathds{I}\!\Gamma^{abts}_{\alpha\mu\delta\tau}\); evidently, since the vertex SDE is now written with respect to the \(p\)-leg, the appropriate replacement (_e.g._, \(q\to p\)) must be carried out in \((d_{2})\), which acquires thusly a massless bound state propagator \(i/p^{2}\). The second is by noticing that the STI satisfied by \(\mathds{I}\!\Gamma^{abts}_{\alpha\mu\delta\tau}\) generates naturally a pole in the \(p\)-channel, provided that the poles of the three-gluon vertices appearing in its r.h.s. are properly included. Specifically, the STI reads [114; 39; 115] \[q^{\alpha}\mathds{I}\!\Gamma^{abts}_{\alpha\mu\delta\tau}(q,r,p-k,k) =F(q)\big{[}f^{tad}f^{dbts}H^{\gamma}_{\ \delta}(k+r,q,p-k)\mathds{I}\!\Gamma_{\gamma\mu\tau}(-k-r,r,k)\] \[+f^{sad}f^{dbt}H^{\gamma}_{\ \tau}(-k-q,q,k)\mathds{I}\! 
\Gamma_{\gamma\mu\delta}(k+q,r,p-k)\] \[+f^{bad}f^{dst}H^{\gamma}_{\ \mu}(p,q,r)\underbrace{\mathds{I}\! \Gamma_{\gamma\tau\delta}(-p,k,p-k)}_{\text{Contains }\frac{p_{\gamma}}{p^{2}}}\big{]}+\ldots\,, \tag{6.1}\] where the ellipsis denotes terms involving a ghost-ghost-gluon-gluon scattering kernel. Since this kernel has no tree-level value [115; 39], we expect it to be subleading and omit it from our treatment. Evidently, the full three-gluon vertices appearing on the r.h.s. of Eq. (6.1) enter with their entire pole content. In particular, the underbrace in that equation highlights the explicit appearance of a \(p_{\gamma}/p^{2}\) term, originating from the vertex \(\mathds{I}\!\Gamma_{\gamma\tau\delta}(-p,k,p-k)\). It is clear that the two arguments presented above are interlinked: the r.h.s. of the STI in Eq. (6.1) contains a pole in \(1/p^{2}\) because diagram \((d_{2})\) is part of the four-gluon vertex, whose contraction by \(q^{\alpha}\) appears on the l.h.s. This accurate balance evidences once again the harmonious interplay between symmetry and dynamics. ### The derivation Armed with these observations, we now proceed to the derivation of Eq. (5.17), following the three main steps, (_i_)-(_iii_), mentioned above. To that end, we start from the complete expression for \((e_{2})\), \[(e_{2})^{abc}_{\alpha\mu\nu}=-\frac{ig^{2}Z_{3}}{2}\int_{k}\Delta^{\rho\delta} (k-p)\Delta^{\sigma\tau}(k)\Gamma^{(0)}_{\nu\sigma\rho}(-p,k,p-k)f^{cst} \mathds{I}\!\Gamma^{abts}_{\alpha\mu\delta\tau}(q,r,p-k,k)\,, \tag{6.2}\] where we have factored out a \(g\). Then, following step (_i_), we contract Eq. (6.2) with \(q^{\alpha}\), thus triggering the STI of Eq. (6.1). Then, Eq. (6.2) yields \[q^{\alpha}(e_{2})_{\alpha\mu\nu} = -\frac{\lambda Z_{3}F(q)}{p^{2}}\int_{k}\Delta^{\rho\delta}(k-p) \Delta^{\sigma\tau}(k)\Gamma^{(0)}_{\nu\sigma\rho}(-p,k,p-k)p^{\gamma}H_{\gamma \mu}(p,q,r) \tag{6.3}\] \[\times [V_{1}(-p,k,p-k)g_{\delta\tau}+V_{2}(-p,k,p-k)p_{\delta}p_{\tau}] +\ldots\,,\] where the color structure \(f^{abc}\) has been canceled out from both sides, and the ellipsis denotes terms that do not contain \(1/p^{2}\) poles, and thus cannot contribute to \(V_{9}(q)\). Next, we evaluate the term \(p^{\gamma}H_{\gamma\mu}(p,q,r)\) appearing in Eq. (6.3) using the well-known STI [72] \[p^{\gamma}H_{\gamma\mu}(p,q,r)=\mathrm{I\hskip-2.0ptI}_{\mu}(p,q,r)\,, \tag{6.4}\] where \(\mathrm{I\hskip-2.0ptI}_{\mu}^{abc}(p,q,r)=-gf^{abc}\mathrm{I\hskip-2.0ptI}_{ \mu}(p,q,r)\) is the ghost-gluon vertex, whose most general Lorentz decomposition reads [116, 101, 67] \[\mathrm{I\hskip-2.0ptI}_{\mu}(p,q,r)=p_{\mu}B_{1}(p,q,r)+r_{\mu}\mathbb{B}_{ 2}(p,q,r)\,. \tag{6.5}\] Then, it follows from Eq. (6.4) that \[p^{\gamma}H_{\gamma\mu}(p,q,r)=p_{\mu}B_{1}(p,q,r)+r_{\mu}\mathbb{B}_{2}(p,q,r )\,, \tag{6.6}\] while Eq. (5.9) allows us to write the \(B_{i}\) in terms of the \(A_{i}\) and \(\mathbb{A}_{i}\) as [116, 101] \[B_{1}(p,q,r)= \,A_{1}(p,q,r)+p^{2}A_{2}(p,q,r)+(p\cdot r)A_{4}(p,q,r)\,,\] \[\mathbb{B}_{2}(p,q,r)= \,(p\cdot r)\mathbb{A}_{3}(p,q,r)+p^{2}\mathbb{A}_{5}(p,q,r)\,. \tag{6.7}\] Note that, since \(\mathbb{A}_{i}(p,q,r)\) displays a pole when \(r=0\), so does the \(\mathbb{B}_{2}(p,q,r)\); the pole amplitude associated with the ghost-gluon vertex has been studied in detail in [63, 33, 43]. Now, we proceed to step (_ii_). Clearly, only the term \(p_{\mu}B_{1}(p,q,r)\) of Eq. (6.6) can contribute to the tensor structure \(p_{\mu}p_{\nu}\), once inserted in Eq. (6.3) for \(q^{\alpha}(e_{2})_{\alpha\mu\nu}\). 
Hence, we can write \[q^{\alpha}(e_{2})_{\alpha\mu\nu}= \,-\frac{\lambda Z_{3}F(q)B_{1}(p,q,r)p_{\mu}}{p^{2}}\int_{k} \Delta^{\rho\delta}(k-p)\Delta^{\sigma\tau}(k)\Gamma^{(0)}_{\nu\sigma\rho}(-p,k,p-k)\] \[\times [V_{1}(-p,k,p-k)g_{\delta\tau}+V_{2}(-p,k,p-k)p_{\delta}p_{\tau} ]+\ldots\,, \tag{6.8}\] where the ellipsis now denotes terms that cannot contribute to \(V_{9}(q)\) because they do not contain either a \(1/p^{2}\) or a \(p_{\mu}\). In anticipation of the fact that we will take \(p\to 0\) at the end of the calculation, we can already consider \(p\) to be small. In this case, we can use into Eq. (6.8) the Taylor expansion given in Eq. (2.8), and its analog with \(V_{1}\) substituted by \(V_{2}\). Then, one sees that the term \(V_{2}(-p,k,p-k)\) cannot contribute to \(V_{9}(q)\), since it is two orders higher in \(p\) than \(V_{1}(-p,k,p-k)\). Hence, we obtain explicitly \[q^{\alpha}(e_{2})_{\alpha\mu\nu}=3\lambda Z_{3}F(q)B_{1}(p,q,r)\left(\frac{p_{ \mu}p_{\nu}}{p^{2}}\right)\int_{k}k^{2}\Delta^{2}(k)\mathbb{C}(k)+\ldots\,, \tag{6.9}\] with the ellipsis now including terms that are higher-order in the Taylor expansion around \(p=0\). At this point, recalling Eqs. (5.14) and (5.15), the scalar coefficient of \(p_{\mu}p_{\nu}\) in Eq. (6.9) yields a contribution to \(\overline{V}_{3}/p^{2}+S_{3}\), namely \[\overline{V}_{3}^{(e_{2})}(p,q,r)/p^{2}+S_{3}^{(e_{2})}(p,q,r)=\frac{1}{p^{2} }F(q)B_{1}(p,q,r)\left[3\lambda Z_{3}\int_{k}k^{2}\Delta^{2}(k)\mathbb{C}(k) \right]+\ldots\,, \tag{6.10}\] where the superscript "\((e_{2})\)" emphasizes that the above expression contains only the contribution from diagram \((e_{2})\). Lastly, we perform step (_iii_), _i.e._, multiply Eq. (6.10) by \(p^{2}\) and set \(p=0\). In doing so, we note from Eq. (6.7) that \(B_{1}(0,q,-q)=A_{1}(q)\), while Eqs. (4.8) and (5.15) imply that \(\overline{V}_{3}^{(e_{2})}(0,q,-q)=2V_{9}^{(e_{2})}(q)\). Furthermore, since \((e_{2})\) is the only diagram of Fig. 7 that contributes to \(V_{9}(q)\), we have \(V_{9}(q)=V_{9}^{(e_{2})}(q)\), such that \[V_{9}(q)=F(q)A_{1}(q)\left[\frac{3\lambda Z_{3}}{2}\int_{k}k^{2}\Delta^{2}(k) \mathbb{C}(k)\right]=\frac{1}{2}F(q)A_{1}(q)m_{(a_{1})}^{2}\,, \tag{6.11}\] where we used Eq. (2.9) to obtain the last equality. Therefore, the SDE of Fig. 7 satisfies an approximate form of Eq. (5.17), where only the term containing \(A_{3}^{\mathsf{p}}(q)\) in that equation is absent. This term could arise in the full SDE either from the diagrams that we omitted in Fig. 7, or from the ghost-ghost-gluon-gluon kernel that we dropped in Eq. (6.1); its proper restoration requires a detailed treatment that goes beyond the scope of the present work. Finally, we point out that, at \(q=0\), the SDE result for \(V_{9}(0)\) satisfies the STI requirement of Eq. (5.20) _exactly_, by virtue of Eq. (5.19). ## VII Computing the residue function We next turn to the numerical determination of the residue function, \(V_{9}(q)\), from Eq. (5.17). To this end, we first transform Eq. (5.17) to Euclidean space, to obtain \[V_{9}(q)=\frac{F(q)}{2}\left[m^{2}A_{1}(q)+\Delta^{-1}(q)A_{3}^{\mathsf{p}}(q) \right]\,. \tag{7.1}\] As we will explain below, for the determination of \(A_{3}^{\mathsf{p}}(q)\) we will make use of the displacement function \(\mathbb{C}(q)\), shown in Fig. 2. Since in [64; 65] the \(\mathbb{C}(q)\) has been computed in the so-called "asymmetric MOM scheme" [52; 54; 67; 102], with \(\mu=4.3\) GeV, the same renormalization prescription will be employed in what follows. 
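Since the mass scale entering Eq. (7.1) originates from the one-dimensional quadrature of Eq. (2.12), it is perhaps useful to indicate how such an integral is evaluated in practice. The sketch below is purely illustrative: the functional forms chosen for \(\Delta_{\mbox{\tiny E}}(y)\) and \(\mathbb{C}_{\mbox{\tiny E}}(y)\), as well as the values assigned to \(\alpha_{s}\) and \(Z_{3}\), are toy placeholders rather than the lattice-based inputs of the paper, so the resulting number illustrates the procedure only and should not be compared with the quoted \(m=348\) MeV.

```python
import numpy as np

# Eq. (2.12): m^2 = (3 alpha_s C_A Z_3 / 8 pi) * Int_0^inf dy y^2 Delta_E(y)^2 |C_E(y)|,  y = k_E^2
alpha_s = 0.27                                   # placeholder coupling
C_A     = 3.0                                    # adjoint Casimir for SU(3)
Z3      = 1.0                                    # three-gluon renormalization constant, set to 1 here

def Delta_E(y):
    """Toy infrared-finite Euclidean gluon propagator (placeholder shape)."""
    return 1.0 / (y + 0.15)

def C_E(y):
    """Toy displacement function: negative and decaying in the ultraviolet (placeholder shape)."""
    return -4.0 / (y + 0.5) ** 2

# simple trapezoidal quadrature on a logarithmic grid; with these inputs the integrand falls like 1/y^2
y = np.logspace(-6, 3, 4000)
f = y**2 * Delta_E(y)**2 * np.abs(C_E(y))
integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y))

m2 = 3.0 * alpha_s * C_A * Z3 / (8.0 * np.pi) * integral
print(f"toy estimate: m^2 = {m2:.3f},  m = {np.sqrt(m2):.3f}   (illustrative units)")
```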
Note that, in this scheme, the finite ghost-gluon renormalization constant appearing in Eqs. (5.19) and (5.20) is given by \(\widetilde{Z}_{1}=0.9333\) [14; 64]. Then, for the \(F(q)\) and \(\Delta(q)\) appearing in Eq. (7.1) we use physically motivated fits to lattice data of [67], given by Eqs. (C6) and (C11) of [63], respectively. These fits are shown as continuous blue lines in Fig. 8, where they are compared to the lattice data of [67] (points). Note that the value of \(\Delta^{-1}(0)=0.121\) GeV\({}^{2}\) corresponding to this fit leads to the previously mentioned value of \(m=348\) MeV for \(\mu=4.3\) GeV. Moreover, the fitting functions for both \(\Delta(q)\) and \(F(q)\) were constructed in such a way that they reproduce the respective one-loop resummed anomalous dimensions. Figure 8: Lattice data (points) of [67] for \(\Delta^{-1}(q)\) (left) and \(F(q)\) (right), together with their corresponding fits (blue solid lines), given by Eqs. (C6) and (C11) of [63], respectively. For the tree-level (classical) form factor \(A_{1}(q)\) of the ghost-gluon kernel, we employ the result of the SDE analysis of [101]7; the result is shown as red squares in the left panel of Fig. 9, and is seen to deviate by at most 11%, at \(q=1.43\) GeV, from the tree-level value, \(A_{1}^{(0)}(q)=1\). Footnote 7: In [101] the Taylor scheme [117; 118; 119; 120] was employed; the conversion to the asymmetric scheme proceeds through the relation \(A_{1}^{\rm asym}=\widetilde{Z}A_{1}^{\rm Taylor}\). Then, the only unknown ingredient in Eq. (7.1) is the ghost-gluon pole term \(A_{3}^{\sf p}(q)\), which may be computed as follows. We start with the one-loop dressed truncation of the SDE describing the ghost-gluon kernel, \(H_{\mu\nu}(r,q,p)\), see, _e.g._, Fig. 3 of [101]. From this SDE, we derive a dynamical equation for \(\mathbb{A}_{3}(r,q,p)\), using the projector \(\mathcal{T}_{3}^{\mu\nu}\), given in Eqs. (3.7) and (3.8) of [101]. Then, recalling Eq. (5.10), we obtain \(A_{3}^{\sf p}(q)\) from the equation for \(\mathbb{A}_{3}(r,q,p)\) by multiplying it by \(p^{2}\) and taking the limit \(p\to 0\). This procedure furnishes a _linear_ integral equation for \(A_{3}^{\sf p}(q)\), which has the form \[A_{3}^{\sf p}(q)=\int_{k}\mathcal{K}_{1}(k,q)A_{3}^{\sf p}(k)+\int_{k} \mathcal{K}_{2}(k,q)\mathbb{C}(k)\,, \tag{7.2}\] where we notice the appearance of \(\mathbb{C}(q)\), and the kernels \(\mathcal{K}_{i}(k,q)\) are composed of combinations of \(\Delta\), \(F\), and kinematic factors. Then, we solve Eq. (7.2) numerically through the Nyström method [121], employing for \(\mathbb{C}(q)\) the result of [64; 65], shown in Fig. 2. Through this procedure, we obtain the result shown as red squares in the right panel of Fig. 9. Figure 9: Left: The form factor \(A_{1}(q)\) of the ghost-gluon kernel, in the soft-antighost limit, taken from [101]. Right: Pole amplitude \(A_{3}^{p}(q)\) of the ghost-gluon kernel, computed from the truncated SDE of Eq. (7.2). For convenience, we provide fits for the functions \(A_{1}(q)\) and \(A_{3}^{\mathbf{p}}(q)\), which, in conjunction with the fits for \(\Delta(q)\) and \(F(q)\), allow \(V_{9}(q)\) to be computed most expeditiously.
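Before quoting those fits, the numerical pipeline just described can be summarized in a compact sketch: a Nyström discretization of the linear integral equation (7.2) for \(A_{3}^{\sf p}(q)\), followed by the assembly of \(V_{9}(q)\) through Eq. (7.1). Every ingredient below (the kernels \(\mathcal{K}_{i}\) and the stand-ins for \(\Delta\), \(F\), \(A_{1}\) and \(\mathbb{C}\)) is an illustrative placeholder rather than one of the actual expressions taken from [63; 64; 65; 67; 101], so the sketch conveys only the structure of the computation, not its quantitative outcome.

```python
import numpy as np

# ---- toy stand-ins for the actual inputs (the fits of [63; 67; 101] and the C(q) of [64; 65]) ----
m2    = 0.348**2                                  # m = 348 MeV at mu = 4.3 GeV, so m^2 in GeV^2
Z1_gh = 0.9333                                    # finite ghost-gluon renormalization constant

def Delta(y):   return 1.0 / (y + m2)             # toy infrared-finite gluon propagator [GeV^-2]
def F_ghost(y): return 1.0 + 1.9 / (1.0 + y)      # toy ghost dressing function
def A_1(y):     return Z1_gh * (1.0 + 0.1 * y / (1.0 + y))   # toy classical form factor A_1(q)
def C_disp(y):  return -4.0 / (y + 0.5) ** 2      # toy displacement function

# ---- Nystrom discretization of Eq. (7.2): A3p(q) = int_k K1(k,q) A3p(k) + int_k K2(k,q) C(k) ----
# after the angular integration both integrals become one-dimensional in y = k^2 (Euclidean)
y = np.logspace(-4, 2, 300)                       # grid of momenta squared [GeV^2]
w = np.gradient(y)                                # crude quadrature weights on the log grid

def K1(yk, yq):                                   # placeholder kernels; the true K_i are built
    return 1e-2 * yk * Delta(yk)**2 / (yk + yq + 1.0)        # from Delta, F and kinematic factors

def K2(yk, yq):
    return 5e-2 * yk * Delta(yk)**2 / (yk + yq + 1.0)

M = K1(y[None, :], y[:, None]) * w[None, :]       # M[i, j] = K1(y_j, y_i) * w_j
b = (K2(y[None, :], y[:, None]) * w[None, :]) @ C_disp(y)
A3p = np.linalg.solve(np.eye(y.size) - M, b)      # Nystrom solution of the linear equation

# ---- residue function, Eq. (7.1): V9(q) = (F(q)/2) [ m^2 A_1(q) + Delta^{-1}(q) A3p(q) ] ----
V9 = 0.5 * F_ghost(y) * (m2 * A_1(y) + A3p / Delta(y))

# with the true kernels A3p(0) = 0 by Eq. (5.19), so that V9(0) reduces to Eq. (5.20);
# with the toy inputs above the numbers printed here are structural only, not quantitative
print(V9[:3])
```

The explicit fits entering the actual determination of \(V_{9}(q)\) are given next.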
Specifically, both \(A_{1}(q)\) and \(A_{3}^{\mathbf{p}}(q)\) can be accurately fitted by the low-degree rational functions \[A_{1}(q)=\widetilde{Z}_{1}\left[1+R_{1}(q)\right]\,,\qquad A_{3}^{\mathbf{p}}( q)=\widetilde{Z}_{1}R_{2}(q)\,, \tag{7.3}\] where \[R_{1}(q)=\frac{q^{2}/a_{1}+\left(q^{2}/a_{2}\right)^{2}+\left(q^{2}/a_{3} \right)^{3}}{1+q^{2}/b_{1}+\left(q^{2}/b_{2}\right)^{2}+\left(q^{2}/b_{3} \right)^{3}}\,,\qquad R_{2}(q)=\frac{q^{2}/c_{1}}{1+q^{2}/d_{1}+\left(q^{2}/d _{2}\right)^{2}}\,, \tag{7.4}\] with fitting parameters given by \(a_{1}=1.71\) GeV\({}^{2}\), \(a_{2}=2.68\) GeV\({}^{2}\), \(a_{3}=4.51\) GeV\({}^{2}\), \(b_{1}=0.410\) GeV\({}^{2}\), \(b_{2}=1.30\) GeV\({}^{2}\), \(b_{3}=1.89\) GeV\({}^{2}\), \(c_{1}=-27.3\) GeV\({}^{2}\), \(d_{1}=0.419\) GeV\({}^{2}\) and \(d_{2}=1.03\) GeV\({}^{2}\). We emphasize that the fits in Eq. (7.3) preserve certain limits of the original functions, \(A_{1}\) and \(A_{3}^{\mathbf{p}}\). First, at the origin, the fits for \(A_{1}(q)\) and \(A_{3}^{\mathbf{p}}(q)\) satisfy Eq. (5.19). Next, a one-loop calculation reveals that, at large values of the momentum, \(A_{1}(q)\) saturates to a constant [101]; in addition, the numerical SDE result indicates that, in the same kinematic limit, \(A_{3}^{\mathbf{p}}\sim 1/q^{2}\). It is straightforward to verify that these ultraviolet features are correctly captured by the fits given by Eq. (7.3). Using the above ingredients in Eq. (7.1), we obtain the \(V_{9}(q)\) shown as a solid blue curve in Fig. 10. Comparing this result to that for \(F(q)\), shown in the right panel of Fig. 8, it is clear that the shape of \(V_{9}(q)\) is dominated by the ghost dressing function in Eq. (7.1). Figure 10: Residue function, \(V_{9}(q)\), computed from Eq. (5.17), using the \(A_{3}^{\mathbf{p}}(q)\) shown in Fig. 9 (blue continuous), compared to the result obtained if \(A_{3}^{\mathbf{p}}(q)\) is set to zero (red dashed line). Next, we test the effect of \(A_{3}^{\mathbf{p}}\) on \(V_{9}\). To this end, we set \(A_{3}^{\mathbf{p}}(q)\) to zero in Eq. (7.1), in which case we obtain the result shown as a red dashed line in Fig. 10. Note that the latter becomes equal to the full result (blue continuous) at \(q=0\), by virtue of Eq. (5.19), but differs significantly from it for \(q>1\) GeV. To understand this difference, let us note that although \(A_{3}^{\mathbf{p}}(q)\) itself is small in comparison to the corresponding pole of the three-gluon vertex, \(\mathbb{C}(q)\) [cf. Fig. 2 and Fig. 9], or even the saturation value of \(V_{9}(q)\), it appears multiplied by \(\Delta^{-1}(q)\). Since the latter increases rapidly in the ultraviolet, the product \(A_{3}^{\mathbf{p}}(q)\Delta^{-1}(q)\) can contribute a considerable amount to \(V_{9}(q)\) at large \(q\), as indeed is observed. ## VIII Absence of mixed triple pole In this section, we analyze the infrared behavior of the form factor \(V_{10}(q,r,p)\), which is accompanied by a denominator \(q^{2}r^{2}p^{2}\) in Eq. (4.4). As such, at first sight, one expects that this term should act as a triple mixed pole. Consider, for instance, taking all momenta to zero by first taking \(p\to 0\), which implies also \(r=-q\), and then taking \(q\to 0\). In this case, if \(V_{10}(0,0,0)\) were nonvanishing, one would have \[\lim_{q,p\to 0}\frac{V_{10}(q,r,p)}{q^{2}r^{2}p^{2}}=\frac{V_{10}(0,0,0)}{q^{4} p^{2}}\,, \tag{8.1}\] where we use the shorthand notation \[\lim_{q,p\to 0}:=\lim_{q\to 0}\lim_{p\to 0}\,. 
\tag{8.2}\] However, as we will demonstrate in Subsection VIII.1, the STI of Eq. (5.8) requires \(V_{10}(q,r,p)\) to vanish in this limit as \[\lim_{q,p\to 0}V_{10}(q,r,p)=2(q\cdot p)q^{2}f(0)\,, \tag{8.3}\] where \(f(q)\) is some pole-free function at \(q=0\). Consequently, in Euclidean space, \[\lim_{q,p\to 0}\frac{V_{10}(q,r,p)}{q^{2}r^{2}p^{2}}=\lim_{q,p\to 0}\frac{2\cos \theta}{|q||p|}f(0)\,, \tag{8.4}\] where \(q\cdot p=|q||p|\cos\theta\) and \(|q|\) denotes the magnitude of the Euclidean momentum \(q\). Moreover, an approximate SDE analysis presented in Subsection VIII.2 shows that this particular requirement is enforced by the Schwinger mechanism, and especially due to the validity of Eq. (2.8). Note that the divergence in Eq. (8.4) is in fact weaker than that associated with the form factors \(V_{7}\), \(V_{8}\), and \(V_{9}\), namely \[\lim_{q,p\to 0}\frac{V_{9}(q,r,p)}{q^{2}p^{2}}=\lim_{q,p\to 0}\frac{ \widetilde{Z}_{1}F(0)m^{2}}{2q^{2}p^{2}}\,, \tag{8.5}\] where we used Eq. (5.20). ### Demonstration from the STI For the derivation of Eq. (8.3) from the STI of Eq. (5.8), we begin by noting that Eq. (4.8) already implies that \[\lim_{p\to 0}V_{10}(q,r,p)=2(p\cdot q)V_{10}^{\prime}(q)\,,\qquad V_{10}^{ \prime}(q):=\left[\frac{\partial V_{10}(q,r,p)}{\partial r^{2}}\right]_{p=0}\,. \tag{8.6}\] Then, to obtain Eq. (8.3) we need to show that \[\lim_{q\to 0}V_{10}^{\prime}(q)=q^{2}f(0)\,. \tag{8.7}\] In order to constrain \(V_{10}\) from the STI, let us first note that this form factor appears in Eq. (5.8) through the combination \(\overline{V}_{5}\), defined in Eqs. (5.14) and (5.15). Then, isolating its respective tensor structure, \(r_{\mu}p_{\nu}\), and equating the corresponding coefficients on each side of Eq. (5.8), we obtain \[S_{5}=\frac{1}{r^{2}p^{2}}\left\{(p\cdot r)F(q)\left[p^{2}\Delta^{-1}(r) \mathbb{A}_{3}(r,q,p)-r^{2}\Delta^{-1}(p)\mathbb{A}_{3}(p,q,r)\right]- \overline{V}_{5}\right\}\,, \tag{8.8}\] where we note the term \(r^{2}p^{2}\) in the denominator. Since \(S_{5}\) is pole-free, the r.h.s of Eq. (8.8) must be an evitable singularity at \(p=0\) and \(r=0\), which implies that the term in curly brackets must vanish sufficiently fast in those limits. Then, since \(S_{5}\) is antisymmetric under the exchange or \(r\) and \(p\), it suffices to consider the \(p=0\) limit of Eq. (8.8); hence, we expand the term in brackets around \(p=0\). Using the Bose symmetry relations of Eqs. (4.6) and (4.7), it is straightforward to show that the zeroth order term vanishes. However, the linear term yields the nontrivial constraint \[\overline{V}_{5}^{\prime}(q)=-\frac{F(q)}{2}\left\{m^{2}\left[q^{2}A_{3}(0,q,-q)+A_{3}^{\mathbf{p}}(0,q,-q)\right]+\Delta^{-1}(q)A_{3}^{\mathbf{p}}(q) \right\}\,, \tag{8.9}\] where we emphasize that \(A_{3}^{\mathbf{p}}(q)\) [see Eq. (5.18)] corresponds to a kinematic limit (soft-gluon) different from \(A_{3}^{\mathbf{p}}(0,q,-q)\) (soft-antighost), and define \[\overline{V}_{5}^{\prime}(q):=\left[\frac{\partial V_{5}(q,r,p)}{\partial r^{ 2}}\right]_{p=0}\,. \tag{8.10}\] Next, we further expand Eq. (8.9) around \(q=0\). The zeroth order term is easily seen to vanish, while the first nonvanishing term is given by \[\overline{V}^{\prime}_{5}(q)=q^{2}f_{1}(0)\,, \tag{8.11}\] where \[f_{1}(0):=-\frac{F(q)m^{2}}{2}\left\{A_{3}(0,0,0)+\left[\frac{d}{ dq^{2}}\left(A_{3}^{\mathsf{P}}(0,q,-q)-A_{3}^{\mathsf{P}}(q,-q,0)\right) \right]_{q=0}\right\}\,. \tag{8.12}\] Then we relate \(V^{\prime}_{10}\) to \(\overline{V}^{\prime}_{5}\) by expanding Eq. 
(5.15) around \(p=0\). In doing so, we make extensive use of the Bose symmetry relations of Eqs. (4.6) and (4.7), and invoke Eq. (2.8). After some algebra, this procedure yields \[V^{\prime}_{10}(q)=\overline{V}^{\prime}_{5}(q)+q^{2}f_{2}(q)\,, \tag{8.13}\] where \[f_{2}(q):=\mathbb{C}(q)-q^{2}\left[\frac{\partial V_{2}(p,q,r)}{ \partial r^{2}}\right]_{p=0}+\left[\frac{\partial}{\partial r^{2}}\left(V_{7} (r,p,q)-V_{7}(q,p,r)\right)\right]_{p=0}\,. \tag{8.14}\] Finally, combining Eqs. (8.11) and (8.13) we obtain the announced result, Eq. (8.7), by identifying \(f(0):=f_{1}(0)+f_{2}(0)\). ### SDE realization Now, we show how Eq. (8.7) follows from the SDE of the three-gluon vertex. Note that, by virtue of Eq. (8.13), which is a consequence of Bose symmetry, it suffices to demonstrate Eq. (8.11). To this end, we employ a procedure similar to that used in Section VI to obtain \(V_{9}(q)\). Specifically, (_i_) we contract the vertex SDE of Fig. 7 by \(q_{\alpha}\); (_ii_) then, we isolate the tensor structure \(r_{\mu}p_{\nu}\) from the result, which yields a contribution to \(\overline{V}_{5}/(r^{2}p^{2})+S_{5}\); (_iii_) next, we multiply the result by \(r^{2}p^{2}\) and expand to lowest order in \(p=0\), thus obtaining \(\overline{V}^{\prime}_{5}\); (_iv_) lastly, we expand \(\overline{V}^{\prime}_{5}\) to lowest order around \(q=0\). In carrying out step (_i_) above, we note that diagrams (\(e_{1}\)), (\(e_{3}\)) and (\(e_{4}\)) of Fig. 7 do not contribute to \(V_{10}\), for the exact same reasons that they do not contribute to \(V_{9}\), as discussed in Section VI. Hence, we focus on diagram (\(e_{2}\)). Moreover, after triggering the STI of Eq. (6.1) for the four-gluon vertex in \((e_{2})\), we see that only the term highlighted with an underbrace can contribute to \(\overline{V}_{5}\). Hence, we are led back to Eq. (6.3). Then, we carry out step \((\)_ii_\()\). Evidently, only the term \(r_{\mu}\mathbb{B}_{2}(p,q,r)\) of Eq. (6.6) contributes to the tensor structure \(r_{\mu}p_{\nu}\). Hence, we can write \[q^{\alpha}(e_{2})_{\alpha\mu\nu} = -\frac{\lambda Z_{3}F(q)\mathbb{B}_{2}(p,q,r)r_{\mu}}{p^{2}}\int_ {k}\Delta^{\rho\delta}(k-p)\Delta^{\sigma\tau}(k)\Gamma^{(0)}_{\nu\sigma\rho}(- p,k,p-k) \tag{8.15}\] \[\times [V_{1}(-p,k,p-k)g_{\delta\tau}+V_{2}(-p,k,p-k)p_{\delta}p_{\tau}]+ \ldots\,,\] with the ellipsis denoting terms that cannot contribute to \(\overline{V}_{5}\) because they do not contain either a \(1/p^{2}\) or a \(r_{\mu}\). Then, for small \(p\), we can expand Eq. (8.15) around \(p=0\), using Eq. (2.8). Note that to first order in \(p\), Eq. (6.7) implies \(\mathbb{B}_{2}(p,q,r)=-(p\cdot q)\mathbb{A}_{3}(0,q,-q)\). Hence, we obtain \[q^{\alpha}(e_{2})_{\alpha\mu\nu} = -F(q)(q\cdot p)q^{2}\mathbb{A}_{3}(0,q,-q)\left(\frac{r_{\mu}p_{ \nu}}{r^{2}p^{2}}\right)\left[3\lambda Z_{3}\int_{k}k^{2}\Delta^{2}(k)\mathbb{ C}(k)\right]+\ldots\,, \tag{8.16}\] with ellipsis now including terms that are dropped in the expansion around \(p=0\). To complete step \((\)_ii_\()\), we note that the form factor of the tensor \(r_{\mu}p_{\nu}\) is \(\overline{V}_{5}/p^{2}r^{2}+S_{5}\). Hence, invoking Eq. (2.9), \[\frac{\overline{V}_{5}^{(e_{2})}}{p^{2}r^{2}}+S_{5}^{(e_{2})}=\frac{\overline{ V}_{5}}{p^{2}r^{2}}+S_{5}^{(e_{2})}=-\frac{(q\cdot p)}{r^{2}p^{2}}F(q)q^{2} \mathbb{A}_{3}(0,q,-q)m_{(a_{1})}^{2}+\ldots\,. 
\tag{8.17}\] Note that in the first equality, we used the fact that only \((e_{2})\) contributes to \(\overline{V}_{5}\), _i.e._, \(\overline{V}_{5}=\overline{V}_{5}^{(e_{2})}\), whereas \(S_{5}\) may receive contributions from other diagrams. Proceeding to step \((\)_iii_\()\), we multiply Eq. (8.17) by \(r^{2}p^{2}\) and expand the result to the first order in \(p\). Using Eq. (8.10), we find \[\overline{V}_{5}^{\prime}(q)=-\frac{F(q)m_{(a_{1})}^{2}}{2}\left[q^{2}A_{3}(0, q,-q)+A_{3}^{\mathbf{p}}(0,q,-q)\right]\,. \tag{8.18}\] Note that this result is nearly identical to Eq. (8.9), differing from it only by the substitutions \(m\to m_{(a_{1})}\) and \(A_{3}^{\mathbf{p}}(q)\to 0\). Finally, we perform \((\)_iv_\()\), _i.e._, expand Eq. (8.18) around \(q=0\). Using Eq. (5.19), we obtain \[\overline{V}_{5}^{\prime}(q)=q^{2}f_{3}(0)\,,\qquad f_{3}(0):=-\frac{F(0)m_{(a _{1})}^{2}}{2}\left\{A_{3}(0,0,0)+\left[\frac{dA_{3}^{\mathbf{p}}(0,q,-q)}{ dq^{2}}\right]_{q=0}\right\}\,, \tag{8.19}\] which is Eq. (8.11), with \(f(0):=f_{3}(0)\). As in the previous section, the STI results given by Eqs. (8.11) and (8.12), and the SDE result in Eq. (8.19) are strikingly similar; again, the observed discrepancy is due to the SDE truncation, or the approximate nature of the STI in Eq. (6.1). ## IX Conclusions The intense scrutiny of the correlation functions of QCD by means of continuous methods [9; 23; 27; 27; 122; 123; 124; 34], and lattice simulations [125; 126; 127; 128; 129; 130; 131; 132; 133; 134; 135; 136; 137; 138; 139; 140; 141; 142; 143; 144; 145; 146; 147; 148; 149; 150; 151; 152; 153; 154; 155; 156; 157; 158; 159; 160; 161; 162; 163; 164; 165; 166; 167; 168; 169; 170; 171; 172; 173; 174; 175; 176; 177; 178; 179; 180], supports the notion that the gluons acquire a nonperturbative mass [63; 64] through the action of the celebrated Schwinger mechanism. In a non-Abelian context, the main dynamical characteristic of this mechanism is the formation of composite massless poles in the vertices of the theory [15; 15; 15; 15; 15; 156; 157; 158; 159; 160; 161; 162; 163; 164; 165; 166; 167; 168; 169; 170; 171; 172; 173; 174; 175; 176; 177; 178; 179; 181; 182; 183; 184; 185; 186; 187; 188; 189; 190; 191; 192; 193; 194; 195; 196; 197; 198; 199; 200; 201; 202; 203; 204; 205; 206; 207; 208; 209; 210; 211]. These poles display the crucial feature of being completely longitudinally coupled, a fact that guarantees the absence of divergences in (Landau gauge) lattice form factors. In this article we have analyzed the pole content of the three-gluon vertex, whose role is known to be instrumental in the realization of the Schwinger mechanism, accounting for the bulk of the gluon mass [13; 14; 63; 64]. It turns out that the resulting structures are quite rich, being imposed by Bose-symmetry and the STI satisfied by the three-gluon vertex. In particular, we have focused on the appearance and role of the mixed double and triple poles, of the type \(1/q^{2}p^{2}\) and \(1/q^{2}r^{2}p^{2}\), respectively, which are inert as far as the direct act of mass generation is concerned. It turns out that the mixed double poles are an indispensable requirement for the flawless completion of the STI satisfied by this vertex in the presence of an infrared finite (massive) gluon propagator. In fact, the STI imposes powerful constraints relating the so-called "residue function" to all other components entering in the STI. 
We emphasize that, at this level, the presence of these poles is dictated solely by the STI, and is not related to any particular dynamical realization. In that sense, it appears to be of general validity, hinging only on the longitudinal nature of the poles. The picture emerging from the bound-state realization of the Schwinger mechanism, as captured by the vertex SDE [22; 24; 43; 44], satisfies the general constraints imposed by the STI, thus passing a highly nontrivial self-consistency check. As for the mixed triple pole, our analysis reveals that its strength is substantially reduced (_i.e._, weaker than a double mixed pole), again by virtue of inescapable requirements imposed by the STI. Interestingly enough, the salient qualitative features of this result are recovered by the vertex SDE, exposing once again the complementarity between symmetry and dynamics. Our analysis strongly indicates that higher \(n\)-point functions (_i.e._, Green's functions with \(n\) incoming gluons, and \(n>3\)) will also possess an extended structure of poles. This is already seen at the level of the four-gluon vertex \(\mathrm{I\!\Gamma}^{abts}_{\alpha\mu\delta\tau}\) (\(n=4\)), which enters in the demonstration of Sec. VI. In particular, \(\mathrm{I\!\Gamma}^{abts}_{\alpha\mu\delta\tau}\) is forced by the STI of Eq. (6.1), namely by the \(V\)-parts of the three-gluon vertices appearing on the r.h.s., to have poles in all channels carrying the momenta of the external legs, together with the channels obtained by forming sums of momenta, as happens in the case \(p=q+r\). A preliminary study reveals that the diagrammatic interpretation of all these poles is fully consistent with the notions and elements introduced in Sec. III. We hope to report the results of a detailed inquiry in the near future. It would clearly be important to unravel an organizing principle that accounts for the pole proliferation in the fundamental vertices of QCD. A possible approach is the construction of low-energy effective descriptions of Yang-Mills theories with a gluon mass, in the spirit of the gauged non-linear sigma model proposed by Cornwall, see [131; 130; 21; 99]. In this model, the addition of a gluon mass at the level of the effective Lagrangian is compensated by the presence of angle-valued scalar fields, which act as would-be Nambu-Goldstone particles. When the equation of motion of these scalars is solved as a power series in the gauge field, and the solution is substituted back into the Lagrangian, the various vertices acquire longitudinally coupled massless poles, see, _e.g._, Eq. (5.4). A systematic comparison between the pole patterns obtained within this model (or variants thereof) and those induced by the Schwinger mechanism at the level of the fundamental theory, as detailed here, might afford clues on the structure of possible low-energy effective descriptions of QCD. ## X Acknowledgments The work of A. C. A., B. M. O., and L. R. S. is supported by the CNPq grants 307854/2019-1, 141409/2021-5, and 162264/2022-4, respectively. A. C. A. also acknowledges financial support from project 464898/2014-5 (INCT-FNA). M. N. F. and J. P. are supported by the Spanish MICINN grant PID2020-113334GB-I00. M. N. F. acknowledges financial support from Generalitat Valenciana through contract CIAPOS/2021/74. J. P. also acknowledges funding from the regional grant Prometeo/2019/087 from the Generalitat Valenciana.
2303.13525
Forecasting Workload in Cloud Computing: Towards Uncertainty-Aware Predictions and Transfer Learning
Predicting future resource demand in Cloud Computing is essential for optimizing the trade-off between serving customers' requests efficiently and minimizing the provisioning cost. Modelling prediction uncertainty is also desirable to better inform the resource decision-making process, but research in this field is under-investigated. In this paper, we propose univariate and bivariate Bayesian deep learning models that provide predictions of future workload demand and its uncertainty. We run extensive experiments on Google and Alibaba clusters, where we first train our models with datasets from different cloud providers and compare them with LSTM-based baselines. Results show that modelling the uncertainty of predictions has a positive impact on performance, especially on service level metrics, because uncertainty quantification can be tailored to desired target service levels that are critical in cloud applications. Moreover, we investigate whether our models benefit transfer learning capabilities across different domains, i.e. dataset distributions. Experiments on the same workload datasets reveal that acceptable transfer learning performance can be achieved within the same provider (because distributions are more similar). Also, domain knowledge does not transfer when the source and target domains are very different (e.g. from different providers), but this performance degradation can be mitigated by increasing the training set size of the source domain.
Andrea Rossi, Andrea Visentin, Diego Carraro, Steven Prestwich, Kenneth N. Brown
2023-02-24T14:51:30Z
http://arxiv.org/abs/2303.13525v2
# Uncertainty-Aware Workload Prediction in Cloud Computing ###### Abstract Predicting future resource demand in Cloud Computing is essential for managing Cloud data centres and guaranteeing customers a minimum Quality of Service (QoS) level. Modelling the uncertainty of future demand improves the quality of the prediction and reduces the waste due to overallocation. In this paper, we propose univariate and bivariate Bayesian deep learning models to predict the distribution of future resource demand and its uncertainty. We design different training scenarios to train these models, where each procedure is a different combination of pretraining and fine-tuning steps on multiple dataset configurations. We also compare the bivariate model to its univariate counterpart, trained with one or more datasets, to investigate how different components affect the accuracy of the prediction and impact the QoS. Finally, we investigate whether our models have transfer learning capabilities. Extensive experiments show that pretraining with multiple datasets boosts performance while fine-tuning does not. Our models generalise well on related but unseen time series, proving transfer learning capabilities. Runtime performance analysis shows that the models are deployable in real-world applications. For this study, we preprocessed twelve datasets from real-world traces in a consistent and detailed way and made them available to facilitate the research in this field. Bayesian Neural Networks, Cloud Computing, Workload Prediction, Uncertainty, Deep Learning, Transfer Learning. ## 1 Introduction Cloud computing services are a relatively recent development, yet in recent years they have gained enormous popularity because of benefits such as reduced business costs and increased productivity [1]. These advantages gained even more importance during the COVID-19 pandemic [2], when work from home was widely adopted. The advances in Artificial Intelligence (AI) and Big Data have led more companies to use cloud computing systems, increasing their demand. Workload forecasting and scheduling play a fundamental role in the cost of operating the data centres, but predicting the resource demand is a challenging task [3]. Moreover, data centres have a high environmental impact. In this regard, the energy consumption of data centres is expected to grow from 292 TWh in 2016 to 353 TWh in 2030 due to the increase in the number of users [4] and, if left uncontrolled, the greenhouse gas emission due to ICT technologies might increase to over 14% by 2040, compared to 1-1.6% from 2007 to 2016 [5]. Cloud computing providers aim to preconfigure the machines to guarantee a high Quality of Service (QoS). The benefits of predicting future demand include better resource utilisation and a reduction of overallocation, with the opportunity of serving more customers, which leads to an increase in profit and an overall decrease in energy consumption, CO2 emissions and maintenance costs. Machine learning (ML) and deep learning (DL) approaches have been widely used to forecast future demand in the cloud environment [6, 7, 8], with a focus on probabilistic and uncertainty aspects as well [9, 10]. In this paper, we evaluate the performance of probabilistic forecasts based on Hybrid Bayesian Neural Networks (HBNNs), built upon our previous work [11]. 
We consider bivariate time series to predict processing units and memory usage simultaneously, training the models using multiple datasets to increase our model's generalisation capabilities and analyse the impact of Bayesian components in the network. We assess the performance of the predictive models using four publicly available datasets from Google Cloud [12, 13] and Alibaba clusters [14, 15]. We preprocess them to overcome the inconsistency and often undetailed versions in the literature for replicability. Moreover, we investigate the transfer learning (TL) approach to predict the future workload of traces that are not part of the training set. To the best of our knowledge, this is the first time that TL has been applied in the context of workload prediction in cloud computing. The contributions in this paper are as follows: * We preprocess and make available twelve different datasets of the resource demand divided into windows of 5 minutes from cluster cells of four public traces from Alibaba and Google Cloud Computing systems. These are the most used datasets in the cloud workload computing literature. * We validate the results by applying the model to multiple datasets and extending them to variations of the model, which includes bivariate predictions. * We provide a comparison of a bivariate model to the univariate counterpart, training with one or more traces, showing the importance of the training with multiple datasets. * We investigate the generalisation capabilities of the model with a deep analysis of the training scenarios in the context of data-driven applications for workload prediction in Cloud Computing, focusing on the TL approach. * All the investigations are centred on the probability distribution estimation and modelling of the uncertainty of the prediction, with a focus on the epistemic uncertainty and, specifically, the data variability problem (distributional representation of the training set). The remainder of the paper is structured as follows. Section 2 introduces related work. We describe the baseline and the proposed models in Section 3. Section 4 presents the training scenarios that are compared in this paper. Section 5 describes the experiments' methodology and discusses the proposed methods' results based on real-world load traces data. Section 6 concludes the paper and discusses the future works. ## 2 Related Work In cloud computing management, workload prediction plays an important role that has been broadly studied over the past 20 years. Starting with statistical methods such as Autoregressive Integrated Moving Average (ARIMA) by Calheiros _et al._[16], in the last few years, research has focused on ML and DL approaches that have been shown to outperform statistical methods. For this reason, this section is focused on ML and DL techniques. Some of the work is centred on prediction at the machine level, others at the cluster level and design and evaluate models in a wide variety of datasets for univariate and multivariate forecasts. A large selection of ML methods has tackled cloud workload forecasting. Khan _et al._[17] combined clustering algorithms to group Virtual Machines (VMs) with similar patterns and used Hidden Markov Modelling to forecast changes in workload patterns. Banerjee _et al._[18] proposed a multi-step-ahead prediction framework composed of a set of supervised learning approaches such as Linear Regression, k-Nearest Neighbours, Support Vector Regressor (SVR), Gradient Boosting and decision tree. 
The authors applied a prediction-based VM placement algorithm to minimise resource utilisation and power consumption. Kim _et al._[19] designed an ensemble model based on eight ML predictors for an online forecast to reduce the Service Level Agreement violations. The experiments are run on Google Cloud Trace 2011 and Facebook Hadoop trace. With the advent of DL, many architectures have been investigated. Leka _et al._[20] implemented a hybrid neural network with a Convolutional Neural Network (CNN) followed by a Long Short-Term Memory (LSTM), which improved the accuracy compared to the single components alone in a real-world dataset with 12 samples of VMs with the most common workload patterns. Dang-Quang _et al._[21] proposed a multivariate Bidirectional LSTM (Bi-LSTM), which improves the accuracy of the prediction w.r.t. the univariate counterpart using a real trace workload dataset from the Delft University of Technology called GWA-T-12 Bitbrains. This is due to the strong correlation between the CPU and HTTP workload traces, which do not hold in our traces for processing unit and memory demand. Ouhame _et al._[22] proposed a hybrid multivariate CNN and LSTM model which performs better than ARIMA-LSTM, Vector Autoregressive GRU and Vector Autoregressive Multilayer Perceptron models on Bitbrains dataset. Huidan Xi _et al._[6] implemented an attention-based LSTM and compare it with an LSTM without the attention mechanism. They predicted the CPU workload at the machine level over three machines on the Alibaba CDC dataset, showing that attention improves accuracy. Qian _et al._[23] evaluated the performance of an encoder-decoder model based on the attention mechanism at the machine level over a sample of 1,024 machines from the Google Cloud trace 2011. They compared their model to a standard LSTM and an Echo State Network at different prediction steps, showing that their model outperformed the baseline. Patel _et al._[24] predicted CPU usage with a hybrid prediction method composed of 1-dimensional convolution (1DConv) and LSTM on three real-world datasets, Google Cloud trace 2011, Alibaba 2018 and Bitbrains. The first part of the model combines three different CNN blocks, which capture patterns from dilated versions of the trace. The model outperformed the sequential versions of the network with one single CNN block. Ruan _et al._[25] proposed a turning point-based trend prediction using a cloud feature-enhanced deep learning model. The model is based on LSTM components and applied to real-world datasets by Google Cloud trace 2011. Karim _et al._[26] implemented a multi-step-ahead prediction model called BHyPreC, which combined a Bi-LSTM on the top of the stacked LSTM and GRU blocks, outperforming baselines such as ARIMA, LSTM, GRU and Bi-LSTM. The authors showed that combining different RNN components enhances the accuracy of CPU workload prediction. Combining the architectures mentioned above to build ensemble models has also become trendy in cloud workload prediction. This is because different models can capture various aspects of the trace to improve the accuracy of the forecast. Valarnath _et al._[27] proposed an ensemble of Random Forests (RF) followed by an LSTM for predicting the CPU utilisation of VMs in the Alibaba 2018 dataset. It outperformed ML models such as linear regression, SVR, Gradient Boosting, RF and Gaussian Process Regression. An outlier detection mechanism is applied to the RF ensemble's output before training the LSTM module. 
Yazdanian _et al._[28] proposed a hybrid model named E2LG, which decomposes first the workload time series into components of different frequency bands and then they use an ensemble of Generative Adversarial Network and LSTM to predict each sub-band. In this architecture, LSTM blocks are used as generators, and 1DConv blocks are used as a discriminator. The model is experimentally tested on HTTP workload datasets for both one-step-ahead and multi-step-ahead predictions. The research has also moved on to the probabilistic aspects of time series forecasting in recent years. In our previous work [11], we extend the DL model to a probabilistic approach employing an HBNN that captures the epistemic and aleatory uncertainty of the prediction. We showed the advantages of forecasting a probability distribution in contrast to a point estimate but limited to univariate forecast and using only one dataset at a time in the training phase. While the aleatory uncertainty cannot be eliminated, more training data can reduce epistemic uncertainty [29]. Salinas _et al._[30] designed a probabilistic DL method called DeepAR trained on large related time series (electricity demand and traffic forecast), reaching state-of-the-art performance. Another common technique used for improving the accuracy of the predictive models is transfer learning (TL). This technique aims to learn a task (target domain) by transferring the knowledge of another model trained for a different but related task (source domain). Generally, it can also apply using a pretrained model to predict related but unseen datasets [31]. In this context, the source domain comprises the datasets used in the pretrained model, while the unseen datasets are the target domain. Fawaz _et al._[32] investigated the TL approach in the context of time series classification showing that the performance can improve or degrade according to the dataset used for the transfer. Hao _et al._[33] built a QoS Bayesian Network to efficiently estimate the QoS of VMs by quantifying the uncertain relationships among their relevant features on Alibaba published datasets. Khan _et al._[34] applied TL to clustering algorithms to estimate the energy state of the VMs. We surveyed a total of 72 works on Cloud Computing Workload. For the sake of brevity, we did not include them all. None of them used the Google Cloud Trace 2019, and Alibaba Cluster traces 2020 in the context of workload prediction. A GPU trace from the latter is also available, which has become essential in DL and AI applications. Most papers use Google Cloud Trace 2011, Alibaba 2018, Bitbrains and other HTTP server traces. Still, the preprocessing steps of the datasets are usually inconsistent or not well described, limiting the possibility of replicating the research. Furthermore, none of the previous works investigated the concept of transfer learning in the context of probabilistic workload prediction and the generalisation capabilities of DL models exploiting multiple datasets, a milestone step in the era of big data. ## 3 Predictive Model In the resource management scheme in a cloud computing environment (see Fig. 1), the resource manager is the leading actor, which uses the future demand prediction to configure the VM in the cloud servers. The forecast is provided by the predictive model, which is the focus of this paper. Predictive models are trained based on the historical workload data. 
Every time a new configuration occurs, the workload history is updated to feed the predicted model with newly available data. We can classify the models based on the training datasets, the prediction type, and the DL architecture used to forecast future demand. We propose a wide variety of models. We add a prefix to their name to identify them better. The model can be trained using one trace (S) from the twelve preprocessed datasets or all of them (M). Regarding the type of prediction, we can distinguish between univariate (U) models and bivariate (B) models. In the former case, we predict just one resource at a time; in the latter, we simultaneously predict both processing units and memory demand. The architectures used as a predictive model are an LSTM-based model (LSTM), used as the baseline, while the proposed models are the HBNN and the LSTMD, where D stands for distribution. Each model in the analysis is given by combining these three categories. For instance, M-B-HBNN refers to an HBNN model trained with multiple datasets for a bivariate time series. In the following sections, we describe these models in more detail. A graphical representation of the three compared architectures is depicted in Fig. 2. ### _Lstm_ An LSTM-based model is used as the baseline of our experiments. The network consists of an input layer with a size of 288, corresponding to the past 24 hours of workload. The input size has been found experimentally. The sequence is given to a succession (between one and three) of 1DConv layers followed by an LSTM layer, frequently used to deal with sequential data. The combination of these two types of layers has been proven effective in time series forecasting [20, 24]. The LSTM is followed by a sequence of dense layers whose number varies with the training set's size and the prediction type. For instance, for the M-B-LSTM, the number of dense layers is 3, while for the S-U-LSTM is 2. The last layer has one single neuron in the case of univariate models or two in the case of bivariate models. The Mean Squared Error (MSE) is used as the metric for the optimization via the Adam algorithm [35]. The training converges in a few hundred epochs, and an early stopping strategy is applied to prevent overfitting. More details about the training/test split can be found in the following sections. The network is implemented in Keras1 and the hyperparameter tuning, which also includes the number of layers in the network, has been performed using the Talos2 optimization library. Footnote 1: [https://keras.io/](https://keras.io/) Footnote 2: [https://github.com/autonomio/talos](https://github.com/autonomio/talos) ### _Hbnn_ Similarly to our previous work [11], we design a Bayesian Last Layer network to capture the epistemic uncertainty. The architecture differs from the LSTM model only on the last two layers so that we can understand the impact of the Bayesian layer. The sequential input is fed to one or more 1DConv layers, followed by an LSTM layer. As in the previous model, the LSTM is placed behind a sequence of dense layers and a Bayesian dense layer, which replaces the last dense layer of the baseline. A dense layer follows the Bayesian one with two neurons for the univariate prediction or 4 for the bivariate forecast to predict the mean and the variance of one or two Normal distributions embedded in a distributional layer which is the output of our model. The output distribution captures the aleatory uncertainty given by the noise of the workload demand. 
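To make the architecture just described more concrete, the following is a minimal sketch of the probabilistic head shared by the HBNN and LSTMD models, assuming TensorFlow/Keras and TensorFlow Probability. The layer sizes, kernel sizes and the specific Bayesian layer (`DenseFlipout`) are illustrative assumptions, not the tuned hyperparameters found with Talos; the point-estimate LSTM baseline is obtained by replacing the distributional head with a plain dense output trained with MSE.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
WINDOW = 288  # past 24 hours of 5-minute intervals

def build_model(bayesian_last_layer: bool, n_resources: int = 1) -> tf.keras.Model:
    """n_resources = 1 (univariate) or 2 (bivariate: processing units and memory)."""
    inputs = tf.keras.Input(shape=(WINDOW, n_resources))
    x = tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu")(inputs)
    x = tf.keras.layers.LSTM(64)(x)
    x = tf.keras.layers.Dense(32, activation="relu")(x)
    if bayesian_last_layer:
        # HBNN: Bayesian dense layer before the output (one possible TFP choice)
        x = tfp.layers.DenseFlipout(16, activation="relu")(x)
    else:
        # LSTMD: standard dense layer in place of the Bayesian one
        x = tf.keras.layers.Dense(16, activation="relu")(x)
    # two parameters (mean, scale) per predicted resource
    params = tf.keras.layers.Dense(2 * n_resources)(x)
    outputs = tfp.layers.DistributionLambda(
        lambda t: tfd.Normal(loc=t[..., :n_resources],
                             scale=1e-3 + tf.math.softplus(t[..., n_resources:])))(params)
    model = tf.keras.Model(inputs, outputs)
    # negative log-likelihood of the target under the predicted distribution
    model.compile(optimizer="adam", loss=lambda y, dist: -dist.log_prob(y))
    return model
```

Calling such a model on an input window returns a distribution object, whose mean is used for the point-error metrics and whose upper quantiles are used for the confidence-interval bounds discussed in the experiments.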
The network is optimized by minimizing the loss function, which is the negative log-likelihood between the target demand and the distribution predicted by the model. Except for these differences in architecture and loss function, a similar training strategy to the baseline is applied. The Bayesian layer and the distribution layer are implemented using TensorFlow Probability3, the rest of the network is implemented in Keras, and the hyperparameter tuning has been run in the same way as for the LSTM model. Footnote 3: [https://www.tensorflow.org/probability](https://www.tensorflow.org/probability) ### _Lstmd_ The LSTMD model's architecture is the same as the HBNN, but a standard dense layer replaces the Bayesian dense layer. The output is still a Gaussian distribution in the case of a univariate prediction or two Gaussian distributions in the case of a bivariate prediction. Also in this case, the weights are optimized with the negative log-likelihood as the loss function. ## 4 Training Scenarios The first part of the experiments focuses on comparing univariate and bivariate predictive models trained with one or more traces. We keep the same training procedure as our previous work [11], with a bigger hyperparameter space, a different scaling and the use of data shuffling to improve the overall performance of the models. Moreover, we preprocess and extend the work to more datasets. Furthermore, we investigate the impact of training a bivariate version of the predictive model in contrast with the univariate case, and the benefits of training a deep learning model using multiple datasets. The second part is focused on the TL approach applied to the M-B-HBNN model, using the same network architecture (i.e. with the hyperparameters found in the optimisation of the M-B-HBNN model). We apply different training scenarios, including fine-tuning (FT) of the network's weights: starting from the weights of the pretrained model, we continue training the network on the datasets we want to predict. We can enumerate seven different approaches: * **All**: the model is trained with all 12 clusters. We then predict the specific datasets without any further training. * **All FT**: We start from _All_. We then fine-tune the network on the specific dataset before making the prediction. This assesses whether the FT process leads to a better weight configuration for the dataset we want to predict. * **All-but-one**: the model is trained with 11 out of 12 datasets (source domain). We then predict the remaining dataset (target domain). With this experiment, we want to further investigate the M-B-HBNN model's generalisation capabilities and evaluate models on unseen clusters (zero-shot TL). This would be very helpful when a new cluster is available, and we want to deploy a predictive model immediately. * **All-but-one FT**: We start with the _All-but-one_ version. We then fine-tune the network on the remaining dataset before making the prediction (one-shot TL). This investigates the TL approach with the FT on newly available data. * **GC19**: This applies only to the datasets from the Google Cloud Trace 2019. This approach is similar to _All_, but we use only Google Cloud Trace 2019 datasets. This assesses whether pretraining a model on related datasets from the same provider helps the predictive capabilities. 
* **GC19 FT**: We start with _GC19_ but with a fine-tuning on the specific dataset to predict. This is similar to _GC19_ but with the evaluation of the FT effect. * **Random**: this corresponds to the S-U-HBNN with random initialisation of the weights and trained on a single dataset at a time. This assesses whether starting from a pretrained model's weights or a random initialisation is better. Fig. 1: Workload Prediction Scheme Fig. 2: Network architectures comparison We also train _GC19-but-one_ and _GC19-but-one_\(FT\) versions, specific for the Google Cloud Trace 2019, but we omit the results due to the poor performance compared to the other models. Fig. 3 depicts a graphical representation of these training scenarios. ## 5 Experiments In this section, we first describe the experimental setup and the datasets used in our experiments. We then evaluate the models in terms of point estimate accuracy, the efficiency of the predicted resources and runtime performance. In each subsection, we first assess the extension to bivariate models and those trained with multiple datasets. We then discuss the results based on the TL approach. ### _Experimental Setup_ The training of the models has been run with a CPU Intel(r)Xeon(r)Gold 6240 at 2.60GHz and GPU NVIDIA Quadro RTX 8000 with 48 GB of memory where Ubuntu 20.04 is installed. The experiments are conducted over twelve real-world traces to train an LSTM model, used as a baseline, and the proposed models. For each dataset, the first 80% is used as the training set, 20% of which is used as the validation set. The remaining 20% of each dataset is used as the test set. In the case of multiple datasets, 80% of each dataset is concatenated and shuffled to make each batch more representative of each cluster. Still, there is never a time overlap between data in the training and test sets, such that the prediction of one trace cannot exploit information of a particular timestamp in another trace. We share a GitHub repository4 that contains further information on the models' architecture and the search space for the hyperparameters optimisation. In particular, the search space is based on the number of layers, the number of neurons for each layer, the batch size, the activation functions, the learning rate, the momentum and decay coefficients and the number of kernels in the 1DConv layer. Once the hyperparameters are tuned, we train the models ten times for each cluster and resource using various random seeds as initialisation to assess the optimisation algorithm's convergence. We forecast the 5-minute interval of demand 10 minutes in the future, where 10 minutes is a sufficient time interval for most applications [36, 37], e.g. resource allocation, vertical scaling etc. Footnote 4: [https://github.com/andreareds/UncertaintyAwareWorkloadPrdiction](https://github.com/andreareds/UncertaintyAwareWorkloadPrdiction) ### _Datasets_ The twelve datasets used in our experiments include one cluster from Google Cloud 2011, eight clusters from Google Cloud trace 2019, one from Alibaba Cluster Trace 2018 and two from Alibaba Cluster Trace 2020. More details on the datasets and the preprocessing phase are given for reproducibility in the following sections. All the preprocessed datasets can be downloaded from the shared repository. 
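As a concrete illustration of the split protocol in the experimental setup above, the following sketch shows the chronological 80/20 split per trace and the concatenation-and-shuffle step used when training with multiple datasets. Function and variable names are illustrative and do not come from the released repository, and placing the validation slice at the end of the training portion is an assumption.

```python
import numpy as np

def split_trace(X, y, train_frac=0.8, val_frac=0.2):
    """Chronological split: first 80% for training (the last 20% of it for validation),
    remaining 20% for testing, so test windows are always later in time."""
    n_train = int(len(X) * train_frac)
    n_val = int(n_train * val_frac)
    train = (X[:n_train - n_val], y[:n_train - n_val])
    val = (X[n_train - n_val:n_train], y[n_train - n_val:n_train])
    test = (X[n_train:], y[n_train:])
    return train, val, test

def multi_dataset_training_set(traces, seed=0):
    """Concatenate the training portions of several traces and shuffle them so that
    each batch is representative of every cluster; test sets stay per trace."""
    splits = [split_trace(X, y) for X, y in traces]
    X_tr = np.concatenate([tr[0] for tr, _, _ in splits])
    y_tr = np.concatenate([tr[1] for tr, _, _ in splits])
    idx = np.random.default_rng(seed).permutation(len(X_tr))
    return X_tr[idx], y_tr[idx], [te for _, _, te in splits]
```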
#### 5.2.1 Google Cloud Trace 2011 and 2019 The Google Cloud Trace 2011 [12] and 2019 [13] are publicly available datasets published by Google from the Google Cloud Platform and contain details about the resource utilisation of the cluster cells. In particular, Google Cloud Trace 2011 is composed of 29 days of resource usage collected in May 2011 from 12,500 machines in a single cluster cell, while Google Cloud Trace 2019 is composed of 29 days of data from 8 different cluster cells distributed around the world with around 10,000 machines for each cell. While Google Cloud Trace 2011 is preprocessed offline, the Google Cloud Trace 2019, about 2.4TiB compressed, is preprocessed using Google BigQuery. For each trace, we create a time series dataset that includes the average CPU and average memory usage for all the machines with a 5-minute interval, as done in other works on workload forecasting in cloud computing [38, 39, 40], with 8352 data points in total. Missing records are neglected for simplicity. Fig. 3: Training scenarios and transfer learning For the tasks that run only partially in a 5-minute window, we multiply the average resource by a weight corresponding to the fraction of the window in which the task is in execution. Data is finally scaled to the range [0, 1] using a MinMax scaling strategy to speed up the convergence of the training. #### 5.2.2 Alibaba Cluster Trace 2018 and 2020 Consistently with the preprocessing phase for the Google Cloud Trace, we preprocessed the Alibaba Cluster Trace 2018 [14] and 2020 [15]. The 2018 version includes the workload history for CPU and memory of about 4,000 machines over 8 days. The 2020 version is a longer trace of about two months from about 1,800 machines that contain over 6,500 GPUs. From this trace, we compile two datasets, one related to the CPU and memory usage and one for the GPU and GPU memory usage. As for the Google Cloud Trace, the average resource usage is aggregated in windows of 5 minutes each. Tasks that run only partially in the specified time interval are weighted with the fraction of time in which the task is running. Missing records are neglected for simplicity, and the data are scaled in the range [0, 1]. ### _Point Estimate Accuracy_ In this section, we compare the models in terms of prediction errors. The metrics used for this evaluation are MSE and Mean Absolute Error (MAE), as usually done in time series forecasting approaches. In the case of HBNN and LSTMD models, the error is computed w.r.t. the mean of the predicted distribution, while for the LSTM, the error is calculated based on the point prediction. Tables I and II show the results for the four combinations of single/multiple datasets and univariate/bivariate for CPU and memory demand, respectively. For each model combination, the average MSE and MAE are computed based on the results achieved on the twelve traces. As confirmed by previous findings [41], learning patterns from multivariate time series is very hard, especially when the size of the training data is small, when there is no strong correlation between the time series [42] and when the focus is on short-term predictions [43]. In our traces, we analyse the Pearson correlation between the CPU/GPU and memory demand, finding no strong correlation between the time series and determining that homoscedasticity holds according to the Breusch-Pagan test [44]. We can appreciate the improvement achieved by training the model using multiple dataset traces. 
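A sketch of the windowing step described in the two dataset subsections above is given below, assuming a task-level table with start and end timestamps (in seconds) and an average usage column. The column names are illustrative, and whether the released datasets sum or average the weighted usage over machines is a detail of the original pipeline.

```python
import numpy as np
import pandas as pd

WINDOW_S = 300  # 5-minute windows

def usage_time_series(tasks: pd.DataFrame, t_start: float, t_end: float) -> pd.Series:
    """Aggregate task-level usage into 5-minute windows, weighting tasks that run
    only partially in a window by the fraction of the window they cover."""
    edges = np.arange(t_start, t_end + WINDOW_S, WINDOW_S)
    totals = np.zeros(len(edges) - 1)
    for task in tasks.itertuples():
        first = max(0, int((task.start - t_start) // WINDOW_S))
        last = min(len(totals) - 1, int((task.end - t_start) // WINDOW_S))
        for i in range(first, last + 1):
            lo, hi = edges[i], edges[i + 1]
            overlap = max(0.0, min(task.end, hi) - max(task.start, lo))
            totals[i] += task.avg_usage * overlap / WINDOW_S
    series = pd.Series(totals, index=pd.to_datetime(edges[:-1], unit="s"))
    # MinMax scaling to [0, 1], as done before training
    return (series - series.min()) / (series.max() - series.min())
```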
All the bivariate models improve their performance compared to the univariate case. We can see that the HBNN model is the one which benefits most from more training data. The model trained on multiple datasets does not require an extra tuning phase on the specific dataset we want to make the prediction: on the contrary, MSE and MAE metrics get worse if we apply FT. Overall the LSTMD-based models are the ones that achieve the best score in terms of these metrics. On the contrary, the S-B-HBNN failed to converge; the S-B-LSTM struggles and does not achieve the same performance as the univariate, while S-B-LSTM worsens to a lesser extent. However, for each possible combination, except for the single bivariate models, the models have the same accuracy according to the statistical difference test for time series forecasting models at 95% of confidence level (Diebold-Mariano Test [45]). For this reason, we cannot limit the analysis to these metrics. Tables III and IV show the results of the average MSE and MAE, respectively, for the TL approach. Again, the average is w.r.t. the clusters used to train the models. We can see the importance of training the model on multiple traces. A pretrained network on similar datasets is more impactful than training on the specific dataset we want to make the prediction, showing the model's TL capabilities when predicting a probability distribution. Moreover, the fine-tuning process never improves the performance of the pretrained models. We believe this is because the model achieves a local optimum that can hardly be improved with fine-tuning on much smaller datasets than those used to pretrain it. Specifically, the TL approach is the one that degrades the accuracy less when FT is applied. Among the considered training scenarios, M-B-HBNN is the one that achieves the best performance. We believe this is because when more data is available, the network has the chance to learn more patterns from the time series used in training due to a bigger and richer training set. ### _Impact of Prediction on QoS_ This section evaluates the models using the metrics defined in [11]. The goal is to assess the quality of the prediction w.r.t. a target QoS level that the providers should ensure to the customers. In particular, these metrics are: * Success Rate (SR): the percentage of future demand within the confidence interval * Overprediction (OP): the total amount of overprediction defined as the difference between the upper bound of the prediction and the real demand for the requests within the confidence interval * Underprediction (UP): the total amount of underprediction defined as the difference between the real demand and the upper bound of the prediction for the requests greater than the upper bound * Total Predicted Resources (TPR): the sum of all the upper bounds of the predictions In the case of HBNN and LSTMD models that predict a probability distribution, these scores are computed w.r.t. the upper bound of the confidence interval with a confidence level of 95%, 97% and 99%. To compute an interval from a point estimate, we can predict the output of the network plus a fixed threshold, e.g. 5%, as done in [9, 10]. The HBNN and LSTMD are compared to their LSTM counterpart with a fixed threshold s.t. its accuracy is close to the SR of the compared model. Table V lists all the results for all the possible combinations of models at the aforementioned confidence levels. 
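The following is a small sketch of how these four quantities can be computed from a Gaussian forecast with mean `mu` and standard deviation `sigma`; using a one-sided upper bound obtained from the Normal quantile is an assumption about the exact convention adopted for the confidence interval.

```python
import numpy as np
from scipy.stats import norm

def qos_metrics(y_true, mu, sigma, confidence=0.95):
    """Success Rate, Overprediction, Underprediction and Total Predicted Resources
    computed from the upper bound of the predicted distribution."""
    upper = mu + norm.ppf(confidence) * sigma   # upper bound at the target confidence
    within = y_true <= upper
    sr = 100.0 * np.mean(within)                          # SR (%)
    op = float(np.sum(upper[within] - y_true[within]))    # total overprediction
    up = float(np.sum(y_true[~within] - upper[~within]))  # total underprediction
    tpr = float(np.sum(upper))                            # total predicted resources
    return sr, op, up, tpr
```

For the LSTM baseline, the same bookkeeping applies once `upper` is taken to be the point prediction increased by the fixed threshold.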
We also draw a graphical representation of the total predicted resources versus the success rate for CPU demand in Fig. 4. The graph is drawn by varying the confidence level between 90% and 99.5% and computing the SR and TPR, respectively. If one curve is above the other, the model achieves better performance because, with the same SR, it predicts a lower TPR. On the other hand, with the same TPR, it reaches a higher SR, achieving a higher number of correctly forecasted values within the upper bound of the distribution but with a lower amount of total predicted resources. For the readability of the graph, we removed the plot of the single bivariate models, which achieve poor performance under this metric. From the aforementioned table and figure, we can see that the LSTMD and HBNN models consistently outperform the LSTM counterpart, with HBNN performing well for the CPU and LSTMD performing better for the memory prediction and showing the advantage of predicting the probability. Training a model with multiple datasets compared to the model trained with one single dataset has more advantages, despite the single univariate model seeming superior in the case of memory prediction at a 99% confidence level. Again, it is clear that the univariate prediction is an easier task compared to the bivariate version. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \cline{3-10} \multicolumn{1}{c}{} & \multicolumn{4}{c||}{**CPU/GPU**} & \multicolumn{4}{c|}{**Memory**} \\ \hline **Target QoS** & **Model** & **SR** & **OP** & **UP** & **TPR** & **SR** & **OP** & **UP** & **TPR** \\ \hline \hline \multirow{9}{*}{95\%} & S-U-LSTM & 93.86 & _181.68_ & 5.31 & _1181.36_ & 93.62 & _118.94_ & 3.69 & _1061.05_ \\ \cline{2-10} & S-U-HBNN & 93.86 & 181.83 & _5.09_ & 1181.72 & 93.54 & 128.57 & _3.61_ & 1070.75 \\ \cline{2-10} & \hline \hline S-U-LSTM & 93.14 & 171.98 & 5.64 & 1171.32 & 93.26 & _111.61_ & 3.87 & _1053.54_ \\ \cline{2-10} & S-U-LSTM & 93.1 & **166.55** & 6.36 & _1165.17_ & 93.26 & 117.54 & _3.79_ & 1059.55 \\ \hline \hline \multirow{9}{*}{95\%} & M-U-LSTM & 91.45 & 157.07 & 7.62 & 1154.44 & 95.74 & 146.99 & _2.57_ & 1090.21 \\ \cline{2-10} & M-U-HBNN & 91.41 & **154.71** & _7.02_ & **1152.67** & 95.78 & _143.77_ & 2.85 & _1086.72_ \\ \cline{2-10} & \hline M-U-LSTM & 94.62 & 196.45 & 4.30 & 1197.14 & 95.18 & _137.45_ & 3.07 & _1080.18_ \\ \cline{2-10} & M-U-LSTM & 94.62 & 197.69 & 4.65 & 1198.03 & 95.22 & 141.04 & 2.86 & 1083.98 \\ \hline \hline S-B-LSTM & 91.93 & 863.11 & 32.83 & 1835.26 & 91.89 & 676.14 & 12.58 & 1609.35 \\ \cline{2-10} & S-B-HBNN & 91.93 & 825.14 & 29.46 & 1800.66 & 91.89 & 675.94 & 13.53 & _1608.21_ \\ \hline \hline S-B-LSTM & 92.61 & 846.79 & 30.21 & 1821.57 & 92.45 & 679.73 & 13.18 & 1612.59 \\ \cline{2-10} & S-LSTM & 92.61 & 895.31 & 30.76 & 1870.14 & 92.45 & 686.47 & _11.70_ & 1620.56 \\ \hline \hline M-B-LSTM & 95.66 & 227.28 & 3.91 & 1228.35 & 97.43 & 188.17 & _1.28_ & 1132.69 \\ \cline{2-10} & M-B-HBNN & 95.7 & 216.83 & _3.81_ & _1218.01_ & 97.39 & 184.19 & 1.51 & _1128.48_ \\ \hline \hline M-B-LSTM & 96.95 & 283.94 & **2.46** & 1286.46 & 99.0 & 229.80 & **0.54** & 1175.06 \\ \cline{2-10} & M-B-LSTM & 96.99 & 259.69 & 2.70 & 1261.98 & 99.0 & 229.80 & 0.56 & _1175.03_ \\ \hline \hline S-U-LSTM & 95.18 & 201.68 & 4.13 & 1202.54 & 95.02 & **131.64** & 293 & **1074.50** \\ \cline{2-10} & S-U-HBNN & 95.22 & 202.32 & 3.92 & 1203.89 & 95.06 & 143.98 & 2.66 & 1087.12 \\ \hline \hline S-U-LSTM & 94.58 & **192.27** & 4.37 & **1192.88** & 94.86 & **125.88** & 2.94 & **1068.74** \\ 
\cline{2-10} & S-U-LSTM & 94.54 & 194.59 & 4.51 & 1195.06 & 94.9 & 130.45 & 3.00 & 1073.25 \\ \hline \hline M-U-LSTM & 93.82 & 183.61 & 5.51 & 1183.08 & 96.95 & 164.98 & 1.89 & 1108.89 \\ \cline{2-10} & M-U-HBNN & 93.82 & **180.01** & 5.04 & **1179.95** & 96.91 & 162.14 & 2.22 & _1105.72_ \\ \hline \hline M-U-LSTM & 95.95 & 219.12 & 3.16 & 1220.95 & 96.39 & 154.76 & 2.36 & _1098.19_ \\ \cline{2-10} & M-U-LSTM & 95.99 & _216.67_ & 3.70 & 1217.96 & 96.39 & 157.76 & 2.14 & 1101.42 \\ \hline \hline S-B-LSTM & 93.42 & 957.91 & 25.50 & 1937.39 & 94.78 & 755.57 & 7.32 & 1694.05 \\ \cline{2-10} & S-B-HBNN & 93.46 & _918.34_ & 22.26 & 1901.07 & 94.78 & 767.66 & 7.13 & 1706.43 \\ \hline \hline S-B-LSTM & 93.58 & 940.42 & 23.47 & 1921.93 & 94.58 & 765.92 & _7.35_ & 1704.37 \\ \cline{2-10} & S-B-LSTM & 93.58 & 964.9 & 25.02 & 1944.87 & 94.58 & 734.34 & 8.52 & 167.1.03 \\ \hline \hline M-B-LSTM & 96.75 & 254.86 & 2.85 & 1256.99 & 98.03 & 200.34 & **1.00** & 1145.14 \\ \cline{2-10} & M-B-HBNN & 96.75 & 242.97 & **2.81** & 1245.15 & 98.03 & 206.17 & 1.03 & 1150.93 \\ \hline \hline M-B-LSTM & 97.75 & 316.68 & **1.59** & 1320.07 & 99.44 & 258.94 & 0.31 & 1204.43 \\ \cline{2-10} & M-B-LSTM & 97.75 & 293.65 & 1.78 & 1296.85 & 99.44 & 276.75 & **0.19** & 1222.36 \\ \hline \hline S-U-LSTM & 97.03 & 247.30 & 2.32 & 1249.87 & 97.59 & **171.30** & 1.49 & **1115.60** \\ \cline{2-10} & S-U-HBNN & 97.03 & 241.64 & 2.31 & 1244.31 & 97.55 & 173.76 & 1.55 & 1118.02 \\ \hline \hline S-U-LSTM & 96.35 & 231.27 & 2.67 & 1233.59 & 97.07 & **153.41** & 1.77 & **1097.44** \\ \cline{2-10} & S-U-LSTM & 96.31 & **229.15** & 2.95 & **1231.18** & 97.11 & 159.16 & 1.81 & 1103.15 \\ \hline \hline M-U-LSTM & 96.47 & 228.65 & 3.22 & **1230.41** & 98.11 & 195.35 & 1.12 & _1140.03_ \\ \cline{2-10} & M-U-HBNN & 96.47 & 229.15 & 2.67 & 1231.46 & 98.07 & 197.21 & 1.41 & 1141.60 \\ \cline{2-10} & M-U-LSTM & 97.79 & 262.69 & 1.75 & show the benefits of having a pretrained model that can be used to predict the datasets that have never been seen before by the model. To the best of our knowledge, this is the first time this technique has been applied to cloud workload prediction. Interestingly, training the model only on the datasets from the same distribution (eight clusters from GC2019) allows the training to achieve better performance in terms of these metrics. Overall, we believe that training the model with multiple datasets leads to many benefits in terms of performance, also in the case of prediction on unseen datasets in the training set. More available data allows the model to capture more patterns shared among similar traces. Also, it will enable the service provider to accelerate the deployment process by avoiding the need to retrain new models from scratch or wait for new data when additional clusters join the cloud computing system. ### _Accuracy of the Predicted Uncertainty_ In this section, we evaluate the prediction accuracy of the probabilistic models HBNN and LSTMD. To do so, we plot the targeted confidence levels versus the success rate achieved by the models by varying the confidence level between 90% and 99.5%. A perfect model would achieve an SR equal to the targeted confidence level. This evaluation does not apply to LSTM, which predicts a point estimate, so we should compute a fixed threshold differently. Moreover, we compute the MSE and MAE of the curve w.r.t. the line \(y=x\) to aggregate the plot results in a single numerical value to evaluate the overall performance. The results are depicted in Fig. 
7 and Tables VII and VIII for MSE and MAE, respectively. Although w.r.t. these metrics, the M-U-LSTMD is the model that overall achieves the best accuracy, there are interesting differences between each combination of the model, with the HBNN outperforming the LSTMD counterpart in the case of S-U and M-B versions. Similarly, we show the results for the TL experiments in Table IX and Fig. 8. From the accuracy point of view, we can see the benefits of training a model using multiple datasets and more data, even for prediction on unseen datasets. Again, the FT degrades the quality of the prediction also from an accuracy point of view. Moreover, in contrast with what we discussed in the previous experiments, we do not see any advantage in training the model using datasets from the same cluster type. However, the models generalize better when more datasets are used for the training. Furthermore, we should also consider that MSE and MAE are symmetric metrics. Regarding the QoS, we should prefer models that achieve a higher SR than the target confidence level rather than a lower one, even if the MSE and MAE are lower. For instance, we should favour a model that \begin{table} \begin{tabular}{|c|c|c|c|c|c||c|c|c|c|} \cline{3-10} \multicolumn{1}{c|}{} & \multicolumn{4}{c||}{**CPU/GPU**} & \multicolumn{4}{c|}{**Memory**} \\ \hline **Target QoS** & **Model** & **SR** & **OP** & **UP** & **TPR** & **SR** & **OP** & **UP** & **TPR** \\ \hline \hline \multirow{8}{*}{95\%} & All & **94.58** & 206.29 & 5.08 & 1206.2 & 95.74 & 156.71 & 2.41 & 1100.1 \\ \cline{2-10} & All FT & 87.47 & 167.17 & 11.69 & 1160.46 & 85.87 & 92.17 & 7.83 & 1030.13 \\ \cline{2-10} & All-but-one & 95.7 & 216.83 & 3.81 & 1218.01 & 97.39 & 184.19 & 1.51 & 1128.48 \\ \cline{2-10} & All-but-one & FT & 94.1 & 204.32 & 4.71 & 1204.6 & **95.14** & 160.2 & 2.68 & 1103.32 \\ \cline{2-10} & CG19 & 98.55 & 108.99 & 0.38 & 1126.04 & 97.18 & 116.24 & 1.26 & 1032.09 \\ \cline{2-10} & GC19 FT & 77.7 & 49.82 & 8.55 & 1058.69 & 89.36 & 74.61 & 3.84 & 987.87 \\ \cline{2-10} & Random & 92.29 & 190.49 & 6.53 & 1188.94 & 85.75 & 559.49 & 28.22 & 1477.07 \\ \hline \hline \multirow{8}{*}{97\%} & All & 95.95 & 230.14 & 3.97 & 1231.16 & **97.11** & 175.27 & 1.75 & 1119.31 \\ \cline{2-10} & All FT & 91.01 & 197.84 & 145.19 & 90.85 & 112.9 & 5.17 & 1053.52 \\ \cline{2-10} & All-but-one & **96.75** & 242.97 & 2.81 & 1245.15 & 98.03 & 206.17 & 1.03 & 1150.93 \\ \cline{2-10} & All-but-one & FT & 95.46 & 227.05 & 3.5 & 1228.53 & 96.15 & 176.18 & 2.03 & 1179.95 \\ \cline{2-10} & CG19 & 99.42 & 129.55 & 0.23 & 1143.51 & 98.19 & 133.11 & 0.92 & 1049.29 \\ \cline{2-10} & CG19 & 84.0 & 60.71 & 5.94 & 1072.2 & 92.9 & 87.48 & 2.61 & 1001.97 \\ \cline{2-10} & Random & 93.98 & 212.67 & 4.96 & 1212.69 & 89.64 & 626.76 & 18.71 & 1553.85 \\ \hline \hline \multirow{8}{*}{99\%} & All & 97.19 & 275.76 & 2.45 & 1278.3 & 98.35 & 210.79 & 1.0 & 1155.59 \\ \cline{2-10} & All FT & 94.5 & 255.78 & 4.47 & 1256.29 & 96.07 & 154.39 & 2.49 & 1097.7 \\ \cline{1-1} \cline{2-10} & All-but-one & 97.95 & 292.95 & 1.55 & 1296.39 & **99.04** & 248.08 & 0.54 & 1193.33 \\ \cline{1-1} \cline{2-10} & All-but-one & FT & 97.35 & 270.77 & 2.01 & 1273.74 & 97.71 & 206.75 & 1.2 & 1151.35 \\ \cline{1-1} \cline{2-10} & GC19 & **99.86** & 158.17 & 0.14 & 1175.46 & 98.84 & 165.23 & 0.54 & 1081.79 \\ \cline{1-1} \cline{2-10} & CG19 & 92.76 & 38.18 & 2.89 & 1097.72 & 96.6 & 112.81 & 1.3 & 1028.61 \\ \cline{1-1} \cline{2-10} & Random & 96.31 & 255.44 & 2.87 & 1257.56 & 94.5 & 760.59 & 7.53 & 1698.85 \\ \hline 
\end{tabular} \end{table} TABLE VI: CPU and Memory allocation statistics of the models Fig. 6: Total Predicted CPU for transfer learning experiments achieves a 96% SR compared to 94% w.r.t. a 95% confidence level, despite the score being the same in terms of MSE and MAE. For this reason, asymmetric metrics would be more suitable to evaluate the model's accuracy combined with the QoS we aim to provide to the customers. ### _Runtime Performance_ The applicability of these DL models to real-world scenarios strongly depends on the time necessary for training and deploying the model in a cloud resource management setup. The three critical aspects in determining the usability of DL models are the training time, the fine-tuning time (i.e. how often we refresh the network weights with newly available data) and the inference time. The training time depends, for instance, on the size of the training set; the fine-tuning time depends on how often we retune the weights of the deep learning model, and the inference time is related to the forecast time once the model is trained. We measure these three metrics by varying the size of the training set over 20%, 40%, 60% and 80% of the data, by changing the number of 5-minute steps between consecutive fine-tunings among 6, 12, 18 and 24, which correspond to update frequencies of 30, 60, 90 and 120 minutes, and by measuring the inference time for predicting one sample. The results are computed as an average of 10 runs for all the preprocessed traces. Table X lists all the time measurements in seconds for all the combinations of the models. We omit the measurements for the TL part, where the speed of convergence of the network is strictly correlated to the size of the training set and which applies only to the HBNN model (the DL architecture is the same). As we can see, the models take more or less the same time for the fine-tuning and inference steps, with the HBNN often being the faster architecture. The training time, instead, varies with the type of training and the prediction. The HBNN is the slowest model, except for the univariate version trained with multiple datasets. However, the training phase is generally infrequent and done offline, e.g. overnight, so it is not a critical factor. At the same time, fine-tuning and inference are the most frequent actions in resource management operations. We would also like to underline that the results for the S-U versions refer to the training of a single (trace, resource) pair, which means that we need to run this phase 24 times (12 clusters \(\times\) 2 resources). This also applies to the S-B (12 times) and M-U versions (2 times). 
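As an illustration of the measurement protocol above, the sketch below times the three phases for a compiled Keras model; the epoch counts are illustrative assumptions rather than the values used in the paper.

```python
import time
import numpy as np

def mean_runtime(fn, repeats=10):
    """Average wall-clock time of fn() over several runs."""
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return float(np.mean(times))

def runtime_profile(model, X_train, y_train, X_recent, y_recent, x_one):
    """Training on a training-set fraction, fine-tuning on newly available windows,
    and inference on a single input sample."""
    train_t = mean_runtime(lambda: model.fit(X_train, y_train, epochs=100, verbose=0), repeats=1)
    tune_t = mean_runtime(lambda: model.fit(X_recent, y_recent, epochs=5, verbose=0))
    infer_t = mean_runtime(lambda: model.predict(x_one[np.newaxis, ...], verbose=0))
    return train_t, tune_t, infer_t
```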
With this runtime analysis, we observe that all the models can be practically deployed in \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \cline{3-5} \multicolumn{1}{c}{} & \multicolumn{2}{c|}{**Univariate**} & \multicolumn{2}{c|}{**Bivariate**} \\ \cline{3-5} \multicolumn{1}{c}{} & \multicolumn{1}{c|}{**CPU**} & \multicolumn{1}{c|}{**Memory**} & \multicolumn{1}{c|}{**CPU**} & \multicolumn{1}{c|}{**Memory**} \\ \cline{3-5} \multicolumn{1}{c}{} & \multicolumn{1}{c|}{**GPU**} & \multicolumn{1}{c|}{**GPU**} & \multicolumn{1}{c|}{**Memory**} & \multicolumn{1}{c|}{**GPU**} & \multicolumn{1}{c|}{**Memory**} \\ \hline \multirow{2}{*}{**Single**} & **LSTM** & 2.96 & 3.07 & 8.15 & 6.51 \\ \cline{2-5} & **HBNN** & 1.69 & 1.80 & 11.80 & 11.81 \\ \hline \hline \multirow{2}{*}{**Multi**} & **LSTM** & **1.24** & **1.14** & 6.89 & 21.05 \\ \cline{2-5} & **HBNN** & 12.07 & 2.42 & 1.97 & 8.66 \\ \hline \end{tabular} \end{table} TABLE VII: Average MSE comparison for resource demand accuracy \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \cline{3-5} \multicolumn{1}{c}{} & \multicolumn{2}{c|}{**Univariate**} & \multicolumn{2}{c|}{**Bivariate**} \\ \cline{3-5} \multicolumn{1}{c}{} & \multicolumn{1}{c|}{**CPU**} & \multicolumn{1}{c|}{**Memory**} & \multicolumn{1}{c|}{**CPU**} & \multicolumn{1}{c|}{**Memory**} \\ \cline{3-5} \multicolumn{1}{c}{} & \multicolumn{1}{c|}{**GPU**} & \multicolumn{1}{c|}{**Memory**} & \multicolumn{1}{c|}{**GPU**} & \multicolumn{1}{c|}{**Memory**} \\ \hline \multirow{2}{*}{**Single**} & **LSTM** & 1.47 & 1.67 & 2.66 & 2.34 \\ \cline{2-5} & **HBNN** & 1.10 & 1.27 & 3.32 & 3.03 \\ \hline \hline \multirow{2}{*}{**Multi**} & **LSTM** & **0.98** & **0.92** & 2.08 & 3.94 \\ \cline{2-5} & **HBNN** & 3.34 & 1.25 & 1.11 & 2.39 \\ \hline \end{tabular} \end{table} TABLE VIII: Average MAE comparison for resource demand accuracy Fig. 8: Memory prediction accuracy for transfer learning experiments Fig. 7: Memory prediction accuracy real-world scenarios, with the advantage of having a model trained with multiple datasets, which does not require any parallelization of the systems for each possible cluster cell. ## 6 Conclusion and Future Work Probabilistic forecasting and uncertainty modelling provide a broader picture to the resource manager in a cloud computing environment, helping the scheduling decision process. In this paper, we analysed the performance of probabilistic models that capture the uncertainty in the prediction by forecasting a probability distribution of the future demand. We evaluated univariate and bivariate models for predicting processing units and memory usage by training the models with one or more related time series from Google Cloud and Alibaba traces. We also investigated the TL technique and the generalisation capabilities of the models for related but unseen historical workload data. We showed that the training phase using multiple datasets allows the models to achieve better performance regarding resource prediction efficiency and accuracy w.r.t. a target QoS level and to reach good results for more challenging tasks such as bivariate forecasting. Moreover, the TL method allows predictions on unseen traces in the training phase, opening the chance of not retraining from scratch a model when new cluster cells are available or waiting for enough historical data. Finally, we analysed the runtime performance of the models for their deployment in practical applications. This analysis showed that the models could be practically applied to predict the demand for future resources. 
In future, we would like to analyse further the performance of the HBNN model in the case of different loss functions, including non-symmetrical loss, to address the target QoS level in the training phase and compare it to a full Bayesian version of the network, also in the case of multi-step-ahead prediction. We will also focus on extending the TL concept to a machine-level workload forecast to understand and analyse the amount of training data and their representativeness necessary for the model to reach high generalisation capabilities. Finally, the predictions can be exploited for scheduling, resource allocation, scaling and workload balancing problem for managing VMs in a cloud computing environment, integrating the predictive model and the resource allocator in the same pipeline. ## Acknowledgments This work was conducted with the financial support of Science Foundation Ireland under Grant Nos. 18/CRT/6223, 16/RC/3918 and 12/RC/2289-P2. which are co-funded under the European Regional Development Fund; by TAILOR (GA No. 952215), a project funded by EU Horizon 2020 research and innovation programme, and by the Google Cloud Research Credits program with the award GCP203677602. The authors thank Dr Diego Carraro from Insight Centre of Data Analytics, University College Cork, for helpful conversation and feedback. Andrea Rossi is the corresponding author.
2306.01927
Circular geodesics in the field of double-charged dilatonic black holes
A non-extreme dilatonic charged (by two ``color electric'' charges) black hole solution is examined within a four-dimensional gravity model that incorporates two scalar (dilaton) fields and two Abelian vector fields. The scalar and vector fields interact through exponential terms containing two dilatonic coupling vectors. The solution is characterized by a dimensionless parameter $a$ $(0 < a < 2)$, which is a specific function of dilatonic coupling vectors. The paper presents solutions for timelike and null circular geodesics that may play a crucial role in different astrophysical scenarios, including quasinormal modes of various test fields in the eikonal approximation. For $a = 1/2, 1, 3/2, 2$, the radii of the innermost stable circular orbit are presented and analyzed.
Kuantay Boshkayev, Gulnara Suliyeva, Vladimir Ivashchuk, Ainur Urazalina
2023-06-02T21:48:36Z
http://arxiv.org/abs/2306.01927v2
# Circular geodesics in the field of double-charged dilatonic black holes ###### Abstract A non-extreme dilatonic charged (by two "color electric" charges) black hole solution is considered in a 4-dimensional gravity model including two scalar (dilaton) fields and two Abelian vector fields. Scalar and vector fields interact through exponential terms which contain two dilatonic coupling vectors. The solution is governed by a dimensionless parameter \(a\) (\(0<a<2\)), which is a certain function of dilatonic coupling vectors. The paper presents solutions for timelike and null circular geodesics which may play an important role in different astrophysical problems, including quasinormal modes of various test fields in the eikonal approximation. For \(a=1/2,1,3/2,2\) the innermost stable circular orbit (ISCO) radii are presented and analysed. black holes, timelike geodesics, null geodesics, scalar fields, Abelian vector fields, ISCO. ## 1 Introduction After the discovery of gravitational waves in 2016, the interest in the physics of stellar-mass black holes has begun to grow rapidly [1]. Gravitational waves were originally postulated by Einstein over a century ago. Their long search led to the creation of huge super-sensitive laser interferometers LIGO, VIRGO and others, and the development of the latest highly precise methods of detection and data analysis [2]. Observations and studies of the motion of stars at the heart of the Milky Way galaxy have confirmed the existence of a supermassive black hole [3; 4; 5]. These surveys provide information about the star clusters near the galactic center and the black hole mass which is used by researches to test the predictions of general relativity [6]. In addition, the imaging process of the shadow of the supermassive black holes at the center of the M87 galaxy and in the Milky Way galaxy has required remarkable ingenuity and cohesion among scientists around the world [7; 8]. For these purposes, our entire planet was used as one giant radio telescope, consisting of separate groups of telescopes scattered in all continents. The presence of the image of a black hole shadow and its analysis verify the correctness and reliability of general relativity. All discoveries over the past 60 years testify to the significance and relevance of research in this direction. At the same time, in addition to general relativity, there are various modified and extended theories of gravity, where alternative black holes with extra parameters exist [9; 10; 11; 12; 13; 14; 15]. However, to date, within the range of observational errors, those black holes cannot be for 100% distinguished from ordinary Schwarzschild, Reissner-Nordstrom, and Kerr black holes [16]. Commonly observational data is used as an important tool for constraining black hole parameters in those theories [17; 18; 19]. This fact makes it possible to examine black holes with "colored charges" when there are scalar fields. Classical Schwarzschild and Reissner-Nordstrom black holes, as well as various models of dilatonic black holes with two colored electric charges are considered here. The dilaton is a hypothetical scalar field particle. It appears in the metric through the introduction of coupling vectors (constants), which, in turn, define a new particular subclass of black holes. In limiting cases, dilatonic solutions reduce to the Schwarzschild and Reissner-Nordstrom solutions, depending on the values of the coupling vectors. 
Since the dilaton has not been discovered yet, it would be interesting to investigate effects associated with its presence. In this paper we consider the possibility of dis tinguishing ordinary astrophysical black holes from dilatonic ones. In this regard, the main objective of the paper is to study the motion of test particles and photons in circular orbits in the gravitational field of astrophysical and dilatonic black holes with different color charges and coupling vectors. To achieve this goal, the following problems are posed: * Obtaining geodesic equations in the field of dilatonic black holes using the Lagrange formalism. * Calculation of the angular momentum, the energy, the effective potential of test particles and photons, and the radii of the innermost stable circular orbits in the field of dilatonic black holes. The solutions of dilatonic dyonic black holes with an arbitrary coupling constant and a canonical scalar field have been considered in Refs. [20; 21; 22; 23]. The solutions were obtained up to the solutions of two master equations for moduli functions. Some physical parameters of the solutions were also obtained: gravitational mass, scalar charge, Hawking temperature, black hole area entropy, and parameterized post-Newtonian (PPN) parameters. While in Ref. [20; 21; 22] only the physical characteristics of black holes were studied, in Ref. [23] quasi-normal modes of a massless test scalar field were studied in the background of a gravitational field for a non-extremal dilatonic dyonic black hole. Timelike and null geodesics, for example circular ones, play an important role in various astrophysical context, including accretion disks, quasiperiodic oscillations, quasinormal modes of various test fields in the eikonal approximation [24] and shadows of supermassive black holes [25; 26]. These results and studies will allow one to distinguish ordinary black holes from dilatonic ones [17]. The novelty of the work resides in the fact that the geodesics of test particles in the gravitational field of dilatonic black holes with two scalar fields and two-color electric charges will be studied for the first time. In the past, some of us have investigated geodesics in the context of standard black holes, naked singularities, and spinning deformed relativistic compact objects [27; 28; 29; 30]. In Ref. [31], geodesics around rotating black hole mimickers are studied, given by the exact solution of the stationary and axially symmetric field equations in vacuum, known as the \(\delta\)-Kerr metric. The authors study its optical properties based on a ray-tracing code for photon motion, characterize the apparent shape of the shadow of a compact object, and compare it with a Kerr black hole. In order to obtain qualitative estimates related to the observed shadow of a supermassive compact object in the M87 galaxy, the authors are guided by the values of the object's spin a and observation inclination angle close to the measured values. It has been shown that, based on only one set of shadow edge observations, it is not possible to rule out the \(\delta\)-Kerr solution as a viable source of geometry outside the compact object. It was shown in [32] that from a geometric point of view it is impossible to distinguish electrically and magnetically charged Reissner-Nordstrom black holes. 
One of the ways to describe the differences between these solutions is to study the dynamical motion of charged test particles near a charged black hole and to examine the effect of the charge coupling parameters on the stability of circular orbits. The authors investigate the synchrotron radiation of charged particles accelerated by a charged black hole and estimate the intensity of the relativistic radiation of charged particles. Recently in Ref. [33] the authors have studied closed photon orbits in spherically-symmetric static solutions of supergravity theories, a Horndeski theory, and a theory of quintessence. These orbits lie in what is called a photon sphere (anti-photon sphere) if the orbit is unstable (stable). It was shown that in all the asymptotically flat solutions examined that admit a regular event horizon, and whose energy-momentum tensor satisfies the strong energy condition, there is one and only one photon sphere outside the event horizon. Note that while in [34; 35; 36; 37] the features of dilatonic black holes are studied, in [31; 32; 38] various solutions to the field equations are considered, which in the limiting case describe ordinary black holes. Refs. [31; 32; 33; 38] also study geodesics, which may differ from those in the field of ordinary black holes. There are some works studying the motion of test particles and photons around static [39; 40; 41; 42; 43] and rotating [44; 45] dilatonic black holes in different theories of gravity. To the knowledge of the authors, geodesics around dilatonic black holes with two scalar fields and two vector fields have not been studied elsewhere in the literature. In this paper, a solution for a charged dilaton black hole (BH) (it is S-dual to the dyonic-like solution from [23]) and particular solutions for null geodesics are considered. It should be noted that there is some interest in spherically symmetric solutions; see Refs. [46; 47; 48; 49] and references therein. These solutions appear in gravitational models with scalar fields and antisymmetric forms. In our analyses, we follow the methodology of [50], where the motion of neutral test particles is studied in the field of a Reissner-Nordstrom black hole. The paper is organised as follows: in Section 2 we present charged black hole solutions with two scalar (dilaton) fields and two Abelian vector fields and consider their main characteristics. In Section 3 we examine the geodesics of neutral test particles and photons, derive the expressions for the energy, angular momentum and effective potential, study their behavior and analyze the stability of circular geodesics. In Section 4 we summarise our conclusions and discuss perspectives.
## 2 Charged black hole solution The action of a model containing two scalar fields, 2-form and dilatonic coupling vectors is given by \[S=\frac{1}{16\pi G}\int d^{4}x\sqrt{|g|}\bigg{\{}R[g]-g^{\mu\nu} \partial_{\mu}\vec{\varphi}\partial_{\nu}\vec{\varphi}\] \[\qquad-\frac{1}{2}e^{2\vec{\lambda}_{1}\vec{\varphi}}F^{(1)}_{ \mu\nu}F^{(1)\mu\nu}-\frac{1}{2}e^{2\vec{\lambda}_{2}\vec{\varphi}}F^{(2)}_{ \mu\nu}F^{(2)\mu\nu}\bigg{\}}, \tag{1}\] where \(g=g_{\mu\nu}(x)dx^{\mu}\otimes dx^{\nu}\) is the metric, \(|g|=|\det(g_{\mu\nu})|\), \(\vec{\varphi}=(\varphi^{1},\varphi^{2})\) is the vector of scalar fields belonging to \(\mathbb{R}^{2}\), \(F^{(i)}=dA^{(i)}=\frac{1}{2}F^{(i)}_{\mu\nu}dx^{\mu}\wedge dx^{\nu}\) is the 2-form with \(A^{(i)}=A^{(i)}_{ii}dx^{\mu}\), \(i=1,2\); \(G\) is the gravitational constant, \(\vec{\lambda}_{1}=(\lambda_{1i})\neq\vec{0}\), \(\vec{\lambda}_{2}=(\lambda_{2i})\neq\vec{0}\) are the dilatonic coupling vectors obeying \[\vec{\lambda_{1}}\neq\vec{\lambda_{2}}, \tag{2}\] and \(R[g]\) is the Ricci scalar. Here and in what follows we set \(G=c=1\) (where \(c\) is the speed of light in vacuum.) We consider a so-called double-charged black hole solution to the field equations corresponding to the action (1) which is defined on the (oriented) manifold \[\mathcal{M}=\mathbb{R}\times(2\mu,+\infty)\times S^{2}, \tag{3}\] and has the following form \[ds^{2} = H^{a}\bigg{\{}-H^{-2a}\left(1-\frac{2\mu}{R}\right)dt^{2} \tag{4}\] \[\qquad\qquad\qquad+\frac{dR^{2}}{1-\frac{2\mu}{R}}+R^{2}d\Omega^ {2}\bigg{\}},\] \[\varphi^{i} = \nu^{i}\ln H, \tag{5}\] with the 2-form defined by \[F^{(1)}=\frac{Q_{1}}{H^{2}R^{2}}dt\wedge dR,\quad F^{(2)}=\frac{Q_{2}}{H^{2} R^{2}}dt\wedge dR, \tag{6}\] where \(Q_{1}\) and \(Q_{2}\) are the (color) electric charges, \(\mu>0\) is the extremality parameter, \(d\Omega^{2}=d\theta^{2}+\sin^{2}\theta d\phi^{2}\) is the canonical metric on the unit sphere \(S^{2}\) (\(0<\theta<\pi\), \(0<\phi<2\pi\)), \(\tau=\sin\theta d\theta\wedge d\phi\) is the standard volume form on \(S^{2}\) and the moduli function is adopted as \[H=1+\frac{P}{R}, \tag{7}\] with \(P>0\) obeying \[P(P+2\mu)=\frac{1}{2}Q^{2}, \tag{8}\] or equivalently \[P=-\mu+\sqrt{\mu^{2}+\frac{1}{2}Q^{2}}. \tag{9}\] All the remaining parameters of the solution are defined as follows 1 Footnote 1: It should be emphasized that for vanishing \(a\to 0\), \(\nu^{i}\to 0\), \(Q_{i}\to 0\) and \(\varphi^{i}\to 0\) and the line element Eq. (4) reduces to the Schwarzschild metric. \[a = \frac{(\vec{\lambda}_{1}-\vec{\lambda}_{2})^{2}}{\Delta}, \tag{10}\] \[\Delta \equiv \frac{1}{2}(\vec{\lambda}_{1}-\vec{\lambda}_{2})^{2}+\vec{\lambda }_{1}^{2}\vec{\lambda}_{2}^{2}-(\vec{\lambda}_{1}\vec{\lambda}_{2})^{2},\] (11) \[\nu^{i} = \frac{\lambda_{1i}\vec{\lambda}_{2}(\vec{\lambda}_{2}-\vec{ \lambda}_{1})+\lambda_{2i}\vec{\lambda}_{1}(\vec{\lambda}_{1}-\vec{\lambda}_{ 2})}{\Delta}, \tag{12}\] \(i=1,2\) and \[Q_{1}^{2}=\frac{\vec{\lambda}_{2}(\vec{\lambda}_{2}-\vec{\lambda}_{1})}{2 \Delta}Q^{2},\quad Q_{2}^{2}=\frac{\vec{\lambda}_{1}(\vec{\lambda}_{1}-\vec{ \lambda}_{2})}{2\Delta}Q^{2}. \tag{13}\] Here the following additional restrictions on dilatonic coupling vectors are imposed \[\vec{\lambda}_{1}(\vec{\lambda}_{1}-\vec{\lambda}_{2})>0,\qquad\vec{\lambda}_ {2}(\vec{\lambda}_{2}-\vec{\lambda}_{1})>0. \tag{14}\] We note that \[\Delta>0, \tag{15}\] is valid for \(\vec{\lambda}_{1}\neq\vec{\lambda}_{2}\). Due to relations (14) and (15) the \(Q_{s}^{2}\) are well-defined. 
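For concreteness, the parameter relations above can be checked numerically. The following is a small sketch (not part of the original paper) that evaluates Eqs. (9)-(13) for an illustrative choice of coupling vectors satisfying the restrictions (14), and verifies the identities (19) and (20); the vectors \(\vec{\lambda}_{1}=(1,0)\) and \(\vec{\lambda}_{2}=(0,1)\) are an assumption made purely for the example.

```python
import numpy as np

# Illustrative coupling vectors (an assumption, not values from the paper);
# they satisfy the restrictions (14): lam1.(lam1-lam2) > 0 and lam2.(lam2-lam1) > 0.
lam1 = np.array([1.0, 0.0])
lam2 = np.array([0.0, 1.0])
mu, Q = 1.0, 0.6   # extremality parameter and net charge, in units where mu = 1

d = lam1 - lam2
Delta = 0.5 * d @ d + (lam1 @ lam1) * (lam2 @ lam2) - (lam1 @ lam2) ** 2      # Eq. (11)
a = (d @ d) / Delta                                                           # Eq. (10)
nu = (lam1 * (lam2 @ (lam2 - lam1)) + lam2 * (lam1 @ (lam1 - lam2))) / Delta  # Eq. (12)
Q1sq = (lam2 @ (lam2 - lam1)) / (2 * Delta) * Q**2                            # Eq. (13)
Q2sq = (lam1 @ (lam1 - lam2)) / (2 * Delta) * Q**2
P = -mu + np.sqrt(mu**2 + 0.5 * Q**2)                                         # Eq. (9)

print(f"a = {a:.4f}  (must lie in (0, 2])")
print(f"nu^2 - a(2-a)/2        = {nu @ nu - a * (2 - a) / 2:.2e}")            # Eq. (19)
print(f"Q1^2 + Q2^2 - (a/2)Q^2 = {Q1sq + Q2sq - 0.5 * a * Q**2:.2e}")         # Eq. (20)
print(f"P/mu = {P / mu:.4f}")
```

For this choice one gets \(a=1\), both identities hold to machine precision, and for \(Q/\mu=0.6\) the sketch reproduces \(P/\mu\approx 0.0863\), the value used in the figures and tables below.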
Note that the restrictions (14) imply relations \(\vec{\lambda}_{s}\neq\vec{0}\), \(s=1,2\), and (2). Indeed, in this case we have the sum of two non-negative terms in (12): \((\vec{\lambda}_{1}-\vec{\lambda}_{2})^{2}>0\) and \[C=\vec{\lambda}_{1}^{2}\vec{\lambda}_{2}^{2}-(\vec{\lambda}_{1}\vec{\lambda}_{ 2})^{2}\geq 0, \tag{16}\] due to the Cauchy-Schwarz inequality. Moreover, \(C=0\) if and only if vectors \(\vec{\lambda}_{1}\) and \(\vec{\lambda}_{2}\) are collinear. Relation (16) implies \[0<a\leq 2. \tag{17}\] For non-collinear vectors \(\vec{\lambda}_{1}\) and \(\vec{\lambda}_{2}\) we get \(0<a<2\) while \(a=2\) for collinear ones. This solution may be verified just by a straightforward substitution into the equations of motion. In addition, in Ref. [23], the definition of gravitational mass was obtained in relation to \(\mu\) and \(P\) parameters: \[M=\mu+\frac{a}{2}P. \tag{18}\] The following relations can be found/verified from Eqs. (10)-(13) \[\vec{\nu}^{2}=\frac{(\vec{\lambda}_{1}-\vec{\lambda}_{2})^{2}(\vec{\lambda}_{ 1}^{2}\vec{\lambda}_{2}^{2}-(\vec{\lambda}_{1}\vec{\lambda}_{2})^{2})}{\Delta^ {2}}=\frac{a(2-a)}{2}, \tag{19}\] and \[Q_{1}^{2}+Q_{2}^{2}=\frac{a}{2}Q^{2}. \tag{20}\] The calculation of scalar curvature for the metric \(ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}\) in (4) yields [23] \[R[g]=\frac{a(2-a)P^{2}(R-2\mu)}{2R^{4-a}(R+P)^{1+a}}. \tag{21}\] For the Schwarzschild (\(a=0\)) and Reissner-Nordstrom (\(a=2\)) solutions one immediately gets \(R[g]=0\). ## III Geodesic motion Geodesics is a fundamental tool for understanding the motion of test particles in the field of dilatonic black holes, and can provide valuable insights into the nature of these objects and their effects on surrounding spacetime. Geodesics can be derived from the Lagrangian \[\mathcal{L}=\frac{1}{2}g_{\alpha\beta}(x)\dot{x}^{\alpha}\dot{x}^{\beta}, \tag{22}\] using the Euler-Lagrange equations: \[\frac{d}{d\tau}\left(\frac{\partial\mathcal{L}}{\partial\dot{x}^{\alpha}} \right)-\frac{\partial\mathcal{L}}{\partial x^{\alpha}}=0, \tag{23}\] where \(\dot{x}^{\alpha}=dx^{\alpha}/d\tau=u^{\alpha}\) is the 4-velocity vector, e.g. of a test particle, moving along the curve \(x^{\alpha}(\tau)\), \(\tau\) is the proper time for massive particles moving along timelike geodesics and affine parameter in case of null geodesics, respectively, and \(\alpha=0,1,2,3\). The generalized momentum \(p_{\alpha}=g_{\alpha\beta}(x)\dot{x}^{\beta}\) for Lagrangian (22) is normalized as : \[g^{\alpha\beta}(x)p_{\alpha}p_{\beta}=g_{\alpha\beta}(x)u^{\alpha}u^{\beta}=-k, \tag{24}\] where \(k=-1,0,1\) for spacelike, null and timelike geodesics, correspondingly. For general \(k\) the quantity \((-k/2)\) is the energy integral of motion for the Lagrangian (22). For simplicity we do not consider spacelike geodesics in the paper. We consider null or timelike geodesics in the equatorial plane (\(\theta=\pi/2\)). In this case, the Lagrangian for the metric given by Eq. (4) reads \[\mathcal{L}=\frac{1}{2}H^{a}\left[-H^{-2a}\left(1-\frac{2\mu}{R}\right)\dot{t} ^{2}+\frac{\dot{R}^{2}}{1-\frac{2\mu}{R}}+R^{2}\dot{\phi}^{2}\right]. \tag{25}\] For cyclic coordinates \(t\) and \(\phi\) one obtains the integrals of motion \[\tilde{E}=H^{-a}\left(1-\frac{2\mu}{R}\right)\dot{t},\quad\tilde{L}=H^{a}R^{2 }\dot{\phi}, \tag{26}\] associated for \(k=1\) with the total energy \(E=\tilde{E}m\) and angular momentum \(L=\tilde{L}m\) of a test (neutral point-like) particle of mass \(m\), which are the constants of motion. 
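As a quick symbolic sanity check (a sketch, not the authors' code), one can verify with a computer algebra system that the momenta conjugate to the cyclic coordinates \(t\) and \(\phi\) of the equatorial Lagrangian (25) indeed reproduce the integrals of motion (26), treating the velocities as independent symbols:

```python
import sympy as sp

# Symbols: metric parameters and velocities treated as independent quantities.
a, mu, P, R = sp.symbols('a mu P R', positive=True)
tdot, Rdot, phidot = sp.symbols('tdot Rdot phidot')

H = 1 + P / R                       # moduli function, Eq. (7)
f = 1 - 2 * mu / R
Lag = sp.Rational(1, 2) * H**a * (-H**(-2 * a) * f * tdot**2
                                  + Rdot**2 / f + R**2 * phidot**2)   # Eq. (25)

p_t = sp.diff(Lag, tdot)            # generalized momentum conjugate to t
p_phi = sp.diff(Lag, phidot)        # generalized momentum conjugate to phi

E_tilde = H**(-a) * f * tdot        # Eq. (26)
L_tilde = H**a * R**2 * phidot

print(sp.simplify(p_t + E_tilde))   # 0, i.e. p_t = -E~ since t is cyclic
print(sp.simplify(p_phi - L_tilde)) # 0, i.e. p_phi = L~ since phi is cyclic
```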
In order to find the equation for the \(R\)-coordinate for \(\dot{R}\neq 0\) one has to use Eq. (24), which has the following form for the line element from Eq. (4) \[-H^{-a}\left(1-\frac{2\mu}{R}\right)\dot{t}^{2}+\frac{H^{a}\dot{R}^{2}}{1-\frac{2\mu}{R}}+H^{a}R^{2}\dot{\phi}^{2}=-k. \tag{27}\] This relation reduces to the following differential equation by virtue of Eq. (26): \[-\frac{H^{a}E^{2}}{m^{2}\left(1-\frac{2\mu}{R}\right)}+\frac{H^{a}\dot{R}^{2}}{1-\frac{2\mu}{R}}+\frac{H^{-a}L^{2}}{m^{2}R^{2}}=-k. \tag{28}\] Eq. (28) can be presented in terms of the effective potential: \[\dot{R}^{2}+V^{2}=\frac{E^{2}}{m^{2}}, \tag{29}\] which is explicitly given by \[V=\sqrt{H^{-2a}\left(1-\frac{2\mu}{R}\right)\left(H^{a}k+\frac{L^{2}}{m^{2}R^{2}}\right)}. \tag{30}\] It may be readily verified that the Lagrange equation for the radial coordinate \(R\) is equivalent to the following one \[\ddot{R}+V\frac{\partial V}{\partial R}=0. \tag{31}\] For \(\dot{R}\neq 0\) it can be obtained just by differentiating Eq. (29) with respect to the parameter \(\tau\) and then dividing the result by \(\dot{R}\). In the case \(\dot{R}=0\) the radial equation reads \[\frac{\partial V}{\partial R}=0. \tag{3.11}\] It does not follow from Eq. (29) and should be considered separately. Now we focus on timelike geodesics (\(k=1\)). The behavior of the effective potential as a function of \(R/\mu\) for the fixed value of \(Q/\mu=0.6\) (which corresponds to \(P/\mu\approx 0.0863\)) and for different values of \(L/(\mu m)\) is illustrated in Fig. 1. The cases of \(a=0\), \(a=1\) and \(a=2\) formally correspond to the Schwarzschild, Sen [51] and Reissner-Nordstrom solutions, respectively. It is clearly seen that the maximum of the effective potential for \(a=0\) exceeds the maxima for the other cases of \(a\). In Fig. 2 the effective potential at fixed \(L_{ISCO}\) is illustrated as a function of the dimensionless radial coordinate \(R/(2\mu)\) for different \(a\)=0, 0.5, 1, 1.5, 2. Dots show the inflection points. As one may see, in terms of \(R\) and \(\mu\) these points differ from those of the standard Schwarzschild and Reissner-Nordstrom black holes. Moreover, a three-dimensional plot of the effective potential with the fixed value of \(Q/\mu=0.6\) is shown in Fig. 3 as a function of \(R/\mu\) and \(L/(\mu m)\). In Table 1 we present the radii of the ISCO, the corresponding orbital angular momentum, energy and effective potential of test particles for fixed \(Q/\mu=0.6\) as a function of the parameter \(a\). The graphical representation of these values (points) is shown in Figs. 2-8. ### Circular geodesics Here we contemplate circular motions, which are characterized by the condition \(\dot{R}=0\), so \(V=E/m\). Figure 2: The effective potential of test particles with \(L_{ISCO}\) for different \(a\)=0, 0.5, 1, 1.5, 2 given in Table 1 as a function of the dimensionless radial coordinate \(R/(2\mu)\). Dots indicate the inflection points (\(R_{ISCO}/(2\mu)\), \(V_{ISCO}=V(R_{ISCO}/(2\mu))\)).
\begin{table} \begin{tabular}{c c c c c} \(a\) & \(\frac{R_{ISCO}}{2\mu}\) & \(\frac{L_{ISCO}}{\mu m}\) & \(\frac{E_{ISCO}}{m}\) & \(V_{ISCO}\) \\ \hline 0.0 & 3.0000 & 3.4641 & 0.9428 & 0.9428 \\ 0.5 & 3.0211 & 3.5176 & 0.9422 & 0.9422 \\ 1.0 & 3.0427 & 3.5716 & 0.9417 & 0.9417 \\ 1.5 & 3.0650 & 3.6260 & 0.9412 & 0.9412 \\ 2.0 & 3.0879 & 3.6809 & 0.9407 & 0.9407 \\ \hline \end{tabular} \end{table} Table 1: Values of the parameters \(x_{ISCO}=R_{ISCO}/(2\mu)\), \(L_{ISCO}/(\mu m)\), \(E_{ISCO}/m\) and \(V_{ISCO}=V(R_{ISCO}/(2\mu))\) for different \(a\). Note, the net charge here is fixed as \(Q/\mu=0.6\), which is equivalent to \(P/\mu=0.0863\) according to Eq. (9). \(x_{ISCO}\) has been calculated accounting for Eqs. (3.17)-(3.21). The derivative of the effective potential with respect to \(R\) is given by: \[\frac{\partial V}{\partial R}=\left[2R^{3}H^{2+a}V\right]^{-1}\Big[H^{a}\big(aP(R-2\mu)+2\mu(P+R)\big)+\frac{2L^{2}}{m^{2}R^{2}}\big\{R(3\mu-R)+P(R(a-1)+\mu(3-2a))\big\}\Big]. \tag{3.12}\] The radial equation (3.11) implies \[\frac{L^{2}}{m^{2}}=\frac{H^{a}R^{2}\left[aP(R-2\mu)+2\mu(P+R)\right]}{2\left[R(R-3\mu)+P(R(1-a)+\mu(2a-3))\right]}, \tag{3.13}\] which is substituted into Eq. (30) to obtain: \[\frac{E^{2}}{m^{2}}=\frac{H^{-a}(R-2\mu)^{2}\left[2R-P(a-2)\right]}{2R\left[R(R-3\mu)+P(R(1-a)+\mu(2a-3))\right]}. \tag{3.14}\] Figs. 4 and 5 show \(E/m\) and \(L/(\mu m)\) as functions of \(R/\mu\) for fixed \(Q/\mu=0.6\). Here, the distinctions among all the curves with various \(a\) will be larger for larger values of \(Q/\mu\). From Eqs. (3.13) and (3.14), it can be seen that for timelike geodesics the motion is possible only for \(R(R-3\mu)+P(R(1-a)+\mu(2a-3))>0\), with the limiting radius: \[R_{\gamma\pm}\equiv\frac{1}{2}\Big[P(a-1)+3\mu\pm\sqrt{(P(1-a)-3\mu)^{2}-4P\mu(2a-3)}\Big], \tag{3.15}\] where \(R_{\gamma+}=R_{0}\), in fact, is the radius of the photon sphere and \(R_{\gamma-}\) is not physical since it is less than \(2\mu\). The dependence of the normalized radius of the photon sphere \(R_{0}/\mu\) on \(Q/\mu\) is depicted in Fig. 6. As one can see, the larger \(Q/\mu\) the larger \(R_{0}/\mu\), at least in this representation of parameters. ### Innermost stable circular orbits The innermost stable circular orbit (ISCO) is an important quantity in the study of geodesics in the field of black holes. For example, it plays a crucial role in the physics of accretion disks as it defines the inner radius of the disk. Any massive particle that goes beyond the ISCO will fall onto the black hole, so the disk is supposed to have a radius larger than the ISCO to maintain its structure. The location of the ISCO, and the properties of the accretion disk (such as its size, temperature, and brightness), can be used to study the features of black holes and their surrounding environment. Figure 3: Three-dimensional plot of the effective potential \(V\) as a function of \(L/(\mu m)\) and \(R/\mu\) according to Eq. (30). The black solid curve indicates \(V\) when \(L/(\mu m)\) is defined from Eq. (3.13) for circular geodesics. The dot shows the radius of the ISCO with the corresponding \(L/(\mu m)\) and \(V\). Here we only consider the case with \(a\)=1 and \(Q/\mu=0.6\). Using the condition \[\frac{\partial^{2}V}{\partial R^{2}}=0, \tag{3.16}\] or equivalently by equating to zero the derivatives of Eq. (3.13) or Eq. (3.14) with respect to \(R\), one can find \(R_{ISCO}\). For selected values of \(a=0,0.5,1,1.5,2\) we have the following expressions for \(R_{ISCO}\).
For \(a=0\), which corresponds to the Schwarzschild black hole, the ISCO radius is given by \[x_{isco}(a=0)=3, \tag{3.17}\] where \(x_{isco}=R_{isco}/(2\mu)\). In Fig. 7 the radii of ISCOs \(R_{ISCO}/\mu\) are presented versus \(Q/\mu\). As one can see, in the representation for vanishing \(Q/\mu\) all radii reduce to 6 and for increasing \(Q/\mu\), apart from \(a\)=0 case, \(R_{ISCO}/\mu\) grows non-linearly. For \(a=1/2\) the ISCO radius is \[x_{isco} = \frac{1}{8}(6-3p)+\frac{\sqrt{X_{1/2}}}{8(p+2)}\] \[+ \frac{1}{2}\sqrt{\frac{A_{1/2}}{\sqrt{X_{1/2}}}-\frac{X_{1/2}}{1 6(p+2)^{2}}+T_{1/2}}, \tag{3.18}\] \[X_{1/2} = 16(2+p)^{2}Y_{1/2}^{\frac{1}{3}}\] \[+ (p+2)(72+92p+22p^{2}+p^{3})\] \[+ p^{2}(1+p)(8+16p+7p^{2}+p^{3})Y_{1/2}^{-\frac{1}{3}},\] \[Y_{1/2} = \frac{p^{3}(1+p)\sqrt{R_{1/2}}}{64(p+2)^{2}}+Z_{1/2},\] \[R_{1/2} = (2+p)^{-1}(1+p)(2+3p)\Big{[}3968+12288p\] \[+ 12192p^{2}+4392p^{3}+563p^{4}+51p^{5}\Big{]},\] \[Z_{1/2} = 4^{-3}(p+2)^{-3}p^{3}(1+p)^{2}\] \[\times \Big{[}128+264p+114p^{2}+10p^{3}+p^{4}\Big{]},\] \[A_{1/2} = \frac{1}{8}(432+720p+280p^{2}+9p^{4}),\] \[T_{1/2} = \frac{3(72+92p+22p^{2}+p^{3})}{16(2+p)},\] where \(p=P/2\mu\). For \(a=1\) which is formally identical to the Sen black hole case, the result is straightforward \[x_{isco}(a=1)=1+(1+p)^{\frac{1}{3}}+(1+p)^{\frac{2}{3}}. \tag{3.19}\] For \(a=3/2\) the ISCO radius is \[x_{isco} = 1+\frac{p}{2}+X_{3/2}^{\frac{1}{3}}+\frac{Q_{3/2}}{X_{3/2}^{\frac {1}{3}}}, \tag{3.20}\] \[X_{3/2} = T_{3/2}+\frac{p(1+p)}{16(2+3p)}\sqrt{R_{2/3}},\] \[T_{3/2} = \frac{32+96p+108p^{2}+51p^{3}+9p^{4}}{16(2+3p)},\] \[R_{3/2} = \frac{(2+p)\Big{[}384+896p+544p^{2}+88p^{3}-13p^{4}\Big{]}}{2+3p},\] \[Q_{3/2} = \frac{8+20p+15p^{2}+4p^{3}}{4(2+3p)}.\] For \(a=2\), which corresponds to the Reissner-Nordstrom black hole case, the ISCO radius is \[x_{isco}(a=2)=1+p+X_{2}^{\frac{1}{3}}+\frac{1+p+p^{2}}{X_{2}^{\frac{1}{3}}}, \tag{3.21}\] Figure 7: The location of normalized radii \(R_{ISCO}/\mu\) as a function of \(Q/\mu\). where \[X_{2}=\frac{2+p(1+p)\Big{[}7+4p(1+p)+\sqrt{5+4p(1+p)}\Big{]}}{2(1+2p)}.\] Using expressions (9) and (18) allows one to represent Eqs. (3.17) - (3.21) in terms of the gravitational mass \(M\) and net charge \(Q\). Correspondingly, in Table 2 we present \(\mu\) and \(P\) in terms of \(M\) and \(Q\) depending upon \(a\). Constraints for \(Q/M\) can be obtained from Table 2, by requiring \(\mu>0\). Correspondingly, one finds \(-2\sqrt{2}/a<Q/M<2\sqrt{2}/a\) and the case \(a=2\) will give \(-\sqrt{2}<Q/M<\sqrt{2}\). In Ref [23] it was shown that for \(a=2\) case under the radial coordinate transformation, \(R=r_{RN}-P\), along with \(M_{RN}=\mu+P\), the metric (4) coincides with the Reissner-Nordstrom metric. Taking this into account, one can rewrite Eq. (3.21) in the following form: \[\frac{r_{isco}^{(RN)}}{M_{RN}} = 2+F_{RN}^{1/3}+F_{RN}^{-1/3}\Big{[}4-\frac{3Q_{RN}^{2}}{M_{RN}^{ 2}}\Big{]}, \tag{3.22}\] \[F_{RN} = 8+\frac{2Q_{RN}^{4}}{M_{RN}^{4}}+\frac{Q_{RN}^{2}}{M_{RN}^{2}} \left(-9+\sqrt{G_{RN}}\right),\] \[G_{RN} = 5-\frac{9Q_{RN}^{2}}{M_{RN}^{2}}+\frac{4Q_{RN}^{4}}{M_{RN}^{4}},\] which matches with the equation for \(r_{isco}\) in Ref. [50]. Here \(M_{RN}\) and \(Q_{RN}\) are the mass and charge, respectively, of the Reissner-Nordstrom solution. Besides, the relationship between the charges is given by \(Q_{RN}^{2}=Q^{2}/2\), whereas the masses are equal \(M_{RN}=M\). As expected, in the limiting case \(Q\to 0\), \(R_{ISCO}\) corresponds to the Schwarzschild value \(6M\). 
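These closed-form expressions can be cross-checked numerically. The sketch below (an illustration, not the authors' code) locates the ISCO by minimizing the circular-orbit angular momentum of Eq. (3.13) over \(R\), using the fact that the marginally stable orbit extremizes \(L\) along the sequence of circular orbits; the search bounds and the use of SciPy are assumptions of the example. For \(Q/\mu=0.6\) it reproduces the \(R_{ISCO}/(2\mu)\) and \(L_{ISCO}/(\mu m)\) columns of Table 1 (e.g. \(x_{ISCO}\approx 3.0427\) for \(a=1\)) and agrees with Eqs. (3.17) and (3.19).

```python
import numpy as np
from scipy.optimize import minimize_scalar

mu = 1.0                                  # work in units where mu = 1
Q = 0.6                                   # Q/mu = 0.6, as in Table 1
P = -mu + np.sqrt(mu**2 + 0.5 * Q**2)     # Eq. (9), P/mu ~ 0.0863

def L2_circ(R, a):
    """Squared angular momentum L^2/m^2 on a circular orbit, Eq. (3.13)."""
    H = 1.0 + P / R
    num = H**a * R**2 * (a * P * (R - 2 * mu) + 2 * mu * (P + R))
    den = 2.0 * (R * (R - 3 * mu) + P * (R * (1 - a) + mu * (2 * a - 3)))
    return num / den

for a in (0.0, 0.5, 1.0, 1.5, 2.0):
    # The ISCO minimizes L^2 along the circular-orbit sequence; bracket the
    # search between (just outside) the photon sphere and a large radius.
    res = minimize_scalar(L2_circ, bounds=(3.2 * mu, 20 * mu),
                          args=(a,), method='bounded')
    print(f"a = {a:3.1f}: R_ISCO/(2 mu) = {res.x / (2 * mu):.4f}, "
          f"L_ISCO/(mu m) = {np.sqrt(res.fun):.4f}")

# Closed-form cross-checks: x_isco = 3 for a = 0 (Eq. (3.17)) and
# x_isco = 1 + (1+p)^(1/3) + (1+p)^(2/3), p = P/(2 mu), for a = 1 (Eq. (3.19)).
p = P / (2 * mu)
print("a = 1 closed form:", 1 + (1 + p)**(1 / 3) + (1 + p)**(2 / 3))
```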
In the cases of \(0<a\leq 2\) the values of \(R_{ISCO}\) depend upon the ratio of \(Q/\mu\). This behaviour is illustrated in Fig. 8. The values of \(R_{ISCO}\) at \(Q=\mu\) can also been seen in Fig. 7 which are equal to \(R_{ISCO}/\mu=6.10\), 6.22, 6.34, 6.46 for cases \(a\)=0.5, 1.0, 1.5, 2.0, respectively. Figure 8: The normalized by \(2\mu\) radius of ISCO as a function of \(Q/(2\mu)\) for different \(a\)=0.5, 1, 1.5, 2. Numbers near the points imply the values of energy \(E/m\) and angular momentum \(L/(\mu m)\) (underlined numbers) corresponding to a specific ISCO. The energy and the angular momentum in the last stable circular orbit can be found numerically after plugging expressions (3.18), (3.19), (3.20) and (3.21) into Eqs. (3.13) and (3.14). The results are reported in Table 1. In Fig. 9 the normalized photon sphere radius \(R_{0}/(2\mu)\) (left panel) and \(R_{ISCO}/(2\mu)\) (right panel) are given versus \(p=P/(2\mu)\). As one can see, also in these representations \(R_{0}/(2\mu)\) and \(R_{ISCO}/(2\mu)\) increase with increasing \(p=P/(2\mu)\). In Fig. 10 the normalized by gravitational mass \(M\) photon sphere radius is shown versus \(Q/M\) for different \(a\). As one may notice, the Schwarzschild case with \(a=0\) possess \(R_{0}/M=3\) and for other cases \(R_{0}/M<3\). The Reissner-Nordstrom case with \(a=2\) has smallest radius \(R_{0}/M=1\) for the extreme case \(Q/M=\sqrt{2}\). In Fig. 11 (left panel) we present the normalized by gravitational mass \(M\) radii of ISCOs as a function of net charge over gravitational mass \(Q/M\). As one may see that for increasing charge the radii of ISCOs decrease as one expects in analogy with the Reissner-Nordstrom solution [50]. In this representation all physical quantities can be measured and used to distinguish charged black holes with different \(a\) i.e. different coupling constant vectors \(\vec{\lambda}_{1}\) and \(\vec{\lambda}_{2}\). It is interesting to note, that a similar result was obtained in Fig. 3 (left panel) of Ref. [52] for a neutral test particle in the Sen spacetime. It is shown that for increasing charge over gravitational mass ratio the normalized by gravitational mass \(M\) radius \(r_{ISCO}\) decreases. In order to compare this result with our findings one should find the relationship between the net charge \(Q\) of the present paper \begin{table} \begin{tabular}{c c c} \hline \hline \(a\) & \(\mu\) & \(P\) \\ \hline 0.0 & \(M\) & – \\ 0.5 & \(\frac{3M}{2}-\frac{1}{4}\sqrt{4M^{2}+Q^{2}}\) & \(-2M+\sqrt{4M^{2}+Q^{2}}\) \\ 1.0 & \(M-\frac{Q^{2}}{8M}\) & \(\frac{Q^{2}}{4M}\) \\ 1.5 & \(-\frac{M}{2}+\frac{3}{4}\sqrt{4M^{2}-Q^{2}}\) & \(2M-\sqrt{4M^{2}-Q^{2}}\) \\ 2.0 & \(\sqrt{M^{2}-\frac{Q^{2}}{2}}\) & \(M-\sqrt{M^{2}-\frac{Q^{2}}{2}}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Values of parameters \(\mu\) and \(P\) in terms of \(M\) and \(Q\) depending on \(a\). Note, for vanishing \(Q\) one gets \(P=0\) and \(\mu=M\). At the same time for vanishing \(a\) one recovers the Schwarzschild metric, \(\mu=M\) and \(P\) disappears in the metric. Figure 10: The location of normalized by gravitational mass \(M\) photon sphere radii as a function net charge over gravitational mass \(Q/M\). Figure 9: Left: The normalized by \(2\mu\) photon sphere radius as a function of \(p=P/(2\mu)\) for different values of \(a\). Right: The radius of ISCOs as a function of \(p=P/(2\mu)\) for different values of \(a\). and the one in Ref. [52], which we denote as \(Q_{Sen}\). 
To do so, one has to take a close look at the line element Eq. (1) in Ref. [52] and compare it with the one considered here when \(a=1\). The two solutions, apart from notations, physical origin and interpretation, are identical. Thus, by comparing the two solutions one finds that \(Q=2Q_{Sen}\) and the value \(Q/M=1.4\) in Fig. 11 (left panel) for \(a=1\) is equal to \(Q_{Sen}/M=0.7\) in Fig. 3 (left panel) of Ref. [52]. Furthermore, one of the quantities which is of great interest is the efficiency of converting matter into radiation (see for details page 662 of Ref. [53]) \[\eta=[1-\tilde{E}(R_{ISCO})]\times 100\%. \tag{33}\] In Fig. 11 (right panel) the efficiency is shown versus \(Q/M\). As expected, the efficiency for \(a=0.5,1.0,1.5,2.0\) is always larger than in the \(a=0\) case. In general, circular orbits are allowed in the region \(R_{0}<R<R_{ISCO}\); however, all those orbits are unstable. For stability of circular orbits, the following condition must be fulfilled: \[\frac{\partial^{2}V}{\partial R^{2}}>0. \tag{34}\] The behavior of the second derivative of the effective potential is depicted in Figs. 12 and 13. From these plots, it can be seen where orbits become stable. The location of the ISCO is determined by finding the radius at which the second derivative of the effective potential changes sign from negative to positive. ## IV Conclusion In this work we considered the solution for a double-charged dilatonic black hole. We studied circular geodesics of massive neutral test particles and photons for different values of the parameters \(a\) and \(P\) (the latter is directly connected with the net charge \(Q\)). We calculated the energy, angular momentum and radius of the ISCO of test particles in terms of \(a\), \(P\) and \(\mu\) and investigated in detail their behavior for the \(a=0,1/2,1,3/2,2\) cases, corresponding to various configurations of the double-charged dilatonic black holes. Our analysis is mainly based on the study of the behavior of an effective potential which defines the position and stability properties of circular orbits. The stability of circular geodesics was analyzed by examining the sign of the second derivative of the effective potential with respect to the radial coordinate. For neutral test particles stable orbits exist only starting from \(R_{ISCO}\) up to infinity. The ISCO radius and photon sphere radius have been calculated for particular values of \(a\). It turned out that, in this representation of the parameters of the line element, for larger \(a\) with increasing \(Q/\mu\) the radius of the ISCO and the photon sphere radius increase. The photon sphere radius and the ISCO radius have their minimum values \(R_{0}/\mu=3\) and \(R_{ISCO}/\mu=6\), respectively, in the Schwarzschild limiting case (\(a\)=0). However, in the representation of the net charge over the gravitational mass the situation is utterly opposite. For the Schwarzschild case (\(a=0\)) one gets the largest values for the photon sphere and ISCO radii, \(3M\) and \(6M\), respectively, and for the Reissner-Nordstrom case (\(a=2\)) one obtains \(M\) and \(3M\), respectively. Figure 11: Left: The radii of ISCOs, normalized by the gravitational mass \(M\), as a function of \(Q/M\) for different values of \(a\). Right: The efficiency as a function of \(Q/M\) for different values of \(a\). Figure 12: Representation of the second derivative of the effective potential as a function of \(R/\mu\) for \(a=0\).
Taking into consideration Table 2 for \(a=2\) and \(Q/M=\sqrt{2}\) the parameter \(P\) will be equal to \(M\). Correspondingly, the photon sphere and ISCO radii will be equal to \(2M\) and \(4M\), respectively, if one uses the coordinate transformation \(r_{RN}=R+P\) and the relation between charges \(Q_{RN}=Q/\sqrt{2}\). These results are in agreement with the ones reported in Ref. [50]. The efficiency of converting matter into radiation as expected is larger for the Reissner-Nordstrom case \(\eta=8.14\%\) and smaller for the Schwarzschild case \(\eta=5.72\%\)[54]. All other configurations are restricted within these two cases. This peculiarity can be used to distinguish ordinary or astrophysical black holes from the double-charged dilatonic black holes. It would be interesting to extend the analyses of the paper in future studies related to the quasinormal modes at final moments of black hole mergers in binaries, quasiperiodic oscillations in the X-ray systems, radiative flux and spectral luminosity of accretion disks around astrophysical black holes. ## Acknowledgements KB thanks professors Daniele Malafarina and Hernando Quevedo for fruitful discussions during the preparation of this paper. KB acknowledges partial support from the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. AP19680128). For VI the research was funded by RUDN University, scientific project number FSSF-2023-0003.
2305.18926
Document-Level Multi-Event Extraction with Event Proxy Nodes and Hausdorff Distance Minimization
Document-level multi-event extraction aims to extract the structural information from a given document automatically. Most recent approaches usually involve two steps: (1) modeling entity interactions; (2) decoding entity interactions into events. However, such approaches ignore a global view of inter-dependency of multiple events. Moreover, an event is decoded by iteratively merging its related entities as arguments, which might suffer from error propagation and is computationally inefficient. In this paper, we propose an alternative approach for document-level multi-event extraction with event proxy nodes and Hausdorff distance minimization. The event proxy nodes, representing pseudo-events, are able to build connections with other event proxy nodes, essentially capturing global information. The Hausdorff distance makes it possible to compare the similarity between the set of predicted events and the set of ground-truth events. By directly minimizing Hausdorff distance, the model is trained towards the global optimum directly, which improves performance and reduces training time. Experimental results show that our model outperforms previous state-of-the-art method in F1-score on two datasets with only a fraction of training time.
Xinyu Wang, Lin Gui, Yulan He
2023-05-30T10:33:05Z
http://arxiv.org/abs/2305.18926v1
# Document-Level Multi-Event Extraction with Event Proxy Nodes and Hausdorff Distance Minimization ###### Abstract Document-level multi-event extraction aims to extract the structural information from a given document automatically. Most recent approaches usually involve two steps: (1) modeling entity interactions; (2) decoding entity interactions into events. However, such approaches ignore a global view of inter-dependency of multiple events. Moreover, an event is decoded by iteratively merging its related entities as arguments, which might suffer from error propagation and is computationally inefficient. In this paper, we propose an alternative approach for document-level multi-event extraction with event proxy nodes and Hausdorff distance minimization. The event proxy nodes, representing pseudo-events, are able to build connections with other event proxy nodes, essentially capturing global information. The Hausdorff distance makes it possible to compare the similarity between the set of predicted events and the set of ground-truth events. By directly minimizing Hausdorff distance, the model is trained towards the global optimum directly, which improves performance and reduces training time. Experimental results show that our model outperforms previous state-of-the-art method in F1-score on two datasets with only a fraction of training time. 1 Footnote 1: Code is available at [https://github.com/xnyuwg/procnet](https://github.com/xnyuwg/procnet) ## 1 Introduction Event extraction aims to identify event triggers with certain types and extract their corresponding arguments from text. Much research has been done on sentence-level event extraction Du and Cardie (2020); Lin et al. (2020); Lu et al. (2021). In recent years, there have been growing interests in tackling the more challenging task of document-level multi-event extraction, where an event is represented by a cluster of arguments, which may scatter across multiple sentences in a document. Also, multiple events in the same document may share some common entities. For example, as shown in Figure 1, the two events, _Equity Pledge_ and _Equity Freeze_, have their arguments scattered across the document. The same entity mentions, _Yexiang Investment Management Co._, _Ltd._ and _13.07%_, are involved in both events, with the former taking different argument roles ('_Pledger_' and '_Equity Holder_'), while the latter having the same argument role ('_Total Holding Ratio_'). In such a setup, an event is not associated with a specific event trigger word or phrase, as opposed to the common setup in sentence-level event extraction. These challenges make it difficult to distinguish various events and link entities to event-specific argument roles. Document-level multi-event extraction can be typically formulated as a table-filling task that fills Figure 1: An example of a document that contains two events. [\(\cdot\)] denotes the sentence numbering. Words highlighted in colors denote different entities. the correct entities into a pre-defined event schema as shown in Figure 1. Here, an event is essentially represented by a cluster of arguments. Existing approaches (Zheng et al., 2019; Yang et al., 2021; Huang and Jia, 2021; Xu et al., 2021; Liang et al., 2022) usually involve two steps: (1) first model the entity interactions based on contextual representations; (2) then design a decoding strategy to decode the entity interactions into events and arguments. For example, Zheng et al. (2019) and Xu et al. 
(2021) transformed this task into sequential path-expanding sub-tasks. Each sub-task expands a path sequentially by gradually merging entities in a pre-defined order of event argument roles. The aforementioned approaches suffer from the following limitations: (1) They decode events from entity information and tend to produce local optimal results without considering the interdependency of multiple events globally in a document. (2) Event decoding by iteratively merging entities suffers from error propagation that an event type or an entity that has been incorrectly classified cannot be corrected later. (3) Every decoding decision requires iterating all entity mentions in a document, which is computationally inefficient. To address the above limitations, we propose an alternative approach for document-level multi-event extraction with event proxy nodes and Hausdorff distance minimization, named as **Proxy** Nodes **C**lustering **N**etwork (ProCNet). The event proxy nodes aim to capture the global information among events in a document. The Hausdorff distance makes it possible to optimize the training loss defined as the difference between the generated events and the gold standard event annotations directly. This is more efficient compared to existing decoding approaches. Our method involves two main steps: _Event Representation Learning_ and _Hausdorff Distance Minimization_. For _Event Representation Learning_, we create a number of proxy nodes, each of which represents a pseudo-event, and build a graph to update proxy nodes. Entities mentioned in text are treated as nodes connecting to the proxy nodes. All the proxy nodes are interconnected to allow information exchange among the potential events. We employ a Hypernetwork Graph Neural Network (GNN) (Ha et al., 2017) for updating proxy node representations. After _Event Representation Learning_, each proxy node essentially resides in a new event-level metric space by aggregating information from the entity-level space. For _Hausdorff Distance Minimization_, we regard the predicted events as a set and the ground-truth events as another set, and compute the Hausdorff distance between these two sets, which simultaneously consider all events and all their arguments. We then minimize the Hausdorff distance via gradient descent, where the model is trained to directly produce a globally optimal solution without the need of using decoding strategies as in existing approaches. In this way, our model learns globally and does not suffer from the problem of existing approaches that decode events based on local entity information. Each entity is linked to every proxy node, and the association between an entity and a proxy node is updated at each training iteration. As such, our model avoids the error propagation problem caused by the iterative decoding strategy. In addition, our approach naturally addresses the problem that the same entity mention may be involved in multiple events since the entity will be mapped to a different event-level metric space depending on its associated proxy node. Moreover, as our approach replaces iterative computation in decoding with parallel computation, it is computationally more efficient compared to existing path-expansion approaches, as will be shown in our experiments section. In summary, our main contributions are: * We propose a new framework for document-level multi-event extraction by learning event proxy nodes in a new event-level metric space to better model the interactions among events. 
* We propose to utilize the Hausdorff distance in our learning objective function to optimize the difference between the generated events and the gold standard events directly. The proposed mechanism not only simultaneously considers all events but also speeds up the training process. * Experimental results show that our model outperforms the previous state-of-the-art method in F1 on two datasets with only a fraction of the training time. ## 2 Related Work Early research on event extraction (EE) largely focused on sentence-level event extraction (SEE), aiming to classify the event trigger and arguments in a sentence. Chen et al. (2015) decomposes SEE into two sub-tasks: _event trigger detection_ and _event argument labeling_. More work has been done on joint learning of the two sub-tasks (Nguyen and Nguyen, 2019; Lin et al., 2020). Recently, multi-turn Question-Answer (QA) methods have been investigated for EE with hand-designed or automatically generated questions (Du and Cardie, 2020; Li et al., 2020; Wang et al., 2020; Liu et al., 2020; Lyu et al., 2021). Apart from QA-based approaches, sequence-to-sequence learning has also been explored, where the event annotation is flattened as a sequence (Paolini et al., 2021; Lu et al., 2021; Li et al., 2021; Lu et al., 2022b). More recently, prompt-based learning has been explored using the knowledge in pre-trained language models (Lin et al., 2021; Hsu et al., 2021; Ma et al., 2022). Compared to SEE, document-level event extraction (DEE) appears to be more challenging. DEE requires methods to model long-term dependencies among entities across multiple sentences. Simply employing SEE approaches for DEE may lead to incomplete and uninformative extractions (Li et al., 2021). To address the problem, conditional generation approaches have been proposed, which are conditioned on pre-specified templates or prompts (Du et al., 2021; Huang et al., 2021; Ma et al., 2022). DEE can also be formulated as a table-filling task where each event is represented as a cluster of arguments and an event type. In such a setup, it is usually not possible to associate a particular event trigger word or phrase with an event. Yang et al. (2018) proposed a key-event detection model. Zheng et al. (2019) transformed event tables into a directed acyclic graph with path expansion. Huang and Jia (2021) constructed a graph to build sentence communities. Lu et al. (2022a) captured event clues as a series of intermediate results. Xu et al. (2021) constructed a heterogeneous GNN with a tracker mechanism for partially decoded events. Liang et al. (2022) modeled the relation between entities with a Relation-augmented Attention Transformer. These methods mainly focus on modeling entity inter-relations and rely on carefully-designed event decoding strategies. In contrast, we model events in the event-level metric space from a more global view and with less training time. ## 3 Methodology ### Problem Setup Different from the trigger-based event extraction task, where an event is represented by a trigger and a list of arguments, in our task, an event is defined by an event type category \(c\), a list of entities \(\{e_{i}\}\) and their corresponding argument types \(\{a_{i}\}\), as shown in Figure 1. Figure 2: Overview of ProCNet with the example in Figure 1, where Entity 1, 5, 8, 9 are arguments of Event #1; Entity 1, 4, 5, 6, 7, 8 are arguments of Event #2; Entity 2, 3 do not belong to any events. Before training, proxy node embeddings are randomly initialized.
Entities are first mapped to entity representations in the entity-level space by _Entity Representation Learning_. Then in _Event Representation Learning_, a hypernetwork heterogeneous graph is constructed with entity and context nodes connected with proxy nodes, and proxy nodes interconnected with each other. Proxy nodes are updated to represent pseudo-events. Afterwards, the proxy nodes and entity nodes are decoded into events, each of which is represented by an event type and a set of argument role-entity pairs in the _Event Decoding_ step. Finally, _Hausdorff Distance Minimization_ minimizes the distance between the set of predicted events and the set of ground-truth events to perform global training in the new event-level metric space. Therefore, the target output is a list of "entity-argument" pairs \(\{(e_{i},a_{i})\}\) and \(c\) as \(\big{(}c,\{(e_{i},a_{i})\}\big{)}\). A proxy node is denoted as \(z\). An overview of ProCNet is shown in Figure 2. In what follows, we present each module in detail. ### Entity Representation Learning Given an input document, the first step is to identify the entities which might be potential arguments. This can be framed as a sequence labeling problem in which, given a word sequence, the entity recognition model outputs a label sequence with the BIO (Beginning and Inside of an entity span, and Other tokens) tagging. We use BERT (Devlin et al., 2019) as a sequence labeler to detect entities at the sentence level. As an entity span may contain multiple tokens, we derive its representation by averaging the hidden states of its constituent tokens. For a document, a total of \(|e|\) entity representations are extracted as \(\{\mathbf{h}_{e_{i}}\}_{i=1}^{|e|}\). The loss of the BIO sequence tagging is defined as \(\mathcal{L}_{\text{er}}\). In order to make the entity representations encode the knowledge of entity associations, we introduce a simple auxiliary learning task to predict whether two entities belong to the same event, where entity representations will be updated during learning. Specifically, it is a binary classification task, with the predicted output computed as: \[\hat{y}_{\text{epc}_{(i,j)}}=\phi\left(\text{MLP}([\mathbf{h}_{e_{i}};\mathbf{h}_{e_{j}}])\right), \tag{1}\] where \(\phi\) denotes the sigmoid function, \([;]\) denotes the concatenation, and \(\hat{y}_{\text{epc}_{i,j}}\) indicates the probability that entities \(i\) and \(j\) are from the same event. We use the binary cross-entropy (CE) loss here: \[\mathcal{L}_{\text{epc}}=-\sum_{i}\sum_{j}\text{CE}(y_{\text{epc}_{(i,j)}}, \hat{y}_{\text{epc}_{(i,j)}}) \tag{2}\] where \(y_{\text{epc}_{i,j}}\) is the label. The loss for entity representation learning is defined as \(\mathcal{L}_{\text{e}}=\mathcal{L}_{\text{er}}+\mathcal{L}_{\text{epc}}\). ### Event Representation Learning with Proxy Nodes In this section, we construct a graph to map entity representations in the entity-level space into event representations in a new event-level metric space. We define \(n\) proxy nodes, which serve as pseudo-events, and randomly initialize their embeddings \(\{\mathbf{h}_{z_{i}}^{(0)}\}_{i=1}^{n}\), which are only initialized once before training and will be updated during training. \(n\) is a hyper-parameter and can be simply set to a much larger value than the expected number of extracted events, as proxy nodes can also represent _null_ events (see Section 3.4).
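To make the notation concrete, here is a minimal PyTorch-style sketch (an illustrative rendering, not the released implementation) of two pieces introduced above: the same-event entity-pair classifier of Eqs. (1)-(2) and the randomly initialized proxy-node embeddings; the hidden size of 768 is an assumption corresponding to a BERT-base encoder.

```python
import torch
import torch.nn as nn

class EntityPairClassifier(nn.Module):
    """Auxiliary task of Eqs. (1)-(2): do two entity mentions belong to the same event?"""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim),
                                 nn.ReLU(),
                                 nn.Linear(hidden_dim, 1))

    def forward(self, h_ei, h_ej):                       # [*, d], [*, d]
        logit = self.mlp(torch.cat([h_ei, h_ej], dim=-1)).squeeze(-1)
        return torch.sigmoid(logit)                      # Eq. (1)

def pair_loss(probs, labels):
    # Binary cross-entropy over all entity pairs, Eq. (2)
    return nn.functional.binary_cross_entropy(probs, labels.float())

# n proxy nodes with randomly initialized embeddings, updated during training
n, d = 16, 768   # n = 16 as in the paper; d = 768 is an assumed encoder size
proxy_emb = nn.Parameter(torch.randn(n, d) * 0.02)
```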
We initialize entity node embeddings \(\{\mathbf{h}_{e_{i}}\}_{i=1}^{|e|}\) and context node embeddings \(\{\mathbf{h}_{s_{i}}\}_{i=1}^{|s|}\) by their corresponding entity and [CLS] representations, respectively. We define the graph as \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), and the node set \(\mathcal{V}\) contains proxy nodes, entity nodes, and context nodes as: \(\mathcal{V}=\{z_{i}\}_{i=1}^{n}\cup\{e_{i}\}_{i=1}^{|e|}\cup\{s_{i}\}_{i=1}^{|s|}\) with their embeddings \(\{\mathbf{h}_{z_{i}}^{(0)}\}_{i=1}^{n}\cup\{\mathbf{h}_{e_{i}}\}_{i=1}^{|e|}\cup\{\mathbf{h}_{s_{i}}\}_{i=1}^{|s|}\). The edge set \(\mathcal{E}\) includes three kinds of edges as follows: Proxy\(\leftrightarrow\)Proxy Edge: The bidirectional edges between all proxy nodes \(\{z_{i}\to z_{j}:0<i\leq n,0<j\leq n\}\) allow information exchange between proxy nodes. Entity\(\rightarrow\)Proxy Edge: The directed edges from all entity nodes \(e\) to all proxy nodes \(z\) as \(\{e_{j}\to z_{i}:0<i\leq n,0<j\leq|e|\}\) provide the entity information for pseudo-events. Context\(\rightarrow\)Proxy Edge: The directed edges from all context nodes \(s\) to all proxy nodes \(z\) as \(\{s_{j}\to z_{i}:0<i\leq n,0<j\leq|s|\}\) provide the contextual information. In a typical GNN setup, each node has its embedding updated by aggregating the neighborhood information. The aggregation weight matrix is shared across all nodes. But in our task here, each proxy node is expected to represent a distinct event. As such, we would like to have a unique aggregation function for each proxy node. To this end, we use the Graph Neural Network with Feature-wise Linear Modulation (GNN-FiLM) Brockschmidt (2020) to update the proxy node embeddings in \(\mathcal{G}\). It introduces a hypernetwork to enable each proxy node to compute a unique aggregation function with different parameters. More concretely, given a node \(v\in\mathcal{V}\) at the \((l+1)\)-th layer, its hidden representation \(\mathbf{h}_{v}^{(l+1)}\) is updated as: \[\mathbf{h}_{v}^{(l+1)}=\sigma\bigg{(}\sum_{u\xrightarrow{\varepsilon}v}\mathbf{\gamma}_{\varepsilon,v}^{(l)}\odot\mathbf{W}_{\varepsilon}\mathbf{h}_{u}^{(l)}+\mathbf{\beta}_{\varepsilon,v}^{(l)}\bigg{)}, \tag{3}\] \[\mathbf{\gamma}_{\varepsilon,v}^{(l)}=f_{\gamma}(\mathbf{h}_{v}^{(l)};\mathbf{\theta}_{\gamma,\varepsilon}),\quad\mathbf{\beta}_{\varepsilon,v}^{(l)}=f_{\beta}(\mathbf{h}_{v}^{(l)};\mathbf{\theta}_{\beta,\varepsilon}),\] where \(u\xrightarrow{\varepsilon}v\) denotes a neighboring node \(u\) connected with node \(v\) via an edge of type \(\varepsilon\). \(\mathbf{W}_{\varepsilon}\in\mathbb{R}^{d_{h}\times d_{h}}\) is a learnable parameter for edge type \(\varepsilon\). \(\sigma\) and \(\odot\) denote the activation function and Hadamard product, respectively. \(\mathbf{\gamma}_{\varepsilon,v}^{(l)}\) and \(\mathbf{\beta}_{\varepsilon,v}^{(l)}\) define the message-passing function of edge type \(\varepsilon\) and node \(v\) at layer \(l\). They are computed by the functions \(f_{\gamma}\) and \(f_{\beta}\) given \(\mathbf{h}_{v}^{(l)}\) as the input. \(\mathbf{\theta}_{\gamma,\varepsilon}\) and \(\mathbf{\theta}_{\beta,\varepsilon}\) are learnable parameters of \(f_{\gamma}\) and \(f_{\beta}\), respectively. To keep it simple, we only use one-layer GNN-FiLM with a single linear layer as the hyper-function in our experiments. With the above formulation, each proxy node \(z\) has its unique message-passing function to aggregate information from entity nodes and context nodes in different ways.
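The following sketch shows one way the FiLM-style update of Eq. (3) for the proxy nodes could be written directly in PyTorch (shapes follow the description above; the fully-connected edge structure, the ReLU activation and all other details are illustrative assumptions rather than the authors' implementation):

```python
import torch
import torch.nn as nn

class ProxyFiLMLayer(nn.Module):
    """One GNN-FiLM layer updating proxy nodes from proxy, entity and context nodes (Eq. (3))."""
    def __init__(self, d: int, edge_types=('proxy', 'entity', 'context')):
        super().__init__()
        self.W = nn.ModuleDict({e: nn.Linear(d, d, bias=False) for e in edge_types})
        self.f_gamma = nn.ModuleDict({e: nn.Linear(d, d) for e in edge_types})  # hyper-function f_gamma
        self.f_beta = nn.ModuleDict({e: nn.Linear(d, d) for e in edge_types})   # hyper-function f_beta

    def forward(self, h_proxy, h_entity, h_context):
        # h_proxy: [n, d]; h_entity: [|e|, d]; h_context: [|s|, d]
        neighbours = {'proxy': h_proxy, 'entity': h_entity, 'context': h_context}
        out = torch.zeros_like(h_proxy)
        for etype, h_u in neighbours.items():
            msgs = self.W[etype](h_u)               # [m, d], one message per source node u
            gamma = self.f_gamma[etype](h_proxy)    # [n, d], conditioned on the target proxy node
            beta = self.f_beta[etype](h_proxy)      # [n, d]
            # sum over edges of gamma_v * (W_eps h_u) + beta_v; every source node of this
            # type is connected to every proxy node, so the sum over u factorizes
            out = out + gamma * msgs.sum(dim=0) + msgs.size(0) * beta
        return torch.relu(out)                      # updated proxy-node representations
```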
In summary, the representations of proxy nodes \(\{\widehat{\mathbf{h}}_{z_{i}}\}_{i=1}^{n}\) are updated through GNN-FiLM learning: \[\{\widehat{\mathbf{h}}_{z_{i}}\}_{i=1}^{n}=\text{GNN-FiLM}(\mathcal{V},\mathcal{E}) \tag{4}\] where \(z_{i}\) represents a pseudo-event. The training with proxy nodes is challenging, which will be addressed in Section 3.5. ### Event Decoding In this section, each proxy node representation \(\widehat{\mathbf{h}}_{z_{i}}\) is decoded into an event, which is formulated into two parallel sub-tasks: _event type classification_ and _event argument classification_. Event Type ClassificationThe event type of proxy node \(z_{i}\) is inferred from \(\widehat{\mathbf{h}}_{z_{i}}\) with MLP as: \[\mathbf{p}_{\hat{c}_{i}}=\text{softmax}\left(\text{MLP}(\widehat{\mathbf{h}}_{z_{i}} )\right), \tag{5}\] where \(\mathbf{p}_{\hat{c}_{i}}\) denotes the event type probability distribution of \(z_{i}\). Event type labels includes a _null_ event type, denoting no correspondence between a proxy node and any events. The number of non-_null_ proxy nodes is the number of predicted events. Event Argument ClassificationIn this task, we need to associate an entity with an event under an event-specific argument type. As the same entity (e.g., a company name) may have multiple mentions in a document, we aggregate their representations by a Multi-Head Attention (MHA) mechanism using a proxy node as the query. More concretely, assuming \(\{\mathbf{h}_{e}\}_{e\in\bar{e}_{k}}\) denotes a set of mentions representations for the same entity \(\bar{e}_{k}\), we use MHA to derive the aggregated entity representation for \(\bar{e}_{k}\). The query, key and value are defined as \(\mathbf{Q}_{z_{i}}=\widehat{\mathbf{h}}_{z_{i}},\mathbf{K}_{\bar{e}_{k}}=\{\mathbf{h}_{e}\}_{e \in\bar{e}_{k}},\mathbf{V}_{\bar{e}_{k}}=\{\mathbf{h}_{e}\}_{e\in\bar{e}_{k}}\). The representation of \(\bar{e}_{k}\) is: \[\widehat{\mathbf{h}}_{z_{i},\bar{e}_{k}}=\text{MHA}(\mathbf{Q}_{z_{i}},\mathbf{K}_{\bar{e }_{k}},\mathbf{V}_{\bar{e}_{k}}), \tag{6}\] where \(\widehat{\mathbf{h}}_{z_{i},\bar{e}_{k}}\) denotes the aggregated representation for entity \(\bar{e}_{k}\) using the proxy node \(z_{i}\) as the query. Then the probability distribution \(\mathbf{p}_{\hat{a}_{i,k}}\) of argument types of entity \(\bar{e}_{k}\) with respect to proxy node \(z_{i}\) is: \[\mathbf{p}_{\hat{a}_{i,k}}=\text{softmax}\left(\text{MLP}([\widehat{\mathbf{h}}_{z_{i }};\widehat{\mathbf{h}}_{z_{i},\bar{e}_{k}}])\right), \tag{7}\] where \([;]\) denotes the concatenation. The argument type set includes a _null_ argument type, denoting that entity \(\bar{e}_{k}\) does not relate to proxy node \(z_{i}\). The final event type \(\hat{c}_{i}\) for proxy node \(z_{i}\) and argument type for entity \(\bar{e}_{k}\) under the event encoded by proxy node \(z_{i}\) are determined by: \[\begin{split}\hat{c}_{i}&=\text{argmax}(\mathbf{p}_{ \hat{c}_{i}})\\ \hat{a}_{i,k}&=\text{argmax}(\mathbf{p}_{\hat{a}_{i,k}}) \end{split} \tag{8}\] Each event is represented by an event type \(\hat{c}_{i}\) and a list of arguments \(\{\hat{a}_{i,k}\}\). Any predicted argument type which is not in the pre-defined schema for its associated event type will be removed. Proxy nodes classified as _null_ event or entities classified as _null_ arguments will be removed. If there are multiple entities predicted as the same argument, the one with the highest probability will be kept. 
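A compact sketch of the decoding step in Eqs. (5)-(8) follows (again an illustrative PyTorch rendering under assumed shapes, not the released code): each proxy node is classified into an event type, and its argument roles are predicted from mention representations aggregated with multi-head attention using the proxy node as the query.

```python
import torch
import torch.nn as nn

class EventDecoder(nn.Module):
    def __init__(self, d: int, n_event_types: int, n_arg_types: int, n_heads: int = 8):
        super().__init__()  # d must be divisible by n_heads
        self.type_mlp = nn.Sequential(nn.Linear(d, d), nn.ReLU(),
                                      nn.Linear(d, n_event_types))      # Eq. (5), includes a null type
        self.mha = nn.MultiheadAttention(d, n_heads, batch_first=True)  # Eq. (6)
        self.arg_mlp = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(),
                                     nn.Linear(d, n_arg_types))         # Eq. (7), includes a null role

    def forward(self, h_z, mention_reps):
        # h_z: [d] one proxy node; mention_reps: list of [m_k, d] tensors, one per unique entity
        p_type = self.type_mlp(h_z).softmax(-1)                         # event-type distribution
        p_args = []
        for h_mentions in mention_reps:
            q = h_z.view(1, 1, -1)                                      # proxy node as the query
            kv = h_mentions.unsqueeze(0)                                # its mentions as keys/values
            h_ze, _ = self.mha(q, kv, kv)                               # aggregated entity rep, Eq. (6)
            p_args.append(self.arg_mlp(torch.cat([h_z, h_ze.view(-1)], -1)).softmax(-1))
        # Eq. (8): taking the argmax of p_type and of each row of p_args yields
        # the decoded event type and the argument role of each unique entity.
        return p_type, torch.stack(p_args)
```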
### Hausdorff Distance Minimization In this section, we construct a predicted pseudo-event set \(\mathcal{U}_{z}\) represented by proxy nodes and a ground-truth event set \(\mathcal{U}_{y}\). We define \(\mu_{z_{i}}\) as the \(i\)-th pseudo-event, represented by \(z_{i}\), with \(\big{(}\hat{c}_{i},\{(e_{k},\hat{a}_{i,k})\}\big{)}\), and \(\mu_{y_{j}}\) denotes the \(j\)-th ground-truth event \(\big{(}c_{j},\{(e_{k},a_{j,k})\}\big{)}\). We further define the distance \(d(\mu_{z_{i}},\mu_{y_{j}})\) between the predicted event \(\mu_{z_{i}}\) and the ground-truth event \(\mu_{y_{j}}\) as: \[\begin{split} d(\mu_{z_{i}},\mu_{y_{j}})&=\text{CE}(\mathbf{p}_{\hat{c}_{i}},c_{j})\\ &+\frac{1}{|\bar{e}|}\sum_{k=1}^{|\bar{e}|}\text{CE}(\mathbf{p}_{\hat{a}_{i,k}},a_{j,k})\end{split} \tag{9}\] where \(\text{CE}(.)\) is the cross-entropy loss; \(|\bar{e}|\) denotes the number of unique entities; \(k\) indicates different entities. \(d(\mu_{z},\mu_{y})\) is essentially computed by the total cross-entropy loss of event type classification and argument classification between the \(i\)-th proxy node and the \(j\)-th ground-truth event. We aim to minimize the Hausdorff distance between the sets \(\mathcal{U}_{z}\) and \(\mathcal{U}_{y}\) to learn the model by considering all events and their arguments simultaneously. As the standard Hausdorff distance is highly sensitive to outliers, we use the average Hausdorff distance (Schutze et al., 2012; Taha and Hanbury, 2015): \[\begin{split} D_{H}(\mathcal{U}_{z},\mathcal{U}_{y})&=\frac{1}{|\mathcal{U}_{z}|}\sum_{\mu_{z}\in\mathcal{U}_{z}}\underset{\mu_{y}\in\mathcal{U}_{y}}{\text{min}}\,d(\mu_{z},\mu_{y})\\ &+\frac{1}{|\mathcal{U}_{y}|}\sum_{\mu_{y}\in\mathcal{U}_{y}}\underset{\mu_{z}\in\mathcal{U}_{z}}{\text{min}}\,d(\mu_{z},\mu_{y})\end{split} \tag{10}\] However, in our task, the average Hausdorff distance could suffer from the problem that a predicted event, represented by a proxy node, may be guided to learn towards more than one different ground-truth event at the same training iteration when this proxy node is the closest neighbor of multiple ground-truth events. To address this problem, we add a constraint to the average Hausdorff distance that the distance computation \(d(.)\) should be performed no more than once on each \(\mu_{z}\) and \(\mu_{y}\), and we modify the average Hausdorff distance as: \[\widehat{D}_{H}(\mathcal{U}_{z},\mathcal{U}_{y})=\text{min}\left\{\sum_{(\mu_{z},\mu_{y})\in\mathcal{U}_{z}\times\mathcal{U}_{y}}d(\mu_{z},\mu_{y})\right\} \tag{11}\] For example, if \(d(\mu_{z_{1}},\mu_{y_{1}})\) has been computed, then \(d(\mu_{z_{2}},\mu_{y_{1}})\) is no longer allowed, as \(\mu_{y_{1}}\) has already been used in a \(d(.)\) computation. To this end, Eq. (11) with the constraint becomes a minimum loss alignment problem. To better solve Eq. (11) under the constraint, we construct an undirected bipartite graph \(G=(\mathcal{U}_{z},\mathcal{U}_{y},\mathcal{T})\), where \(\mu_{z}\in\mathcal{U}_{z}\) and \(\mu_{y}\in\mathcal{U}_{y}\) are nodes of two parts representing the predicted events and the ground-truth events, respectively. \(t\in\mathcal{T}\) denotes an edge, which only exists between a \(\mu_{z}\) and a \(\mu_{y}\).
The weight of edge \(t\) between nodes \(\mu_{z}\) and \(\mu_{y}\) is defined as: \[w(t_{z,y})=d(\mu_{z},\mu_{y}) \tag{12}\] The first step is to find an edge set \(\mathcal{T}\) that achieves the minimum value in the following equation: \[\widehat{\mathcal{T}}=\text{argmin}\sum_{t_{z,y}\in\mathcal{T}}w(t_{z,y}), \tag{13}\] where the edge \(t\in\mathcal{T}\) must meet these conditions: (1) each \(\mu_{z}\) has exactly one edge connected to it; (2) each \(\mu_{y}\) has no more than one edge connected to it. Eq. (13) can be computed efficiently with (Ramakrishnan et al., 1991; Bertsekas, 1981). Then the final distance is computed by combining Eq. (11), (12), and (13) as: \[\widehat{D}_{H}(\mathcal{U}_{z},\mathcal{U}_{y})=\sum_{t_{z,y}\in\widehat{ \mathcal{T}}}w(t_{z,y}) \tag{14}\] Finally, we use \(\widehat{D}_{H}(\mathcal{U}_{z},\mathcal{U}_{y})\) to approximate average Hausdorff distance \(D_{H}(\mathcal{U}_{z},\mathcal{U}_{y})\). As \(n\) has been set to be a very large number, if the number of ground-truth events is less than the number of predicted events in a document, pseudo _null_ events are added to the ground-truth event set as negative labels to make the number of ground-truth events equals to the number of predicted events. In summary, \(\widehat{D}_{H}(\mathcal{U}_{z},\mathcal{U}_{y})\) is the distance between the predicted events set and the ground-truth events set, which considers all events with all of their arguments at the same time, essentially capturing a global alignment. ### Objective Function The final loss is the sum of approximate Hausdorff distance and entity representation loss: \[\mathcal{L}=\widehat{D}_{H}(\mathcal{U}_{z},\mathcal{U}_{y})+\mathcal{L}_{ \text{e}} \tag{15}\] ## 4 Experiments In this section, we present performance and runtime experiments in comparison with state-of-the-art approaches. We also discuss the ablations study. Entity and event visualisation results can be found in Appendix B. ### Experimental Setup DatasetWe evaluate ProCNet on the two document-level multi-event extraction datasets: (1) ChFinAnn dataset2(Zheng et al., 2019) consists of 32,040 financial documents, with 25,632, 3,204, and 3,204 in the train, development, and test sets, respectively, and includes five event types. The dataset contains 71% of single-event documents and 29% of multi-event documents. (2) DuEE-Fin dataset3(Han et al., 2022) has around 11,900 financial documents and 13 event types. As the released dataset does not contain the ground truth annotations for the test set, we follow the setting of (Liang et al., 2022) and use the original development set as the test set. We set aside 500 documents from the training set as the development set. Our final dataset has 6,515, 500, and 1,171 documents in the train, development, and test sets, respectively. There are 67% of single-event documents and 33% of multi-event documents. More details about the event types and their distributions are in Appendix A.1. Evaluation MetricsWe follow the same metrics in Zheng et al. (2019). For a predicted event of a specific event type, the most similar ground-truth event that is of the same event type is selected without replacement. Then the micro-averaged role-level precision, recall, and F1-score are calculated for the predicted event and the selected gold event. Implementation DetailTo keep it simple, we only use one-layer GNN-FiLM (Brockschmidt, 2020) with a single linear layer as the hyper-function. 
Specifically, we have \(f_{\gamma}(\mathbf{h}_{v}^{(l)};\mathbf{\theta}_{\gamma,\varepsilon})=\mathbf{W}_{\gamma,\varepsilon}\mathbf{h}_{v}^{(l)}\) and \(f_{\beta}(\mathbf{h}_{v}^{(l)};\mathbf{\theta}_{\beta,\varepsilon})=\mathbf{W}_{\beta,\varepsilon}\mathbf{h}_{v}^{(l)}\) in Eq. (3). The number of proxy nodes \(n\) is set to 16. More implementation details are in Appendix A.2. **Baselines** The baselines that we compare with are as follows: **DCFEE** (Yang et al., 2018) uses an argument-completion strategy in the table-filling task. Two variants of DCFEE are **DCFEE-O** for single-event and **DCFEE-M** for multi-event extraction. **Doc2EDAG** (Zheng et al., 2019) utilizes a path-expansion decoding strategy to extract events in a manner similar to hierarchical clustering. **Greedy-Dec** is a variant of Doc2EDAG that decodes events greedily. **DE-PPN** (Yang et al., 2021) uses a Transformer to encode sentences and entities. **GIT** (Xu et al., 2021) uses a Tracker module to track events during the path-expansion decoding. **PTPCG** (Zhu et al., 2022) combines event arguments in a non-autoregressive decoding approach with pruned complete graphs, aiming to consume fewer computational resources. **ReDEE** (Liang et al., 2022) uses a relation-augmented attention Transformer to cover multi-scale and multi-amount relations. ### Overall Results Table 1 shows the results on the ChFinAnn and the DuEE-Fin datasets. For ChFinAnn, the baseline results are those reported in Zheng et al. (2019); Yang et al. (2021); Xu et al. (2021); Zhu et al. (2022); Liang et al. (2022). For DuEE-Fin, the baseline results are either taken from Liang et al. (2022) or obtained by running the published source code of the baselines. We can observe that a simple argument-completion strategy (DCFEE-O and DCFEE-M) produces the worst results. Greedy-Dec, with its greedy decoding strategy, improves upon the DCFEE variants, but its F1-score is lower than that of Doc2EDAG by 13.7% on ChFinAnn and 6.3% on DuEE-Fin because it only models entity-level representations without a global view. DE-PPN, which uses a Transformer to encode sentences and entities, performs worse than Doc2EDAG, which utilizes a path-expansion decoding strategy. Extending Doc2EDAG with a Tracker module (GIT) or using a relation-augmented attention Transformer (ReDEE) achieves better results compared to earlier approaches.
ProCNet gives the best overall F1-score, outperforming the best baseline, ReDEE, \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{5}{c}{**ChFinAnn**} & \multicolumn{5}{c}{**DuEE-Fin**} \\ \cline{2-10} & **P.** & **R.** & **F1** & **F1** (S.)** & **F1** (M.)** & **P.** & **R.** & **F1** & **F1** (S.)** & **F1** (M.)** \\ \hline DCFEE-O & 68.0 & 63.3 & 65.6 & 69.9 & 50.3 & 59.8 & 55.5 & 57.6 & 62.7 & 53.3 \\ DCFEE-M & 63.0 & 64.6 & 63.8 & 65.5 & 50.5 & 50.2 & 55.5 & 52.7 & 57.1 & 49.5 \\ Greedy-Dec & 82.5 & 53.7 & 65.1 & 80.2 & 36.9 & 66.0 & 50.6 & 57.3 & 67.8 & 47.4 \\ Doc2EDAG & 82.7 & 75.2 & 78.8 & 83.9 & 67.3 & 67.1 & 60.1 & 63.4 & 69.1 & 58.7 \\ DE-PPN & 83.7 & 76.4 & 79.9 & 85.9 & 68.4 & 69.0 & 33.5 & 45.1 & 54.2 & 21.8 \\ PTPCG & 83.7 & 75.4 & 79.4 & 88.2 & - & 71.0 & 61.7 & 66.0 & - & - \\ GIT & 82.3 & 78.4 & 80.3 & 87.6 & 72.3 & 69.8 & 65.9 & 67.8 & 73.7 & 63.8 \\ ReDEE & 83.9 & 79.9 & 81.9 & 88.7 & 74.1 & 77.0 & 72.0 & 74.4 & 78.9 & 70.6 \\ \hline \hline \multicolumn{10}{l}{ProCNet (Ours)} & **84.1** & **81.9** & **83.0** & **89.6** & **75.6** & **78.8** & **72.8** & **75.6** & **80.0** & **72.1** \\ \hline \hline \end{tabular} \end{table} Table 1: Overall precision (P.), recall (R.), and F1-score (F1) on the ChFinAnn and DuEE-Fin datasets. F1 (S.) and F1 (M.) denote scores under Single-event (S.) and Multi-event (M.) sets. \begin{table} \begin{tabular}{l c c c c c} \hline \hline **Model** & **EF** & **ER** & **EU** & **EO** & **EP** \\ \hline DCFEE-O & 51.1 & 83.1 & 45.3 & 46.6 & 63.9 \\ DCFEE-M & 45.6 & 80.8 & 44.2 & 44.9 & 62.9 \\ Greedy-Dec & 58.9 & 78.9 & 51.2 & 51.3 & 62.1 \\ Doc2EDAG & 70.2 & 87.3 & 71.8 & 75.0 & 77.3 \\ DE-PPN & 73.5 & 87.4 & 74.4 & 75.8 & 78.4 \\ GIT & 73.4 & 90.8 & 74.3 & 76.3 & 77.7 \\ ReDEE & 74.1 & 90.7 & 75.3 & **78.1** & 80.1 \\ \hline \hline \multicolumn{10}{l}{ProCNet (Ours)} & **75.7** & **93.7** & **76.0** & 72.0 & **81.3** \\ \hline \hline \end{tabular} \end{table} Table 2: F1-score of five event types on ChFinAnn. by 1.1-1.2%, respectively on ChFinAnn and DuEE-Fin. It can also be observed that all models have better F1-scores for the single-event scenario than the multi-event one, verifying the difficulty of extracting multiple events from a document. When comparing results across the two datasets, we see better results achieved on ChFinAnn, possibly due to its larger training set and smaller set of event types compared to DuEE-Fin. ### Per-Event-Type Results Table 2 and Table 3 show the evaluation results on the 5 and 13 event types4 on ChFinAnn and DuEE-Fin, respectively. On ChFinANN, ReDEE outperforms the others on EO. On DuEE-Fin, ReDEE gives the best results on SR and PL, while GIT outperforms the others on SI. Some documents of these event types contain more than 40 sentences. A possible reason for ProCNet not performing well on these event types is its limited capability of capturing long-term dependencies across sentences, since ProCNet does not directly model the relations between sentences. On the contrary, ReDEE and GIT model the inter-relations of sentences directly. Nevertheless, ProCNet achieves superior results on other event types, resulting in overall better performance compared to baselines. Footnote 4: Please refer to Appendix A.1 for event type descriptions. ### Run-Time Comparison We compare the training time of the five baselines, Doc2EDAG, DE-PPN, PTPCG, GIT, and ReDEE, with ProCNet on a GPU server with NVIDIA Quadro RTX 6000 and the same setting. 
We record the average per-epoch training time and the total time to reach convergence in Table 4. DuEE-Fin contains less data than ChFinAnn; as such, Doc2EDAG, GIT, and ProCNet trained faster on DuEE-Fin. However, ReDEE took a longer time to converge on DuEE-Fin, because ReDEE models the relations of all argument-argument pairs. As the number of event types and argument types in DuEE-Fin is larger than that in ChFinAnn, the training time of ReDEE increases sharply. DE-PPN runs faster than Doc2EDAG, GIT, and ReDEE but slower than ProCNet. In contrast, ProCNet avoids the time-consuming decoding by introducing the proxy nodes and HDM. Besides, ProCNet can process all proxy nodes and their arguments in parallel, which is more GPU-friendly. PTPCG has a shorter per-epoch run time but took a longer time to converge on ChFinAnn, though it appears to be more run-time efficient than our approach on DuEE-Fin. In summary, ProCNet is 0.5x-44.8x faster than the baselines per epoch, and 0.6x-45.4x faster to reach convergence. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{2}{c}{**Per Epoch**} & \multicolumn{2}{c}{**Convergence**} \\ \cline{2-5} & **Time** & **Ratio** & **Time** & **Ratio** \\ \hline \multicolumn{5}{c}{**ChFinAnn**} \\ \hline Doc2EDAG & 4:40 & 5.2x & 327:09 & 10.7x \\ DE-PPN & 1:54 & 2.1x & 87:27 & 2.8x \\ PTPCG & 0:26 & 0.5x & 39:04 & 1.3x \\ GIT & 4:48 & 5.3x & 317:35 & 10.3x \\ ReDEE & 8:12 & 9.1x & 525:33 & 17.1x \\ \hline ProCNet (Ours) & 0:54 & 1.0x & 30:34 & 1.0x \\ \hline \multicolumn{5}{c}{**DuEE-Fin**} \\ \hline Doc2EDAG & 1:53 & 11.3x & 249:35 & 16.5x \\ DE-PPN & 0:15 & 1.5x & 24:36 & 1.6x \\ PTPCG & 0:06 & 0.6x & 9:28 & 0.6x \\ GIT & 1:50 & 11.0x & 178:38 & 11.8x \\ ReDEE & 7:28 & 44.8x & 687:14 & 45.4x \\ \hline ProCNet (Ours) & 0:10 & 1.0x & 15:09 & 1.0x \\ \hline \hline \end{tabular} \end{table} Table 4: Average GPU time (hh:mm) per epoch and to reach convergence. \begin{table} \begin{tabular}{l c c c c c c c c c c c c c} \hline \hline **Model** & **WB** & **FL** & **BA** & **BB** & **CF** & **CL** & **SD** & **SI** & **SR** & **RT** & **PR** & **PL** & **EC** \\ \hline DCFEE-O & 54.0 & 65.4 & 44.0 & 27.3 & 58.2 & 42.0 & 48.8 & 53.9 & 76.7 & 32.9 & 63.3 & 58.3 & 40.6 \\ DCFEE-M & 49.2 & 68.0 & 40.4 & 28.4 & 51.2 & 35.1 & 42.3 & 45.9 & 74.0 & 51.0 & 55.8 & 56.4 & 37.4 \\ Greedy-Dec & 53.7 & 71.8 & 49.5 & 41.1 & 61.3 & 42.1 & 49.7 & 57.4 & 74.4 & 29.2 & 60.8 & 50.5 & 39.4 \\ Doc2EDAG & 60.0 & 78.3 & 50.6 & 40.1 & 63.2 & 51.5 & 50.7 & 52.9 & 83.7 & 51.2 & 64.8 & 61.7 & 51.2 \\ DE-PPN & 50.7 & 62.7 & 41.3 & 21.4 & 36.3 & 23.0 & 32.9 & 31.3 & 67.8 & 25.8 & 42.1 & 36.3 & 23.4 \\ GIT & 58.8 & 77.6 & 56.6 & 44.7 & 68.5 & 55.1 & 58.8 & **71.2** & 86.4 & 45.0 & 66.4 & 71.3 & 53.8 \\ ReDEE & 72.2 & 81.2 & 58.9 & 53.4 & 76.7 & 56.7 & 68.2 & 56.6 & **90.6** & 49.9 & 75.0 & **77.8** & 56.6 \\ \hline ProCNet (Ours) & **76.0** & **85.0** & **69.8** & **63.5** & **79.0** & **60.5** & **69.3** & 68.2 & 89.2 & **50.0** & **77.4** & 76.9 & **56.9** \\ \hline \hline \end{tabular} \end{table} Table 3: F1-score on the DuEE-Fin dataset with 13 event types. ### Ablation Study Table 5 shows how different components in ProCNet contribute to performance: **–Hypernetwork** The hypernetwork is removed by replacing GNN-FiLM with RGCN (Schlichtkrull et al., 2018), where all proxy nodes in RGCN share the same message-passing function.
We see a drop of about 1% in F1 on both datasets, showing the importance of using different entity aggregation functions for different event proxy nodes. **–Proxy Node** We replace \(\{\mathbf{h}_{z_{i}}\}_{i=1}^{n}\) with \(\{\mathbf{h}_{z_{0}}\}_{i=1}^{n}\), where all proxy nodes share the same embedding \(\mathbf{h}_{z_{0}}\). In this way, \(\mathbf{h}_{z_{0}}\) acts as a common start node as in existing baselines. It can be observed that F1 drops dramatically, to 4.4% and 1.7%, respectively. The model learns almost nothing, which verifies the importance of the proxy nodes for ProCNet. **–HDM** Instead of minimizing the Hausdorff distance between the predicted set and the ground-truth set globally, we randomly initialize the edge set \(\widehat{\mathcal{T}}\) without employing Eq. (13), so that the minimization is not performed towards the global minimum. We see a drastic decrease in performance. Without HDM, it is difficult for the model to learn the alignment between a proxy node and a ground-truth event, showing that HDM is an indispensable component of ProCNet. ### Case Study Figure 3 shows an error case of ProCNet. _Event #1_ has its arguments spanning from sentence #9 to sentence #21. The text contains a few dates, making it difficult for our model to assign the correct dates to the _StartDate_ and _EndDate_ arguments. For _Event #2_, the detection of _LaterHoldingShares_ requires the model to associate the '_above-mentioned increase_' with _Event #2_. These errors show that ProCNet still faces difficulty in modeling long-distance dependencies. ## 5 Conclusion In this paper, we no longer focus on inter-entity relation modeling and decoding strategies as in previous methods; instead, our proposed ProCNet directly learns all events globally through the use of event proxy nodes and the minimization of the Hausdorff distance. In our experiments, ProCNet outperforms state-of-the-art approaches while only requiring a fraction of the training time. \begin{table} \begin{tabular}{l|c c c c c c} \hline \hline \multirow{2}{*}{**Model**} & \multicolumn{3}{c}{**ChFinAnn**} & \multicolumn{3}{c}{**DuEE-Fin**} \\ \cline{2-7} & **P.** & **R.** & **F1** & **P.** & **R.** & **F1** \\ \hline ProCNet (Ours) & 84.1 & 81.9 & 83.0 & 78.8 & 72.8 & 75.6 \\ \hline –Hypernetwork & 82.7 & 81.6 & 82.1 & 77.0 & 72.2 & 74.5 \\ –Proxy node & 41.3 & 2.3 & 4.4 & 21.1 & 1.0 & 1.7 \\ –HDM & 17.0 & 19.8 & 18.3 & 13.3 & 8.2 & 10.1 \\ \hline \hline \end{tabular} \end{table} Table 5: Ablation study on ChFinAnn and DuEE-Fin. Figure 3: Error case study with incorrect arguments colored in red. [.] denotes the sentence numbering. ## Acknowledgements This work was supported in part by the UK Engineering and Physical Sciences Research Council (grant no. EP/T017112/2, EP/V048597/1, EP/X019063/1). YH is supported by a Turing AI Fellowship funded by UK Research and Innovation (grant no. EP/V020579/2). ## Limitations In our proposed model, we introduce a hyper-parameter \(n\) as the number of event proxy nodes. The value of \(n\) needs to be pre-set. Setting \(n\) to a value larger than the actual number of events in a document leads to computational redundancy, as more proxy nodes are mapped to the _null_ event. However, setting \(n\) to a small value may miss some events in a document. We have experimented with automatically learning the value of \(n\) based on the input document in ProCNet, but we did not observe improved event extraction performance. As such, we simply set it to 16.
In the ChFinAnn dataset, 98% of documents have fewer than 7 annotated events. This results in the learning of many redundant proxy nodes for such documents. It remains an open challenge to automatically learn a varying number of event proxy nodes based on an input document. Reducing the number of redundant proxy nodes could reduce the training time further. Another shortcoming is the limited capability of ProCNet in capturing the long-term dependencies of sentences, as has been discussed in the per-event-type results in Sections 4.2 and 4.3. We observed relatively worse performance of ProCNet on long documents with more than 40 sentences, as it does not explicitly model the inter-relations of sentences. One possible direction is to explore the use of a heterogeneous graph that additionally models the entity-entity, entity-sentence, and sentence-sentence relations. We leave studying the trade-off between event extraction performance and training efficiency as future work.
2304.11395
Topological Edge States in Nanoparticle Chains: Isolating Radiative Heat Flux
Recent advancements in the field of topological band theory have significantly contributed to our understanding of intriguing topological phenomena observed in various classical and quantum systems, encompassing both wave and dissipative systems. In this study, we employ the notion of band theory to establish a profound connection between the spatio-temporal evolution of temperatures and the underlying topological properties of radiative systems. These systems involve the exchange of energy through radiation among a collection of particles. By utilizing the eigenstates of the response matrix of the system, we establish a robust framework for the examination of topological properties in radiative systems, considering both symmetric and asymmetric response matrices. Our formalism is specifically applied to investigate the topological phase transition in a one-dimensional chain composed of an even number of spherical nanoparticles. We provide compelling evidence for the existence and robustness of topological edge states in systems characterized by an asymmetric response matrix. Moreover, we demonstrate that by manipulating the arrangement and volume of particles, it is possible to control the system's structure and achieve desired topological features. Interestingly, we showed that the radiative heat transfer can be controlled and prevented by topological insulation. Additionally, we conduct an analysis of the temperature dynamics and the associated relaxation process in the proposed system. Our research findings demonstrate that the interplay between bulk states and localized states is pivotal in the emergence of distinct eigenstates and provides significant insights into the spatio-temporal dynamics of temperature and the process of thermalization within a system.
Moladad Nikbakht, Farzam Bahmani
2023-04-22T13:12:35Z
http://arxiv.org/abs/2304.11395v2
# Topological Edge States in Nanoparticle Chains: Isolating Radiative Heat Flux ###### Abstract Recent advancements in the field of topological band theory have significantly contributed to our understanding of intriguing topological phenomena observed in various classical and quantum systems, encompassing both wave and dissipative systems. In this study, we employ the notion of band theory to establish a profound connection between the spatio-temporal evolution of temperatures and the underlying topological properties of radiative systems. These systems involve the exchange of energy through radiation among a collection of particles. By utilizing the eigenstates of the response matrix of the system, we establish a robust framework for the examination of topological properties in radiative systems, considering both symmetric and asymmetric response matrices. Our formalism is specifically applied to investigate the topological phase transition in a one-dimensional chain composed of an even number of spherical nanoparticles. We provide compelling evidence for the existence and robustness of topological edge states in systems characterized by an asymmetric response matrix. Moreover, we demonstrate that by manipulating the arrangement and volume of particles, it is possible to control the system's structure and achieve desired topological features. Interestingly, we showed that the radiative heat transfer can be controlled and prevented by topological insulation. Additionally, we conduct an analysis of the temperature dynamics and the associated relaxation process in the proposed system. Our research findings demonstrate that the interplay between bulk states and localized states is pivotal in the emergence of distinct eigenstates and provides significant insights into the spatio-temporal dynamics of temperature and the process of thermalization within a system. This interplay holds the potential to be leveraged for the development of structures, facilitating efficient heat transfer even in the presence of perturbation. Consequently, it enables precise experimental measurements of heat transfer and serves as a platform for the exploration of thermal topology, offering new avenues for scientific inquiry in this field. It is widely recognized that when two objects are in close proximity (compared to the thermal wavelength \(\lambda_{T}=2\pi\hbar c/k_{B}T\)), the radiative heat exchange between them is significantly larger than what is predicted by Stefan-Boltzmann's law [1]. As a result, the thermal transport and thermalization processes in a system of nanoparticles (NPs) are intricately linked to factors such as their geometrical arrangement, shape, composition, and inter-particle separation distances [2; 3; 4; 5; 6; 7; 8; 9; 10]. Theoretical advancements and experimental measurements have spurred significant efforts in thermal management and regulation within particle ensembles [11; 12; 13; 14; 15], as well as structures with planar geometries [16; 17; 18]. Over the past few years, numerous studies have focused on developing mechanisms to control the magnitude and direction of radiative heat exchange. 
The recent progress in passive and active control of radiative heat transfer has generated considerable interest in phenomena such as thermal rectifiers [19; 20; 21; 22], magneto-resistance [23; 24], persistent heat fluxes [25; 26], thermal barriers [27], heat shuttling[28], orientation-based regulators [29; 30], material-based regulators [31; 32; 33; 34], heat pumping[35; 36], and thermal switching [37; 38; 39; 40]. Recent studies have provided valuable insights into the influence of topology on dipolar interactions, topological invariance, and the resilience of localized modes in systems composed of particles with dipolar interaction. Downing and Weick conducted research on the topological properties of collective plasmonic excitations in bipartite chains of spherical metallic nanoparticles, shedding light on the interplay between topology and dipolar interactions [41; 42]. Ling et al. investigated the existence of topological edge plasmon modes in diatomic chains consisting of identical plasmonic nanoparticles, further elucidating their emission rate [43]. Pocock et al. performed a comprehensive analysis of wave systems, emphasizing the longitudinal and transverse modes, as well as the significance of disorder on the topological properties [44]. Wang and Zhao focused on the topological phases exhibited by one-dimensional dimerized doped silicon nanoparticle chains, providing crucial insights into the interplay between topology and material properties [45]. Nikbakht and Mahdieh examined the localization of dipolar modes in fractal structures [46]. In addition to the conventional studies mentioned earlier, recent research endeavors are also investigating the influence of topology on radiative heat exchange in the static regime. For instance, Nikbakht investigated the localization of thermal modes in fractal structures [47]. Additionally, Ott and Biehs investigated the thermal topology in plasmonic InSb nanoparticles [48; 49]. Luo et al. proposed a topological insulator analog in radiative systems [50]. Ott et al. studied the local density of states in plasmonic nanostructures. [51]. Herz and Biehs proposed thermal radiation and near-field thermal imaging of a plasmonic Su-Schrieffer-Heeger chain [52]. However, it is important to note that these non-trivial topological behaviors, arising from the localization of dipolar modes in the systems, may not have a significant impact on transient or steady-state heat transfer in the system. On the basis of recent theoretical studies, the temporal evolution of temperatures in collection of particles can be simplified by linearizing the energy balance equation [53; 54; 55]. In these approaches, the dynamic of temperatures becomes a simple eigenvalue problem. From the dynamical perspective, in addition to the features we mentioned earlier, the radiative heat transfer between NPs will be affected by spatio-temporal coupling of the temperatures. Thus, the temporal evolution of temperatures is not solely determined by the radiative heat exchanges, but also by the history dependent dynamics. Several theoretical and experimental works based on topological band theory has revealed that most of the unusual behaviors in the dynamics of wave systems can be governed by their topological properties [56; 57; 58; 59]. Unlike quantum mechanical or classical wave systems, thermal evolution in radiative systems is of dissipative nature. 
Recent studies have suggested that asymmetric analogues of the Su-Schrieffer-Heeger (SSH) model are relevant in describing one-dimensional topological behavior in dissipative systems [60; 61; 62; 63]. Ultimately, it is of interest to propose a system showing topological features, such as insulation, in radiative systems. In this study, we apply the concept of band topology, which is traditionally used in classical and quantum wave systems, to investigate heat transfer in radiative systems. We demonstrate how the utilization of the response matrix allows for an understanding of the topological behavior exhibited by these systems. Our calculations are carried out within the theoretical framework of fluctuational electrodynamics, specifically in the dipolar approximation regime. By employing a linearization approach, we determine the eigenvalues and eigenstates of the response matrix, which govern the temporal evolution of temperatures in the underlying radiative system. The topological phases observed are then interpreted in terms of these eigenstates. We utilize this methodology to illustrate a topological phase transition and the emergence of localized states with topological robustness in a chain of spherical nanoparticles. Subsequently, we investigate the thermalization process in the presence of localized states and unveil the existence of a simple topological insulator in radiative systems. ## I Theoretical model Consider a bipartite chain of spherical NPs along the \(x\) axis with lattice constant \(d\), as depicted in Fig. (1a). The intra-cell separation distance between NPs A and B is \(\beta d\), and the volumes of the NPs are \(V_{i}(\beta)\). Here, the number of particles is taken to be even, and \(\beta\in[0.45,0.55]\) is used as the topological tuning parameter. Particles are initially at temperatures \(T_{i}\) and immersed in a thermal bath at constant temperature \(T_{b}=300\) K. Following previous studies, the differential equation governing the evolution of temperatures is [64; 65] \[\gamma_{i}\frac{dT_{i}}{dt}=\mathcal{P}_{i},\ \ (i=1,2,\cdots,N), \tag{1}\] where \(\mathcal{P}_{i}\) is the radiative power dissipated in particle \(i\). Moreover, \(\gamma_{i}=c_{i}V_{i}\) is the heat capacity of the particle, with volumetric specific heat capacity \(c_{i}\). Within the linear approximation we get a Schrödinger-_like_ equation, which takes an elegant form when written in compact notation: \[\frac{d}{dt}\Delta\mathbf{T}(t)=-\hat{H}\Delta\mathbf{T}(t), \tag{2}\] where \(\Delta\mathbf{T}=(\Delta T_{1},\Delta T_{2},\cdots,\Delta T_{N})^{\intercal}\) is the column vector representing the temperature state, with elements \(\Delta T_{i}=T_{i}-T_{b}\) that describe the temperature deviation of the NPs with respect to the environment. Moreover, \(\hat{H}=-\hat{\Gamma}^{-1}\hat{F}\) is the \(N\times N\) _static_ response matrix with elements \(H_{ij}=-F_{ij}/\gamma_{i}\), and \(\hat{\Gamma}\) is a diagonal matrix with the heat capacities along the diagonal, i.e., \(\hat{\Gamma}=\text{diag}\{\gamma_{1},\gamma_{2},\cdots,\gamma_{N}\}\). Following previous works [53; 54; 55], the elements of the conductance matrix are given by \[F_{ij}=\int_{0}^{\infty}\frac{d\omega}{2\pi}\tau_{ij}(\omega)\frac{\partial\Theta(\omega,T)}{\partial T}\Big{|}_{T_{b}}. \tag{3}\] In this equation, \(\Theta(\omega,T)=\hbar\omega/[\exp(\hbar\omega/k_{B}T)-1]\) is the mean energy of a Planck harmonic oscillator in thermal equilibrium at frequency \(\omega\) and temperature \(T\).
Moreover, \(\tau_{ij}=2\texttt{Im}\texttt{Tr}[\hat{\mathbb{A}}_{ij}\texttt{Im}(\hat{ \mathbb{A}}_{j})\hat{\mathbb{C}}_{ij}^{\dagger}]\) represents the radiative energy transmission coefficient between \(i\)-th and \(j\)-th NPs, which includes many-body effects (for more details see Refs. [12; 14; 15]). In this expression \(3N\times 3N\) block matrices are defined as \(\mathbb{A}=[1-\widetilde{\alpha}\mathbb{W}]^{-1}\), \(\hat{\mathbf{\chi}}=\widetilde{\alpha}+\widetilde{\alpha}\hat{\mathbb{G}}_{0}^{ \dagger}\widetilde{\alpha}^{\dagger}\), and \(\hat{\mathbb{C}}=[\mathbb{W}+\hat{\mathbb{G}}_{0}]\mathbb{A}\). Here, \(\mathbb{1}\) stands for the identity matrix, \(\hat{\mathbb{W}}\) is the dipole-dipole interaction matrix, \(\hat{\mathbb{G}}_{0}=i(k^{3}/6\pi)\mathbb{1}\), and \(\widetilde{\alpha}=\text{diag}\{\hat{\mathbf{\alpha}}_{1},\hat{\mathbf{\alpha}}_{2},\cdots,\hat{\mathbf{\alpha}}_{N}\}\) is the polarizability matrix. Furthermore, the \(3\times 3\) block matrices in \(\mathbb{W}\) represent the dipolar interaction between particles \(i\) and \(j\). This interaction is described by the free space Green's tensor \(\hat{\mathbb{G}}_{i\neq j}=\frac{k^{3}}{4\pi}\big{[}f(kr_{ij})\mathbb{1}+g( kr_{ij})\frac{\mathbf{r}_{ij}\otimes\mathbf{r}_{ij}}{r_{ij}^{2}}\big{]}\). In this equation, \(f(x)=[x^{-1}+ix^{-2}-x^{-3}]\exp(ix)\) and \(g(x)=[-x^{-1}-3ix^{-2}+3x^{-3}]\exp(ix)\), where \(k=\omega/c\) and \(r_{ij}\) represents the distance between the \(i\)-th and \(j\)-th nanoparticles, which are located at positions \(\mathbf{r}_{i}\) and \(\mathbf{r}_{j}\), respectively. In the case of identical particles, the heat capacity matrix simplifies to \(\hat{\Gamma}=\gamma\mathbb{1}\), where \(\gamma\) represents the heat capacities. Furthermore, since \(\hat{F}\) is symmetric, the response matrix becomes symmetric. However, in a system of NPs with different volumes, the heat capacity matrix takes the form \(\hat{\Gamma}=c\ \text{diag}\{V_{1},V_{2},\cdots,V_{N}\}\). As a result, the response matrix becomes asymmetric. Nonetheless, the response matrix remains diagonalizable, allowing us to express it in biorthonormal form as \(\hat{H}=\sum_{\mu=1}^{N}\lambda_{\mu}\mathbf{\psi}_{\mu}\mathbf{\phi}_{\mu}^{\dagger}\). Here, \(\mathbf{\psi}_{\mu}\) and \(\mathbf{\phi}_{\mu}\) represent the right and left eigenvectors of \(\hat{H}\), respectively, corresponding to the eigenvalue \(\lambda_{\mu}\). They satisfy the equations \(\hat{H}\mathbf{\psi}_{\mu}=\lambda_{\mu}\mathbf{\psi}_{\mu}\) and \(\mathbf{\phi}_{\mu}^{\intercal}\hat{H}=\lambda_{\mu}\mathbf{\phi}_{\mu}^{\intercal}\). Throughout the paper, the vector \(\mathbf{\psi}_{\mu}\) along with its corresponding eigenvalue is referred to as the \(\mu\)th mode. Additionally, for convenience, we use indices \(\{i,j,m,n\}\) to label the nanoparticles (i.e., lattice sites), and Greek indices \(\mu\) and \(\nu\) to denote the mode numbers. It is important to emphasize that in general, \(\mathbf{\psi}_{\mu}\) are not mutually orthogonal, and \(\mathbf{\psi}_{\mu}\neq\mathbf{\phi}_{\mu}\). However, due to the diagonalizability of the response matrix, its left and right eigenvectors satisfy biorthogonality, i.e., \(\mathbf{\phi}_{\mu}^{\intercal}\cdot\mathbf{\psi}_{\mu}=\delta_{\mu\nu}\). Additionally, it can be shown that these eigenvectors are related by \(\mathbf{\phi}_{\mu}=\hat{\Gamma}\mathbf{\psi}_{\mu}/(\mathbf{\psi}_{\mu}^{\intercal}\hat{ \Gamma}\mathbf{\psi}_{\mu})\). 
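Taken together, these relations mean that, once the conductance matrix \(\hat{F}\) and the heat capacities \(\gamma_{i}\) are known, the eigenvalues and biorthogonal eigenvectors of \(\hat{H}\) fully determine the temperature dynamics. The following is a minimal numpy sketch of this bookkeeping, not the full many-body calculation: it assumes the transmission coefficients \(\tau_{ij}(\omega)\) have already been obtained from the fluctuational-electrodynamics model, and the helper for \(\partial\Theta/\partial T\) only indicates how the thermal factor in Eq. (3) enters. The last function anticipates the modal solution given in the next paragraph.

```python
import numpy as np
from scipy.constants import hbar, k as k_B

def dTheta_dT(omega, T):
    # Derivative of the Planck oscillator mean energy, the thermal factor in Eq. (3).
    x = hbar * omega / (k_B * T)
    return k_B * x**2 * np.exp(x) / np.expm1(x)**2

def conductance_matrix(omegas, tau, T_b=300.0):
    # Eq. (3): F_ij = int dw/(2 pi) tau_ij(w) dTheta/dT |_(T_b), via the trapezoidal rule.
    # tau is assumed precomputed with shape (n_freq, N, N).
    weights = dTheta_dT(omegas, T_b) / (2.0 * np.pi)
    return np.trapz(tau * weights[:, None, None], omegas, axis=0)

def response_matrix(F, gamma):
    # H_ij = -F_ij / gamma_i, with gamma_i = c_i V_i the heat capacities.
    return -F / gamma[:, None]

def biorthogonal_modes(H, gamma):
    # Right eigenvectors psi_mu (columns) of the generally asymmetric H, and
    # left eigenvectors phi_mu = Gamma psi_mu / (psi_mu^T Gamma psi_mu).
    lam, psi = np.linalg.eig(H)
    Gpsi = gamma[:, None] * psi
    phi = Gpsi / np.sum(psi * Gpsi, axis=0, keepdims=True)
    return lam, psi, phi

def evolve(dT0, lam, psi, phi, t):
    # Modal solution dT(t) = sum_mu C_mu(0) exp(-lam_mu t) psi_mu,
    # with initial weights C_mu(0) = phi_mu^T . dT(0).
    C0 = phi.T @ dT0
    return (psi * (C0 * np.exp(-lam * t))).sum(axis=1).real
```

For reciprocal nanoparticles the eigenvalues returned by such a calculation are real up to numerical noise, consistent with the discussion below.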
To examine the temporal changes in temperatures, it is valuable to examine the continuum basis denoted by \(\psi_{\mu}(x)\), where \(\psi_{\mu}(x)\) represents the eigenstate of the Hamiltonian operator \(\hat{H}\) in the spatial representation. Likewise, the temperature distribution \(\Delta T(x,t)\) can be referred to as the state of temperature. Through meticulous calculations (outlined in Section (A.1) of the appendix), the solution to Equation (2) is derived as follows: \[\Delta T(x,t)=\sum_{\mu=1}^{N}C_{\mu}(0)e^{-\lambda_{\mu}t}\psi_{\mu}(x), \tag{4}\] where \(C_{\mu}(0)=\mathbf{\phi}_{\mu}^{\intercal}\cdot\Delta\mathbf{T}(0)\) represents the initial weight of the temperature state in the \(\mu\)th mode. This equation provides the temperature state as a function of spatial and temporal coordinates. By knowing the initial temperature state, we can calculate the initial weight distribution as \(P_{\mu}=|\mathbf{\phi}_{\mu}^{\intercal}\cdot\Delta\mathbf{T}(0)|^{2}/[\Delta \mathbf{T}(0)^{\intercal}\cdot\Delta\mathbf{T}(0)]\). In the supplementary file, we discussed how the weight of the initial temperature state vector plays a significant role in the thermalization process of the system. ## II Localized modes in radiative systems The topological properties of the SSH chain shown in Fig.(1a) are determined by the eigenvalue spectrum of its response matrix. In the case of reciprocal nanoparticles, the eigenvalues of the response matrix are real and depend solely on the tuning parameter \(\beta\). In this study, we focus specifically on the thermal topology of systems consisting of reciprocal nanoparticles. As a concrete example, we consider Silicon-Carbide (SiC) as a typical material, characterized by a volumetric specific heat capacity of \(c=2.4075\times 10^{6}\mathrm{JK}^{-1}\mathrm{m}^{-3}\). The polarizability of the nanoparticles is given by \(\alpha_{i}=3V_{i}(\epsilon-1)/(\epsilon+2)\), where \(\epsilon\) represents the scalar permittivity defined as \(\epsilon=\omega_{\infty}(\omega_{L}^{2}-\omega^{2}-i\gamma\omega)/(\omega_{T} ^{2}-\omega^{2}-i\gamma\omega)\). Here, we adopt specific values for the SiC material, namely \(\epsilon_{\infty}=6.7\), \(\omega_{T}=1.495\times 10^{14}\mathrm{rad}/\mathrm{s}\), \(\omega_{L}=1.827\times 10^{14}\mathrm{rad}/\mathrm{s}\), and \(\gamma=0.9\times 10^{12}\) rad/s. A simple example of the radiative analogue of the SSH model is a chain of nanoparticles with identical volumes. In this case, the parameter \(\beta\) only affects the intra-cell separation distance, resulting in a symmetric response matrix. In Fig. (1b), we present the eigenvalue spectrum of the system for \(N=60\) nanoparticles with a lattice constant of \(d=500\) nm. The nanoparticles have a constant volume \(V_{i}=V_{0}=\frac{4}{3}\pi(50)^{3}\) nm\({}^{3}\), and the results are shown for typical values of \(\beta\) in the range \(\beta\in\{0.46,0.5,0.54\}\). The eigenvalue spectrum reveals a prominent gap at \(\beta=0.46\), indicating a substantial separation between relaxation rate levels. However, at the critical value \(\beta=\beta_{c}=0.5\), the band gap in the spectrum closes, suggesting a transition in the system's states. Subsequently, for values of \(\beta>\beta_{c}\), as exemplified by \(\beta=0.54\), the gap reopens, indicating a similar relaxation configuration. To comprehensively assess the topological properties of the system, we calculate the Zak phase and winding number (refer to section (A.4.2) in the Appendix). 
Our calculations demonstrate that the structure undergoes a topological phase transition across the critical value of \(\beta=0.5\). Importantly, this transition does not coincide with the emergence of localized states within the system. To explore the potential of localized states in one-dimensional radiative systems exhibiting topological features, we investigate systems characterized by an asymmetric response matrix (refer to Section (A.4.3) for the conditions under which localized states exist). Specifically, our focus lies on systems possessing mirror reflection symmetry, commonly referred to as P-symmetry. For simplicity, we assume that the volumes of particles \(m\) and \(n\) in the chain are dependent on \(\beta\), denoted as \(V_{m}=V_{n}=V(\beta)\), where \(n=N+1-m\). Through the appropriate selection of a function \(V(\beta)\), it becomes possible to construct an asymmetric response matrix that facilitates the description of the emergence of localized states as influenced by the coupling parameter \(\beta\). In our study, we consider a simplified scenario where the volumes of both the \(m\)th and \(n\)th nanoparticles are given by \(V_{m}=V_{n}=\frac{4\pi}{3}[1250000\beta-561000]\) nm\({}^{3}\), while all other nanoparticles have constant volumes \(V_{i\neq m,n}=\frac{4\pi}{3}50^{3}\) nm\({}^{3}\). This choice results in a linear dependence of the particle volumes \(m\) and \(n\) on the intra-cell distance factor \(\beta\). The specific form of \(V(\beta)\), detailed in Section (A.4.3) of the Appendix, is designed to maintain flat bands associated with localized states for \(\beta<0.5\). Figure (1c) displays the eigenvalue spectrum of the response matrix for a specific configuration \((m,n)=(1,60)\), employing the same parameters as in Figure (1b), except for \(V_{m}=V_{n}=\frac{4\pi}{3}[1250000\beta-561000]\) nm\({}^{3}\). At \(\beta=0.54\), the spectrum closely resembles the previous case, exhibiting a discernible gap. However, at \(\beta=0.46\), a band gap also emerges, effectively dividing the spectrum into lower and upper bands. Notably, within this band gap, the presence of edge states becomes apparent. As subsequently demonstrated, these edge states exhibit localization at the boundaries of the chain and showcase resilience against perturbations. The existence of these edge states signifies the nontrivial topological nature of the system, which we will further elucidate. The full eigenvalue spectrum of the asymmetric response matrix is depicted in Fig. (2). Figure 1: (a) A schematic representation of a bipartite chain of spherical nanoparticles (NPs) aligned along the \(x\) axis with a lattice constant of \(d\). The NPs are divided into \(A\) and \(B\) sub-lattices, with an intra-cell separation distance of \(\beta d\). The chain is immersed in a thermal bath at a temperature of \(T_{b}=300\) K. (b) Eigenvalue spectrum of the SSH chain with a symmetric response matrix, showcasing typical values of \(\beta\in\{0.46,0.5,0.54\}\). The lattice constant is \(d=500\) nm, the number of particles is \(N=60\), and the volumes of the NPs are constant at \(V_{i}=V_{0}=\frac{4\pi}{3}(50)^{3}\) nm\({}^{3}\). (c) Eigenvalue spectrum of the SSH chain with an asymmetric response matrix. The parameters are the same as in panel (b), except for the volume of particles \(1\) and \(60\), which depend on \(\beta\) according to \(V_{1}(\beta)=V_{60}(\beta)=\frac{4\pi}{3}[1250000\beta-561000]\) nm\({}^{3}\).
The spectrum reveals distinct features depending on the value of the intra-cell distance factor \(\beta\). For values of \(\beta\) below \(0.5\), localized states can be observed in the spectrum. These localized states coexist with the bulk states, giving rise to a rich and diverse eigenvalue spectrum. The term _bulk state_ refers to the thermal modes that are mainly excited within the bulk region of the system, away from any surfaces, interfaces, defects, or edges. These states form the allowed relaxation time levels that represent the thermal properties of the system as a whole. In contrast, an _edge state_, _localized state_, or _defect state_ refers to a state that arises at a boundary or defect position in a system with nontrivial topological properties. Due to their localized nature, such states are often confined to a narrow region of the system. They exhibit unique properties such as unidirectional transport, robustness against disorder, and the presence of relaxation time levels within the band gap. We designate this regime as the topologically nontrivial phase (TNP). Conversely, for values of \(\beta\) above \(0.5\), the spectrum exhibits a different behavior, devoid of distinct localized states. We refer to this regime as the topologically trivial phase (TTP). In the eigenvalue spectrum, the eigenstates are denoted by subscripts \(F\) and \(S\), representing the eigenstates with the largest and smallest eigenvalues, respectively. These eigenstates are commonly referred to as the fast and slow modes, respectively, based on their contributions to the decay rate and thermalization process in the chain. The presence of both bulk states and localized states in the spectrum highlights the complex interplay between the topological properties of the system and the tuning parameter \(\beta\). This interplay governs the emergence of different types of eigenstates and provides insights into the system's thermal dynamics and behavior. Figure 2: The eigenvalue spectrum of the response matrix \(H\) is shown as a function of \(\beta\) for an SSH chain consisting of \(N=60\) SiC nanoparticles with a lattice constant of \(d=500\) nm. The volume of the first and last nanoparticles depends on \(\beta\) according to \(V_{1}(\beta)=V_{60}(\beta)=\frac{4\pi}{3}[1250000\beta-561000]\) nm\({}^{3}\), while all other nanoparticles have constant volumes \(V_{i\neq 1,60}=\frac{4\pi}{3}50^{3}\) nm\({}^{3}\). The modes are sequentially labeled using Greek indices, and the key bands include \(S\) (slowest band), \(F\) (fastest band), \(U\) (higher localized band), and \(D\) (lower localized band). The color bar on the right represents the Inverse Participation Ratio (IPR) of the eigenstates. ## III Profile of the eigenstates for a chain with an asymmetric response matrix The temporal evolution of temperatures, as described by Eq. (4), is influenced not only by the eigenvalues but also by the eigenstates of the response matrix.
As described in the Appendix, the system demonstrates parity preservation, characterized by the commutation relation \([\hat{\Pi},\hat{H}]=0\), where \(\hat{\Pi}\) represents the parity operator. Consequently, the eigenstates can be classified as either even or odd with respect to \(l/2\), where \(l=x_{N}-x_{1}\). The profiles of selected modes are displayed in Fig. (3). We observe that the eigenstates exhibit a distinct pattern of even or odd symmetry with respect to the center of the chain. Notably, the slowest mode \(\psi_{1}(x)\) exhibits an even-parity profile. Furthermore, the lower and upper edge states exhibit symmetric and asymmetric profiles, respectively, with both states localized at sites \(i=1\) and \(i=100\). As described in Sec. (A.4.4) of the appendix, these modes exhibit topological robustness against perturbations that preserve the mirror reflection symmetry within the system. ## IV L-type and R-type eigenstates Upon calculating the eigenstate profiles, we can construct L-type and R-type eigenstates as \(\psi_{L}(x)=[\psi_{D}(x)-\psi_{U}(x)]/\sqrt{2}\) and \(\psi_{R}(x)=[\psi_{D}(x)+\psi_{U}(x)]/\sqrt{2}\), respectively. These L-type and R-type states exhibit the characteristics of maximum localization and have a significant influence on the thermalization process, particularly in the topologically nontrivial phase of the system. Therefore, we conduct a comprehensive investigation of their properties. In Fig. (4), we present the profiles of these hybridized eigenstates for a chain of \(N=60\) nanoparticles, considering the special cases of \((m,n)=(1,60)\) and \((m,n)=(15,46)\). The figure depicts the profiles for a specific value of \(\beta=0.46\), representing the TNP of the chain. It is evident that the L-type state is fully localized at site \(m\), while the R-type state is localized at site \(n\). It is important to emphasize that the L-type and R-type states are pure states, serving as simultaneous eigenstates of the response matrix in the limit of \(\beta\ll\beta_{c}\) (for the definition of pure, mixed, and random states, refer to the Sec. (A.3) of the Appendix). However, since the L-type and R-type states do not possess even or odd parity, they are not parity eigenstates. In Fig. (5a), we illustrate the weight distribution of the L-type states in the topologically nontrivial phase for \(\beta=0.46\) and \((m,n)=(15,46)\). The weight is equally distributed over the localized modes \(\psi_{U}\) and \(\psi_{D}\), i.e., \(P_{\mu}=0.5[\delta_{\mu,D}+\delta_{\mu,U}]\), where \(D=N/2+1\) and \(U=N/2+2\). However, as we slightly increase \(\beta\), both the L-type and R-type states extend over neighboring sites, no longer remaining eigenstates of the response matrix. This behavior indicates that they become delocalized. In Fig. (5b), we show the profile of \(|\psi_{L}(x_{i})|^{2}\) for different values of \(\beta\). It is observed that this mode gradually extends as \(\beta\) increases and eventually becomes Figure 3: The eigenmode profiles are shown for selected modes of the response matrix in a chain of \(N=100\) nanoparticles with an inter-particle distance of \(d=500\) nm. These profiles specifically correspond to the case of \(\beta=0.46\), representing the system in its topologically nontrivial phase. In this setup, the volumes of the nanoparticles, except for the first and last ones, are kept constant at \(V_{i\neq 1,100}=\frac{4}{3}\pi(50)^{3}\)nm\({}^{3}\). 
However, the first and last particles, denoted as \(V_{1}(\beta)\) and \(V_{100}(\beta)\) respectively, exhibit volume dependence on the parameter \(\beta\). The volume of these specific nanoparticles is given by \(V_{1}(\beta)=V_{100}(\beta)=\frac{4\pi}{3}[1250000\beta-561000]\)nm\({}^{3}\). Figure 4: The profiles of the L-type and R-type modes are shown for a chain of \(N=60\) nanoparticles with an asymmetric response matrix in the topologically nontrivial phase of the system, where \(\beta=0.46\). The specific cases depicted are (a) \((m,n)=(1,60)\) and (b) \((m,n)=(15,46)\). completely delocalized over the entire chain beyond the critical point. Similar arguments apply to the localized states \(\psi_{U}\) and \(\psi_{D}\). For more detailed information, please refer to Sec.(A.5.1) of the Appendix. The inverse participation ratio (IPR) is a valuable metric for characterizing the localization properties of a given state. In our study, where we have established the presence of edge states or localized states, we utilize the IPR defined as \(\text{IPR}\mu=\sum_{i}|\psi_{\mu}(x_{i})|^{4}/\left(\sum_{i}|\psi_{\mu}(x_{i})| ^{2}\right)^{2}\)[66]. Here, \(\psi_{\mu}(x_{i})\) represents the amplitude of the \(\mu\)th eigenstate at the \(i\)th nanoparticle location. The inverse of the IPR provides an estimate of the number of nanoparticles involved in the thermalization process through that particular mode. To illustrate the variation in localization quality among the eigenstates, we present the eigenstate spectrum in Fig. (2) using a color scale. The color bars on the right side of the figure indicate the IPR values for the eigenstates. It can be observed that most eigenstates exhibit extended behavior, except for two localized modes around \(\lambda_{e}\) when \(\beta<0.5\). Notably, as \(\beta\) decreases, the localization of eigenstates in the upper localized band (UEB) and lower localized band (DEB) significantly increases. To demonstrate the delocalization of the L-type state at the critical point, we analyze the IPR of \(\psi_{L}(x)\) for various system sizes in Fig. (5c). From this plot, it can be observed that \(\text{IPR}_{L}\) is relatively large (\(\sim 1\)) at \(\beta=0.46\) and decreases as \(\beta\) increases. The minimum value of \(\text{IPR}_{L}\) occurs at the critical point \(\beta_{c}=0.5\) for different system sizes \(N\), indicating the transition between edge and bulk states. In the topologically trivial phase, the IPR of \(\psi_{L}(x)\) remains approximately constant, suggesting complete delocalization of this state throughout the chain. It is evident from the figure that, in this phase, the state \(\psi_{L}(x)\) becomes more delocalized with increasing system size. Additionally, we observed a decrease in \(\text{IPR}_{L}\) following a scaling law of \(\text{IPR}_{L}\sim N^{-0.9}\) with respect to the chain length \(N\), as shown in the inset of Fig. (5c). ## V Dynamic of temperatures in topologically trivial and non-trivial phases. We are now ready to explore the dynamics of temperature in the presence of a localized state. To achieve this, we investigate the temporal evolution of temperatures by exciting edge or bulk states in a chain of \(N=60\) nanoparticles with \((m,n)=(1,60)\). The initial condition for temperatures is defined such that only one of the particles (either particle 1 or particle 30) is heated up to 350 K, denoted as \(\Delta T=50\) K, while all other particles are initially at room temperature of 300 K, denoted as \(\Delta T=0\) K. 
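Before turning to the temperature dynamics, we note that the IPR used in the preceding analysis, together with the hybridized L-type and R-type combinations introduced above, can be evaluated directly from the mode profiles; the following is a minimal numpy sketch (variable names are illustrative).

```python
import numpy as np

def ipr(psi):
    # IPR_mu = sum_i |psi_mu(x_i)|^4 / (sum_i |psi_mu(x_i)|^2)^2;
    # 1/IPR estimates how many nanoparticles participate in the mode.
    p2 = np.abs(psi) ** 2
    return np.sum(p2 ** 2) / np.sum(p2) ** 2

def hybridized_edge_states(psi_D, psi_U):
    # L-type and R-type combinations of the two localized modes.
    psi_L = (psi_D - psi_U) / np.sqrt(2.0)
    psi_R = (psi_D + psi_U) / np.sqrt(2.0)
    return psi_L, psi_R
```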
The thermalization process is presented in Fig. (6) for both the topologically non-trivial phase (e.g., \(\beta=0.46\)) and the topologically trivial phase (e.g., \(\beta=0.54\)). The inset in the figures represents the weight distribution of the initial temperature state over modes. The temporal evolution of temperatures in case where only particle 1 is heated up to 350 K is shown in Figs. (6a) for \(\beta=0.46\). Initially we may think the initial temperature state vector \(\Delta T_{i}(0)\equiv\Delta T(x_{i},0)=50\delta_{i1}\) is a mixed state. However, we observe the weight distribution of the initial temperature state is highly localized on edge state modes, i.e., \(P_{\mu}\simeq 0.5[\delta_{\mu D}+\delta_{\mu U}]\), see the inset in Fig. (6a). Comparing this distribution with that in Fig. (5a), we can conclude that the initial temperature state is similar to \(\psi_{L}(x)\), i.e., \(\Delta T(x_{i},0)\simeq 50\psi_{L}(x_{i})\), and therefore it is semi-pure. As a result, we expect a relatively isolated thermalization (compared to the following cases) with decay rate \(\sim\lambda_{e}\) in this case. Figure (6b) illustrates the scenario corresponding to the topologically trivial phase (\(\beta=0.54\)) with the same initial condition: \(\Delta T_{i}(0)=50\delta_{i1}\). Upon examining the inset of Fig. (6b), we observe that the distribution of \(P_{\mu}\) extends across several modes, primarily \(\mu\leq U\). Consequently, this initial temperature state can be regarded as a mixed state. In this particular case, the slowest mode \(\psi_{S}(x)\) dominates the temporal evolution of temperatures, leading to an increased thermalization time. To further elucidate the thermalization process, Figs. (6c) and (6d) present the evolution when the 30th particle is heated to 350 K initially, denoted as \(\Delta T_{i}(0)=50\delta_{i30}\). Irrespective of the system's topological phase, the initial temperature state in this scenario is mixed, with contributions from almost all modes, as depicted in the insets of Figs. (6c) and (6d). Similar to the previous case, the thermalization process is primarily determined by the eigenvalue of the slowest mode, resulting Figure 5: Key features of the L-type eigenstate, specifically for the case of \((m,n)=(15,46)\), in an asymmetric SSH chain of nanoparticles are presented. (a) The weight distribution of \(|\psi_{L}(x)|^{2}\) is shown for a chain of \(N=60\) nanoparticles at \(\beta=0.46\). (b) The probability distribution of \(\psi_{L}(x)\) is displayed for \(\beta\) values ranging from 0.46 to 0.54 in a chain with an asymmetric response matrix and \(N=60\) nanoparticles. (c) The Inverse Participation Ratio (IPR) of \(\psi_{L}(x)\) is plotted as a function of \(\beta\) for chains of nanoparticles with different lengths \(N\). The inset exhibits the scaling behavior of IPR vs \(N\) in the topologically trivial phase (TTP) in \(\Delta T(x_{30},t>\lambda_{S}^{-1})\propto\exp(-\lambda_{S}t)\). Consequently, the thermalization time is significantly prolonged. A thorough comparison of the obtained results with those derived from a system with symmetric response matrix, as detailed in Sec. (A.6) of the Appendix, highlights an important observation. The absence of localized states in chian of identical nanoparticles yields a distinctive long-range characteristic in the propagation of heat flux along the chain. This finding implies that the presence or absence of localized states significantly impacts the transport properties and dynamics of temperatures in the system. 
## VI Thermalization time To gain a deeper understanding of the impact of structural topology on the transient regime of temperature dynamics, we conducted calculations to determine the thermalization time of the system. Similar to the approach in the previous section, we utilized an initial temperature state to excite the desired states. For this purpose, the bulk states were excited by heating up particle \(j\) in the chain, where \(j\notin\{m,n\}\) in the topologically non-trivial phase. Likewise, the edge states were excited by heating up particle \(j\in\{m,n\}\) in topologically non-trivial phase. In both scenarios, particle \(j\) was heated to 350 K, while all other particles were initially at room temperature of 300 K. Subsequently, we tracked the thermalization process until the system reached its equilibrium state, known as the stable state. To quantify the results, the thermalization time \(\tau\) in each case was defined as the time at which \(\Delta T(x,\tau)<10^{-12}\) K. In Fig. (7a), we present the thermalization time in a chain consisting of \(N=60\) NPs with \((m,n)=(1,60)\) for selected values of \(j\), specifically \(j\in\{1,2,30\}\). The thermalization time for \(j=1\) exhibits a sharp increase for \(\beta\ll\beta_{c}\), followed by a more gradual rise in the topologically trivial phase. Interestingly, the thermalization times \(\tau_{2}\) and \(\tau_{30}\) are approximately equal and do not significantly depend on the topological phase of the system. They exhibit a slow decrease and approach the value of \(\tau_{1}\) as \(\beta\gg\beta_{c}\). Notably, this behavior is consistent for chains with localized states at arbitrary positions \((m,n)\). The impact of chain length on the thermalization time is illustrated in Fig. (7b) for the specific case of \(j=1\). Heating up particle 1 in topologically nontrivial phase, particularly when \(\beta\ll\beta_{c}\), excites the L-type edge state. Consequently, the thermalization time exhibits little dependence on the chain length in this limit. On the other hand, in the topologically trivial phase, we observe that the thermalization time is enhanced compared to topologically nontrivial phase and reaches a saturation point for longer chain lengths. To gain insights into the underlying physics of the thermalization process, we examine the color maps of the weight distribution \(P_{\mu}\) for the initial temperature state \(\Delta T(x_{i},0)=50\delta_{i,j}\) as a function of \(\beta\). Figures (7c)-(7e) present these color maps, displayed in a logarithmic scale for clarity. Notably, heating up particle 1 efficiently excites the edge state in the topologically non-trivial phase, as depicted in Figure (7c). The weight distribution of this initial temperature state, particularly for \(\beta\ll\beta_{c}\), exhibits localization around \(\lambda_{e}\), similar to the behavior of \(\psi_{L}(x)\) shown in Figure (5a). Consequently, the initial temperature state can be regarded as semi-pure, primarily exciting the localized L-type state. As a result, we expect a rapid and nearly isolated temperature decay for the excited edge site, characterized by \(\Delta T(x_{1},t)\sim 50\exp(-\lambda_{e}t)\), while \(\Delta T(x_{i\neq 1},t)\simeq 0\). This finding excellently agrees with the results in Figs. (7a)-(7b) and confirms the small values of \(\tau_{1}\) for \(\beta\ll\beta_{c}\). However, as \(\beta\) increases beyond \(\beta_{c}\), other modes in the bulk become populated. 
This is attributed to the transition of the initial temperature state from semi-pure to mixed as \(\beta\) increases. Figure (7c) demonstrates that the states in the lower bulk band are fully excited, leading to a longer thermalization time for \(\beta>\beta_{c}\). From Fig. (7d), it is evident that heating up particle 2 results in a slight excitation of the edge states in TNP. This is expected since particle 2 is located in close proximity to the localization site at \(m=1\). Simultaneously, a significant portion of the modes in the lower bulk band Figure 6: Temperature evolution in a chain with asymmetric response matrix comprising \(N=60\) nanoparticles in both the topologically trivial and non-trivial phases. The volume of particles is determined as follows: \(V_{i\neq 1,60}=\frac{4}{3}\pi\times(50)^{3}\)nm\({}^{3}\), and \(V_{1}(\beta)=V_{60}(\beta)=\frac{4\pi}{3}\left[1250000\beta-561000\right]\) nm\({}^{3}\). The initial temperature condition is set such that only the \(j\)-th particle is heated to 350 K, while all other particles are initially at room temperature of 300 K. It should be noted that the edge state is localized at sites 1 and 60 in the topologically non-trivial phase. (a) Temperature evolution for \(j=1\) in the topologically non-trivial phase with \(\beta=0.46\). (b) Temperature evolution for \(j=1\) in the topologically trivial phase with \(\beta=0.54\). (c) Temperature evolution for \(j=30\) in the topologically non-trivial phase with \(\beta=0.46\). (d) Temperature evolution for \(j=30\) in the topologically trivial phase with \(\beta=0.54\). is also excited. Consequently, the contribution of the slowest mode \(\mu=S\) is sufficient to explain the longer thermalization time observed in TNP. In the topologically trivial phase, for any value of \(\beta\), heating up particle 2 approximately populates all bulk states in both the upper and lower bands. Once again, the thermalization time is primarily determined by mode \(\mu=S\), resulting in an approximately equal value of \(\tau_{2}\) as in TNP. The weight distribution of the initial temperature state for \(\Delta T(x_{i},0)=50\delta_{i,30}\) is considerably extended, as depicted in Fig. (7e). Regardless of the value of \(\beta\), heating up this particle leads to excitation of all bulk states. As in the previous cases, the slowest mode dominates the thermalization time, thus suggesting a similar \(\tau_{2}=\tau_{30}\simeq\) constant behavior, as observed in Fig. (7a). ## VII Radiative insulator To investigate the concept of topological insulation in radiative systems, we present the spatio-temporal evolution of the temperature field \(\Delta T(x,t)\) in a chain of \(N=60\) nanoparticles with localized modes located at \((m,n)=(15,46)\) and \(\beta=0.45\), as depicted in Fig. (8). In Figure. (8a), we fix the temperature of particle \(j=5\) at 350 K, while initially all other particles are at room temperature 300 K. Similarly, the time evolution of the temperature field for \(j=30\) is displayed in Fig. (8b). Since \(\beta=0.45\), the system is in the topologically nontrivial phase with localized modes positioned at \(x_{15}\) and \(x_{46}\). Interestingly, we observe that the localized states exhibit topological insulation behavior, effectively blocking the flow of radiative energy within the system. This leads to the formation of a sharp temperature gradient at the positions of the localized modes, with the insulation strength decreasing as \(\beta\) increases. In Fig. 
In Fig. (8a), where the hot source is located at \(x_{5}\), the localized state at \(x_{15}\) prevents the radiative energy from flowing across it. The same feature is observed when the hot source is positioned between the localization points of the localized states. Fig. (8b) clearly demonstrates that topological insulation hinders the transport of radiative energy across \(x_{15}\) and \(x_{46}\). It is important to note that since radiative heat transfer has a long-range nature, the insulation is not perfectly 100%. However, the localized modes effectively block the diffusive component of the thermal flow in the limit of \(\beta\ll\beta_{c}\). To further explore the influence of structure topology and the presence of localized states on the radiative heat flux, we analyze the spatio-temporal evolution of the temperature field in chains with localized states positioned at the middle of the chain, i.e., \((m,n)=(\frac{N}{2},\frac{N}{2}+1)\). Fig. (9) illustrates the results for chains with \(N=20\) and \(N=30\) NPs, showcasing the spatio-temporal evolution of the temperature field for various values of \(\beta\). In both cases, the first and last particles in the chain are maintained at constant temperatures \(T(x_{1},t)=350\) K and \(T(x_{N},t)=300\) K, respectively, while all other particles initially have a temperature of 300 K and are allowed to vary with time. Remarkably, we observe that for \(\beta=0.45\), thermal energy is not permitted to flow across the localized sites, resulting in a sharp temperature gradient at the positions of the localized modes. However, as \(\beta\) increases towards the trivial topological phase, the localized modes become more extended, enabling the thermal energy to freely diffuse throughout the system. This behavior is evident in both panels (a) and (b) of Fig. (9), corresponding to chains with \(N=20\) and \(N=30\) NPs, respectively.

Figure 7: (a) Thermalization time of a chain with a length of \(N=60\), representing the same setup as shown in Fig. (1c), but with different initial temperature states, specifically \(\Delta T(x_{i},0)=50\delta_{i,j}\) for \(j\in\{1,2,30\}\). (b) Thermalization time for the initial temperature state \(\Delta T(x_{i},0)=50\delta_{i,1}\) in a chain of nanoparticles with different lengths, \(N\in\{20,40,60,80,100\}\). (c-e) Color map depicting the weight distribution \(P_{\mu}\) for the initial temperature state \(\Delta T(x_{i},0)=50\delta_{i,j}\), where \(j\) is equal to 1, 2, and 30, respectively.

Figure 8: Spatio-temporal evolution of the temperature field \(\Delta T(x_{i},t)\) in a chain of \(N=60\) nanoparticles for the same setup as shown in Fig. (1c). The localized states positioned at \((m,n)=(15,46)\) are indicated by arrows. Particle \(j\) is maintained at a constant temperature \(T_{j}=350\) K, while all other particles are initially at room temperature 300 K. (a) \(j=5\). (b) \(j=30\).

## VIII Conclusion

In summary, we have provided a theoretical investigation of topological phase transitions and the presence of topological modes, including edge states and conventional bulk states, in radiative systems. Using the response matrix formalism within the framework of fluctuational electrodynamics, we have explored the topological behavior of these systems. By examining the eigenvalue spectrum, we have observed distinct behaviors in systems with symmetric and asymmetric response matrices.
Specifically, for systems with a symmetric response matrix, which corresponds to identical nanoparticles, we have identified topological phase transitions. However, we have not observed the existence of edge states in this case. On the other hand, by considering systems with an asymmetric response matrix and mirror reflection symmetry (P-symmetry), we have discovered the emergence of localized states within the band gap for certain parameter values. This finding indicates the presence of a topologically nontrivial phase in the system. These localized states, known as edge states or defect modes, are confined to the boundaries and demonstrate robustness against perturbations that maintain the mirror reflection symmetry of the system. Additionally, we examined the temporal evolution of temperatures in the presence of a localized state by exciting either edge or bulk states in a chain of nanoparticles with an asymmetric response matrix. Our observations of temperature dynamics for the topologically nontrivial and topologically trivial phases highlighted the influence of the system's topology on thermal behavior. Furthermore, our investigation of the spatio-temporal evolution of the temperature field in the presence of localized states revealed the topological insulation characteristics of these states. The localized states effectively blocked the flow of radiative energy within the system, resulting in the formation of sharp temperature gradients at their positions. Overall, our research enhances the understanding of heat transfer in radiative systems from a topological perspective. It provides valuable insights into topological phase transitions, the existence of localized states, and the impact of topology on the thermalization process. The approach we propose has the potential to inspire future research in nanoparticle ensembles, both in two-dimensional and three-dimensional settings, as well as in structures with planar geometry that rely on radiation for energy transfer. Furthermore, our approach can be extended to investigate the active control of topological features using external electric or magnetic fields. We believe that our findings reveal new and fascinating aspects of topological phases in radiative systems and offer valuable insights into the topological insulation of thermal energy, which may find applications in the field of radiative heat transfer. ## Appendix In this note, we present a scientific formulation of temperature dynamics in a system of particles interacting Figure 9: Spatio-temporal evolution of the temperature field \(\Delta T(x_{i},t)\) in a chain of \(N\) nanoparticles (NPs) for the same setup as depicted in Fig. (1c). The positions of the localized states, indicated by dashed rectangles, are located at \((m,n)=(N/2,N/2+1)\). The first and last particles are maintained at constant temperatures \(T_{1}=350\) K and \(T_{N}=300\) K, respectively, while all other particles are initially at room temperature \(300\) K. (a) Chain length \(N=20\). (b) Chain length \(N=30\). through radiation. Our approach successfully describes the observation of edge states and topological phase transitions in many-body systems. The calculations are conducted within the theoretical framework of fluctuational electrodynamics, specifically employing the dipolar approximation. By applying linear response theory, we introduce a theoretical framework to analyze the temporal evolution of temperatures in a system of spherical nanoparticles (NPs). 
This formalism allows us to describe the dynamics in terms of eigenstates of the response matrix. Consequently, we can classify the system's dynamical properties based on their initial temperature state. To demonstrate the occurrence of topological phase transitions, we utilize the introduced eigenmode representation in a chain of particles with an asymmetric response matrix. Additionally, we explore techniques for manipulating the spatial position of localized states within the chain. To investigate the localization of modes and the impact of chain size on the localization of topological edge states, we employ the inverse participation ratio (IPR) as a useful tool. Finally, we analyze the thermalization process in the NPs chain and compare the thermalization time between topologically trivial and non-trivial phases of the system. Our formalism establishes a strong foundation for exploring the topology and related phenomena in radiative systems. ### Theory and Model Let us consider a system comprising \(N\) spherical nanoparticles, each characterized by its volume \(V_{i}\), volumetric specific heat capacity \(c_{i}\), and initial temperatures \(T_{i}(0)\). These nanoparticles are immersed in a thermal bath maintained at a temperature of \(T_{b}=300\) K, as depicted in Fig. (A.1a). Following previous studies, the evolution of nanoparticle temperatures can be described by a set of coupled differential equations given as follows: \[\gamma_{i}\frac{dT_{i}}{dt}=\mathcal{P}_{i}\ \ \ \ (i=1,2,\cdots,N).\] (A.1) Here, \(\gamma_{i}=c_{i}V_{i}\) represents the heat capacity of the \(i\)-th nanoparticle, and \(\mathcal{P}_{i}\) denotes the total power dissipated in that particular nanoparticle. To calculate \(\mathcal{P}_{i}\), we utilize the fluctuation-dissipation theorem and the dipole approximation, yielding: \[\mathcal{P}_{i}=\int_{0}^{\infty}\frac{d\omega}{2\pi}\sum_{j=1}^{N}\Big{[} \tau_{ij}\Theta(\omega,T_{j})-\tau_{ij}\Theta(\omega,T_{b})\Big{]}.\] (A.2) In linear approximation, we can expand \(\Theta(\omega,T_{j})\) in Eq. (A.2) about \(T_{b}\) by writing \(T_{j}=T_{b}+\Delta T_{j}\) to get [53; 54; 55] \[\mathcal{P}_{i}=\sum_{j=1}^{N}F_{ij}\Delta T_{j},\] (A.3) where \[F_{ij}=\int_{0}^{\infty}\frac{d\omega}{2\pi}\tau_{ij}(\omega)\frac{\partial \Theta(\omega,T)}{\partial T}\Big{|}_{T_{b}}.\] (A.4) By substituting Eq. (A.3) into Eq. (A.1), we get \[\frac{d}{dt}\Delta T_{i}(t)=\gamma_{i}^{-1}\sum_{i=1}^{N}F_{ij}\Delta T_{j}.\] (A.5) Now, by defining temperature state vector \(\Delta\mathbf{T}=(\Delta T_{1},\Delta T_{2},\cdots,\Delta T_{N})^{\intercal}\), we can write Eq. (A.5) as \[\frac{d}{dt}\Delta\mathbf{T}(t)=-\hat{H}\Delta\mathbf{T}(t).\] (A.6) In the provided equation, the linear response matrix is denoted as \(\hat{H}=-\hat{\Gamma}^{-1}\hat{F}\)[54], where \(\hat{\Gamma}\) represents a diagonal matrix with heat capacities along the diagonal, given by \(\hat{\Gamma}=\text{diag}\{\gamma_{1},\gamma_{2},\cdots,\gamma_{N}\}\). In a system where all particles are of the same material, i.e., \(c_{1}=c_{2}=\cdots=c_{N}\equiv c\), we can express \(\hat{\Gamma}\) as \(c\:\text{diag}\{V_{1},V_{2},\cdots,V_{N}\}\). Furthermore, the conductance matrix \(\hat{F}\) is always real and symmetric, irrespective of whether the particles are identical or not. Notice the similarity in structure between Equation (A.6) and the Schrodinger equation in quantum mechanics, given by \(|\dot{\psi}\rangle=-i\hbar^{-1}\hat{H}|\dot{\psi}\rangle\). 
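Before turning to the dissipative Hamiltonian defined below, it may help to see how Eq. (A.6) is assembled in practice. The sketch below builds \(\hat{H}=-\hat{\Gamma}^{-1}\hat{F}\) for a toy chain; the off-diagonal conductances are a placeholder exponential decay rather than the fluctuational-electrodynamics integral of Eq. (A.4), and the numerical scales (`c`, `g0`, `kappa`) are illustrative assumptions only.

```python
import numpy as np

def toy_response_matrix(x, volumes, c=2.3e6, g0=1e-12, kappa=3.0):
    """Assemble H = -Gamma^{-1} F for a toy chain (structure of Eq. (A.6)).

    x        : particle positions along the chain (m)
    volumes  : particle volumes V_i (m^3)
    c        : volumetric heat capacity (J m^-3 K^-1), placeholder value
    g0,kappa : toy conductance scale (W/K) and decay constant, placeholders
    """
    x = np.asarray(x, dtype=float)
    d0 = abs(x[1] - x[0])                              # reference spacing
    N = len(x)
    F = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j:
                F[i, j] = g0 * np.exp(-kappa * abs(x[i] - x[j]) / d0)
    # Diagonal chosen so that every particle also relaxes towards the bath;
    # this is a toy stand-in for the self term of Eq. (A.4).
    F[np.diag_indices(N)] = -(F.sum(axis=1) + g0)
    gamma = c * np.asarray(volumes)                    # heat capacities gamma_i = c V_i
    return -F / gamma[:, None]                         # H = -Gamma^{-1} F (Gamma diagonal)
```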
To establish the Hamiltonian for dissipative systems, we can define \(\hat{H}^{\prime}=-i\hat{H}\) with purely imaginary eigenvalues. However, for simplicity and without loss of generality, we present the theory in terms of the response matrix with real eigenvalues. In the case of identical particles, where \(\hat{\Gamma}\) is diagonal and \(\hat{F}\) is a symmetric matrix, the response matrix \(\hat{H}\) is symmetric. However, in general, the heat capacity of particles may differ (\(\gamma_{m}\neq\gamma_{n}\)), especially when the volumes (\(V_{m}\neq V_{n}\)) are unequal. This non-identical nature renders the response matrix asymmetric, represented as \([H,H^{\dagger}]\neq 0\). Figure A.1: (a) Sketch of a radiative system of \(N\) spherical NPs with initial temperature \(T_{i}(0)\) (\(i=1,2,\cdots,N\)) immersed in a thermal bath at \(T_{b}\). (B) A schematic illustration of bipartite chain of spherical NPs along the \(x\) axis with lattice constant \(d\) and immersed in a thermal bath at \(T_{b}=300\) K. The volume of particles is \(V_{i}(\beta),\ (i=1,2,\cdots,N)\) and the intra-cell separation distance between NPs A and B is \(\beta d\). Our approach involves determining the eigenvalues and eigenvectors of the response matrix \(\hat{H}\) and constructing the propagator \(\hat{U}(t)\) using these eigenvalues and eigenvectors. Since the response matrix is time-independent, this step is relatively straightforward. After obtaining \(\hat{U}(t)\), we express the state vector \(\Delta\mathbf{T}(t)\) as \(\hat{U}(t)\Delta\mathbf{T}(0)\). To proceed, we expand the vector representing the initial temperature state using the set of eigenvectors \(\mathbf{\psi}\mu\): \[\Delta\mathbf{T}(0)=\sum_{\mu=1}^{N}\mathbf{\psi}_{\mu}[\mathbf{\phi}_{\mu}^{\intercal} \cdot\Delta\mathbf{T}(0)],\] (A.7) Here, "\(\cdot\)" denotes the inner product, and we assume the linear independence of the eigenvectors \(\mathbf{\psi}_{\mu}\) while considering that the corresponding left eigenvectors \(\mathbf{\phi}_{\mu}\) are normalized such that \(\mathbf{\phi}_{\mu}^{\intercal}\cdot\mathbf{\psi}_{\mu}=1\). In our formalism, \(C_{\mu}(0)=\mathbf{\phi}_{\mu}^{\intercal}\cdot\Delta\mathbf{T}(0)\) represents the weight of the initial temperature state in the \(\mu\)th mode. Expanding \(\Delta\mathbf{T}(t)\) in a similar manner: \[\Delta\mathbf{T}(t)=\sum_{\mu=1}^{N}C_{\mu}(t)\mathbf{\psi}_{\mu},\] (A.8) By substituting Eqs.(A.7) and (A.8) into Eq.(A.6), we can obtain the expression for \(\hat{U}(t)\) as follows: \[\hat{U}(t)=\sum_{\mu=1}^{N}\mathrm{e}^{-\lambda_{\mu}t}\mathbf{\psi}_{\mu}\mathbf{\phi }_{\mu}^{\intercal}.\] (A.9) Therefore, the time evolution of the temperature state is given by: \[\Delta\mathbf{T}(t)=\hat{U}(t)\Delta\mathbf{T}(0)=\sum_{\mu=1}^{N}C_{\mu}(0)e^ {-\lambda_{\mu}t}\mathbf{\psi}_{\mu},\] (A.10) which leads to the equation: \[C_{\mu}(t)=C_{\mu}(0)e^{-\lambda_{\mu}t}.\] (A.11) The temporal behavior of temperatures can be effectively described using continuous notation. Let us consider a linear chain of nanoparticles positioned along the \(x\)-axis, with a total length of \(l=x_{N}-x_{1}\). The one-dimensional temperature field, denoted as \(\Delta T(x,t)\), represents the variation in temperature within the chain relative to the thermal bath, given by \(\Delta T(x,t)=T(x,t)-T_{b}\). 
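As a consistency check on Eqs. (A.7)-(A.11), the propagator \(\hat{U}(t)\) can be built from the biorthogonal eigenpairs and compared against a direct numerical integration of Eq. (A.6). The snippet below is a minimal sketch in which a randomly generated, positive-definite toy matrix stands in for the physical response matrix.

```python
import numpy as np
from scipy.linalg import eig
from scipy.integrate import solve_ivp

def propagator(H, t):
    """U(t) = sum_mu exp(-lambda_mu t) |psi_mu><phi_mu|  (Eq. (A.9))."""
    lam, vl, vr = eig(H, left=True, right=True)
    lam, vl, vr = lam.real, vl.real, vr.real
    norm = np.einsum("im,im->m", vl, vr)        # phi_mu^T . psi_mu
    vl = vl / norm                              # enforce phi_mu^T . psi_mu = 1
    return (vr * np.exp(-lam * t)) @ vl.T

# Toy check: eigenmode evolution vs. direct integration of d(DeltaT)/dt = -H DeltaT
rng = np.random.default_rng(0)
N = 12
A = 1e-3 * rng.normal(size=(N, N))
H = A @ A.T + 0.1 * np.eye(N)                   # toy response matrix (not the physical one)
dT0 = np.zeros(N); dT0[0] = 50.0                # particle 1 heated 50 K above the bath
t_end = 5.0
dT_modes = propagator(H, t_end) @ dT0
sol = solve_ivp(lambda t, y: -H @ y, (0.0, t_end), dT0, rtol=1e-10, atol=1e-12)
assert np.allclose(dT_modes, sol.y[:, -1], atol=1e-6)
```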
Thus, the evolution of the temperature field over time can be expressed as: \[\Delta T(x,t)=\sum_{\mu=1}^{N}C_{\mu}(0)e^{-\lambda_{\mu}t}\psi_{\mu}(x),\] (A.12) In this equation, \(\psi_{\mu}(x)\) corresponds to the right eigenstate of the response matrix associated with the eigenvalue \(\lambda_{\mu}\). The probability density of mode \(\mu\) is defined as \(\rho_{\mu}(x)=|\psi_{\mu}(x)|^{2}\). Additionally, we introduce the inverse participation ratio (IPR), defined as \[\text{IPR}_{\mu}=\frac{\sum_{i=1}^{N}|\psi_{\mu}(x_{i})|^{4}}{\left[\sum_{i=1}^{N}|\psi_{\mu}(x_{i})|^{2}\right]^{2}},\] (A.13) to quantify the spatial localization of the eigenstates. It is worth noting that the IPR of the eigenstates lies within the range of \([1/N,0.5]\), where the lower limit corresponds to a completely extended mode and the upper limit corresponds to a completely localized mode. As we will demonstrate, the first eigenmode \(\psi_{1}(x)\) is completely extended, resulting in \(\text{IPR}_{1}=1/N\). Moreover, if we construct a combination of eigenstates, the IPR of the constructed state will fall within the range of \([1/N,1]\). To investigate the topological phase transition in radiative systems, we examine a bipartite chain of spherical nanoparticles arranged along the \(x\)-axis, with volumes \(V_{i}\) for \(i=1,2,\cdots,N\). These NPs are immersed in an external thermal bath at a temperature of \(T_{b}=300\) K. As illustrated in Figure (A.1b), the separation distance between cells remains constant at \(d=500\) nm, while the intra-cell distance factor \(\beta\) is employed to manipulate the system's topological properties. We begin by emphasizing key aspects of the introduced formalism, which aid in comprehending the temporal evolution of temperatures in the topological edge modes.

### Temporal Evolution of Temperatures in the Presence of Constraint.

In the previous section, we examined the temporal evolution of temperature in the absence of external power, where the temperature in the phase space always reached a steady state of \(\Delta T_{i}(t\rightarrow\infty)=0\), regardless of the initial temperature state. Now, let us consider a scenario where we want to maintain the temperature of certain particles at a constant value and study the temporal evolution of the temperatures of the remaining particles. Therefore, Equation (A.1) can be expressed in a more general form as: \[\gamma_{i}\frac{dT_{i}}{dt}=\mathcal{P}_{i}+\mathcal{F}_{i}^{e}(t),\ \ (i=1,2,\cdots,N),\] (A.14) where \(\mathcal{F}_{i}^{e}(t)\) is, in general, a time-dependent external power applied to particle \(i\) to keep its temperature fixed. For the sake of simplicity, we assume that the temperature of the \(p\)th particle is fixed at \(T_{p}(t)=T_{p}(0)\). Equation (A.14) can then be written as \[\gamma_{i}\frac{dT_{i}}{dt}=\mathcal{P}_{i}+\mathcal{F}_{p}^{e}(t)\delta_{i,p},\ \ (i=1,2,\cdots,N),\] (A.15) The functional form of \(\mathcal{F}_{p}^{e}(t)\) is not known at this stage, but it certainly depends on time. In order to calculate the temporal evolution of the temperatures as well as the functional form of \(\mathcal{F}_{p}^{e}(t)\), we develop in this section a formalism that can readily be extended to the case where more than one object's temperature is kept fixed. The number of dynamical variables in our system, specifically temperatures, is reduced to \(N-1\).
This reduction is achieved by defining a rearranged temperature state denoted as \(\Delta\mathbf{T}^{\prime}=(\mathbf{T}-\mathbf{T_{b}})^{\intercal}=(\Delta T_{1}^{\prime},\Delta T_{2}^{\prime},\cdots,\Delta T_{N-1}^{\prime})^{\intercal}\), where \(\Delta T_{p}^{\prime}\) is excluded. By making this adjustment, Eq. (A.6) can be expressed as follows: \[\frac{d}{dt}\Delta\mathbf{T}^{\prime}(t)=-\hat{H}^{\prime}\Delta\mathbf{T}^{\prime}(t)+\mathbf{F},\] (A.16) In this equation, \(\hat{H}^{\prime}\) is an \((N-1)\times(N-1)\) dimensional matrix that represents the reduced response matrix obtained by eliminating the \(p\)th row and \(p\)th column of the original matrix \(\hat{H}\). The matrix \(\hat{H}^{\prime}\) can be represented in a biorthogonal form as \(\hat{H}^{\prime}=\sum_{\mu=1}^{N-1}\lambda_{\mu}^{\prime}\mathbf{\psi}_{\mu}^{\prime}\mathbf{\phi}_{\mu}^{\prime\intercal}\), where \(\lambda_{\mu}^{\prime}\) are the eigenvalues of \(\hat{H}^{\prime}\), and \(\mathbf{\psi}_{\mu}^{\prime}\) and \(\mathbf{\phi}_{\mu}^{\prime}\) are the corresponding right and left eigenvectors, respectively. The eigenvectors \(\mathbf{\psi}_{\mu}^{\prime}\) and \(\mathbf{\phi}_{\mu}^{\prime}\) play a crucial role in the analysis of the system dynamics. They are orthogonal to each other and provide insight into the specific modes of behavior in the reduced system. The eigenvalues \(\lambda_{\mu}^{\prime}\) represent the associated frequencies or rates of change for each mode. Furthermore, in the given system, the power received by particles from particle \(p\) can be mathematically expressed as \(F_{i}=H_{pi}\Delta T_{p}^{\prime}(0)\), where \(\Delta T_{p}^{\prime}(0)\) represents the temperature difference between \(T_{p}(0)\) and \(T_{b}\). To obtain the general form of Eq. (A.12) while considering the maintained constraint, we expand \(\Delta\mathbf{T}^{\prime}(t)\), \(\Delta\mathbf{T}^{\prime}(0)\), and \(\mathbf{F}\) using the basis vectors of \(\hat{H}^{\prime}\). By utilizing Eq. (A.16), the resulting equation is as follows: \[\Delta T_{i}^{\prime}(t)=\sum_{\mu=1}^{N-1}\left[\frac{C_{\mu}^{F}}{\lambda_{\mu}^{\prime}}+\left(C_{\mu}^{\prime}(0)-\frac{C_{\mu}^{F}}{\lambda_{\mu}^{\prime}}\right)e^{-\lambda_{\mu}^{\prime}t}\right]\psi_{\mu}^{\prime}(x_{i}),\] (A.17) In Equation (A.17), \(C_{\mu}^{\prime}(0)\) represents the initial weight of the temperature state in the \(\mu\)th mode, given by \(C_{\mu}^{\prime}(0)=\mathbf{\phi}_{\mu}^{\prime\intercal}\cdot\Delta\mathbf{T}^{\prime}(0)\). The quantity \(C_{\mu}^{F}\) is determined as \(C_{\mu}^{F}=\mathbf{\phi}_{\mu}^{\prime\intercal}\cdot\mathbf{F}\). By computing the values of \(\Delta T_{i}^{\prime}(t)\), the power required to maintain the temperature of particle \(p\) fixed can be obtained from the remaining (\(p\)th) equation in Eq. (A.15) as \(\mathcal{F}_{p}^{e}(t)=\sum_{i=1}^{N}H_{pi}\Delta T_{i}^{\prime}(t)\).

### Temporal Evolution: Classification of the Initial Temperature State Vector.

To demonstrate the applicability of our developed formalism, we consider a specific configuration of nanoparticles as an illustrative example. However, it is important to note that the conclusions drawn are general and can be applied to any ensemble of NPs. We examine a periodic array consisting of \(N=60\) identical SiC NPs with a radius of \(R=50\) nm, inter-cell separation distance of \(d=500\) nm, and an intra-cell distance factor of \(\beta=0.5\). By utilizing Eq.
(A.12), we can obtain the spatio-temporal evolution of the temperature field \(\Delta T(x,t)\) given an initial distribution \(\Delta T(x,0)\). Through this analysis, we reveal that the dynamic behavior of the radiative system is highly dependent on the choice of initial temperature values. ### Pure state A particular case of interest is a "pure" state, where the initial temperature field coincides with one of the eigenstates \(\psi_{\mu}(x)\). In other words, if we initialize the system with \(\Delta T(x,0)=\psi_{\nu}(x)\), we obtain \(C_{\mu}(0)=\delta_{\mu\nu}\), and at a later time, \(\Delta T(x,t)=\psi_{\nu}(x)e^{-\lambda_{\nu}t}\). Consequently, temperatures in an initially pure state exhibit exponential decay with a rate determined by \(\lambda_{\nu}\). In this scenario, the particles are decoupled, and their temperatures evolve independently. As a result, inter-particle thermalization does not occur, and all particles eventually reach thermal equilibrium with the external bath simultaneously. The time required for the system to reach equilibrium in this case is highly sensitive to the specific initial pure state. Therefore, a system initially in the "fastest" state (\(\psi_{F}=\psi_{N}\) with \(\lambda_{F}=\max\{\lambda_{\mu}\}\)) achieves thermalization in the shortest amount of time, while the "slowest" state (\(\psi_{S}=\psi_{1}\) with \(\lambda_{S}=\min\{\lambda_{\mu}\}\)) experiences the longest thermalization time. As an illustrative example, we present the temperature dynamics for two distinct pure cases in panel (a) of Fig. (A.2). In the first case, we consider \(\Delta T(x,0)=50\times\psi_{3}(x)\), while in the second case, we set \(\Delta T(x,0)=50\times\psi_{50}(x)\). It can be observed that both cases exhibit temporal temperature profiles characterized by pure exponential decay. The decay rate in each case corresponds to the eigenvalue of the respective initial pure state, namely \(\lambda_{3}\) and \(\lambda_{50}\). The insets of these figures display the weight distribution of the initial states. An interesting characteristic of the pure states is that \(P_{\mu}\) takes the value of \(\delta_{\mu 3}\) and \(\delta_{\mu 50}\) for the first and second case, respectively. ### Mixed state The initial state can be a mixed state representing an arbitrary initial temperature distribution. In this case, the system is described by a probabilistic mixture of pure states, and the combination of modes contributes to the thermalization process with an external thermal bath. The time evolution of the temperature state in this scenario involves multiple exponentially decaying terms, indicating several inter-particle thermalization events prior to complete thermalization with the bath. Consequently, the thermalization process exhibits multiple characteris tic times, with the thermalization time primarily determined by the weight of the mode with the lowest eigenvalue. As a representative example, we present in panel (b) of Fig. (A.2) the temperature dynamics for two different types of mixed states. In the first case, the initial temperature state is expressed as a linear combination of two pure states, specifically \(\Delta T(x,0)=50\times\left(\frac{3}{5}\psi_{3}(x)+\frac{4}{5}\psi_{50}(x)\right)\). We observe that the system exhibits an energy relaxation timescale before thermalizing with the environment. 
This behavior is absent for initial states with even symmetry, such as \(\Delta T(x,0)=50\times\left(\frac{3}{5}\psi_{3}(x)+\frac{4}{5}\psi_{51}(x)\right)\), or odd symmetry, such as \(\Delta T(x,0)=50\times\left(\frac{3}{5}\psi_{4}(x)+\frac{4}{5}\psi_{50}(x)\right)\), due to parity conservation. Consequently, we conclude that the temperature distribution of the chain remains symmetric (asymmetric) for initially symmetric (asymmetric) temperature profiles. As depicted in the inset, the weight distribution of the initial state shows a combination weighted on modes 3 and 50, i.e., \(P_{\mu}=\left|\frac{3}{5}\right|^{2}\delta_{\mu 3}+\left|\frac{4}{5}\right|^{2}\delta_{\mu 50}\). Another interesting case is when only the 15th particle is initially heated to 350 K, while all other particles start at room temperature 300 K, i.e., \(\Delta T(x_{i},0)=50\times\delta_{i,15}\). This case does not possess even or odd symmetry. As shown in the second row of panel (b) in Fig. (A.2), _all_ eigenstates are initially populated. Moreover, the system undergoes \(N-1=50\) inter-particle thermalization events before reaching the equilibrium state.

### Random state

Finally, we consider the case of a random distribution for the initial temperatures (referred to as a "random" state), where all modes are expected to contribute to the temporal evolution of temperatures in the system. Despite the high coupling between particles in this case, the thermalization time is primarily determined by the slowest mode \(\mu=1\). Similar to the mixed case, the thermalization process may involve multiple inter-particle thermalization events before the system reaches its equilibrium state. As a representative example within this scenario, we investigate two cases and present the results in panel (c) of Fig. (A.2). In the first case, the initial temperatures are randomly distributed within the range \(\Delta T(x,0)\in\mathrm{Rand}[-50,50]\), while in the second case, they are randomly distributed within the range \(\Delta T(x,0)\in\mathrm{Rand}[0,50]\). The main difference between these cases lies in the average initial temperature. In the first case, the average initial temperature is \(\bar{\Delta}T(0)=0\) K, while in the second case, it is \(\bar{\Delta}T(0)=25\) K. For the first case, the initial temperature state is highly mixed, and most of the eigenstates are initially populated. However, the non-zero average of the initial temperature distribution in the second case causes the weight distribution to be highly concentrated on mode \(\psi_{1}\). As we will demonstrate in the next section, the first mode is constant, i.e., \(\psi_{1}=\text{const}\). Therefore, this result is reasonable, as a non-zero average for the initial temperature state leads to a semi-pure state that predominantly populates mode 1.

Figure A.2: Dynamics of temperature in a periodic array of \(N=60\) identical NPs with \(R=50\) nm, \(d=500\) nm, and \(\beta=0.5\). The insets show the weight distribution of the initial temperature state vector. Panel (a) Pure initial state: In the first row \(\Delta T(x,0)=50\times\psi_{3}(x)\) and in the second row \(\Delta T(x,0)=50\times\psi_{50}(x)\). Panel (b) Mixed initial state: In the first row \(\Delta T(x,0)=50\times[\frac{3}{5}\psi_{3}(x)+\frac{4}{5}\psi_{50}]\) and in the second row \(\Delta T(x,0)=50\times\delta(x-x_{15})\). Panel (c) Random initial state: In the first row \(\Delta T(x,0)\in\mathrm{Rand}[-50,50]\) and in the second row \(\Delta T(x,0)\in\mathrm{Rand}[0,50]\).
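The classification above (pure, mixed, random) can be read off directly from the weight distribution \(P_{\mu}\). A short sketch follows, assuming the eigenvalues are sorted in increasing order so that mode 1 is the slowest mode \(\psi_{S}\), and with `H` a generic response matrix supplied by the user.

```python
import numpy as np
from scipy.linalg import eig

def mode_weights(H, dT0):
    """Eigenvalues, right eigenmodes psi_mu, and weights P_mu of an initial state dT0."""
    lam, vl, vr = eig(H, left=True, right=True)
    lam, vl, vr = lam.real, vl.real, vr.real
    order = np.argsort(lam)                         # mode 1 = slowest (psi_S), mode N = fastest (psi_F)
    lam, vl, vr = lam[order], vl[:, order], vr[:, order]
    C0 = (vl.T @ dT0) / np.einsum("im,im->m", vl, vr)
    return lam, vr, C0**2 / np.sum(C0**2)

# Examples mirroring the cases discussed above (psi = second return value):
#   pure   : dT0 = 50 * psi[:, 2]                        -> P_mu = delta_{mu,3}
#   mixed  : dT0 = 50 * (0.6*psi[:, 2] + 0.8*psi[:, 49]) -> weight on modes 3 and 50 only
#   random : dT0 = rng.uniform(0, 50, N)                 -> weight concentrates on mode 1
```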
Equipped with the appropriate formalism described above, we are now ready to investigate the topological behaviors of the radiative system. The fundamental question is: under what circumstances does a topological phase transition occur in a chain of nanoparticles? ### Existence of the edge state The presence of spectrally fixed edge states at the midgap value of the spectrum requires the presence of parity symmetry. In the case of identical particles, we have \(\hat{\Gamma}=\gamma\mathds{1}\), which implies that both \(\hat{\Gamma}^{-1}\) and \(\hat{F}\) exhibit even parity. As a result, the response matrix \(\hat{H}\) remains invariant under parity. Consequently, we can observe that \(\hat{H}\hat{\Pi}=+\hat{\Pi}\hat{H}\), and it is evident that \([\hat{H},\hat{\Pi}]=0\), where \(\hat{H}=\hat{H}^{\dagger}\). Here, \(\hat{\Pi}\) represents the parity operator relative to the middle of the chain. What happens if the particles are not identical? In general, the response matrix will be asymmetric, \(\hat{H}\neq\hat{H}^{\dagger}\). However, with a properly designed structure, there is a possibility for the system to preserve parity symmetry. Once again, \(\hat{F}\) remains invariant under parity. However, for \(\hat{\Gamma}^{-1}\) and, consequently, the response matrix \(\hat{H}\) to be even under parity, we must satisfy the condition: \[\gamma_{i}=\gamma_{N+1-i}.\] (A.18) On the other hand, there must be a mirror reflection symmetry with respect to the middle of the chain. Thus, \(\hat{\Gamma}\) will be symmetric (with respect to its minor diagonal), leading to \(H_{ii}=H_{N+1-i,N+1-i}\). This condition is trivially satisfied for symmetric response matrices. Since \(\gamma_{i}=c_{i}V_{i}\), in the special case of particles with the same material, we must have: \[V_{i}=V_{N+1-i}.\] (A.19) It is important to emphasize that even in this case, the eigenvalues are real and positive. Moreover, \(\mathbf{\psi}_{\mu}\) are simultaneous eigenkets of \(\hat{H}\) and \(\hat{\Pi}\), just like in the symmetric case. This situation also applies to the expected edge states \(\mathbf{\psi}_{D}\) and \(\mathbf{\psi}_{U}\), where the indices \(D\) and \(U\) refer to the localized band (as discussed in the next section). As we will see, \(\mathbf{\psi}_{D}\) and \(\mathbf{\psi}_{U}\) are orthogonal, and we can affirm with certainty that one of them is even while the other is odd. In order for the system to exhibit topological edge states, we require the response matrix to be degenerate, in addition to possessing parity symmetry. Furthermore, to achieve a flat spectrum of edge states in the topologically non-trivial phase, the degenerate eigenvalues must remain constant within a certain range of \(\beta\). The first condition necessitates the existence of at least two states that are simultaneous eigenvectors of \(\hat{H}\) with the same eigenvalue \(\lambda_{e}\), namely \(\hat{H}\mathbf{\psi}_{D}=\lambda_{D}\mathbf{\psi}_{D}\) and \(\hat{H}\mathbf{\psi}_{U}=\lambda_{U}\mathbf{\psi}_{U}\), where \(\lambda_{D}=\lambda_{U}=\lambda_{e}\). The latter condition establishes the presence of an edge in the system, provided that \(d\lambda_{e}/d\beta\simeq 0\) within the desired range of \(\beta\) (referred to as the "flatness rule"). Considering localized modes that are localized at lattice sites \(m\) and \(n\), with the special case of \((m,n)=(1,N)\) representing the edge state, we can utilize mirror reflection symmetry to derive the following relationships: \(n=N+1-m\), \(\gamma_{m}=\gamma_{n}\), \(H_{mm}=H_{nn}\). 
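A quick numerical test of the parity requirement stated above is to verify \([\hat{H},\hat{\Pi}]=0\) directly. A minimal sketch, with \(\hat{\Pi}\) the reflection about the middle of the chain:

```python
import numpy as np

def parity_operator(N):
    """Reflection about the middle of the chain: (Pi)_{ij} = delta_{i, N+1-j}."""
    return np.fliplr(np.eye(N))

def respects_parity(H, tol=1e-10):
    """True if [H, Pi] = 0, i.e. the chain is mirror symmetric (gamma_i = gamma_{N+1-i})."""
    Pi = parity_operator(H.shape[0])
    return np.max(np.abs(Pi @ H - H @ Pi)) < tol
```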
As a result, the weight distribution of the edge states is expected to be localized on both of these sites, i.e., \(|\psi_{D}(x)|^{2}=|\psi_{U}(x)|^{2}\simeq[\delta(x-x_{m})+\delta(x-x_{n})]/2\). In an extremely dilute system, the response matrix takes on a diagonal form, with diagonal elements \(H_{ii}\propto V_{i}/\gamma_{i}\). This accounts for the interaction with the thermal bath but does not include the effects of inter-particle interactions. For a chain of particles composed of the same material, we can express \(H_{ii}\propto c_{i}^{-1}\equiv A\), where \(A\) is a positive constant. The diagonal structure of \(\hat{H}\) implies that \(\lambda_{e}=H_{mm}=H_{nn}=A\). However, beyond the dilute limit of particles, the interactions between cells and the inter-particle interactions (many-body effects) become significant, leading to deviations from the diagonal form of the response matrix. Nonetheless, in the diagonal representation, we can approximate the eigenvalues of \(\hat{H}\) as \(\lambda_{e}\approx A+f_{m}(\beta,V_{1},V_{2},\cdots,V_{N})/V_{m}=A+f_{n}(\beta,V_{1},V_{2},\cdots,V_{N})/V_{n}\), where \(f(\beta,V_{1},V_{2},\cdots,V_{N})\) incorporates the interactions among the particles. Since \(f_{m}=f_{n}\equiv f_{e}\) and \(V_{m}=V_{n}\equiv V_{e}\), we can write \(\lambda_{e}\approx A+f_{e}/V_{e}\). Therefore, assuming that \(U\) and \(D\) are localized modes located on both sites \(m\) and \(n\) within the desired range of \(\beta\) (e.g., below the critical point \(\beta<\beta_{c}\)), the degeneracy and flatness rules discussed above imply that we must have \[\lambda_{e}\simeq A+f_{e}(\beta,\mathbf{V})/V_{e}\simeq\text{ cte},\] (A.20a) \[\frac{d\lambda_{e}}{d\beta}\simeq 0\Downarrow\] (A.20b) \[\Big{[}\big{(}\frac{df_{e}}{d\beta}+\sum_{i}\frac{\partial f_{ e}}{\partial V_{i}}\frac{dV_{i}}{d\beta}\big{)}V_{e}-\frac{dV_{e}}{d\beta}f_{e} \Big{]}/V_{e}^{2}\simeq 0.\] ### symmetric Vs asymmetric Response Matrix Let us now consider a specific scenario involving a chain composed of identical particles, where the volume of each particle, denoted as \(V_{i}\), remains constant throughout the chain. We will refer to this constant volume as \(V_{0}\). As discussed earlier, in this case, the response matrix is symmetric. We can simplify Eq. (A.20a) to \(\lambda_{e}\simeq g_{e}(\beta)\), where \(g_{e}(\beta)=A+f_{e}(\beta)/V_{0}\). The condition for the flat ness of the edge band in a topologically non-trivial phase can be reduced to \(\frac{df_{z}}{d\beta}\simeq 0\). However, as depicted in Fig. (A.3), we find that this equation holds true only at the critical point \(\beta_{c}=0.5\). In the figure, we present the eigenvalue spectrum of a chain composed of \(N=60\) SiC nanoparticles, all having the same volume \(V_{0}=\frac{4}{3}\pi(50)^{3}\)nm\({}^{3}\) and a lattice constant \(d=500\)nm. It is evident that no edge states can be found in this case, and the flatness condition is satisfied only at the critical point \(\beta_{c}=0.5\) for \(N-1\) modes. Furthermore, none of these modes reside in the midgap region, which is consistent with our theoretical prediction. Moreover, the bulk states we observe in the spectrum remain gapless only at \(\beta_{c}\). This conclusion applies in general, suggesting that we should not expect the presence of edge modes or localized modes in a chain consisting of identical nanoparticles. However, it is important to note that the existence of localized edge states is not a requirement for a topological phase transition. 
It is possible to have a topological phase transition without the presence of edge states. Therefore, while the absence of localized edge states in a chain of identical nanoparticles is expected, it does not undermine the possibility of a topological phase transition in such systems. The existence of a topological phase transition is determined by other topological characteristics and can be captured by appropriate topological invariants, which we will explore in subsequent section. ### Topological Invariants: Zak Phase and Bulk Winding Number To develop a comprehensive understanding of the topological phase transition, this section aims to investigate the topology of the system in reciprocal space. In order to mathematically represent the response matrix in reciprocal space, it is necessary to define the configuration of the unit. Referring to the configuration depicted in Fig. (A.4a), we can express the response matrix in reciprocal space in a concise manner as follows: \[\hat{\mathcal{H}}(k)=\left[\begin{array}{cc}H_{11}+2H_{13}\cos(k)&H_{12}+H_ {23}\mathrm{e}^{ik}+H_{14}\mathrm{e}^{-ik}\\ H_{12}+H_{23}\mathrm{e}^{-ik}+H_{14}\mathrm{e}^{ik}&H_{22}+2H_{24}\cos(k)\\ \end{array}\right]\] (A.21) represented compactly in notation as \[\hat{\mathcal{H}}(k)=h_{x}(k)\hat{\sigma}_{x}+h_{y}(k)\hat{\sigma}_{y}+h_{z}(k )\hat{\sigma}_{z}+h_{0}(k)\hat{\sigma}_{0}\] (A.22) where \(\hat{\sigma}_{i}\) (\(i=x,y,z\)) is the Pauli matrix, \(\hat{\sigma}_{0}\) is \(2\times 2\) identity matrix, and \(\tilde{h}=h_{x}\tilde{i}+h_{y}\tilde{j}+h_{z}\tilde{k}\) with \[h_{x}(k)=t_{1}+t_{2}\cos(k)+t_{4}\cos(k),\] (A.23a) \[h_{y}(k)=t_{2}\sin(k)-t_{4}\sin(k),\] (A.23b) \[h_{z}(k)=t_{11}/2-t_{22}/2,\] (A.23c) \[h_{0}(k)=t_{11}/2+t_{22}/2+2t_{3}\cos(k).\] (A.23d) Based on the schematic representation of the hopping terms shown in Fig. (A.4a), the parameters \(t_{1}=H_{i,i+1}\), \(t_{2}=H_{i+1,i+2}\), \(t_{3}=H_{i,i+2}=H_{i+1,i+3}\), and \(t_{4}=H_{i,i+3}=H_{i+1,i+4}\) represent the hopping parameters that govern the power exchange between adjacent cells in the chain. These parameters control both the inter-cell and intra-cell hopping processes. Additionally, \(t_{11}=H_{11}\) and \(t_{22}=H_{22}\) correspond to the on-site terms of the response matrix and represent the cooling power contributions from the respective objects. It is worth noting that the presence of the identity matrix \(\sigma_{0}\) in Equation (A.22) results in a shift in the eigenvalues of the matrix \(\hat{\mathcal{H}}(k)\). Additionally, in the given configuration, it is observed that \(t_{11}\) and \(t_{22}\) have similar magnitudes, resulting in the negligible contribution of the \(h_{z}\) term. Conversely, we have \(\min\{t_{1},t_{2}\}\gg\max\{t_{3},t_{4}\}\), which can be referred to as the nearest-neighbor interaction limit. In this limit, the model exhibits chiral symmetry, which implies that \(\sigma_{z}\hat{\mathcal{H}}\sigma_{z}^{\dagger}=-\hat{\mathcal{H}}\). The eigenvalues of the response matrix in Eq. (A.22), which we refer to as thermal relaxation bands, can be expressed as: \[\lambda_{k}^{\pm}=\pm|\tilde{h}|+h_{0}.\] (A.24) Here, \(|\tilde{h}|=\sqrt{h_{x}^{2}+h_{y}^{2}+h_{z}^{2}}\) is the magnitude of the vector \(\vec{h}\). In Fig. (A.4b), the thermal relaxation bands for typical values of \(\beta\in\{0.45,0.5,0.53\}\) are depicted. The thermal relaxation bands in the system exhibit a gap, denoted as \(\Delta=\min\sqrt{h_{x}^{2}+h_{y}^{2}+h_{z}^{2}}\). 
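The bands of Eq. (A.24) and the gap \(\Delta\) follow directly from Eq. (A.23). A small sketch with illustrative hopping values (the physical \(t_{1},\dots,t_{4},t_{11},t_{22}\) would be read off the real-space response matrix, so the numbers below are assumptions):

```python
import numpy as np

def bulk_bands(k, t1, t2, t3, t4, t11, t22):
    """Thermal relaxation bands lambda_{k}^{+/-} of Eq. (A.24) from Eq. (A.23)."""
    hx = t1 + (t2 + t4) * np.cos(k)
    hy = (t2 - t4) * np.sin(k)
    hz = 0.5 * (t11 - t22)
    h0 = 0.5 * (t11 + t22) + 2.0 * t3 * np.cos(k)
    habs = np.sqrt(hx**2 + hy**2 + hz**2)
    return h0 - habs, h0 + habs, habs               # lambda_-, lambda_+, |h(k)|

k = np.linspace(-np.pi, np.pi, 601)
lam_m, lam_p, habs = bulk_bands(k, t1=1.0, t2=0.8, t3=0.05, t4=0.02, t11=2.0, t22=2.0)
gap = habs.min()                                    # Delta = min |h(k)|
```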
This minimum gap condition plays a crucial role in determining the system's topological properties. It can be observed that the two bands touch at \((k,\beta)=(\pm\pi,0.5)\), while a gap appears for \(\beta\neq 0.5\). This information is significant as it allows us to define a topological invariant and investigate the transition of the system's topological phase. The precise right eigenstate of the infinite system can be expressed as: \[|\psi^{+}(k)\rangle =\begin{pmatrix}\cos(\frac{\theta}{2})\ \mathrm{e}^{-i\varphi}\\ \sin(\frac{\theta}{2})\end{pmatrix},\] (A.25a) \[|\psi^{-}(k)\rangle =\begin{pmatrix}\sin(\frac{\theta}{2})\ \mathrm{e}^{-i\varphi}\\ -\cos(\frac{\theta}{2})\end{pmatrix}.\] (A.25b) In these equations, \(\theta(k)=\arccos(h_{z}/|\vec{h}|)\) and \(\varphi(k)=\arctan(h_{y}/h_{x})\). Additionally, the left eigenvectors are denoted as \(\langle\phi^{\pm}(k)|=|\psi^{\pm}(k)\rangle^{\dagger}\). The Zak phase provides a characterization of the topological properties of Bloch wave functions within the system. For the upper and lower bands, the geometric Zak phases are defined as follows: \[\Phi^{\pm}_{\mathbb{Z}}(\beta)=i\int_{-\pi}^{+\pi}\langle\phi^{\pm}(k)| \partial_{k}|\psi^{\pm}(k)\rangle dk.\] (A.26) In Equation (A.26), \(\Phi^{\pm}_{\mathbb{Z}}(\beta)\) represents the Zak phase for the respective upper (\(+\)) and lower (\(-\)) bands. The integral is taken over the range \(-\pi\) to \(+\pi\), and the terms \(\langle\phi^{\pm}(k)|\) and \(|\psi^{\pm}(k)\rangle\) denote the left and right eigenvectors, respectively, associated with the eigenstates of the system. To illustrate the computation of the Zak phase in our physical system with respect to the control parameter \(\beta\), Fig.(A.4c) is provided. It shows how the Zak phase changes as we vary \(\beta\). This figure further demonstrates that the Zak phase transitions from \(\pm\pi/2\) to \(\mp\pi/2\) for the upper and lower bands, respectively, when \(\beta\) crosses from values less than \(0.5\) to values greater than \(0.5\). The eigenstates of the chain possess an internal structure that can be described by the direction of the vector \(\vec{h}(k)\). In particular, due to the negligible value of \(h_{z}\), as the momentum \(k\) varies across the Brillouin zone from \(-\pi\) to \(\pi\), the endpoint of \(\vec{h}(k)\) traces out a closed circular path on the \(h_{x}\)-\(h_{y}\) plane. This circular path has a radius of \(|t_{1}|\) and is centered at \((t_{2}+t_{4},0)\). The topology of this circular path can be characterized by an integer known as the bulk winding number, denoted as \(w\). The bulk winding number counts the number of times the circular path winds around the origin of the \(h_{x}\)-\(h_{y}\) plane. In other words, it quantifies the number of revolutions the path completes around the origin as \(k\) varies. In Fig. (A.4d), we can observe the behavior of the bulk winding number. For \(\beta=0.53\), the winding number \(w=0\), indicating that the circular path does not wind around the origin. On the other hand, for \(\beta=0.46\), the Figure A.4: (a) Schematic representation of a periodic chain with periodic boundary conditions. The inter-cell hopping is denoted as \(t_{1}\), while the intra-cell hoppings are represented by \(t_{2}\), \(t_{3}\), and \(t_{4}\). The on-site values are denoted as \(t_{11}\) and \(t_{22}\). (b) Dispersion relations of the model described by Eq. (A.24) for typical values of the control parameter \(\beta\). 
(c) Variation of the Zak phase with respect to the control parameter \(\beta\). (d) Trajectory of the endpoints of the vector \(\vec{h}(k)\), which represents the bulk momentum-space response matrix described by Eq. (A.22), traced on the \(h_{x}\), \(h_{y}\) plane as the wavenumber sweeps across the Brillouin zone, \(k:-\pi\to\pi\). winding number \(w=1\), indicating that the circular path completes one full revolution around the origin. However, for \(\beta=0.5\), the winding number is undefined, as the circular path coincides with the origin and does not wind around it. ### Adding defect to the system: asymmetric responce matrix The absence of localized modes in this structure indicates that heat transfer in the system exhibits long-range characteristics, regardless of the value of \(\beta\). Consequently, localizing or inhibiting the flow of heat within the system becomes challenging. To address this, our primary objective is to investigate methods that can induce the emergence of localized modes in the one-dimensional chain of nanoparticles. By achieving localized modes, we aim to enhance our ability to control heat transfer and manipulate thermal properties within the system. To achieve this goal, we propose introducing a perturbation that breaks the symmetry of the system. This perturbation can take the form of a local effect or an asymmetry in the coupling between particles. In particular, we can explore the effect of varying the volumes of nanoparticles within the chain. By employing a simple approach, we consider a chain of nanoparticles with varying volumes. However, it is crucial to ensure the preservation of parity symmetry, as indicated by Eq. (A.19). To simplify the analysis, we focus on a chain where the volumes of two specific particles depend on the parameter \(\beta\). The volume configuration can be expressed as: \[V_{i}(\beta)=\left\{\begin{array}{ll}V_{0}=\text{cte},&i\neq m,n.\\ V(\beta),&i=m,n.\end{array}\right.\] (A.27) Here, \(V_{0}\) represents the volume of the majority of nanoparticles in the chain, while \(V(\beta)\) represents the volumes of the two specific nanoparticles located at positions \(x_{m}\) and \(x_{n}\). By introducing this strategy in the volume configuration, we break the translational symmetry of the system. Consequently, localized states can emerge around the nanoparticles with modified volumes. These localized states arise due to the confinement of the thermal flux within the region affected by the volume perturbation. It is important to note that the specific form of the perturbation, \(V(\beta)\), will determine the characteristics of the localized states, including their relaxation rate and spatial extent. The choice of the \(V(\beta)\) should be made based on the desired properties of the localized states and the intended symmetry-breaking effect. In this case, the response matrix exhibits asymmetric behavior. However, we can assess the existence and flatness of the edge band for \(\beta<\beta_{c}\) by analyzing the derivative \(\frac{d\lambda_{e}}{d\beta^{2}}\). By evaluating this quantity, we can gain insights into the behavior and characteristics of the edge mode in the system. From Eq. (A.20a), we find \(f_{e}(\beta)\simeq BV_{e}(\beta)\), where \(B\) is a constant. Substituting this into Eq. (A.20b) satisfies the flatness condition, as it reduces to \(B\frac{dV_{e}}{d\beta}V_{e}-\frac{dV_{e}}{d\beta}BV_{e}=0\). Therefore, the existence of an edge state is possible if Eq. (A.20a) holds true within the desired range of \(\beta\). 
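Returning briefly to the bulk invariant introduced above, the winding number can be evaluated numerically by tracking the angle of \((h_{x}(k),h_{y}(k))\) across the Brillouin zone. The sketch below neglects \(h_{z}\), as in the nearest-neighbour limit, and uses the parametrisation of Eq. (A.23); with that parametrisation the path encloses the origin (so \(|w|=1\)) roughly when \(|t_{2}+t_{4}|>|t_{1}|\).

```python
import numpy as np

def winding_number(t1, t2, t4, nk=2001):
    """Signed number of revolutions of (h_x(k), h_y(k)) around the origin."""
    k = np.linspace(-np.pi, np.pi, nk)
    hx = t1 + (t2 + t4) * np.cos(k)
    hy = (t2 - t4) * np.sin(k)
    phi = np.unwrap(np.arctan2(hy, hx))             # continuous winding angle
    return int(np.rint((phi[-1] - phi[0]) / (2.0 * np.pi)))
```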
The next step is to determine a suitable function for \(V_{e}(\beta)\). In general, we can express \(f_{e}(\beta)\) as a polynomial for \(\beta<\beta_{c}\). Since \(V_{e}(\beta)\) is proportional to \(f_{e}(\beta)\) within the desired range of \(\beta\), we can assume, to first order in \(\beta\), that \(V_{e}(\beta)=V_{m}(\beta)=V_{n}(\beta)=a\beta+b\). Here, \(a\) and \(b\) are constants, and we can determine their values by minimizing the deviation of \(\lambda_{e}\) in Eq. (A.20a). By incorporating a properly defined \(V_{e}(\beta)\), the conditions mentioned earlier hold true for \(\beta<\beta_{c}\) and become invalid for \(\beta>\beta_{c}\). Notably, introducing this asymmetry in nanoparticle volumes ensures the preservation of parity symmetry. Physically, the adiabatic change in volume denoted by \(V(\beta)\), coupled with variations in the intra-cell separation distance, introduces a defect in the system. This defect retains most of the system's symmetries, except for translation symmetry. Consequently, two robust localized states emerge within the energy gap region. Thus, the topological phase transition in a chain of nanoparticles with an asymmetric response matrix is characterized by the emergence of localized modes. Figure (A.5) depicts the eigenvalue spectrum of the same nanoparticle chain discussed previously, but with the additional consideration that the volumes of the first and last particles are given by \(V_{1}(\beta)=V_{N}(\beta)=\frac{4\pi}{3}[-1250000\beta+689000]\) nm\({}^{3}\). The coefficients \(a=-\frac{4\pi}{3}1250000\) and \(b=\frac{4\pi}{3}689000\) are chosen to ensure that the constraints in Eq. (18) hold true for values of \(\beta\) less than \(0.5\). It is apparent from the spectrum that a topological edge band is present, and a phase transition occurs at the critical point \(\beta_{c}=0.5\). The specific behavior and properties of these localized modes depend on the details of the defect itself, such as its position, volume \(V_{0}\), lattice constant \(d\), and the specific form of the function \(V(\beta)\) introduced by the variation in \(\beta\) and intra-cell separation distance. The function \(V(\beta)\) controls the adiabatic change in the system's parameters, affecting the temperature field behavior in the vicinity of the defect. The localization of this modes (known as defect modes) is a result of the broken translational symmetry caused by the defect, leading to a modification of the system's periodicity. This disruption causes the eigenstates associated with the defect to become localized within the band gap, resulting in the emergence of the defect modes. As shown in Fig. (10), the edge band with a higher eigenvalue in the topologically trivial phase (\(\beta>0.5\)) is labeled as \(U\), while the other band with a lower eigenvalue is labeled as \(D\). In the subsequent discussions, their corresponding eigenmodes are referred to as the "upper" edge mode \(\psi_{U}(x)\) and the "lower" edge mode \(\psi_{D}(x)\), respectively. Moreover, the eigenmode in the spectrum with the lowest (highest) eigenvalue is labeled as \(S\) (\(F\)). According to Ref. [53], the first mode \(\mathbf{\psi}_{S}\) predominates in the thermalization process when excited in a collection of particles. In the nontrivial topological phase of the system (\(\beta<0.5\)), the edge modes become localized at both ends of the chain. 
However, it will be demonstrated in the subsequent section that the topologically localized modes can be positioned at any arbitrary point within the chain, denoted as \((x_{m},x_{n})\), with the condition \(m=N+1-n\). In order to investigate the underlying mechanism behind the emergence of localized edge states as we transition from a symmetric to an asymmetric response matrix, we analyze the eigenvalue spectrum of a chain of length \(N=60\) with \(R_{i}=R_{0}=50\) nm for \(\beta=0.46\), as shown in Fig. (A.6). By varying the radii of particles \(1\) and \(60\), denoted as \(R_{e}\), the response matrix undergoes a transition from symmetric to asymmetric. When \(R_{e}=R_{0}\), corresponding to a symmetric response matrix, the system exhibits a spectrum with a finite gap and no localized states, as we observed in Fig. (1b). However, as \(R_{e}\) is decreased below \(R_{0}\), the response matrix becomes asymmetric. In this asymmetric regime, two defect modes emerge from the upper bulk band and move into the gap region. Notably, these modes reach the midgap for nonzero values of \(R_{e}\). Remarkably, the introduction of an asymmetric response matrix enables the existence of localized modes without modifying the coupling configuration within the bulk of the system. In our subsequent analysis, we will demonstrate that these modes are exponentially confined around the defect position, highlighting their localized nature.

Figure A.6: The eigenvalue spectrum in a chain consisting of \(60\) nanoparticles at \(\beta=0.46\). The spectrum is presented for different values of \(R_{e}\), representing the radius of the nanoparticles at positions \((m,n)=(1,60)\), while all other particles maintain a constant radius of \(R_{0}=50\) nm. Initially, the system displays a symmetric response matrix when \(R_{e}=R_{0}\). However, as the value of \(R_{e}\) decreases, the system undergoes a transition to an asymmetric response matrix.

### Robustness of the edge states

The robustness of topological edge states is a fundamental characteristic that ensures their protection against perturbations and imperfections in the system. These edge states emerge in specific topological phases of matter, such as topological insulators or topological superconductors, where excitations are confined to the boundaries or interfaces of the material. As illustrated in Fig. (10), the presence of a gap in the eigenvalue spectrum of the system with an asymmetric response matrix distinguishes the edge states from the bulk states. This gap serves as a protective barrier, preventing scattering or hybridization between the edge states and the bulk states. Consequently, the edge states exhibit resilience against local perturbations that do not result in the closure of the gap. To demonstrate the robustness of topological edge states in the proposed system, we investigate the effects of perturbation on the eigenvalue spectrum of the response matrix. The perturbation is introduced by adding uncorrelated random numbers with a Gaussian distribution and a standard deviation of \(\sigma\) to the elements of the response matrix in real space. The parameter \(\sigma\) quantifies the strength of local perturbation, encompassing various factors such as displacement, removal, or changes in material or particle volume. In this section, the presented results are obtained for \(\beta=0.46\), and statistical significance and reliability are ensured by averaging over \(500\) realizations. As shown in Fig. (A.7a), the introduction of diagonal perturbation breaks the chiral symmetry, leading to the loss of topological protection for the edge modes within the system. This disruption results in the mixing of bulk and edge states, as evidenced by the overlap of their eigenvalues. Figure (A.7b) illustrates the impact of off-diagonal perturbation on the edge states. In this case, the localized edge states exhibit resilience to the specific perturbation, maintaining their separation from the bulk states. However, the perturbation parameter \(\sigma\) noticeably affects the eigenvalues of the bulk bands. We observe in both cases that as the magnitude of perturbation approaches the band gap, this effect becomes more pronounced and can ultimately close the gap, causing the edge states to merge with the bulk states. The question at hand is whether there exists a perturbation that can be introduced into the model without closing the energy gap between the bulk bands. The model exhibits a distinct topological phase characterized by the presence of protected edge states when the relaxation gap is present. To preserve this gap while introducing perturbations, it is crucial to maintain the system's underlying topological properties and preserve the relevant symmetries. In this context, our focus lies on perturbations that preserve the mirror-reflection symmetry of the system, as illustrated in Figure (A.7c). This specific perturbation is carefully designed to uphold the mirror-reflection symmetry of the perturbed hoppings, while simultaneously avoiding the closure of the band gap. Consequently, the introduced perturbation safeguards the topological invariant of the system, which, in this case, is represented by the winding number. By maintaining the mirror-reflection symmetry, the perturbation effectively preserves the system's topological phase and ensures the persistence of the associated protected edge states.

### Localized modes of the system with asymmetric response matrix

### L-type and R-type Eigenmodes

As discussed earlier, \(\psi_{U}(x)\) and \(\psi_{D}(x)\) are simultaneous eigenstates of \(\hat{H}\) and \(\hat{\Pi}\). They are degenerate states in the topologically nontrivial phase of the system, but not in the trivial phase. Hybridized modes can be formed as follows: \[\psi_{L}(x)=\frac{1}{\sqrt{2}}[\psi_{U}(x)-\psi_{D}(x)],\] (A.28a) \[\psi_{R}(x)=\frac{1}{\sqrt{2}}[\psi_{U}(x)+\psi_{D}(x)],\] (A.28b) These modes have weight distributions \(P_{L}=P_{R}=0.5[P_{U}+P_{D}]\) and are referred to as L-type and R-type states, respectively. It is important to note that these states are not parity eigenstates; however, they are eigenstates of \(\hat{H}\) in the topologically nontrivial phase: \[\hat{H}\psi_{L}(x)=\lambda_{e}\psi_{L}(x),\] (A.29a) \[\hat{H}\psi_{R}(x)=\lambda_{e}\psi_{R}(x).\] (A.29b) Furthermore, the states described by Equations (A.28a) and (A.28b) are mostly concentrated on the left-hand and right-hand localization sites, respectively. However, in the trivial phase, they are not eigenstates of \(\hat{H}\) and are extended over the entire chain. The localized modes and their corresponding L-type and R-type states are depicted in Fig. (A.8) for \(\beta=0.46\). The positions of the localized states are chosen to maintain x-directional mirror symmetry with respect to \(l/2\) by setting \(V_{i\neq m,n}=V_{0}\) and \(V_{m}=V_{n}=V(\beta)\). Each panel in Figure (A.8) represents a typical value of \((m,n)\), with \(n=N+1-m\).
It is evident that the corresponding L-type and R-type states in each case are located on the \(m\)th and \(n\)th sites, respectively. We note that the L-type and R-type states can be considered as pure states in the limit of \(\beta\ll\beta_{c}\). Figure A.7: Effect of (a) diagonal, (b) off-diagonal, and (c) Mirror-reflection perturbation on the eigenvalue spectrum in a finite chain with \(N=61\) particles and \((m,n)=(1,60)\), as a function of the perturbation strength \(\sigma\). ### Localized Vs Extended Modes To further investigate the physical characteristics of the model illustrated in Fig. (A.5), we utilize the inverse participation ratio (IPR). The color bars on the right side of the figure indicate the IPR values associated with the modes in the system featuring an asymmetric response matrix. Notably, we observe that the edge modes within the topologically nontrivial phase consistently display the highest IPR values. Remarkably, the IPR of these modes remains the largest irrespective of the specific topological phase of the system. Figure (A.9) illustrates the variation in the profile of \(|\psi_{U}(x)|^{2}\) across the critical point \(\beta_{c}=0.5\) for the same configuration as depicted in Figure (A.5), considering both the topologically trivial and nontrivial phases. At \(\beta=0.46\), the profile is entirely localized at \(x=0\) and \(x=l\). However, as \(\beta\) increases, the contribution of the edge state at \(x=0\) and \(x=l\) diminishes. This delocalization is supported by the decrease in the IPR of \(\psi_{U}(x)\) in Figure (A.5). Upon crossing the critical point, the state becomes extended, indicating the occurrence of an edge-bulk transition. In the topologically trivial phase, we observe a highly uniform profile within the chain, and \(\psi_{U}(x)\) becomes delocalized over the entire system. ### Temporal evolution of temperatures in chain with symmetric response matrix In Fig. (A.10), we investigate the temperature evolution in a chain comprising \(N=60\) nanoparticles with volumes \(V_{i}=\frac{4}{3}\pi(50^{3})\) nm\({}^{3}\). The chain exhibits a symmetric response matrix. We examine two cases: (a) \(\beta=0.46\) and (b) \(\beta=0.54\). Initially, only the first particle (particle 1) is heated to a temperature of \(350\) K, while all other particles are at room temperature (\(300\) K). In both cases, we observe that particle number \(1\) undergoes heat exchange with other particles, eventually reaching thermal equilibrium. This equilibrium state is attained after multiple thermalization steps within the system. Considering the information presented in Fig.(A.4c), which illustrates a topological phase transition in the system as the parameter \(\beta\) changes, while no localized states are observed, and taking into account the inset, which indicates the involvement of a significant number of modes in the temperature evolution, we can reasonably expect a prolonged thermalization time for both scenarios. These results align with the long-range nature of heat exchange in the SSH chain of identical nanoparticles, as reported in Ref.[48]. The dependence of thermalization time on the chain length is shown in Fig. (A.10c) as a function of \(\beta\). The plot reveals a decreasing trend in thermalization time as the chain length increases, eventually reaching a saturation point. On the other hand, the variation of parameter \(\beta\) does not exert a significant influence on the thermalization time of the system. 
However, it is worth noting that in longer chains, the thermalization time reaches its minimum value at \(\beta=0.5\). Once again, we can infer that the absence of localized states accounts for the lack of dependence of the thermalization time on the parameter \(\beta\) in a chain of identical nanoparticles. Finally, we compare the spatio-temporal evolution of the temperature field in a chain between systems with a symmetric and an asymmetric response matrix. In both scenarios, the first particle is maintained at a fixed temperature of \(350\) K, while the temperatures of the rest of the chain are initially set to the room temperature of \(300\) K. The results are presented for \(\beta=0.46\) and \(\beta=0.54\). Figures (A.10d) and (A.10e) depict the results for the system with a symmetric and asymmetric response matrix, respectively. In the asymmetric case, we consider a chain length of \(N=30\) and set the values of \((m,n)=(1,30)\) to ensure the presence of a topologically non-trivial edge state. Interestingly, we observe that the thermal energy becomes localized only in the topologically non-trivial phase of the system with an asymmetric response matrix. This localization arises due to the interplay between the asymmetric effects and the system's topological properties, leading to the confinement of thermal energy in specific regions of the chain. In contrast, in the spatio-temporal evolution of the temperature field in the chain with a symmetric response matrix, no such localization is observed. The absence of localization in the symmetric case is consistent with the behavior of a system without topological protection, where thermal energy can freely propagate throughout the chain without being confined to specific regions. These findings highlight the crucial role of the asymmetric response matrix in inducing the emergence of localized states and the distinct behavior of the system compared to the symmetric case. Moreover, they provide further evidence for the connection between asymmetric physics, topological properties, and the spatial distribution of thermal energy in the studied chain system. Figure A.10: Evolution of temperatures in a chain with symmetric response matrix and \(N=60\) nanoparticles with volumes \(V_{i}=\frac{4}{3}\pi 50^{3}\) nm\({}^{3}\) for (a) \(\beta=0.46\) and (b) \(\beta=0.54\). As an initial temperature condition, only the first particle is heated up to \(350\) K, while all other particles are initially at room temperature (\(300\) K). (c) Thermalization time for the initial temperature state \(\Delta T_{i}(0)=50\delta_{i,1}\), in a chain of NPs with symmetric response matrix and length \(N\in\{20,40,60,80,100\}\). The spatio-temporal evolution of the temperature field, denoted as \(\Delta T(x_{i},t)\), is studied in a chain consisting of \(N=30\) particles. The first particle is maintained at a constant temperature \(T_{1}=350\) K, while all other particles are initially at room temperature \(300\) K: (d) The temperature field \(\Delta T(x_{i},t)\) in a chain with a symmetric response matrix, corresponding to the setup shown in Fig. (b). (e) The temperature field \(\Delta T(x_{i},t)\) in a chain with an asymmetric response matrix, following the configuration depicted in Fig. (c).
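To make the preceding discussion more concrete, a linear relaxation law of the form \(\mathrm{d}\Delta T_{i}/\mathrm{d}t=-\sum_{j}M_{ij}\Delta T_{j}\) and the IPR diagnostic can be explored qualitatively with a short script. The sketch below is purely illustrative: it assumes a generic SSH-like Laplacian as a stand-in for the radiative response matrix of the nanoparticle chain studied above, with the first site initially heated.

```python
import numpy as np
from scipy.linalg import expm

# Generic SSH-like relaxation matrix (illustrative stand-in, NOT the
# radiative response matrix of the nanoparticle chain studied above).
N = 30                       # number of sites
beta = 0.46                  # dimerization parameter, as in the figures
g1, g2 = beta, 1.0 - beta    # alternating couplings
M = np.zeros((N, N))
for i in range(N - 1):
    g = g1 if i % 2 == 0 else g2
    M[i, i + 1] = M[i + 1, i] = -g
M += np.diag(-M.sum(axis=1))   # rows sum to zero, so the total energy is conserved

# Eigenmodes and their inverse participation ratio (IPR).
vals, vecs = np.linalg.eigh(M)
ipr = np.sum(np.abs(vecs) ** 4, axis=0)   # IPR near 1 flags a localized mode

# Temperature relaxation dDeltaT/dt = -M DeltaT with the first site hot.
dT0 = np.zeros(N)
dT0[0] = 50.0                # Delta T = 50 K on particle 1
for t in (0.0, 1.0, 5.0, 20.0):           # arbitrary time units
    dT = expm(-M * t) @ dT0
    print(f"t = {t:5.1f}: max Delta T = {dT.max():6.2f} K at site {dT.argmax()}")
print("largest eigenmode IPR:", ipr.max())
```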
2303.02428
Building a Modal-balanced BlockChain with Semantic Reconstruction
The current large blockchain systems (BTC Lightning network, Ethereum, etc.) are generally facing the problems of low persistence rates and high storage costs. Therefore, users tend to store single modal (textual) information on the existing blockchain systems. Inspired by semantic communication algorithms, this paper presents a new algorithm to solve the serious imbalance between textual and visual modals on blockchains. After semantic sampling of the original visual image, the resulting semantic text will be stored on the chain, and the end users can reconstruct a semantically similar image using the \textbf{R}elative \textbf{O}ptimal \textbf{S}emantic \textbf{I}sotope \textbf{S}election algorithm. Experiments on the DIV2K dataset show that the blockchain with our algorithm can achieve 430,000 times the storage capacity and 550,000 times the persistence rate for the original visual data with acceptable semantic information loss.
Zhijie Tan, Xiang Yuan, Shengwei Meng, Yakun Huang, Weiping Li, Zhonghai Wu, Tong Mo
2023-03-04T14:32:46Z
http://arxiv.org/abs/2303.02428v1
# Building a Modal-Balanced Blockchain with Semantic Reconstruction ###### Abstract The current large blockchain systems (BTC Lightning network, Ethereum, etc.) are generally facing the problems of low persistence rates and high storage costs. Therefore, users tend to store single modal (textual) information on the existing blockchain systems. Inspired by semantic communication algorithms, this paper presents a new algorithm to solve the serious imbalance between textual and visual modals on blockchains. After semantic sampling of the original visual image, the resulting semantic text will be stored on the chain, and the end users can reconstruct a semantically similar image using the **R**elative **O**ptimal **S**emantic **I**sotope **S**election algorithm. Experiments on the DIV2K dataset show that the blockchain with our algorithm can achieve 430,000 times the storage capacity and 550,000 times the persistence rate for the original visual data with acceptable semantic information loss. Zhijie Tan\({}^{\dagger}\), Xiang Yuan\({}^{\dagger}\), Shengwei Meng\({}^{\ddagger}\), Yakun Huang\({}^{\ddagger}\), Weiping Li\({}^{\dagger}\), Zhonghai Wu\({}^{\dagger}\), Tong Mo\({}^{\dagger}\)\({}^{\dagger}\)School of Software and Microelectronics, Peking University, Beijing, China \({}^{\ddagger}\)School of Computer Science, Beijing University of Posts and Telecommunications, Beijing, China multimodal, blockchain, semantic reconstruction ## 1 Introduction The blockchain promises to be the fundamental infrastructure of the Metaverse with the features of permanent file storage and a decentralized verification mechanism. However, these features imply curses on file size and persistence rate whenever a file is to be up-chained. The permanent file storage feature implies that the total disk size of the blockchain is monotonically increasing, while decentralized verification implies that file persistence must be confirmed by more than 50% of the verification nodes. Even worse, such curses grow with the number of nodes. Take Ethereum [1], which currently has nearly 400,000 active addresses, as an example: From September 15th, 2021 to September 15th, 2022, the average block generation time was between 12 and 14 seconds, and the average block size was about 2 MB1. Such persistence bandwidth (about 10MB/min) is extremely limited for visual modal data, which will cause severe congestion on the blockchain. Furthermore, large-scale visual data persistence will bring unacceptable storage pressure to every decentralized node. Footnote 1: Data from [https://www.ycharts.com/](https://www.ycharts.com/). To solve the problem of such curses, it is common to use IPFS (InterPlanetary File System) [2] or other cloud storage methods (Google Drive, etc.) to generate a unique hash or hyperlink for the corresponding visual data. However, the eternal storage of external file links does not guarantee the permanent storage of the data itself: the accessibility and existence of files in IPFS or other cloud disks are not guaranteed by the blockchain, which introduces new uncertainty into the design of blockchain systems and is therefore not a perfect solution. Besides external file links, traditional image compression methods (roughly 500 compressed images per minute even at an extremely high compression ratio of 1:100) still have great difficulty achieving the persistence rate needed for up-chaining in a communication network of nearly 400,000 users.
Therefore, while the current mobile Web2 world is dominated by visual data in social media, which is easier for everyone to understand than textual data, the blockchain for the future Web3 world still suffers from extremely scarce visual data. Inspired by current popular semantic communication algorithms[3, 4], the core idea of this paper is swapping off-chained computing resources for up-chained persistence rate and file size. For blockchains with decentralized nodes, off-chained computing tasks can be completed in parallel, while up-chained data persistence needs to be verified in serialized blockchain blocks. The impact of off-chained computing tasks therefore weakens as the number of blockchain nodes grows. Thanks to breakthroughs in image-to-text [5, 6], text-to-image [7, 8] and other multimodal technologies [9], a powerful pretrained model can extract the high-level semantic information of visual images and another model can reconstruct **semantic isotopes** (different reconstructed images generated from the same semantic information). It should be emphasized that compared to image compression methods, our semantic reconstruction algorithm pays more attention to semantic similarity instead of pixel-to-pixel visual similarity. When human beings see an image, they understand its high-level semantic information. If the image is slightly shifted or rotated, this has little effect on the semantic information but completely destroys common visual similarity metrics such as PSNR (Peak Signal-to-Noise Ratio) [10] or SSIM (Structural SIMilarity) [11]. To address the challenges of the curses, this paper presents an algorithm where a blockchain requires only the corresponding natural semantic text (\(T\)) to be up-chained. However, during the process of semantic reconstruction, there exists a problem: reconstructing high-dimensional visual data from low-dimensional textual data inevitably brings some randomness. The randomness will be reduced when the certainty of the semantic prompts increases. This paper designs the **R**elative **O**ptimal **S**emantic **I**sotope **S**election (**ROSIS**) algorithm, which is model-independent, to reduce this randomness. We summarize the main contributions as follows: (1) a semantic reconstruction method is formulated to allow more visual data to be up-chained; (2) the ROSIS algorithm is designed to reduce semantic loss during semantic reconstruction; (3) experiments show the great potential of this method. ## 2 Methodology The prototype framework of our algorithm is shown in Fig. 1. The whole algorithm consists of a semantic sampler (Image2Text model), a semantic reconstructor (Text2Image model), and the ROSIS algorithm. This section will concentrate on the definitions and designs of our algorithm. ### Problem Formulation The learning-based image reconstruction process can be defined as: \[I_{o}\stackrel{{ f(\theta_{1})}}{{\longrightarrow}}X\stackrel{{ g(\theta_{2})}}{{\longrightarrow}}I_{r} \tag{1}\] In (1), \(I_{o}\) is the original image, \(f(\theta_{1})\) is a sampling mapping with parameters \(\theta_{1}\), \(X\) is the intermediate representation, \(g(\theta_{2})\) is a reconstruction mapping with parameters \(\theta_{2}\), and \(I_{r}\) is the reconstructed image. From the perspective of information bottleneck theory[12, 13], \(f(\theta_{1})\) aims to minimize the mutual information \(MI(I_{o},X;\theta_{1})\) while \(g(\theta_{2})\) aims to maximize \(MI(X,I_{l};\theta_{2})\).
\(I_{l}\) is the lossy compressed image down-sampled from \(I_{o}\). Therefore, the image lossy reconstruction problem can be defined as: \[\operatorname*{arg\,min}_{\theta_{1},\theta_{2}}MI(I_{o},X;\theta_{1})- \gamma MI(X,I_{l};\theta_{2}) \tag{2}\] \(\gamma\) is used to balance \(f\) and \(g\). And the semantic reconstruction problem can be similarly defined as: \[\operatorname*{arg\,min}_{\theta_{1},\theta_{2}}MI(Inf(I_{o}),T;\theta_{1})- \gamma MI(T,Inf(I_{l});\theta_{2}) \tag{3}\] In (3), \(Inf(I)\) is the semantic representation of \(I\), and \(T\) is an intermediate semantic representation. Our algorithm chooses text (natural language) as the semantic representation. ### Decoupled Semantic Sampler and Reconstructor A further question is why \(X\) needs to be a text in (3). From the aspect of compression ratio, \(X\) can be compressed into a lighter representation. However, such a representation brings a hidden assumption that \(g\) can map every \(x\in X\) to an acceptable \(i_{r}\in I_{r}\), which brings a restriction: \(f\) and \(g\) are tightly coupled (usually trained on the same dataset or for a specific task). This restriction will require that all of the sampling and reconstruction models be up-chained. If they are not up-chained, an \(x\) without semantic information cannot be decoded into an acceptable image, and the permanent file storage feature is no longer satisfied. Therefore, the user needs to download the corresponding reconstructor model from the blockchain, which brings heavy communication and storage pressure to the blockchain. If \(x\) is a semantic text, \(f\) and \(g\) can be any pair of powerful image-to-text and text-to-image pretrained models. There will be no extra communication or storage pressure for the blockchain. As a result, our method adopts a decoupled semantic sampler and reconstructor, which means these models and images are transparent to both sides. Thus, \(\theta_{1}\) and \(\theta_{2}\) of \(f\) and \(g\) can hardly be finetuned. In fact, \(\theta_{1}\) and \(\theta_{2}\) are fixed in our proposed algorithm. ### The Relative Optimal Semantic Isotope Selection (ROSIS) Algorithm Because \(\theta_{1}\) is fixed, \(T\) becomes a fixed value. A fixed \(T\) means that the current problem is slightly different from (3), and mutual information can be replaced with cosine distance. The semantic reconstruction problem in (3) can be converted to: \[\operatorname*{arg\,min}_{\hat{T}}Cosine\_dist(T,Inf(\hat{I}_{r})) \tag{4}\] \(\hat{I}_{r}\) is the reconstructed image from \(\hat{T}\). \(\hat{T}\) is the combination of the original \(T\) and a random seed. Since the original image and the sampler are invisible to the reconstructor, due to the transparency mentioned in Section 2.2, the text \(T\) is the only reliable signal in (4), whereas in (2) and (3) \(I_{o}\) is more reliable. With a compression ratio of 10000:1 or higher, reconstructing \(i_{r}\in I_{r}\) from \(t\in T\) is inherently uncertain for the general task. In practice, for Text2Image models, different random seeds yield different images even with the same input text \(T\). So we propose the ROSIS algorithm: generate different random seeds to get different \(\hat{T}\) and choose the semantically closest image as the final \(I_{r}\). For the tasks whose datasets and models are known beforehand, the ROSIS algorithm can be applied as a model- and data-independent method to acquire extra information gain based on the finetuned models.
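A minimal sketch of this selection loop is given below. The functions `text_to_image`, `image_to_text` and `embed` are hypothetical wrappers standing in for the Text2Image, Image2Text and sentence-embedding models (Stable Diffusion, BLIP and all-MiniLM-L6-v2 in the prototype of Section 3); the code illustrates the idea rather than the exact implementation.

```python
import numpy as np

def cosine_distance(u, v):
    """1 - cosine similarity between two 1-D embedding vectors."""
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def rosis(prompt, text_to_image, image_to_text, embed, n_seeds=8):
    """Relative Optimal Semantic Isotope Selection (illustrative sketch).

    prompt        : semantic text T fetched from the blockchain
    text_to_image : hypothetical wrapper, (prompt, seed) -> reconstructed image
    image_to_text : hypothetical wrapper, image -> caption text
    embed         : hypothetical wrapper, text -> 1-D embedding vector
    """
    target = embed(prompt)
    best_image, best_dist = None, float("inf")
    for seed in range(n_seeds):
        # Each random seed yields a different "semantic isotope" I_i of T.
        candidate = text_to_image(prompt, seed=seed)
        caption = image_to_text(candidate)             # T_i
        dist = cosine_distance(embed(caption), target)
        if dist < best_dist:                           # keep the semantically closest isotope
            best_image, best_dist = candidate, dist
    return best_image, best_dist
```

As the ablation study below confirms, more random seeds bring better results at the price of extra off-chained computation, so `n_seeds` should be chosen to balance semantic fidelity against computing resources.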
For example, the face-to-id [14] and id-to-face [15] tasks can be improved by the ROSIS algorithm. If there are some similar faces in the dataset, the id-to-face model may generate some similar faces. Then the best face can be chosen. ## 3 Experiments ### Experiment Environment & Dataset & Metrics **Experiment Environment**: The operating system used in the experiment part is ubuntu 20.04, the CPU is Intel Xeon (R) ### Algorithm Performance on F.S. and P.T. A prototype of the semantic reconstruction method is shown in Fig. 1. This prototype uses \(BLIP\)[5] as the Image2Text model in Fig. 1. The visual encoder of \(BLIP\) uses transformer blocks whose weights are initialized from \(ViT\)-\(L/16\)[17]. And the text encoder uses transformer blocks whose weights are initialized from \(BERT_{base}\)[18]. For a given text prompt \(T\), Stable Diffusion (Text2Image Model in Fig. 1.) [7] is used to recover the corresponding image \(I\). \(Inf(I)\) is calculated by \(all\)-\(MiniLM\)-\(L6\)-\(v2\)[19]. The results of this experiment are shown in Table 1. We compare our framework with the popular jpg and.webp methods, state-of-the-art RC [21] and SlimCAEs [20] methods (PSNR\(>\)25DB). The W.75/J.75 method in Table 1 means that the original image (.png) in DIV2k is saved as.webp/.jpg using the default quality 75 (PSNR\(>\)25DB). The W.1/J.1 in Table 1 means that the original image (.png) in DIV2k is saved as.webp/.jpg using quality 1 (PSNR\(>\)20DB). It is easy to be seen from Table 1 that the framework proposed in this paper has incomparable advantages in persistence speed and file size compared with image compression methods. Because compressing.txt to.gz file does not cause information loss, this method mentioned later in this article refers to the compressed text (.gz file). Even compared with.jpg/.webp, our method can still achieve nearly 10000x in the manner of F.S. AT or TPT AT. mWT and MWT reveal two clues to us. One clue is that for images with similar resolutions, \(\frac{MWT}{mWT}\) (for W.1 in Tabel 1) can achieve 6.82 since bpp (bits per pixel) of these images are different. In contrast, the text prompts sampled from the images have similar file sizes. The other clue is that our method can map massive visual data to a file that is small enough to be stored in one blockchain block, which will greatly reduce the communication and synchronization pressures of the entire blockchain system. This explains why the mWT&MWT metrics corresponding to our method in Table 1 are all consistent with TPT. Figure 1: The overall architecture of the semantic reconstruction prototype. Phase 1: the semantic sampling phase. The raw image is converted to a text prompt \(T\). Phase 2: the semantic reconstruction phase. The text prompt \(T\) is fetched from the blockchain. We generate \(n\) semantic isotopes, \(I_{1}\),.., \(I_{n}\) from \(T\) by Text2Image model, then converted them to \(T_{1}\),.., \(T_{n}\) separately. The semantic distance between \(T_{s}\) and \(T\) will be calculated and \(I_{r}\) with the corresponding index of the minimum semantic distance will be considered as the final reconstruction result. Considering the semantic distance of the two images, we can state that the method proposed in this paper achieves the average semantic distance similar to \(Dis_{mean}\) of the.jpg image with quality 1, but only uses up to 1/10000 of the persistence time and 1/6000 of the file size. 
\(Dis_{max}\) reflects the semantic loss in the worst case, and it tells that our method is superior to all the methods in the.jpg series. In other words, this framework can achieve acceptable performance for all images with the worst situation (0.712 in Table 1). From \(Dis_{min}\) we can find that this framework can generate extremely accurate semantic expressions in some cases (0.0876 in Table 1). There will be further doubt that the consumption time of the semantic sampler/reconstruct consumes nearly 10s for a single image, which seems too long. Assuming that there are 1000 hosts with their running nodes on the same blockchain, every node has a different image from DIV2K and they need to persist the images. With our method, there will only be one generated blockchain block and the total time including the TSS&TSR is nearly 30s. With the J.1 method, the total time will grow up to at least 170s. Besides, for this blockchain, the J.1 file (about 51MB) stored in nearly 510 blockchain blocks has to be verified between 1000 distributed hosts instead of our only local host, and the real total consumption time and the waiting time for each host will increase dramatically with the possible congestion or other factors. The main cause of this phenomenon is that off-chained computing can be parallel while up-chained persistence can only be verified by serialized blocks on the shared blockchain. The effect of TSS&TSR will decrease with increasing nodes number. ### Ablation Study of ROSIS Algorithm We list the experiment results without using ROSIS in the worst situation and a random situation in Table 2. The worst situation means the semantic isotope is selected as the image whose semantic distance is furthest from the original image and the random situation means the semantic isotope is randomly selected. Compared to the worst situation, the ROSIS algorithm improves by 73.4% for the mean distance, 134.0% for the minimum distance, and 35.8% for the maximum distance. Compared to the random situation, the ROSIS algorithm improves by 32.4% for the mean distance, 86.0% for the minimum distance, and 25.6% for the maximum distance. Although more random seeds will bring better results, the number of random seeds should be considered carefully to save computing resources. ## 4 Conclusion & Future Work In the last decade, the bits per pixel increases less than 10% (vs.webp) on the DIV2K dataset [21, 20] while the Ethereum users are sharing a network whose total bandwidth is 10MB/min. Therefore, we propose a semantic reconstruction algorithm to build a modal-balanced blockchain under such limited bandwidth. The ROSIS algorithm is designed to reduce the semantic loss caused by the randomness when reconstructing the high dimensional image from the low dimensional text. Experiments prove the breakthrough of making the DIV2K dataset up-chained with less than 1/430,000 space usage and 1/550,000 persistence time with acceptable semantic loss. In the future, we will explore possible algorithms to produce a proper \(\tilde{T}\) to reduce the difference caused by reconstructor choosing. Besides, for the dedicated task whose models update at low frequency, we will investigate more time to get a better representation \(X\). ## 5 Acknowledgements This work is supported by the Science and Technology Research "\(JieBangGuaShuai\)" Program of Liaoning Province, China: Intelligent e-Government System based on Consortium Blockchain under Grant 2021JH1/10400010. 
\begin{table} \begin{tabular}{c c c c c c c c c c} \hline & TPT(s) & mWT(ms) & MWT(ms) & F.S.(MB) & Dis\({}_{mean}\) & Dis\({}_{min}\) & Dis\({}_{max}\) & TSS\({}_{mean}\)(s) & TSR\({}_{mean}\)(s) \\ \hline Ori. & 7209.580 & 3744.01 & 19804.14 & 3800.515 & 0.0 & 0.0 & 0.0 & - & - \\ J.75 & 1020.774 & 533.61 & 2258.87 & 380.160 & **0.253** & 0.0548 & 0.763 & - & - \\ J.1 & 169.245 & 89.48 & 373.03 & 51.007 & 0.375 & 0.115 & 0.894 & - & - \\ W.75 & 780.385 & 457.10 & 1912.74 & 338.183 & 0.254 & 0.0548 & 0.700 & - & - \\ W.1 & 178.157 & 63.99 & 436.63 & 52.191 & 0.287 & 0.0901 & 0.721 & - & - \\ S[20] & 720.143 & 396.13 & 1660.63 & 307.63 & 0.254 & **0.0543** & **0.684** & - & - \\ R[21] & 743.211 & 426.52 & 1820.86 & 324.12 & 0.254 & 0.0544 & 0.697 & - & - \\ Ours & 0.0440 & 0.0440 & 0.0440 & 0.0371 & 0.376 & 0.0876 & 0.712 & 12.51 & 10.11 \\ Ours* & **0.0131** & **0.0131** & **0.0131** & **0.00879** & 0.376 & 0.0876 & 0.712 & 12.51 & 10.11 \\ \hline \end{tabular} \end{table} Table 1: Comparison of semantic reconstruction with image compression methods. J.75/J.1 means the compressed quality (an integer between 1 and 95) is set to 75/1. J./W. means.jpg/.webp. Ours/ours* means the original text prompt(.txt)/the compressed text prompt(.gz). Definitions of TPT (Total Persistence Time), mWT/MWT (minimum Waiting time/Maximum Waiting Time), F.S. (File Size), Dis. (semantic Distance between the original image and the chosen one), TSS (Time of Semantic Sampling per image)/TSR (Time of Semantic Reconstruction per image) can be found in section 3.1. The best results are in bold. \begin{table} \begin{tabular}{c c c c} \hline & \(Dis_{mean}\) & \(Dis_{min}\) & \(Dis_{max}\) \\ \hline Ours* & 0.376 & 0.0876 & 0.712 \\ Worst & 0.652 & 0.205 & 0.967 \\ Random & 0.498 & 0.163 & 0.894 \\ \hline \end{tabular} \end{table} Table 2: Semantic distance with/without ROSIS algorithm.
2302.12512
Cosmological consequences of a scalar field with oscillating equation of state. IV. Primordial nucleosynthesis and the deuterium problem
We study the primordial nucleosynthesis (BBN) in the stepwise scalar field model proposed by Ti\'an [arXiv:1912.13208, Phys. Rev. D 101, 063531 (2020)], which provides a multiaccelerating Universe solution to the cosmological coincidence problem and predicts that the scalar field may be non-negligible even in the early Universe. The observed abundances of the light elements can be used to constrain the energy density of the scalar field during the BBN era. We present a public \texttt{Matlab} code to implement the BBN calculation in the stepwise scalar field model. We show that the model can survive the BBN constraints. In particular, this model incorporates a new solution to the possible deuterium problem: very early dark energy that appears at the end of BBN. In addition, the BBN constraints, along with constraints from the cosmic late-time acceleration, suggest that the Universe in the radiation era evolves in a chaotic accelerating manner, rather than an oscillating scaling manner.
S. X. Tian
2023-02-24T08:51:34Z
http://arxiv.org/abs/2302.12512v1
Cosmological consequences of a scalar field with oscillating equation of state. IV. Primordial nucleosynthesis and the deuterium problem ###### Abstract We study the primordial nucleosynthesis (BBN) in the stepwise scalar field model proposed by Tian [Phys. Rev. D **101**, 063531 (2020)], which provides a multiaccelerating Universe solution to the cosmological coincidence problem and predicts that the scalar field may be non-negligible even in the early Universe. The observed abundances of the light elements can be used to constrain the energy density of the scalar field during the BBN era. We present a public Matlab code to implement the BBN calculation in the stepwise scalar field model. We show that the model can survive the BBN constraints. In particular, this model incorporates a new solution to the possible deuterium problem: very early dark energy that appears at the end of BBN. In addition, the BBN constraints, along with constraints from the cosmic late-time acceleration, suggest that the Universe in the radiation era evolves in a chaotic accelerating manner, rather than an oscillating scaling manner. ## I Introduction The current standard cosmological model holds that there are two stages of accelerating expansion in the Universe [1; 2]. One is the early inflation, which is proposed to solve the horizon and flatness problems [3; 4; 5]. The other is the cosmic late-time acceleration, which was confirmed by observations of supernovae [6; 7]. Theoretically, there are many dark energy models to explain the late-time acceleration, such as the cosmological constant (the standard model), quintessence [8; 9; 10; 11], phantom [12], \(k\)-essence [13; 14], quintom [15; 16; 17]. However, these models share a common problem -- the cosmological coincidence problem, which states why the energy density of dark energy is now on the same order of magnitude as that of matter [18; 19]. To solve this problem, Dodelson _et al._[20] proposed that the Universe may have experienced periodic accelerating expansion in the past and the current acceleration is just a natural continuation. Meanwhile they implemented this scenario with a canonical scalar field model. This multiacceleration scenario inspired many follow-up studies, including model extensions [21; 22; 23; 24; 25] and confronting observations [26; 27; 28; 29]. In particular, inflation can be naturally unified into this scenario [21; 23; 24], which makes the model more attractive. We proposed a new canonical scalar field model to realize the multiacceleration scenario [30]. Hereafter we call this type of model a stepwise scalar field model since the potential is similar to the staircase (see Fig. 1 in [30] for an illustration). Furthermore, we found chaos and period-doubling bifurcations in our model [31], which were not identified in previous models. Our model also naturally unifies inflation and predicts an extremely small tensor-to-scalar ratio [32], which is consistent with current observations [33]. As the literal meaning of multiacceleration expresses, there should also be stages of accelerating expansion in the early Universe, that is, stages dominated by dark energy. How to detect dark energy in the early Universe? One approach is the global cosmological parameter constraints as did in [26; 27; 29]. This approach has strong model dependencies and thus is indirect. 
Another approach is the primordial nucleosynthesis (Big Bang nucleosynthesis, abbreviated BBN, see [34; 35] for reviews), which could probe the Universe with temperatures between \(10^{12}\,\mathrm{K}\) and \(10^{8}\,\mathrm{K}\). Given the great success (mainly about the helium-4 and deuterium abundances) of the standard BBN theory at the time, Dodelson _et al._[20] attempted to completely hide the scalar field during the BBN era in their model. Here standard means there is no dark energy. There are other similar works using BBN to constrain the upper limit of the early dark energy density in specific models [36; 37]. However, with the improvement of the measurement uncertainties about nuclear reaction rates and cosmological parameters, the standard BBN theory now seems to be broken. In addition to the well-known primordial lithium problem [38], there are inconsistencies between the theoretical prediction and observational abundance about deuterium [39; 40]. The essence of this deuterium problem remains unclear. On the one hand, it is still debatable whether the deuterium problem really exists [41; 42]. On the other hand, if the deuterium problem does exist, its solution may hint at new physics, such as time-varying fundamental constants [43]. The deuterium and lithium problems may also shed light on the possible existence of dark energy in the early Universe. In this paper, we use BBN to test the stepwise scalar field model proposed by us [30]. We focus on the following two issues: 1. Is it possible to completely hide the scalar field in our model as did in [20]? 2. Is there some kind of early dark energy profiles in our model to solve the deuterium problem, or even the lithium problem? Meanwhile we require that the model under the same parameter settings can explain the observed cosmic late-time acceleration. In this paper, we intend to provide case studies rather than global parameter constraints. This paper is organized as follows. Section II summarizes the BBN theory, including how to incorporate the stepwise scalar field into the BBN calculation. Section III describes our numerical results, including discussions about the BBN results of the standard model, and models with exponential scalar field and stepwise scalar field. Conclusions are presented in Sec. IV. Throughout this paper, we adopt the SI Units and retain all physical constants unless otherwise stated. All reported uncertainties represent 68% confidence intervals. ## II Basic Theory BBN calculation requires one to solve a nuclear reaction network in the expanding Universe at temperatures between \(10^{12}\,\mathrm{K}\) and \(10^{8}\,\mathrm{K}\) (equivalently energy scales between \(100\,\mathrm{MeV}\) and \(0.01\,\mathrm{MeV}\)). The main BBN physics was summarized in [44, 45], and the results are widely used in current BBN codes, e.g., NUC123[46, 47], PArthenNoPE[48, 49, 50], AlterBBN[51, 52] and PRIMAT[35, 39]. Here we summarize the BBN theory implemented in this paper, which include all the key physics. The results are integrated into a public Matlab code, which is named BBNLab and is available at GitHub1. Note that no physics in this section is original. However, this summary, especially the explicit expressions given in Eqs. (13) and (15), may be useful for beginners and presents a clear basis for our future BBN work. 
Footnote 1: [https://github.com/tshuxun/BBNLab](https://github.com/tshuxun/BBNLab) ### Evolution equations The Universe is assumed to be described by the flat Friedmann-Lemaitre-Robertson-Walker metric \(\mathrm{d}s^{2}=-c^{2}\mathrm{d}t^{2}+a^{2}(\mathrm{d}x^{2}+\mathrm{d}y^{2}+ \mathrm{d}z^{2})\), where \(a=a(t)\) is the cosmic scale factor and \(c\) is the speed of light, and contains photons, neutrinos, electrons, positrons, baryons, and a canonical scalar field. Hereafter we use the subscripts \(\{\gamma,\nu,e^{-},e^{+},b,\phi\}\) to denote these ingredients respectively, use plasma to refer {photons + electrons + positrons + baryons}, use the subscript matter to refer {plasma + neutrinos}, and use the subscript total to refer {matter + \(\phi\)}. During BBN era, we assume that the plasma is in thermal equilibrium, while the neutrino gas is thermally decoupled from the plasma2. The scalar field interacts with other species only through gravity. Nuclides considered in the nuclear network and their basic properties are listed in Table 1. Substituting the metric into the Einstein field equations gives the Friedmann equation Footnote 2: Note that neutrinos are in thermal equilibrium with the plasma through weak interactions when the energy scale is higher than \(2\,\mathrm{MeV}\)[35]. As the temperature drops, neutrino decoupling and \(e^{\pm}\) annihilation occur one after the other. Neither process is instantaneous and there is a temporal overlap between them. This overlap results in a neutrino heating process from the \(e^{\pm}\) annihilation [53]. For simplicity, we assume that all the influences of this neutrino heating process on BBN are reflected in a constant \(N_{\nu}^{\mathrm{eff}}\) [see Eq. (2b)]. After considering \(N_{\nu}^{\mathrm{eff}}\), the neutrino gas can be regarded as expanding adiabatically. In addition, the scalar field that appears during the \(e^{\pm}\) annihilation may slightly change the value of \(N_{\nu}^{\mathrm{eff}}\) as it affects the Hubble expansion rate. In this paper, we ignore this possible modification and would like to leave this issue to the future. \[H^{2}=\frac{8\pi G}{3}(\rho_{\gamma}+\rho_{\nu}+\rho_{e}+\rho_{b}+ \rho_{\phi}), \tag{1}\] where \(H\equiv\dot{a}/a\) is the Hubble parameter, \(\dot{}\equiv\mathrm{d}/\mathrm{d}t\), \(G\) is the Newtonian constant of gravitation, \(\rho_{e}=\rho_{e^{-}}+\rho_{e^{+}}\). 
The mass densities (energy densities/\(c^{2}\)) are given by \[\rho_{\gamma} =\frac{\pi^{2}k_{\mathrm{B}}^{4}T_{\gamma}^{4}}{15\hbar^{3}c^{5}}, \tag{2a}\] \[\rho_{\nu} =\frac{7N_{\nu}^{\mathrm{eff}}\pi^{2}k_{\mathrm{B}}^{4}T_{\nu}^ {4}}{120\hbar^{3}c^{5}},\] (2b) \[\rho_{e} =\frac{k_{\mathrm{B}}^{4}T_{\gamma}^{4}}{\pi^{2}\hbar^{3}c^{5}} \int_{z_{\gamma}}^{\infty}x^{2}(x^{2}-z_{\gamma}^{2})^{1/2}\times\] \[\qquad\qquad\qquad\left(\frac{1}{e^{x-\phi_{e}}+1}+\frac{1}{e^{x+ \phi_{e}}+1}\right)\mathrm{d}x,\] \[=\frac{2m_{e}^{4}c^{3}}{\pi^{2}\hbar^{3}}\sum_{i=1}^{\infty}(-1)^ {i+1}\cosh(i\phi_{e})M(iz_{\gamma}),\] (2c) \[\rho_{b} =n_{b}\sum_{i=1}^{N_{\mathrm{medi}}}Y_{i}(m_{i}+\frac{3m_{e}}{2z_ {\gamma}}),\] (2d) \[\rho_{\phi} =\frac{c^{2}}{8\pi G}\left[\frac{\dot{\phi}^{2}}{2c^{2}}+V(\phi) \right], \tag{2e}\] where \(k_{\mathrm{B}}\) is the Boltzmann constant, \(\hbar\) is the reduced Planck constant, \(T_{\gamma}\) is the plasma temperature, \(T_{\nu}\) is the neutrino temperature, \(N_{\nu}^{\mathrm{eff}}=3.046\) is the effective number of neutrinos [53], \(\infty\) means positive infinity, \(z_{\gamma}\equiv m_{e}c^{2}/(k_{\mathrm{B}}T_{\gamma})\), \(\phi_{e}\equiv\mu_{e}/(k_{\mathrm{B}}T_{\gamma})\), \(m_{e}\) is the mass of electron, \(\mu_{e}\) is the chemical potential of the electron gas, \(M(x)\) is a special function defined in Appendix A, \(m_{i}\) is the mass of nuclide \(i\) (see Table 1), \(Y_{i}\equiv n_{i}/n_{b}\), \(n_{i}\) is the number density of nuclide \(i\), \(n_{b}\) is the total number density of baryons, \(N_{\mathrm{nuclei}}\) is the total species of nuclides, \(V(\phi)\) is the potential of the scalar field (see Sec. II.1.1). The conventions about the scalar field and also \(\rho_{\phi}\) are consistent with [30]. The black-body spectrum is used to derive Eq. (2a). The Fermi-Dirac distribution with zero neutrino mass and chemical potential is used to derive Eq. (2b). This assumption is reasonable as the neutrino mass is much smaller than \(\mathrm{MeV}/c^{2}\) and the zero chemical potential is consistent with current observational constraints [35]. The Fermi-Dirac distribution with \(\mu_{e}+\mu_{e^{+}}=0\), where \(\mu_{e^{+}}\) is the positron chemical potential, is used to derive the first equality of Eq. (2c). Here \(\mu_{e}+\mu_{e^{+}}=0\) corresponds to the chemical equilibrium of the \(e^{-}+e^{+}\leftrightarrow\gamma+\gamma\) reaction. The series expansion in the second equality of Eq. (2c) facilitates numerical calculations. The ideal gas model is used to derive Eq. (2d). The internal degeneracy of particles has been taken into account in the above results. As we discussed before, aside from the gravitational interaction, the thermodynamics of the plasma and neutrinos, and the dynamics of the scalar field are independent of each other. Therefore, energy conservation provides three other evolution equations. 
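Before writing those equations down, we note that Eqs. (1), (2a) and (2b) can be checked numerically with a short standalone script (not part of BBNLab); for brevity, the sketch below keeps only the photon and neutrino terms in the Friedmann equation.

```python
import numpy as np

# Physical constants (SI units)
kB   = 1.380649e-23      # J/K
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m/s
G    = 6.67430e-11       # m^3 kg^-1 s^-2

def rho_gamma(T):
    """Photon mass density, Eq. (2a), in kg/m^3."""
    return np.pi**2 * (kB * T)**4 / (15.0 * hbar**3 * c**5)

def rho_nu(T_nu, N_eff=3.046):
    """Neutrino mass density, Eq. (2b), in kg/m^3."""
    return 7.0 * N_eff * np.pi**2 * (kB * T_nu)**4 / (120.0 * hbar**3 * c**5)

T = 1e10  # K; here we simply take T_gamma = T_nu
rho_r = rho_gamma(T) + rho_nu(T)
# Friedmann equation, Eq. (1), with the e+e-, baryon and scalar-field terms omitted
H = np.sqrt(8.0 * np.pi * G * rho_r / 3.0)
print(f"rho_gamma = {rho_gamma(T):.3e} kg/m^3, rho_nu = {rho_nu(T):.3e} kg/m^3")
print(f"H ~ {H:.3e} s^-1 (expansion timescale ~ {1.0 / H:.2f} s)")
```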
For plasma, we obtain \[\dot{\rho}_{\text{plasma}}+3H\left(\rho_{\text{plasma}}+p_{\text{plasma}}/c^ {2}\right)=0, \tag{3}\] where \(\rho_{\text{plasma}}=\rho_{\gamma}+\rho_{e}+\rho_{b}\), \(p_{\text{plasma}}=p_{\gamma}+p_{e}+p_{b}\), \(p_{e}=p_{e^{-}}+p_{e^{+}}\), and the pressures are given by \[p_{\gamma} =\frac{1}{3}\rho_{\gamma}c^{2}, \tag{4a}\] \[p_{e} =\frac{k_{\text{B}}^{4}T_{\gamma}^{4}}{3\pi^{2}\hbar^{3}c^{3}} \int_{z_{\gamma}}^{\infty}(x^{2}-z_{\gamma}^{2})^{3/2}\times\] \[\qquad\qquad\qquad\qquad\left(\frac{1}{e^{x-\phi_{e}}+1}+\frac{1 }{e^{x+\phi_{e}}+1}\right)\mathrm{d}x,\] \[=\frac{2m_{e}^{4}c^{5}}{\pi^{2}\hbar^{3}z_{\gamma}}\sum_{i=1}^{ \infty}\frac{(-1)^{i+1}}{i}\cosh(i\phi_{e})L(iz_{\gamma}),\] (4b) \[p_{b} =n_{b}k_{\text{B}}T_{\gamma}\sum_{i=1}^{N_{\text{nuclei}}}Y_{i}, \tag{4c}\] where \(L(x)\) is a special function defined in Appendix A. The basic theory and assumptions used to derive the above results are discussed in the previous paragraph. For neutrinos, considering its pressure \(p_{\nu}=\rho_{\nu}c^{2}/3\), we obtain \[\dot{T}_{\nu}+HT_{\nu}=0. \tag{5}\] Integrating the above equation gives \(T_{\nu}\propto a^{-1}\). For the scalar field, the equation of energy conservation is exactly its equation of motion [30] \[\ddot{\phi}+3H\dot{\phi}+c^{2}V^{\prime}=0, \tag{6}\] where \(V^{\prime}\equiv\mathrm{d}V/\mathrm{d}\phi\). There are two other equations that are closely related to baryons. One is the equation of baryon number conservation \[\dot{n}_{b}+3Hn_{b}=0. \tag{7}\] The other is a constraint equation given by the electrical neutrality of the Universe, which can be written as \[\frac{n_{e^{-}}-n_{e^{+}}}{n_{\gamma}}=\eta\sum_{i=1}^{N_{\text{nuclei}}}Z_{i} Y_{i}, \tag{8}\] where \(\eta\equiv n_{b}/n_{\gamma}\), and \(n_{e^{-}}\), \(n_{e^{+}}\), \(n_{\gamma}\) are the number densities of relevant ingredients, \(n_{\gamma}=2\zeta(3)m_{e}^{3}c^{3}/(\pi^{2}\hbar^{3}z_{\gamma}^{3})\) and \[\frac{n_{e^{-}}-n_{e^{+}}}{n_{\gamma}} =\frac{\sinh\phi_{e}}{2\zeta(3)}\int_{z_{\gamma}}^{\infty}\frac{ x(x^{2}-z_{\gamma}^{2})^{1/2}}{\cosh x+\cosh\phi_{e}}\mathrm{d}x\] \[=\frac{z_{\gamma}^{3}}{\zeta(3)}\sum_{i=1}^{\infty}(-1)^{i+1} \sinh(i\phi_{e})L(iz_{\gamma}), \tag{9}\] where \(\zeta(x)\) is the Riemann zeta function and \(\zeta(3)\approx 1.2021\). So far, roughly speaking, the above results determine the thermodynamics of the Universe: There are six equations {Eqs. (1), (3) and (5-8)} and six variables {\(a,T_{\gamma},T_{\nu},\phi_{e},n_{b},\phi\)}. Note that it is unnecessary to find the solution of \(a(t)\) because other equations depend directly on \(H\) rather than \(a\). The rest is the nuclear reaction network which rules the evolution of \(Y_{i}\). The general form of a nuclear reaction can be written as \[N_{i_{1}}(^{A_{i}}Z_{i_{1}})+N_{i_{2}}(^{A_{i_{2}}}Z_{i_{2}})+\cdots+N_{i_{p}}( ^{A_{i_{p}}}Z_{i_{p}})\longleftrightarrow N_{j_{1}}(^{A_{j_{1}}}Z_{j_{1}})+ N_{j_{2}}(^{A_{j_{2}}}Z_{j_{2}})+\cdots+N_{j_{q}}(^{A_{j_{q}}}Z_{j_{q}}), \tag{10}\] where \({}^{A_{i}}Z_{i}\) denotes nuclide \(i\) (see Table 1), \(N_{i}\) is the stoichiometric coefficient. The evolution of \(Y_{i}\) takes the form [35] \[\dot{Y}_{i_{1}}=\sum_{\text{reactions with }i_{1}}N_{i_{1}}\left(\Gamma_{j_{1} \cdots j_{q}\to i_{1}\cdots i_{p}}\frac{Y_{j_{1}}^{N_{j_{1}}}\cdots Y_{j_{q}}^ {N_{j_{q}}}}{N_{j_{1}}!\cdots N_{j_{q}}!}-\Gamma_{i_{1}\cdots i_{p}\to j_{1} \cdots j_{q}}\frac{Y_{i_{1}}^{N_{i_{1}}}\cdots Y_{i_{p}}^{N_{p}}}{N_{i_{1}}! 
\cdots N_{i_{p}}!}\right)\equiv\Gamma_{i_{1}}, \tag{11}\] where the sum includes all reactions involving nuclide \(i_{1}\), \(\Gamma_{i_{1}\cdots i_{p}\to j_{1}\cdots j_{q}}\) and \(\Gamma_{j_{1}\cdots j_{q}\to i_{1}\cdots i_{p}}\) are the reaction rates (see Sec. II.1.2), and the last equality defines \(\Gamma_{i}\). The nuclear reactions considered in this work are listed in Table 2. These reactions are selected from Table 2 in [48] with the criterion that the reactions involve only the nuclides listed in Table 1. As we will see, this nuclear reaction network gives reasonable BBN predictions. The above results form complete BBN evolution equations. However, Eqs. (3) and (8) are implicit and not suitable for direct numerical calculations. Here we convert these two equations into explicit form. Multiplying Eq. (8) by \(\zeta(3)/z_{\gamma}^{3}\), differentiating the result with respect to \(t\), and substituting Eq. (11) into the result to eliminate \(\dot{Y}_{i}\), we obtain \[\kappa_{1}\dot{\phi}_{e}+\kappa_{2}\dot{z}_{\gamma}=\kappa_{3}, \tag{12}\] where the coefficients are given by \[\kappa_{1} =\sum_{i=1}^{\infty}(-1)^{i+1}i\cosh(i\phi_{e})L(iz_{\gamma}), \tag{13a}\] \[\kappa_{2} =\sum_{i=1}^{\infty}(-1)^{i+1}i\sinh(i\phi_{e})L^{\prime}(iz_{ \gamma}),\] (13b) \[\kappa_{3} =\frac{\pi^{2}\hbar^{3}n_{b}}{2m_{e}^{3}c^{3}}\sum_{i=1}^{N_{ \text{nuclei}}}Z_{i}(\Gamma_{i}-3HY_{i}), \tag{13c}\] where \(L^{\prime}(x)\equiv\text{d}L/\text{d}x\) and its explicit expression is given in Appendix A. Expanding \(\dot{\rho}_{\text{plasma}}\) by \(\{\dot{\phi}_{e},\dot{z}_{\gamma},\dot{n}_{b},\dot{Y}_{i}\}\), and substituting Eqs. (7) and (11) to eliminate \(\{\dot{n}_{b},\dot{Y}_{i}\}\), then Eq. (3) can be rewritten as \[\kappa_{4}\dot{\phi}_{e}+\kappa_{5}\dot{z}_{\gamma}=\kappa_{6}, \tag{14}\] where the coefficients are given by \[\kappa_{4} =\sum_{i=1}^{\infty}(-1)^{i+1}i\sinh(i\phi_{e})M(iz_{\gamma}), \tag{15a}\] \[\kappa_{5} =\frac{\pi^{2}\hbar^{3}}{2m_{e}^{4}c^{3}}\left(-\frac{4\rho_{ \gamma}}{z_{\gamma}}-\frac{3m_{e}n_{b}}{2z_{\gamma}^{2}}\sum_{i=1}^{N_{\text{nuclei }}}Y_{i}\right)\] \[\quad+\sum_{i=1}^{\infty}(-1)^{i+1}i\cosh(i\phi_{e})M^{\prime}(iz _{\gamma}),\] (15b) \[\kappa_{6} =\frac{\pi^{2}\hbar^{3}}{2m_{e}^{4}c^{3}}\left[-3H\left(\rho_{ \gamma}+\rho_{e}+\frac{p_{\text{plasma}}}{c^{2}}\right)\right.\] \[\quad\left.-n_{b}\sum_{i=1}^{N_{\text{nuclei}}}\Gamma_{i}\left(m_ {i}+\frac{3m_{e}}{2z_{\gamma}}\right)\right], \tag{15c}\] where \(M^{\prime}(x)\equiv\text{d}M/\text{d}x\) and its explicit expression is given in Appendix A. The combination of Eqs. (12) and (14) gives \[\dot{\phi}_{e} =\frac{\kappa_{3}\kappa_{5}-\kappa_{2}\kappa_{6}}{\kappa_{1} \kappa_{5}-\kappa_{2}\kappa_{4}}, \tag{16}\] \[\dot{z}_{\gamma} =\frac{\kappa_{1}\kappa_{6}-\kappa_{3}\kappa_{4}}{\kappa_{1} \kappa_{5}-\kappa_{2}\kappa_{4}}. \tag{17}\] So far, we can obtain the explicit expressions of the first time-derivative of \(\{z_{\gamma},z_{\nu},\phi_{e},\eta,\phi,\dot{\phi},Y_{i}\}\), where \(z_{\nu}\equiv m_{e}c^{2}/(k_{\text{B}}T_{\nu})\). Here we replace \(\{T_{\nu},n_{b}\}\) with \(\{z_{\nu},\eta\}\) to make the variables dimensionless. In addition, \(\dot{z}_{\nu}=Hz_{\nu}\) and \(\dot{\eta}=3\pi^{2}\hbar^{3}z_{\gamma}^{2}n_{b}(\dot{z}_{\gamma}-Hz_{\gamma})/[ 2\zeta(3)m_{e}^{3}c^{3}]\) can be easily derived from the previous equations. \begin{table} \begin{tabular}{l l|l l l l l l l} No. & Reaction & No. Reaction & No. Reaction & No. Reaction & No. 
Reaction \\ \hline 01 & \(n\leftrightarrow p\) & 09 & \(T+D\leftrightarrow n+{}^{4}\text{He}\) & 17 & \({}^{3}\text{He}+{}^{3}\text{He}\leftrightarrow p+p+{}^{4}\text{He}\) & 25 & \({}^{6}\text{Li}+D\leftrightarrow n+{}^{7}\text{Be}\) \\ 02 & \(p+n\leftrightarrow\gamma+D\) & 10 & \(T+T\leftrightarrow n+n+{}^{4}\text{He}\) & 18 & \({}^{4}\text{He}+D\leftrightarrow\gamma+{}^{6}\text{Li}\) & 26 & \({}^{6}\text{Li}+D\leftrightarrow p+{}^{7}\text{Li}\) \\ 03 & \(D+n\leftrightarrow\gamma+T\) & 11 & \({}^{3}\text{He}+n\leftrightarrow\gamma+{}^{4}\text{He}\) & 19 & \({}^{4}\text{He}+T\leftrightarrow\gamma+{}^{7}\text{Li}\) & 27 & \({}^{7}\text{Li}+p\leftrightarrow{}^{4}\text{He}+{}^{4}\text{He}\) \\ 04 & \(D+p\leftrightarrow\gamma+{}^{3}\text{He}\) & 12 & \({}^{3}\text{He}+n\leftrightarrow p+T\) & 20 & \({}^{4}\text{He}+{}^{3}\text{He}\leftrightarrow\gamma+{}^{7}\text{Be}\) & 28 & \({}^{7}\text{Li}+p\leftrightarrow\gamma+{}^{4}\text{He}+{}^{4}\text{He}\) \\ 05 & \(D+D\leftrightarrow n+{}^{3}\text{He}\) & 13 & \({}^{3}\text{He}+D\leftrightarrow p+{}^{4}\text{He}\) & 21 & \({}^{6}\text{Li}+n\leftrightarrow\gamma+{}^{7}\text{Li}\) & 29 & \({}^{7}\text{Li}+D\leftrightarrow n+{}^{4}\text{He}+{}^{4}\text{He}\) \\ 06 & \(D+D\leftrightarrow p+T\) & 14 & \({}^{3}\text{He}+T\leftrightarrow\gamma+{}^{6}\text{Li}\) & 22 & \({}^{6}\text{Li}+n\leftrightarrow T+{}^{4}\text{He}\) & 30 & \({}^{7}\text{Be}+n\leftrightarrow p+{}^{7}\text{Li}\) \\ 07 & \(T\rightarrow\bar{\nu}_{e}+e^{-}+{}^{3}\text{He}\) & 15 & \({}^{3}\text{He}+T\leftrightarrow D+{}^{4}\text{He}\) & 23 & \({}^{6}\text{Li}+p\leftrightarrow\gamma+{}^{7}\text{Be}\) & 31 & \({}^{7}\text{Be}+n\leftrightarrow{}^{4}\text{He}+{}^{4}\text{He}\) \\ 08 & \(T+p\leftrightarrow\gamma+{}^{4}\text{He}\) & 16 & \({}^{3}\text{He}+T\leftrightarrow n+p+{}^{4}\text{He}\) & 24 & \({}^{6}\text{Li}+p\leftrightarrow{}^{3}\text{He}+{}^{4}\text{He}\) & 32 & \({}^{7}\text{Be}+D\leftrightarrow p+{}^{4}\text{He}+{}^{4}\text{He}\) \\ \end{tabular} \end{table} Table 2: The nuclear reaction network implemented in BBNLab. The \(n\leftrightarrow p\) reaction includes three weak reactions (see Eq. (37) in [55]). \begin{table} \begin{tabular}{l l l l l l l l l l} No. & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \hline Nuclide & \(n\) & \(p\) & \(D\) & \(T\) & \({}^{3}\text{He}\) & \({}^{4}\text{He}\) & \({}^{6}\text{Li}\) & \({}^{7}\text{Li}\) & \({}^{7}\text{Be}\) \\ \(\Delta m_{i}\) & 8.0713 & 7.2890 & 13.1357 & 14.9498 & 14.9312 & 2.4249 & 14.0869 & 14.9071 & 15.7690 \\ \(A_{i}\) & 1 & 1 & 2 & 3 & 3 & 4 & 6 & 7 & 7 \\ \(Z_{i}\) & 0 & 1 & 1 & 1 & 2 & 2 & 3 & 3 & 4 \\ \(s_{i}\) & 1/2 & 1/2 & 1 & 1/2 & 1/2 & 0 & 1 & 3/2 & 3/2 \\ \end{tabular} \end{table} Table 1: Nuclides considered in the nuclear network, together with mass excess \(\Delta m_{i}\), mass number \(A_{i}\), proton number \(Z_{i}\) and spin \(s_{i}\). The data comes from [54]. Mass excess is given in MeV. Note that the nuclide mass \(m_{i}=A_{i}m_{u}+\Delta m_{i}-Z_{i}m_{e}\), where \(m_{u}\) is the atomic mass constant. BBNLab adopts the variables \(\{z_{\gamma},z_{\nu},\phi_{e},\eta,\phi,\dot{\phi},Y_{i}\}\) in its numerical calculations. Furthermore, we change the "time" variable from \(t\) to the \(e\)-folding number \(N\), where \(N\equiv\ln(a/a_{1})\), \(a_{1}\) is the scale factor at one specific time and its absolute value is meaningless. The transformation of the evolution equations directly follows \(\mathrm{d}X/\mathrm{d}N=\dot{X}/H\). Hereafter we use the subscript "ini" to denote the beginning of BBN. 
If we set the initial temperature to \(10^{12}\,\mathrm{K}\), then \(N\in[N_{\mathrm{ini}},N_{\mathrm{ini}}+10]\) roughly corresponds to \(10^{12}\,\mathrm{K}\leq T_{\gamma}\lesssim 10^{8}\,\mathrm{K}\). The exact value of \(N_{\mathrm{ini}}\) have no physical effect as the BBN evolution equations do not depend on \(N\) explicitly. The reason for keeping \(N_{\mathrm{ini}}\) here is that it is an auxiliary value used to calculate the initial conditions of the stepwise scalar field (see Sec. II.2). For clarity, we set \(N_{\mathrm{ini}}=0\) for the standard model and the exponential scalar field model (see Sec. II.1.1). The BBN evolution equations are stiff. In BBNLab, we use the Matlab/ode23s solver with suitable option structure to integrate this stiff system. We set the options as odeset( 'RelTol',1E-10,'AbsTol',1E-10,'MaxStep',1E-3) for Fig. 1, and odeset('RelTol',1E-10,'AbsTol', 1E-10,'MaxStep',2E-4) for Fig. 3 and Fig. 4. Details about ode23s can be found in [https://www.mathworks.com/help/matlab/ref/ode23s.html](https://www.mathworks.com/help/matlab/ref/ode23s.html). For all series expansions involved in our numerical calculations, e.g., Eq. (2c), we adopt the results up to the 20th order. #### ii.2.1 Models of the scalar field The goal of this paper is to study BBN in the framework of the stepwise scalar field model proposed in [30]. As a comparison, we will also present the BBN results for the standard model, in which no scalar fields exist, and the exponential scalar field model [56; 57]. Note that the stepwise scalar field model is inspired by the exponential scalar field model and these two models share some similar features, e.g., the scaling property [31]. Discussions on the exponential scalar field model may shed light on what kind of the stepwise scalar field model can survive from the BBN constraints. In our conventions [30], the exponential potential reads \[V(\phi)=V_{0}\exp(-\lambda\phi), \tag{18}\] where \(\lambda\) is a dimensionless constant, \(V_{0}\) is a constant in unit of \(\mathrm{length}^{-2}\), \(\phi\) is dimensionless. Scaling solution is a main mathematical property of this model [57]. In the radiation-dominated Universe, the scaling solution requires \(\lambda>2\). Furthermore, previous discussions on BBN require \(\lambda\gtrsim 9\)[36]. The potential of the stepwise scalar field model is written as [30] \[V(\phi)=V_{0}\exp\left[-\frac{\lambda_{1}+\lambda_{2}}{2}\phi-\frac{\alpha( \lambda_{1}-\lambda_{2})}{2}\sin\frac{\phi}{\alpha}\right], \tag{19}\] where \(\lambda_{1}\), \(\lambda_{2}\) and \(\alpha\) are dimensionless constants, \(V_{0}\) and \(\phi\) follow the previous conventions. Period-doubling bifurcation together with the oscillating scaling and chaotic accelerating solutions are the main mathematical properties of this model [31]. In order to explain the cosmic late-time acceleration and solve the accompanying coincidence problem, we require \(\{\lambda_{1}+\lambda_{2}>4,\,0<\lambda_{2}\lesssim 0.4,\,\alpha=\mathcal{O}(1)\}\)[30]. Furthermore, we require \(\lambda_{2}=\mathcal{O}(10^{-4})\) to unify inflation [32]. #### ii.2.2 Nuclear reaction rates For clarity, we adopt the conventions that the forward direction of the reactions listed in Table 2 and Eq. (10) is taken as left to right. 
For the \(n\leftrightarrow p\) reaction, weak interaction theory gives [55; 58] \[\Gamma_{n\to p} =K\int_{1}^{\infty}\frac{x(x+q)^{2}\sqrt{x^{2}-1}}{(1+e^{xz_{ \gamma}})\left[1+e^{-(x+q)z_{\nu}}\right]}\mathrm{d}x\] \[\quad+K\int_{1}^{\infty}\frac{x(x-q)^{2}\sqrt{x^{2}-1}}{(1+e^{-xz _{\gamma}})\left[1+e^{(x-q)z_{\nu}}\right]}\mathrm{d}x, \tag{20a}\] \[\Gamma_{p\to n} =K\int_{1}^{\infty}\frac{x(x-q)^{2}\sqrt{x^{2}-1}}{(1+e^{xz_{ \gamma}})\left[1+e^{-(x-q)z_{\nu}}\right]}\mathrm{d}x\] \[\quad+K\int_{1}^{\infty}\frac{x(x+q)^{2}\sqrt{x^{2}-1}}{(1+e^{- xz_{\gamma}})\left[1+e^{(x+q)z_{\nu}}\right]}\mathrm{d}x, \tag{20b}\] where \(q=(m_{n}-m_{p})/m_{e}\approx 2.5310\), \(m_{n}\) and \(m_{p}\) are the neutron and proton mass respectively, \(K=6.9503\times 10^{-4}\,\mathrm{s}^{-1}\) is the normalization constant computed by requiring \(\Gamma_{n\to p}=1/\tau_{n}\) at low temperatures, \(\tau_{n}=879.4\,\mathrm{s}\) is the neutron lifetime [59]. The Fermi correction discussed in [58] is ignored here. At high temperatures, we have \(z_{\nu}=z_{\gamma}\) so that \(\Gamma_{n\to p}=\Gamma_{p\to n}e^{gz_{\gamma}}\), which is consistent with the equilibrium abundances (see Eq. (25)). For other reactions, generally, the forward thermonuclear reaction rates in the CGS units are given in the literature [35]. The forward reaction rates read [35] \[\Gamma_{i_{1}\cdots i_{p}\to j_{1}\cdots j_{q}}=\left[\frac{n_{b}m_{u}}{ \mathrm{g/cm^{3}}}\right]^{(N_{i_{1}}+\cdots+N_{i_{p}})-1}\cdot R_{\mathrm{t}} \cdot[\mathrm{s}^{-1}], \tag{21}\] where \(R_{\mathrm{t}}\) is the numerical value of the thermonuclear reaction rate in CGS units. The \(R_{\mathrm{t}}\) actually used in the present calculation and relevant references are presented in the code. Most of the reaction rates we use are consistent with those used in PRIMAT (Version 0.2.0) [39]. Especially, as did in [39], we adopt the most recent measurements about the \(D+p\to\gamma+{}^{3}\mathrm{He}\) reaction [60]. The deuterium abundance is sensitive to this reaction [39; 40] and is the core in our discussions. The last term in Eq. (21) is the physical unit, i.e., the unit of \(\Gamma_{i_{1}\cdots i_{p}\to j_{1}\cdots j_{q}}\). The inverse reaction rates read [35] \[\Gamma_{j_{1}\cdots j_{q}\to i_{1}\cdots i_{p}}=\left[\frac{n_{b}m_{u}}{ \mathrm{g/cm^{3}}}\right]^{(N_{j_{1}}+\cdots+N_{j_{q}})-(N_{i_{1}}+\cdots+N_{i_ {p}})}\times\] \[\frac{\gamma_{j_{1}\cdots j_{q}\to i_{1}\cdots i_{p}}}{\gamma_{i_{1}\cdots i_{p} \to j_{1}\cdots j_{q}}}\times\Gamma_{i_{1}\cdots i_{p}\to j_{1}\cdots j_{q}}, \tag{22}\] where the dimensionless ratio \[\frac{\gamma_{j_{1}\cdots j_{q}\to i_{1}\cdots i_{p}}}{\gamma_{i_{1} \cdots i_{p}\to j_{1}\cdots j_{q}}}=\beta_{1}\left(\frac{T_{\gamma}}{10^{9}\, \mathrm{K}}\right)^{\beta_{2}}\exp\left(\frac{\beta_{3}\cdot 10^{9}\,\mathrm{K}}{T_{ \gamma}}\right), \tag{23}\] and the constant \(\beta_{i}\) is given in our code. Note that each nuclear reaction corresponds to a set of \(\beta_{i}\). Uncertainties [61] of the reaction rates are not included in the present calculation. More nuclear reactions and the uncertainties will be considered in the future. ### Initial conditions Considering the neutrino-plasma thermal equilibrium, we set the initial temperatures \(T_{\gamma,\mathrm{ini}}=T_{\nu,\mathrm{ini}}=10^{12}\,\mathrm{K}\) in BBNLab. For the baryon-to-photon number ratio, we set \(\eta_{\mathrm{ini}}\in 11/4\times[10^{-10},10^{-9}]\) so that \(\eta\in[10^{-10},10^{-9}]\) at the ending of BBN. 
Here the coefficient \(11/4\) is due to the entropy transfer from \(e^{\pm}\) pairs to photons [35]. The initial high temperature makes all nuclides in statistical equilibrium, which includes thermal and chemical equilibrium. The initial number density of nuclide \(i\) reads \[n_{i,\mathrm{ini}}=\frac{g_{i}}{\hbar^{3}}\left(\frac{m_{i}k_{\mathrm{B}}T_{ \gamma,\mathrm{ini}}}{2\pi}\right)^{3/2}\exp\left(\frac{\mu_{i,\mathrm{ini}}- m_{i}c^{2}}{k_{\mathrm{B}}T_{\gamma,\mathrm{ini}}}\right), \tag{24}\] where \(g_{i}=2s_{i}+1\) is the spin degeneracy, \(\mu_{i,\mathrm{ini}}\) is the initial chemical potential. The chemical potential of leptons is negligible compared to that of baryons. Therefore we obtain \(\mu_{n,\mathrm{ini}}=\mu_{p,\mathrm{ini}}\) based on the \(n\leftrightarrow p\) reaction. Applying Eq. (24) to neutrons and protons, we obtain \[\frac{n_{n,\mathrm{ini}}}{n_{p,\mathrm{ini}}}=\left(\frac{m_{n}}{m_{p}}\right) ^{3/2}\exp\left[\frac{(m_{p}-m_{n})c^{2}}{k_{\mathrm{B}}T_{\gamma,\mathrm{ini} }}\right]. \tag{25}\] Initially, neutrons and protons are much more numerous than other nucleons. Therefore, it is reasonable to adopt \(n_{n,\mathrm{ini}}+n_{p,\mathrm{ini}}=n_{b,\mathrm{ini}}\), which in turn gives \[Y_{n,\mathrm{ini}} =1/(1+n_{p,\mathrm{ini}}/n_{n,\mathrm{ini}}), \tag{26a}\] \[Y_{p,\mathrm{ini}} =1-Y_{n,\mathrm{ini}}. \tag{26b}\] For other nuclides, we set \(Y_{i,\mathrm{ini}}=0\) in BBNLab. Note that we turn off all nuclear reactions except the \(n\leftrightarrow p\) reaction until \(T_{\gamma}=10^{10}\,\mathrm{K}\). Once the full nuclear reaction network is turned on, the heavier nuclides reach their equilibrium abundances very quickly (see Fig. 1 for an illustration). Therefore, it is reasonable to set \(Y_{i,\mathrm{ini}}=0\) for the heavier nuclides and the final results is independent of this setting. This strategy is also adopted by PRIMAT[35]. The initial value of \(\phi_{e}\) can be obtained by numerically solving Eq. (8). Here we present a high temperature approximate solution. The right side of Eq. (8) is much less than \(1\) as \(\eta\ll 1\). For its left side, i.e., Eq. (9), one can verify that \(\lim_{z_{\gamma}\to 0}z_{\gamma}^{3}L(iz_{\gamma})=2/i^{3}=\mathcal{O}(1)\), where \(z_{\gamma}\to 0\) corresponds to the high temperature approximation. Therefore, Eq. (8) indicates \(\sinh(i\phi_{e})=\mathcal{O}(\eta)\ll 1\) at the beginning of BBN. Substituting the Taylor expansion \(\sinh x=x\) into Eq. (8), we obtain \[\phi_{e,\mathrm{ini}}=\frac{\zeta(3)\eta_{\mathrm{ini}}\sum_{i=1}^{N_{\mathrm{ nucl}}}Z_{i}Y_{i,\mathrm{ini}}}{z_{\gamma,\mathrm{ini}}^{3}\sum_{i=1}^{\infty}(-1)^{i+1} iL(iz_{\gamma,\mathrm{ini}})}. \tag{27}\] This result is consistent with Eq. (20) in [51]. The initial conditions of the scalar field, i.e., in principle, the values of \(\{\phi_{\mathrm{ini}},\dot{\phi}_{\mathrm{ini}},V_{0}\}\), are calculated as follows. For the exponential scalar field model, we only consider the scaling case and assume that the scalar field reaches the scaling attractor at the beginning of BBN. Without loss of generality, we can set \(\phi_{\mathrm{ini}}=0\) and use \(V_{0}\) to adjust the potential energy. Then the initial conditions can be calculated as follows 1. \(x_{1,\mathrm{ini}}=4/(\sqrt{6}\lambda)\), \(x_{2,\mathrm{ini}}=2/(\sqrt{3}\lambda)\), 2. 
\(\Omega_{\phi,\mathrm{ini}}=4/\lambda^{2}\), \(\rho_{\mathrm{matter,ini}}=\rho_{\mathrm{plasma,ini}}+\rho_{\nu,\mathrm{ini}}\), \(\rho_{\mathrm{total,ini}}=\rho_{\mathrm{matter,ini}}/(1-\Omega_{\phi,\mathrm{ini}})\), \(H_{\mathrm{ini}}=(8\pi G\rho_{\mathrm{total,ini}}/3)^{1/2}\); 3. \(\dot{\phi}_{\mathrm{ini}}=\sqrt{6}H_{\mathrm{ini}}x_{1,\mathrm{ini}}\), \(V_{0}=3(H_{\mathrm{ini}}x_{2,\mathrm{ini}}/c)^{2}\). Here \(x_{1}\) and \(x_{2}\) are the dimensionless variables defined in the dynamical system analysis (see Eq. (7a) in [30]), their initial values are given by the scaling solution, and \(\Omega_{\phi}\) is the relative energy density of the scalar field. For the stepwise scalar field model, instead of the scaling solution, we use the numerical solution of the dynamical system to obtain the values of \(x_{1}\) and \(x_{2}\) at the beginning of BBN. The dynamical system is given by Eq. (8) with \(w_{\mathrm{m}}=1/3\) in [30] and we do not repeat the equations and conventions here. The other subsequent steps are similar to the exponential case. One difference, however, is that the value of \(\phi_{\mathrm{ini}}\) is no longer fixed. Without loss of generality, we assume \(0\leq\phi_{\mathrm{ini}}\leq 2\alpha\pi\) and use \(V_{0}\) to adjust the potential energy. The details are as follows: 1. Integrating the dynamical system in \(N\in[0,N_{\mathrm{ini}}]\) gives the values of \(\{x_{1,\mathrm{ini}}\), \(x_{2,\mathrm{ini}}\), \(\lambda_{\mathrm{ini}}\), \(\nu_{\mathrm{ini}}\}\), i.e., the values of \(\{x_{1},x_{2},\lambda,\nu\}\) at \(N=N_{\mathrm{ini}}\) (see Fig. 2 for an illustration). In this step, we set \(\{x_{1}=0.75,x_{2}=0.5,\lambda=\lambda_{2}+2,\nu=\nu_{+}(\lambda)\}\) at \(N=0\) for all following calculations. Compared to setting the values at \(N=N_{\mathrm{ini}}\) directly, this step allows the system to enter the possible attractor solution and makes the desired values more natural. Note that \(\lambda\) is a variable in the stepwise scalar field model, while it is a constant in the exponential scalar field model; \(N_{\mathrm{ini}}\) is an auxiliary parameter and its absolute value has no physical meaning (see also the discussions in Sec. II.1). 2. \(\Omega_{\phi,\rm ini}=x_{1,\rm ini}^{2}+x_{2,\rm ini}^{2}\), \(\rho_{\rm matter,ini}=\rho_{\rm plasma,ini}+\rho_{\nu,\rm ini}\), \(\rho_{\rm total,ini}=\rho_{\rm matter,ini}/(1-\Omega_{\phi,\rm ini})\), \(H_{\rm ini}=(8\pi G\rho_{\rm total,ini}/3)^{1/2}\); 3. \(\phi_{\rm ini}=\alpha\arccos\bigg[\frac{2}{\lambda_{1}-\lambda_{2}}\cdot\bigg(\lambda_{\rm ini}-\frac{\lambda_{1}+\lambda_{2}}{2}\bigg)\bigg]\), and if \(\nu_{\rm ini}>0\), then \(\phi_{\rm ini}=2\alpha\pi-\phi_{\rm ini}\); 4. \(\dot{\phi}_{\rm ini}=\sqrt{6}H_{\rm ini}x_{1,\rm ini}\), \(V_{\rm ini}=3(H_{\rm ini}x_{2,\rm ini}/c)^{2}\), \(V_{0}=V_{\rm ini}\exp\bigg[\frac{\lambda_{1}+\lambda_{2}}{2}\phi_{\rm ini}+\frac{\alpha(\lambda_{1}-\lambda_{2})}{2}\sin\frac{\phi_{\rm ini}}{\alpha}\bigg]\). Here the third step gives the solution of Eqs. (7b) and (7c) in [30]. Note that the function \(\arccos\) returns values in the interval \([0,\pi]\), and thus the final result satisfies our initial assumption \(0\leq\phi_{\rm ini}\leq 2\alpha\pi\). Considering the initial temperature, the above procedure gives \(V_{0}\sim G(100\,\rm MeV)^{4}/h^{3}c^{7}\sim 10^{-80}l_{\rm P}^{-2}\), where \(l_{\rm P}\) is the Planck length. This result is much smaller than that given in [32]. However, there is no inconsistency in the model.
The reason is that there is a degeneracy between the absolute values of \(V_{0}\) and \(\phi_{\rm ini}\). Because the function \(\sin\) is periodic, increasing \(\phi_{\rm ini}\) by several periods together with a correspondingly rescaled \(V_{0}\) does not affect the potential and dynamics of the scalar field. In [32], we assumed \(\phi\approx\alpha\pi\) at the beginning of inflation and obtained \(V_{0}\sim 10^{-14}l_{\rm P}^{-2}\). If we want to restore those results, all we need to do is increase the above obtained \(\phi_{\rm ini}\) by several periods so that \(\exp[-(\lambda_{1}+\lambda_{2})\phi_{\rm ini}/2]\sim 10^{-66}\). The BBN results presented in this paper are independent of these settings.

## III Numerical results

To compare the theoretical predictions and observations, we consider \(Y_{\rm P}\equiv 4\,Y_{\rm 4\,He}\) (the helium-4 abundance by mass), D/H, \({}^{3}\)He/H and \({}^{7}\)Li/H (the abundance of the isotope relative to hydrogen, with \(i/{\rm H}\equiv Y_{i}/Y_{p}\)). The observational results are \(Y_{\rm P}=0.2453\pm 0.0034\) [62], \({\rm D/H}=(2.527\pm 0.030)\times 10^{-5}\) [63], \({}^{3}{\rm He/H}<(1.1\pm 0.2)\times 10^{-5}\) [64], and \({}^{7}{\rm Li/H}=(1.58^{+0.35}_{-0.28})\times 10^{-10}\) [65]. The \({}^{3}\)He result is given by its abundance in our Galaxy and is adopted as an upper limit on the primordial abundance because the post-BBN evolution of \({}^{3}\)He is unclear [35; 64]. As stated in Sec. II.2, we compute the abundances for a given scalar field model over the range \(\eta_{\rm today}\in[10^{-10},10^{-9}]\), which covers the _Planck_ 2018 result \(\eta_{\rm today}=(6.1374\pm 0.0383)\times 10^{-10}\) [66; 39]. Here the subscript "today" means the present epoch or, equivalently, the end of BBN, since \(\eta\) no longer changes afterwards. In order to check the correctness of our code, we present the equilibrium abundances. Omitting the subscript "ini" in Eq. (26) gives the equilibrium abundances of the neutron and proton. This result is valid at \(T_{\gamma}\sim 10^{12}\,\rm K\). For the other nuclides, the equilibrium abundances are given by [35] \[Y_{i,\rm eq}=g_{i}2^{(3A_{i}-5)/2}\pi^{(1-A_{i})/2}\zeta(3)^{A_{i}-1}c^{3(1-A_{i})}\eta^{A_{i}-1}\times\\ Y_{p}^{Z_{i}}Y_{n}^{A_{i}-Z_{i}}\left[\frac{m_{i}(k_{\rm B}T_{\gamma})^{A_{i}-1}}{m_{p}^{Z_{i}}m_{n}^{A_{i}-Z_{i}}}\right]^{3/2}\exp\left(\frac{B_{i}}{k_{\rm B}T_{\gamma}}\right), \tag{28}\] where the binding energy \(B_{i}\equiv[Z_{i}m_{p}+(A_{i}-Z_{i})m_{n}-m_{i}]\times c^{2}\), and the relevant values are given in Table 1. If the temperature is low enough that the neutrons and protons are not in thermal equilibrium, e.g., \(T_{\gamma}\sim 10^{10}\,\rm K\), then \(Y_{i,\rm eq}\) should be calculated based on the exact values of \(Y_{n}\) and \(Y_{p}\), rather than their equilibrium values.

### Exponential scalar field

Figure 1 presents the main BBN results for the exponential scalar field model. The predicted abundances as a function of \(\eta_{\rm today}\) and the observed results are plotted in the left part. The case of \(\lambda=\infty\) corresponds to the standard BBN model, for which \(\Omega_{\phi}=0\). Compared to PRIMAT, our code gives a slightly lower \({}^{4}\)He abundance, and consistent abundances of D, \({}^{3}\)He and \({}^{7}\)Li. The lower \(Y_{\rm P}\) should be due to the incomplete nuclear reaction network. However, the difference is small relative to the observational uncertainty and is therefore negligible.
Importantly, our code recovers the deuterium problem, which was first pointed out in [39] and states that the predicted D/H abundance is lower than the observational constraints. Using a scalar field to solve the deuterium problem is central to the following discussions. Decreasing \(\lambda\), that is, increasing \(\Omega_{\phi}\) during BBN, can significantly increase the abundances of \({}^{4}\)He and D. In particular, we can solve the deuterium problem with \(\lambda=10\). A potential disadvantage here is that the predicted abundance of \({}^{4}\)He may exceed its observed value if we consider the full PRIMAT nuclear reaction network. The case of \(\lambda=8\) gives too large \(Y_{\rm P}\) and D/H and is ruled out by the observations. This is consistent with the result given by [36] (\(\lambda>9\) at \(2\sigma\) C.L.). The abundances of \({}^{3}\)He and \({}^{7}\)Li remain almost unchanged when \(\lambda\) changes from infinity to 8. Consistent with the conventional view [37], our results show that the exponential scalar field cannot solve the primordial lithium problem [38]. The case of \(\lambda=10\) deserves more discussion as it can solve the deuterium problem. The evolution of the isotopes as a function of \(T_{\gamma}\) and their equilibrium abundances are depicted in the right part of Fig. 1. As the Universe expands (the temperature decreases), the neutron and proton abundances follow their equilibrium values until \(T_{\gamma}\approx 2\times 10^{10}\,\rm K\). Heavier nuclei are taken into account when \(T_{\gamma}=10^{10}\,\rm K\) and their evolution follows the equilibrium abundances until \(T_{\gamma}\approx 6\times 10^{9}\,\rm K\). Therefore, it is reasonable for BBNLab to consider heavier nuclei starting from \(T_{\gamma}=10^{10}\,\rm K\). We also plot the evolution of \(\Omega_{\phi}\) and the equation of state (EOS) \(w_{\rm matter}\). During the BBN era, \(\Omega_{\phi}\) remains almost constant (4%), which corresponds to the scaling solution [57]. Small fluctuations in \(\Omega_{\phi}\) are due to the fluctuations in \(w_{\rm matter}\), which in turn result from the \(e^{\pm}\) annihilation. Combining the left and right parts of Fig. 1, we conclude that if the relative scalar field energy density reaches 4%, then it can produce observable effects on the final BBN predictions. During BBN, such a scalar field can affect the out-of-equilibrium process of neutrons and protons around \(T_{\gamma}\approx 2\times 10^{10}\,\mathrm{K}\), and the heavier-nuclei evolution when \(T_{\gamma}\lesssim 6\times 10^{9}\,\mathrm{K}\).

### Stepwise scalar field

In the stepwise scalar field model, our previous work established the following unified cosmic evolution scenario: inflation [32], a deflationary phase with a very stiff EOS [32], a radiation era with the oscillating scaling solution (OSS) [31], and a matter era with the chaotic accelerating solution (CAS) [30; 31]. The OSS during the radiation era can attenuate the sensitive dependence of the late-time cosmic evolution on the early initial conditions and thus facilitate the cosmological parameter constraints [31]. This is a pragmatic choice rather than an observational confirmation. In principle, the Universe can also evolve as a CAS during the radiation era. The BBN discussions may shed light on the real type of cosmic evolution during the radiation era.

Figure 1: Left: Dependence of \(\{\mathrm{Y_{P},D/H,\,{}^{3}He/H,\,{}^{7}Li/H}\}\) on \(\eta_{\rm today}\) for the exponential scalar field model (\(\lambda=8,10,16,\infty\)) and observational constraints.
Due to radioactive decay, the T and \({}^{7}\)Be abundances have been added to those of \({}^{3}\)He and \({}^{7}\)Li, respectively. Horizontal shaded areas represent the observational abundances, while vertical shaded areas represent the _Planck_ 2018 result on \(\eta_{\rm today}\) (see the main text for the values). Right: Evolution of the element abundances, \(\Omega_{\phi}\) and \(w_{\rm matter}\) for \(\lambda=10\) and \(\eta_{\rm today}=6.1374\times 10^{-10}\). Dashed curves represent the equilibrium abundances. The fluctuations in the \(\Omega_{\phi}\) and \(w_{\rm matter}\) evolutions are caused by the \(e^{\pm}\) annihilation.

We start from the OSS. Roughly speaking, Sec. III.1 shows that \(\Omega_{\phi}\approx 4\%\) is a critical point for the BBN constraints: a higher \(\Omega_{\phi}\) would be excluded by observations, while a lower \(\Omega_{\phi}\) would have no observable effect. If we expect the OSS in the radiation era, then we require \(\lambda_{1}\gtrsim 20\) because \(\Omega_{\phi}\approx 16/(\lambda_{1}+\lambda_{2})^{2}\) [30] and \(\lambda_{2}\ll 1\) [32]. In particular, \(\lambda_{1}\approx 20\) may be used to solve the deuterium problem, as the exponential scalar field did. In Fig. 2 (a), we plot the evolution of \(\Omega_{\phi}(N)\) in the radiation and matter epochs for \(\lambda_{1}=20\). Other parameter settings and calculation details can be found in the caption. The inset Fourier plots show that, under these parameter settings, the Universe evolves as an OSS in the radiation era, and as a CAS in the matter era. In the radiation era, \(\Omega_{\phi}\approx 4\%\) as we expected. However, with this parameter setting it is difficult to explain the cosmic late-time acceleration in the matter era. An apparent difficulty is that \(\Omega_{\phi}\) cannot reach \(70\%\), i.e., the currently observed dark energy relative density [66]. Increasing \(\lambda_{1}\) will exacerbate this problem. Increasing \(\alpha\) can increase the maximum value of \(\Omega_{\phi}\) in the matter era, but it also makes the cosmic evolution in the radiation era enter the CAS3. Therefore, if \(\lambda_{1}\gtrsim 20\), then the OSS in the radiation era may be incompatible with the desired acceleration in the matter era. Footnote 3: With the current parameter settings, the \(f_{\rm os}/2\) component already appears in the radiation era [see the left inset in Fig. 2 (a)]. Further parameter changes will cause the system to quickly enter the chaotic phase. What we need is a small \(\Omega_{\phi}\) during BBN. To achieve this, in addition to increasing \(\lambda_{1}\), we can also adjust the evolution of the Universe in the radiation era to a CAS. Note that \(\Omega_{\phi}\) can be very close to \(0\) and \(1\) in a CAS, but not in an OSS [31]. Technically, decreasing \(\lambda_{1}\) or increasing \(\alpha\) can bring the system into the CAS phase. In Fig. 2 (b), we plot such a scenario. A key setting here is \(\lambda_{1}=10\); the other settings can be found in the caption. The model might survive the BBN constraints if \(\Omega_{\phi}\lesssim 4\%\) during the main BBN era. In addition, this parameter setting should also be able to explain the cosmic late-time acceleration (mainly \(\Omega_{\phi}\approx 70\%\) and the EOS \(w_{\phi}\approx-1\)), as shown in the right part of Fig. 2 (b) and the discussions in [30; 31].
Note that, in the matter era of Fig. 2 (b), we emphasize the existence of the \(\Omega_{\phi}=70\%\) points, rather than requiring this value to occur exactly today, i.e., at \(N=N_{\rm eq}+8.13\) [31]. Once such a point exists, we can adjust the initial conditions so that it occurs at the present epoch, and this does not require fine-tuning [31]. For the same parameter settings as Fig. 2 (b), we perform detailed BBN calculations with BBNLab. The results are shown in Figs. 3 and 4. In order to set the initial conditions of the scalar field at \(T_{\gamma}=10^{12}\,\)K (see the discussions in Sec. II), we need an auxiliary parameter \(N_{\rm ini}\), which is labeled in the figures. The colored shaded region in Fig. 2 (b) roughly corresponds to the BBN era of the \(N_{\rm ini}=70.2\) case in Fig. 3. Note that this correspondence is not strict, since \(w_{\rm matter}\) (i.e., \(w_{\rm m}\) in [31]) remains almost constant in the former case, while \(w_{\rm matter}\) is clearly time-varying in the latter case. Looking at the \(Y_{\rm P}\) and D/H results in Figs. 3 and 4, we conclude that viable parameter space does exist in the stepwise scalar field model to explain the BBN observations. In particular, the case of \(N_{\rm ini}=70.2\) in Fig. 3 or \(N_{\rm ini}=78.5\) in Fig. 4 could be used to solve the possible deuterium problem [39]. For the cases of \(N_{\rm ini}=70.2\) and \(N_{\rm ini}=78.5\), the evolutions of \(\{Y_{i},\Omega_{\phi},w_{\rm matter}\}\) as a function of \(T_{\gamma}\) are depicted in the corresponding right parts. One difference between the two cases is worth mentioning: the \(Y_{\rm P}\) predictions differ. Compared with the standard model, the case of \(N_{\rm ini}=70.2\) does not change \(Y_{\rm P}\), while the case of \(N_{\rm ini}=78.5\) increases \(Y_{\rm P}\). This difference should be due to the different values of \(\Omega_{\phi}\) when protons and neutrons fall out of equilibrium (see the second vertical dash-dotted line in the right parts of the figures). If \(\Omega_{\phi}\) drops below \(4\%\) before protons and neutrons fall out of equilibrium, then the scalar field has no observable effect on the isotope evolution during this period. In the case of \(N_{\rm ini}=70.2\), the scalar field affects the isotope evolution only near the end of BBN, and only has an observable effect on the deuterium abundance. This discussion also provides a simple mechanism for solving the deuterium problem: very early dark energy appears near the end of BBN. Essentially, this mechanism is the core of solving the deuterium problem with the stepwise scalar field. Finally, we admit that no solution to the lithium problem was found in the stepwise scalar field model.

## IV Conclusions

The stepwise scalar field model was constructed to solve the cosmological coincidence problem [20; 30]. A key idea of this model is that the cosmic expansion has been dominated by the scalar field many times, which indicates that a non-negligible scalar field could have arisen at any stage in the early Universe. If it appears during the BBN era, then the abundances of the light elements will be affected due to the increase of the Hubble expansion rate. In this paper, we analyze the BBN consequences of the stepwise scalar field proposed in [30]. We provide a public Matlab program, BBNLab, which enables the BBN calculation in the stepwise scalar field model. We find that BBN provides a strong upper limit (\(\sim 4\%\)) on a nearly constant \(\Omega_{\phi}\) (see Fig. 1).
However, in the stepwise scalar field model, \(\Omega_{\phi}\) can vary over several orders of magnitude, and can remain less than \(4\%\) over a wide period of time. Based on this, we show examples where the stepwise scalar field survives the BBN constraints. The scalar field can be completely hidden in the BBN era, which restores the standard BBN results (see the case of \(N_{\rm ini}=70.0\) in Fig. 3 or \(N_{\rm ini}=78.2\) in Fig. 4). A more interesting example is that very early dark energy appearing near the end of BBN can be used to solve the possible deuterium problem [39] (see the case of \(N_{\rm ini}=70.2\) in Fig. 3). In the stepwise scalar field model, the evolution of the Universe can be divided into two categories: OSS and CAS [31]. In order to facilitate the cosmological parameter constraints, we argued that the Universe should evolve as an OSS in the radiation era [31]. This is a subjective argument, not a result of observational constraints. However, the above 4% upper limit, together with the constraints from the cosmic late-time acceleration, rules out the possibility of the OSS in the radiation era. The examples provided in this paper that not only survive the BBN constraints but also explain the cosmic late-time acceleration all belong to the category of the CAS. Therefore, we conclude that, in the stepwise scalar field model, the Universe should evolve as a CAS in the radiation era. This does not rule out the model, but it brings difficulties to the global cosmological parameter constraints: there may be many local maxima in the posterior distribution [30]. How to quantitatively describe such a posterior distribution is a major task for our future work. What kinds of observations could verify or rule out the CAS scenario for the radiation era? Given the above-discussed difficulty of the cosmological parameter constraints, it is important to look for direct observations. Electromagnetic observations seem impossible due to the scattering of photons off free electrons. However, gravitational wave and dark matter observations might be possible. The stochastic gravitational wave background can be used to infer the cosmic expansion history before BBN [67; 68; 69]. A possible axion detection would provide an auxiliary probe of the early Universe [70]. Future multimessenger astronomy can directly probe the early Universe and test the CAS scenario in our model.

## Acknowledgements

This work was supported by the Initiative Postdocs Supporting Program under Grant No. BX20200065 and the China Postdoctoral Science Foundation under Grant No. 2021M700481.

Figure 2: Evolution of the dark energy relative density \(\Omega_{\phi}\) for the stepwise scalar field model. The calculations are based on Eq. (8) with \(w_{\rm m}(N)=(1/3)/[1+\exp(N-N_{\rm eq})]\) in [30]. This equation describes the evolution of the Universe from the radiation era to the matter era [31]. \(N=N_{\rm eq}\) corresponds to matter-radiation equality and we set \(N_{\rm eq}=100\). We perform numerical calculations in \(N\in[0,150]\) and plot the results within \(N\in[50,150]\). The model parameters are \(\lambda_{1}=20\), \(\lambda_{2}=0.01\) and \(\alpha=0.08\) for subplot (a), and \(\lambda_{1}=10\), \(\lambda_{2}=0.01\), \(\alpha=0.24\) for subplot (b). To increase the robustness of our numerical algorithm, we set \(\lambda_{2}=0.01\), rather than \(10^{-4}\), which is the value favored by the discussions on inflation [32].
The initial conditions are \(x_{1,0}=0.75\), \(x_{2,0}=0.5\), \(\lambda_{0}=\lambda_{2}+2\) and \(\nu_{0}=\nu_{+}(\lambda_{0})\), which are the same as the settings in Sec. II.2. The colored shaded region illustrates how we set the initial conditions of the stepwise scalar field at \(T_{\gamma}=10^{12}\,\rm{K}\) in BBNLab (see the discussions in the main text). The insets show the Fourier transform of \(\Omega_{\phi}(N)\) with constant \(w_{\rm m}\) (\(w_{\rm m}=1/3\) for the left insets and \(w_{\rm m}=0\) for the right insets). To obtain them, we numerically solve the evolution equation in \(N\in[0,1000]\) and perform the Fourier transform in \(N\in[100,1000]\). The initial conditions are the same as before. The blue vertical dashed lines denote \(f_{\rm os}=3(1+w_{\rm m})/[\alpha\pi(\lambda_{1}+\lambda_{2})]\) [31]. In the Fourier plots, regular peaks correspond to the OSS, while an irregular forest of peaks corresponds to the CAS.

## Appendix A Special functions

Here we summarize the special functions involved in the BBN calculation: \[M(x) =\frac{1}{x}\left[\frac{3}{4}K_{3}(x)+\frac{1}{4}K_{1}(x)\right], \tag{A1}\] \[M^{\prime}(x) =-\left(\frac{24}{x^{4}}+\frac{5}{x^{2}}\right)K_{1}(x)-\left(\frac{12}{x^{3}}+\frac{1}{x}\right)K_{0}(x), \tag{A2}\] \[L(x) =\frac{1}{x}K_{2}(x), \tag{A3}\] \[L^{\prime}(x) =-\left(\frac{1}{x}+\frac{6}{x^{3}}\right)K_{1}(x)-\frac{3}{x^{2}}K_{0}(x), \tag{A4}\] where the modified Bessel function of the second kind [71] is \[K_{\nu}(x)=\int_{0}^{+\infty}e^{-x\cosh\theta}\cosh(\nu\theta)\,\mathrm{d}\theta. \tag{A5}\] In Matlab, \(K_{\nu}(x)\) is calculated by besselk\((\nu,x)\).
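BBNLab evaluates these functions in Matlab via besselk; an equivalent Python/SciPy sketch (illustrative only, not part of BBNLab) is given below, together with a finite-difference check of the derivative formulas.

```python
# Python/SciPy sketch of the special functions of Appendix A.
# scipy.special.kv(nu, x) is the modified Bessel function of the second kind,
# the counterpart of Matlab's besselk(nu, x).
import numpy as np
from scipy.special import kv

def M(x):
    return (0.75 * kv(3, x) + 0.25 * kv(1, x)) / x

def Mprime(x):
    return -(24.0 / x**4 + 5.0 / x**2) * kv(1, x) - (12.0 / x**3 + 1.0 / x) * kv(0, x)

def L(x):
    return kv(2, x) / x

def Lprime(x):
    return -(1.0 / x + 6.0 / x**3) * kv(1, x) - (3.0 / x**2) * kv(0, x)

if __name__ == "__main__":
    # Consistency check: the closed-form derivatives above agree with
    # central finite differences of M(x) and L(x).
    x, h = 2.0, 1e-6
    print(Mprime(x), (M(x + h) - M(x - h)) / (2 * h))
    print(Lprime(x), (L(x + h) - L(x - h)) / (2 * h))
```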
2308.07622
EMID: An Emotional Aligned Dataset in Audio-Visual Modality
In this paper, we propose the Emotionally paired Music and Image Dataset (EMID), a novel dataset designed for the emotional matching of music and images, to facilitate auditory-visual cross-modal tasks such as generation and retrieval. Unlike existing approaches that primarily focus on semantic correlations or roughly divided emotional relations, EMID emphasizes the significance of emotional consistency between music and images using an advanced 13-dimensional emotional model. By incorporating emotional alignment into the dataset, it aims to establish pairs that closely align with human perceptual understanding, thereby improving the performance of auditory-visual cross-modal tasks. We also design a supplemental module named EMI-Adapter to optimize existing cross-modal alignment methods. To validate the effectiveness of the EMID, we conduct a psychological experiment, which demonstrates that considering the emotional relationship between the two modalities effectively improves matching accuracy from an abstract perspective. This research lays the foundation for future cross-modal research in domains such as psychotherapy and contributes to advancing the understanding and utilization of emotions in cross-modal alignment. The EMID dataset is available at https://github.com/ecnu-aigc/EMID.
Jialing Zou, Jiahao Mei, Guangze Ye, Tianyu Huai, Qiwei Shen, Daoguo Dong
2023-08-15T08:13:14Z
http://arxiv.org/abs/2308.07622v2
# EMID: An Emotional Aligned Dataset in Audio-Visual Modality

###### Abstract.

In this paper, we propose the Emotionally paired Music and Image Dataset (EMID), a novel dataset designed for the emotional matching of music and images, to facilitate auditory-visual cross-modal tasks such as generation and retrieval. Unlike existing approaches that primarily focus on semantic correlations or roughly divided emotional relations, EMID emphasizes the significance of emotional consistency between music and images using an advanced 13-dimensional emotional model. By incorporating emotional alignment into the dataset, it aims to establish pairs that closely align with human perceptual understanding, thereby improving the performance of auditory-visual cross-modal tasks. We also design a supplemental module named EMI-Adapter to optimize existing cross-modal alignment methods. To validate the effectiveness of the EMID, we conduct a psychological experiment, which demonstrates that considering the emotional relationship between the two modalities effectively improves matching accuracy from an abstract perspective. This research lays the foundation for future cross-modal research in domains such as psychotherapy and contributes to advancing the understanding and utilization of emotions in cross-modal alignment. The EMID dataset is available at [https://github.com/ecnu-aigc/EMID](https://github.com/ecnu-aigc/EMID).

music-image dataset, emotional matching, cross-modal alignment

+ Footnote †: Both authors contributed equally to this research.
+ Footnote †: Corresponding author.
This article aims to propose a music-image dataset that incorporates emotional information, in order to fulfill the following objectives: (1) update the original cross-modal alignment pattern to project emotional features into the shared latent space; this adjustment also helps address challenges related to data collection and annotation reliance; (2) provide data support for training existing cross-modal generation and retrieval models. This dataset goes beyond simplistic and superficial semantic binding and takes into account the emotional associations between visual and auditory modalities, enhancing the applicability of the proposed dataset in the field of psychotherapy. To be more specific, our work makes the following key contributions:

* We introduce a high-quality **E**motionally paired **M**usic and **I**mage **D**ataset (**EMID**), comprising more than 30k data pairs accompanied by abundant emotional annotations.
* We pioneer the attempt to align two distinct modalities based on emotions. Our approach involves extracting semantics from the source data to form raw image-music pairs, and subsequently determining the optimal matches by projecting each candidate onto the emotional coordinate space, thereby modifying the embedded representations from the original cross-modal joint embedding space.
* A meticulously designed psychological experiment is conducted to assess the dataset's quality and affirm the pivotal role of emotions in modal alignment.

Overall, the EMID will serve as a solid foundation for AIGC's application in psychotherapy-related work.
Its availability paves the way for novel advancements in music and image conditional generation and retrieval, taking a step closer to human cognition and unlocking new possibilities and opportunities.

## 2. Related Work

### Cross-modal Dataset

Recent efforts in cross-modal dataset development primarily focus on obtaining images, audio, or videos based on textual inputs (text-to-image [7; 26; 30; 34; 35]; text-to-audio/video [9; 11; 19]) consisting of labels, descriptive information, evaluation content, etc., which are available throughout the internet and can be collected by automated tools with ease. The majority of text sections in existing cross-modal datasets are composed of limited tags or monotonous sentences [29], hindering the establishment of strong sensory impact through their collocation with images or audio. To overcome this limitation, efforts are being made to collect more impressive audio-visual combinations. Initially, videos and audio are often paired naturally, as in the huge AudioSet dataset [11], which is characterized as an object-oriented video dataset organized by audio events, using a meticulously constructed hierarchical ontology of 632 distinct audio classes, including at least 100 instances for each of 485 audio event classes. However, this plain matching approach may lead to issues such as misalignment between video frames and audio, high data noise and classification errors [9]. Furthermore, in scenarios where audio is substituted with structured and creative music, the original semantic alignment scheme [40] becomes obsolete. Nonetheless, various solutions have been proposed to address this issue. For example, MusicCaps [1] manually annotated 5,521 music segments from AudioSet [11], each with an English text description written by ten professional musicians; Huang et al. [18] propose a dataset called MulAnCap, which leverages a pre-trained joint music-text model, MULAN [17], to annotate music segments in AudioSet [11] based on music descriptions generated by an LLM and 200k music descriptive sentences and labels from MusicCaps [1], retaining over 400k music-text pairs. The approach of emotional alignment, however, better conforms to human cognition [24; 41]. Nevertheless, there has been little research on the emotional alignment between images and music; one of the few known music-image paired datasets based on artistic conception is the BOP dataset [8], which manually annotates 3k+ images and 1.5k+ piano pieces based on their creation background and the emotions conveyed. However, the manual annotation scheme hinders data expansion, and the richness of the source data leaves room for improvement. Therefore, there is an urgent demand for a music-image matching dataset.

### Cross-modal Task

Cross-modal tasks involving generation, retrieval, and editing are predominantly addressed using state-of-the-art frameworks such as GANs [13], VQ-VAE [31; 42], and diffusion models [16; 32]. The recent rapid development of AIGC has garnered substantial attention from both academia and industry. Research efforts [22; 23; 25] have been directed towards visual editing guided by sound and vice versa, taking advantage of conditional guidance to facilitate tasks like style transfer, local fine-tuning, inpainting and super-resolution while preserving the integrity of the original content. Regarding generative work, there is still ample room for advancement.
Early generative works focus on restoring the sound of silent instrument-playing videos based on the performer's body movements or gestures [10; 38]. Subsequent works attempt to generate corresponding audio based on images or videos of animals, water flow, knocking actions and more [33; 36; 45]. However, this type of audio often amounts to a semantic repetition of the image, lacking imaginative and artistic audio-visual elements. To overcome this constraint, recent works have explored novel ways to introduce more imaginative and artistic ideas into the generation process. Fan et al. [8] design a multi-modal architecture named ConchShell based on GAN networks [13], which can generate piano music clips of around 8 seconds conditioned on input landscape images. Suris et al. [39] propose a new method for recommending music tracks that align with a given video (or vice versa), without extra reliance on manual annotation, establishing a connection between audio-visual pairs at a rhythmic and artistic level. Training multi-modal data representations that can be aligned in latent spaces has been proven to offer robust support for conditional generation tasks [12; 40; 27]. However, there is still a large research gap in the domain of direct emotional alignment specifically focused on music and images. Therefore, the **EMID** proposed in this article aims to lay the foundation for this specific line of research and development.

## 3. The Proposed Dataset: EMID

We construct the **EMID**, a high-quality collection of music-image pairs that takes affective aspects into account. The EMID ensures a semantic closeness between music and images while evoking similar affective experiences and establishing a unified space of imagination. In the following sections, we provide a comprehensive and detailed description of the emotional alignment process.

### Emotional Alignment Process

The automated construction process of the EMID involves three distinct phases: (1) acquisition of emotional features, (2) formation of candidate pairs, and (3) injection of emotional information. The overall pipeline is illustrated in Figure 2.

#### 3.1.1. Acquisition of Emotional Features

Both images and music have the ability to convey intricate emotions, although the ways in which they evoke feelings, and the intensities involved, differ. As a first step, we aim to extract comparable emotional information from individual samples of music and images. Each music clip conveys a distinctive emotion. For example, the combination of an adagio tempo, deep tones, subdued harmonies, and gentle rhythms can evoke a poignant sense of sadness (e.g. the second movement of Dvorak's _New World Symphony_). On the other hand, fast rhythms, strong percussion, and lively melodies can convey feelings of happiness and joy (e.g. the first movement of Bach's _Brandenburg Concerto_). To acquire a diverse range of emotional data, we refer to the work conducted by Cowen et al. (2016), from whose visualization web page1 we crawl 1,841 music clips, each accompanied by rich emotional annotations. These music clips encompass various genres, including rock, folk, jazz, classical, wind, and heavy metal, and cover a broad spectrum of emotional dimensions: amusing, annoying, anxious/tense, beautiful, calm/relaxing, dreamy, energizing/pump-up, erotic/desirous, indignant/defiant, joyful/cheerful, sad/depressing, scary/fearful and triumphant/heroic, i.e., 13 emotions in total, which are represented by the letters \(A\) to \(M\) in Figure 5, Figure 6 and Table 2.
Footnote 1: [https://www.ocf.berkeley.edu/~acowen/music.html](https://www.ocf.berkeley.edu/~acowen/music.html)

To establish a standardized representation of emotions, we adopt the Valence-Arousal space theory proposed by Hanjalic (2014). In this theory, valence measures the positive or negative aspects of music emotions, while arousal describes the level of emotional stimulation. For images, we utilize the 8-dimensional emotion classification image data proposed by You et al. (2017). This classification system comprises eight categories: happiness, awe, contentment, surprise, anger, disgust, fear, and sadness. To establish an emotional connection between music and images, we use the NRC-VAD-Lexicon (2017) dictionary to select three semantically similar words for each image classification, which serve as emotional descriptions for that particular category. Subsequently, we query the dictionary to obtain the \(\langle V,A\rangle\) coordinates corresponding to these eight classifications (shown in Figure 2(a)) and calculate their averages as the approximate \(\langle V,A\rangle\) coordinates for the images within each category (shown in Figure 2(b)). It is worth noting that individual images are not assigned independent coordinates, as the classification labels already constrain the estimated range of each sample within the Valence-Arousal space. By employing the classification coordinates to indicate the location of sample points, we dramatically reduce the computational expense associated with measuring emotional distances in the subsequent analysis, while also minimizing any unfavorable influence on the ultimate matching efficacy.

#### 3.1.2. Formation of Candidate Pairs

Based on previous work, we initially pick several matched images for each music sample based on semantic similarities to create an initial dataset. Specifically, we employ the cross-modal retrieval model ImageBind (2014) to calculate the semantic matrices for both the music and image datasets, denoted as \(X\in R^{N_{M}*d},Y\in R^{N_{I}*d}\) respectively, where \(N_{M},N_{I}\) refer to the numbers of music and image samples, and \(d\) represents the dimension of the ImageBind latent space. By performing matrix multiplication, we attain a similarity score between each music sample and image sample, resulting in a semantic similarity matrix \(S\). The specific calculation process is as follows: \[S=Softmax(X\cdot Y^{T}) \tag{1}\] Each element \(S_{ij}=\frac{M_{i}\cdot I_{j}}{\|M_{i}\|\cdot\|I_{j}\|}\) reflects the semantic distance between the \(i_{th}\) music segment and the \(j_{th}\) image.

Figure 2. The construction process of EMID includes the following steps. (1) Use ImageBind to get the semantic embeddings for images and music as matrices \(X\) and \(Y\); then we get the similarity score matrix through \(S=Softmax(X\cdot Y^{T})\), (2) select \(K\) images with the highest scores (in red color) from the \(i_{th}\) row of \(S\) as the candidate image-set for the \(i_{th}\) music clip, then calculate the cosine distance of each music clip and its image-candidates in the emotional valence-arousal space, (3) generate EMID through a weighted summation of semantic scores and emotional scores, retaining the \(m\) best-matching pairs, (4) design a lightweight component, EMI-Adapter, which can merge emotional features into the joint embedding space after training based on the EMID.

Utilizing this matrix, we select \(K\) (setting \(K=100\) in the EMID) images with
the highest semantic relevance for each music sample, thereby setting up candidate music-image pairs. Note that this raw dataset relies solely on simple semantic associations, which may result in suboptimal matching performance. To tackle this issue, we further integrate emotional information.

#### 3.1.3. Injection of Emotional Information

Finally, we adopt a method similar to the one above to calculate the \(\langle V,A\rangle\) coordinates of each music sample and its cosine distance from the images in its candidate set in the Valence-Arousal space, so as to acquire the emotional similarity matrix \(E\) between the music and its candidate images. Then we add the semantic and emotional similarity matrices with an appropriate weight parameter \(\alpha\) to generate the final score matrix \(T\) for pairing as follows: \[T=\alpha\cdot S+(1-\alpha)\cdot E \tag{2}\] This matrix determines the compatibility between music and image samples. To produce a set of \(N_{M}*m\) music-image pairs, we choose \(m\) images for each music sample based on the score ranking from highest to lowest. To ensure that the newly matched samples are as close as possible in the embedding space, a tiny emotional fusion plugin, _EMI-Adapter_, is designed to regulate the encoding of ImageBind and incorporate emotional information into the shared latent space. We simultaneously train several lightweight network modules, comprising NCF (He et al., 2017), BMF (Shi et al., 2017), and SVDpp (Shi et al., 2017), as adapters to promote cross-modal alignment based on the existing music-image pairs. The results presented in Table 1 and Figure 4 illustrate the performance of the different EMI-Adapter plugins. Moreover, the EMI-Adapter is able to capture more comprehensive emotional information from the input modalities by leveraging the foundation of ImageBind, eliminating the reliance on additional emotional annotations provided by the source samples. This significant advancement will greatly facilitate the future expansion of the EMID, enabling it to encompass a broader range of emotional content.

### Filtering and Expansion of Materials

#### 3.2.1. Data Filtering

To keep the data balanced, we exclude from the originally crawled music pieces several music samples with a duration of less than 3.7 seconds, leaving behind a final dataset of 1,836 music samples with associated tags, among which 98% of the music clips have a duration of 4.8 s to 5.2 s; the longest is 9.4 s and the shortest is 3.7 s. Regarding the image dataset, for the purpose of psychotherapy, we decide to exclude the _disgusting_ category, which contains elements of threat, violence, and blood. In addition, to avoid the negative impact of image quality on pairing, we manually define thresholds for the mean and variance of image brightness. Images falling below these thresholds are considered invalid or of low quality and are filtered out. Furthermore, we employ the GIT model (Shi et al., 2017) to generate text descriptions for all images. Based on these captions, we remove, as far as possible, images that are primarily characterized by textual information rather than visual content, as well as photos of bands or singers performing on stage, in order to prevent strong semantics from impeding the emotional connection. After these filtering steps, the final retained image dataset consists of 19,901 images.
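The scoring and selection steps of Secs. 3.1.2-3.1.3 can be summarized in the following minimal NumPy sketch. It assumes that the ImageBind embeddings \(X\), \(Y\) and the valence-arousal coordinates have already been computed; \(K=100\) and \(m=3\) are the values used for the EMID, while \(\alpha=0.6\) follows the setting quoted in the caption of Figure 4. Variable and function names are illustrative and not taken from the released code.

```python
# Minimal sketch of the candidate formation (Eq. 1) and emotional scoring (Eq. 2).
# Inputs are assumed precomputed: X (music, N_M x d), Y (images, N_I x d),
# va_music (N_M x 2) and va_image (N_I x 2) valence-arousal coordinates.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cosine(a, b):
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def build_pairs(X, Y, va_music, va_image, K=100, m=3, alpha=0.6):
    S = softmax(X @ Y.T, axis=1)                 # Eq. (1): semantic similarity matrix
    pairs = []
    for i in range(X.shape[0]):
        cand = np.argsort(S[i])[::-1][:K]        # top-K semantic candidates
        # Emotional similarity in the valence-arousal plane (the paper's "cosine distance").
        E = cosine(va_music[i:i + 1], va_image[cand])[0]
        T = alpha * S[i, cand] + (1 - alpha) * E  # Eq. (2): weighted total score
        best = cand[np.argsort(T)[::-1][:m]]      # keep the m best-matching images
        pairs.append(best)
    return np.array(pairs)                        # shape (N_M, m)
```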
\begin{table}
\begin{tabular}{c|c c} \hline \hline
 & HR@10 & NDCG@10 \\ \hline
NCF & 0.580 & 0.404 \\
BMF & 0.580 & 0.410 \\
SVD & **0.602** & **0.427** \\ \hline \hline
\end{tabular}
\end{table}
Table 1. Performance of different EMI-Adapters. The best results are shown in bold.

Figure 3. Valence-Arousal coordinates of the eight emotional categories from the NRC-VAD-Lexicon (Shi et al., 2017) dictionary and the final calculated values.

#### 3.2.2. Data Expansion

To gather more available data, we expand the music samples by leveraging the API of a music listening and recognition platform2. This allows us to obtain 1,361 full music tracks corresponding to fragments in our original dataset. Next, using the _librosa_ library3, we compute the cross-similarity matrix between the music clips in the existing dataset and the original music tracks by employing a sliding-window approach with a window length of 5 seconds and a step size of 1 second. Based on the idea of dynamic programming, we obtain a similarity score (ranging from 0 to 1) from the cross-similarity matrix. Subsequently, we select new music clips by applying a manually defined threshold to the similarity score of each clip, primarily aiming for clips that differ from the original ones in auditory content while preserving similar affectivity (an illustrative code sketch of this step is given after the conclusion). These new clips are assigned the same labels as the corresponding source samples. The quantities of the music dataset before and after expansion are presented in Table 2; the final expanded music dataset contains 10,738 music clips.

Footnote 2: [https://audid.io/](https://audid.io/)

Footnote 3: [https://github.com/librosa/librosa](https://github.com/librosa/librosa)

### Dataset Statistics

The final dataset comprises 32,214 pairs of images and music (setting \(m=3\)). The EMID comes with rich annotations: the music samples are equipped with three types of labels, genre, emotion, and sentiment, among which the genre labels classify the music into 13 emotional dimensions, the sentiment labels provide a detailed breakdown of the percentage distribution of each music sample across the different emotional dimensions, and the emotion labels embody another 11 sensory dimension values that quantify the subjective experience evoked by the music. On the other hand, the image samples are marked with category labels and brief content descriptions. This information assists in further understanding the intricate connections between the two modalities. The locations of pairs from the EMID in the cross-emotional space of music and images, before and after considering emotional alignment, are shown in Figure 5. It can be observed that the distribution of data is more balanced across the different emotional dimensions after fusing the affective information.

## 4. Assessment Experiment

### Experimental Protocol

**Preview.** To assess the efficacy of the EMID, we conduct two types of psychological experiments: music-to-image validation and image-to-music validation. In order to explore whether intensifying emotional connections can effectively support cross-modal alignment research, subjects are instructed to determine the optimal match for the assigned example from the other modality according to their subjective intuition.

#### 4.1.1. Subjects

The study involves 50 subjects from East China Normal University, comprising 29 males and 21 females. The average age of the subjects is approximately 21 years. All subjects are right-handed, with normal or corrected-to-normal vision.
#### 4.1.2. Design

Music-to-Image Verification. Each subject completes 91 (\(7\times 13\)) trials, as 7 music clips from each of the 13 emotional categories are selected as stimulus materials. In each trial, corresponding to one music clip, subjects are instructed to choose the most suitable image from four options based on their personal feeling of the music's expression after listening to the clip. Only one of the options is the correct answer determined by the EMID (i.e. the best match estimated from the score matrix in **section 3.1.3**), and the remaining three images are considered interference items. All options belong to different emotional categories.

\begin{table}
\begin{tabular}{l|c c c c c c c c c c c c c c} \hline \hline
Emotion & **A** & **B** & **C** & **D** & **E** & **F** & **G** & **H** & **I** & **J** & **K** & **L** & **M** & **Total** \\ \hline
Before expanding & 45 & 80 & 54 & 131 & 306 & 174 & 367 & 86 & 36 & 124 & 129 & 105 & 199 & 1836 \\
Average threshold & 0.611 & 0.606 & 0.559 & 0.620 & 0.707 & 0.632 & 0.751 & 0.661 & 0.610 & 0.669 & 0.603 & 0.570 & 0.628 & - \\
After expanding & 255 & 545 & 320 & 771 & 1531 & 889 & 1832 & 1036 & 323 & 1014 & 799 & 484 & 939 & 10738 \\ \hline \hline
\end{tabular}
\end{table}
Table 2. The quantities of the music dataset before/after expansion and the averages of the manually set thresholds for the 13 emotions.

Figure 4. The curves display the variation of the training loss and evaluation results over 50 iterations for the three types of networks, with @10 indicating that the top-10 images are recommended. Here, the EMID version used for training has \(m=3\) and \(\alpha=0.6\).

Image-to-Music Verification. Each subject completes 91 (\(7\times 13\)) trials, as 7 sets of matched images from each of the 13 emotional categories are selected as stimulus materials. In each trial, corresponding to one image set, subjects are presented with three images used to describe the same music clip (i.e. the \(m\) images that ultimately pair with the music clip in **section 3.1.3**, setting \(m=3\)). After observing the three images, subjects are instructed to choose the most suitable music clip from four options based on their personal feeling of the images' expression. The settings for correct answers and interference items are the same as above.

**Ablation Experiment**. To further probe the effect of emotions on the alignment, we also design an additional small-scale ablation experiment for comparison. To be more specific, we build a variant of the EMID by setting \(\alpha=1\) in Eq. (2), thus obtaining a dataset that considers only ImageBind semantic matching. By adopting the same partitioning scheme as in the previous experiment, we can compare the impact of image pairing before and after emotional alignment on the final accuracy while maintaining consistent music coverage. A total of 15 subjects participated in this experiment. Additionally, in order to verify that semantic alignment also plays a crucial role as the first step of our work, we simulate random matching scenarios for the two types of validation (achieved by randomly generating user responses). The interference-item settings in the ablation experiment remain consistent with the previous experiment.

#### 4.1.3. Evaluation

Based on the response data of the different subjects, we calculate the separate accuracies and the overall accuracy of the music-to-image and image-to-music experiments.
Additionally, we conduct an analysis to investigate the distribution of accuracy across the different music emotion categories. Note that accuracy refers to the ratio of the number of answers consistent with the pairs in the EMID to the total number of validation questions.

### Performance Analysis

The statistics of the collected subject data are presented in Table 3. The results show that music-image matching with emotional alignment outperforms the other schemes in the overall validation. To be more specific, simulated random matching conforms to our conjecture, with an overall mean accuracy of 0.25. Semantic matching based on ImageBind performs significantly better than random pairing, with an overall variance of 0.005, a maximum accuracy of 0.621, a minimum of 0.445, and a mean of 0.546. After emotional alignment, the matching accuracy improves further in both the music-to-image validation and the overall experiment: the overall assessment has a variance of 0.005, with a maximum accuracy of 0.659, a minimum of 0.462, and a mean of 0.560. However, in the image-to-music verification, the result after emotional alignment is slightly inferior to that before alignment. This may be due to the strong semantics present in the three images selected through ImageBind, which can directly correspond to specific background sounds or instruments in the music. In contrast, in the music-to-image validation, which picks one suitable image based on the more abstract form of music, both semantic and emotional factors are taken into account, as the images and music used in psychotherapy are supposed to evoke strong emotions rather than carry explicit semantics. Figure 6 shows that the accuracy of music-image matching in the EMID is, on the whole, fairly evenly distributed across the different music emotion categories. In some categories, there are notable differences in accuracy between the two validation directions, particularly in categories \(C\) (anxious/tense) and \(J\) (joyful/cheerful). For the former, the accuracy of the music-to-image verification is significantly higher than in the other direction. This difference may be attributed to the removal of the _disgusting_ classification, the image category closest to category \(C\) at the sensory level, from the source image dataset. The latter shows exactly the opposite pattern, possibly due to the large scale of the _amusement_ classification in the source image dataset, which evokes feelings similar to category \(J\). Furthermore, we notice that the validation accuracy of category \(I\) (indignant/defiant) is relatively high, while category \(B\) (annoying) and category \(H\) (erotic/desirous) tend to have lower accuracy. This pattern may be influenced by the imbalanced distribution of the source dataset. Additionally, interference between similar emotions may also affect the accuracy. To gain broader insights into these issues, our future research will further explore the reasons behind these accuracy differences and address these challenges.

## 5. Conclusion

In this paper, we proposed an emotionally aligned music-image paired dataset named EMID, which considers the sentimental connections between modalities in addition to semantic relations.
Moreover, we enhanced the existing cross-modal alignment approaches through a simple module called EMI-Adapter, which contributes to effectively capturing and integrating emotional information from the given samples and projecting it into the joint embedding space. In the end, the psychological experiment demonstrated that the matching pattern in the EMID is more in line with human cognition compared to traditional schemes, and is also more suitable for cross-modal tasks in the field of psychotherapy. Currently, our dataset still suffers from issues such as insufficient scale, weak musical structure, and limited data coverage. These challenges will be further discussed and addressed in our future research.

\begin{table}
\begin{tabular}{l|c c c c} \hline \hline
 & **Max** & **Min** & **Mean** & **Variance** \\ \hline
 & \multicolumn{4}{c}{_Random Match_} \\ \hline
Music-to-Image & 0.363 & 0.176 & 0.252 & 0.002 \\
Image-to-Music & 0.341 & 0.132 & 0.247 & 0.002 \\
Overall & 0.330 & 0.176 & 0.250 & 0.001 \\ \hline
 & \multicolumn{4}{c}{_Before Emotional Alignment_} \\ \hline
Music-to-Image & 0.604 & **0.429** & 0.516 & 0.005 \\
Image-to-Music & **0.725** & **0.462** & **0.575** & 0.012 \\
Overall & 0.621 & 0.445 & 0.546 & 0.005 \\ \hline
 & \multicolumn{4}{c}{_After Emotional Alignment (Ours)_} \\ \hline
Music-to-Image & **0.648** & 0.407 & **0.556** & 0.003 \\
Image-to-Music & 0.692 & 0.352 & 0.563 & 0.011 \\
Overall & **0.659** & **0.462** & **0.560** & 0.005 \\ \hline \hline
\end{tabular}
\end{table}
Table 3. Accuracy of the Music-Image Matching Study. The best results are shown in bold.
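As a supplement to the data-expansion step of Sec. 3.2.2 (referenced there), the sketch below illustrates a sliding-window similarity scan over a full track against an existing clip. It is an illustrative approximation rather than the released pipeline: MFCC features and a plain normalized correlation stand in for the paper's dynamic-programming score over the librosa cross-similarity matrix, and all file paths and parameter defaults are placeholders.

```python
# Illustrative sketch of the Sec. 3.2.2 expansion: scan a full track with a
# 5 s window and 1 s hop, scoring each window against a 5 s reference clip.
import numpy as np
import librosa

def window_scores(clip_path, track_path, win=5.0, hop=1.0, sr=22050):
    clip, _ = librosa.load(clip_path, sr=sr)
    track, _ = librosa.load(track_path, sr=sr)
    ref = librosa.feature.mfcc(y=clip, sr=sr, n_mfcc=20).flatten()
    ref = (ref - ref.mean()) / (ref.std() + 1e-9)

    scores, starts = [], []
    n_win, n_hop = int(win * sr), int(hop * sr)
    for start in range(0, len(track) - n_win + 1, n_hop):
        seg = track[start:start + n_win]
        feat = librosa.feature.mfcc(y=seg, sr=sr, n_mfcc=20).flatten()
        feat = (feat - feat.mean()) / (feat.std() + 1e-9)
        n = min(len(ref), len(feat))
        corr = float(np.dot(ref[:n], feat[:n]) / n)   # normalized correlation in [-1, 1]
        scores.append(0.5 * (corr + 1.0))             # rescale to [0, 1], cf. the paper's score
        starts.append(start / sr)
    return np.array(starts), np.array(scores)

# Windows whose score exceeds a per-emotion threshold (cf. Table 2) would become
# new clips that inherit the labels of the source clip.
```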
2305.19386
Higher-order Process Matrix Tomography of a passively-stable Quantum SWITCH
The field of indefinite causal order (ICO) has seen a recent surge in interest. Much of this research has focused on the quantum SWITCH, wherein multiple parties act in a superposition of different orders in a manner transcending the quantum circuit model. This results in a new resource for quantum protocols, and is exciting for its relation to issues in foundational physics. The quantum SWITCH is also an example of a higher-order quantum operation, in that it not only transforms quantum states, but also other quantum operations. To date, no higher-order quantum operation has been completely experimentally characterized. Indeed, past work on the quantum SWITCH has confirmed its ICO by measuring causal witnesses or demonstrating resource advantages, but the complete process matrix has only been described theoretically. Here, we perform higher-order quantum process tomography. However, doing so requires exponentially many measurements with a scaling worse than standard process tomography. We overcome this challenge by creating a new passively-stable fiber-based quantum SWITCH using active optical elements to deterministically generate and manipulate time-bin encoded qubits. Moreover, our new architecture for the quantum SWITCH can be readily scaled to multiple parties. By reconstructing the process matrix, we estimate its fidelity and tailor different causal witnesses directly for our experiment. To achieve this, we measure a set of tomographically complete settings, that also spans the input operation space. Our tomography protocol allows for the characterization and debugging of higher-order quantum operations with and without an ICO, while our experimental time-bin techniques could enable the creation of a new realm of higher-order quantum operations with an ICO.
Michael Antesberger, Marco Túlio Quintino, Philip Walther, Lee A. Rozema
2023-05-30T20:00:03Z
http://arxiv.org/abs/2305.19386v2
# Higher-order Process Matrix Tomography of a passively-stable Quantum SWITCH ###### Abstract The field of indefinite causal order (ICO) has seen a recent surge in interest. Much of this research has focused on the quantum SWITCH, wherein multiple parties act in a superposition of different orders in a manner transcending the quantum circuit model. This results in a new resource for quantum protocols, and is exciting for its relation to issues in foundational physics. The quantum SWITCH is also an example of a higher-order quantum operation, in that it not only transforms quantum states, but also other quantum operations. To date, no higher-order quantum operation has been completely experimentally characterized. Indeed, past work on the quantum SWITCH has confirmed its ICO by measuring causal witnesses or demonstrating resource advantages, but the complete process matrix has only been described theoretically. Here, we perform higher-order quantum process tomography. However, doing so requires exponentially many measurements with a scaling worse than standard process tomography. We overcome this challenge by creating a new _passively-stable_ fiber-based quantum SWITCH using active optical elements to deterministically generate and manipulate time-bin encoded qubits. Moreover, our new architecture for the quantum SWITCH can be readily scaled to multiple parties. By reconstructing the process matrix, we estimate its fidelity and tailor different causal witnesses directly for our experiment. To achieve this, we measure a set of tomographically complete settings, that also spans the input operation space. Our tomography protocol allows for the characterization and debugging of higher-order quantum operations with and without an ICO, while our experimental time-bin techniques could enable the creation of a new realm of higher-order quantum operations with an ICO. ## I Introduction The formalism of higher-order quantum operations (HOOO) provides a framework to view quantum operations as objects that can be subjected to transformations [1; 2; 3; 4; 5; 6]. This framework is particularly useful for analysing causality in quantum mechanics. Since it was first realized that quantum mechanics allows for processes with an indefinite causal order (ICO) [7; 8; 6], the field of quantum causality has seen an increasing level of interest [9]. These processes are interesting for a variety of foundational topics [10; 11; 12; 13; 14; 15], and also because it has been recognized that they can lead to ICO-based enhancements that go even beyond "normal quantum technology." Examples of these advantages include applications in quantum computing [16; 17; 18; 19], quantum communication [20; 21; 22; 23; 24; 25; 26; 27; 28], channel discrimination[29; 30], metrology [31], reversing quantum dynamics [32; 33], and even thermodynamics [34]. Experimentally, work has focused on either implementing various protocols [35; 36; 37; 38; 39; 40; 41; 42] or verifying the ICO of a given experimental implementation [43; 44; 45; 46]. In spite of this large body of work, there has not yet been a complete experimental characterization of process with an ICO. Instead, previous work on ICO has mainly focused on designing and measuring witnesses to essentially provide a yes or no answer to the question "does this process have an indefinite causal order?" On the one hand, this is because no concrete protocol for a complete characterization has yet been presented. 
On the other hand, this is because the number of experimental settings required for a complete characterization has been prohibitive in past experiments. Here, we overcome both of these hurdles, first presenting a protocol to perform "higher-order process matrix tomography," and then implementing a new experimental method to realize the quantum SWITCH based on time qubits, based on the proposal of [47]. Our new passively-stable implementation is based on active optical elements, and it allows us to acquire sufficient data (estimating almost 10,000 distinct probabilities) to fully reconstruct a process matrix demonstrating an ICO for the first time. In the two-party quantum SWITCH, Alice and Bob each act on a target system (typically taken to be a qubit) in their local laboratories. This target qubit is sent first to one party and then to the other. The order in which the target qubit is shared between the two is coherently controlled via a second _control_ qubit. If the control qubit is prepared in a superposition state, then the two parties act on the target qubit in a superposition of orders (Fig. 1a-c). The quantum SWITCH and the so-called quantum time flip [48; 49; 50], are the only processes which do not respect our standard notions of causality that have been experimentally implemented to date. The quantum SWITCH is an example of a HOQO, in the sense that its inputs are not only the control and target qubits, but also Alice and Bob's operations. All experimental realizations of the quantum SWITCH have been accomplished by encoding both the control and the target systems in a single photon. Typically, the control system is encoded in a path degree-of-freedom, which then determines the order that the photon is routed between the two parties. In practice, this means that a photon is placed in a superposition of the two paths using a beamsplitter, and these paths are then looped between two parties in a manner mimicking the paths of Fig. 1c. The parties then act on a different degree of freedom, such as polarization [35], time bins [37], or orbital angular momentum [44]. The result of all of these approaches is essentially a Mach-Zehnder interferometer, which must be phase stabilized.1 Stabilizing the phase for long enough to acquire the required data for full higher-order quantum process tomography presents a daunting experimental challenge. Footnote 1: Polarization has also been used [36; 44] as a control system, but also in this case the polarization is used to route the photon into two paths in superposition, which also requires a stable phase. We overcome this challenge by implementing a new passively-stable quantum SWITCH. In our experiment, the control system is encoded in a time-bin qubit, the target qubit is encoded in the polarization of the same photon, and active optical switches are used to route the photon between the two parties in superposition of both orders, as in the theoretical proposal of [47]. While two other experiments have achieved an intrinsically stable phase, using a Sagnac-like approach [37; 51], it is not clear how to scale these methods to multiple-parties. Our approach, however, has straight-forward generalization to multiple parties [47], making it a promising new experimental method to create an ICO. 
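At the level of pure states, the routing sketched in Fig. 1a-c is simple to simulate: with the control prepared in a superposition, the output target splits into an anticommutator branch and a commutator branch of the two local unitaries, which is precisely the structure exploited by the gate-discrimination game of Ref. [35] discussed below. The following minimal numpy sketch is our own illustration with arbitrary example gates (and the control taken in \(|+\rangle\) for simplicity); it is not a model of the optical setup.

```python
import numpy as np

def switch_control_probs(A, B, psi):
    """Probabilities of finding the control in |+> / |-> after the quantum
    SWITCH acts on target |psi> with the control prepared in |+>.
    Convention: control |0> means A acts first, so the target sees B @ A."""
    plus_branch = 0.5 * (B @ A + A @ B) @ psi   # anticommutator branch -> |+>
    minus_branch = 0.5 * (B @ A - A @ B) @ psi  # commutator branch     -> |->
    return np.linalg.norm(plus_branch) ** 2, np.linalg.norm(minus_branch) ** 2

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([1, 0], dtype=complex)           # arbitrary target state |0>

# Commuting pair: Z and a rotation about Z.
Rz = np.diag(np.exp([1j * 0.3, -1j * 0.3]))
print("commuting pair     ->", switch_control_probs(Z, Rz, psi))   # ~ (1.0, 0.0)

# Anti-commuting pair: the Paulis X and Z.
print("anticommuting pair ->", switch_control_probs(X, Z, psi))    # ~ (0.0, 1.0)
```

For commuting gates the control is always found in \(|+\rangle\), and for anti-commuting gates always in \(|-\rangle\), so a single measurement of the control decides the game with certainty.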
Recently, there has been some discussion in the community whether such photonic implementations of the quantum switch are simulations of an ICO [52; 53], with some concluding that they may only have an ICO in a "weak sense" [54], while others conclude that the experiments do have an ICO [55; 56], or that they at least have a quantifiable resource advantage [57]. Here we do not address this debate, but we make use of the mathematical formalism of processes matrices and HOQO to describe our physical experiment. One method to certify ICO is made via the violation of a so-called causal inequality [6]. This is a device-independent technique, similar to the use of a Bell violation to verify entanglement [58]. Unfortunately, it is not yet known if one can implement a quantum process that deterministically violates a causal inequality; moreover, it has been shown that the quantum SWITCH cannot violate causal inequalities [17]. Instead, in the first implementation of the quantum SWITCH [35], the ICO was indirectly proven by performing a game, where a player has to decide if two unitary gates either commute or anti-commute (see App. B1). By winning that game more than one could with a definite causal order, it was concluded the experiment did not have a definite causal order. This method can, in fact, be reframed, in terms of a causal witness [13]. A causal witness is a measurement which can be used to verify if a process is causally non-separable (i.e. if it has an ICO), and it has been experimentally implemented for the quantum SWITCH [44; 45]. Unlike a causal inequality, a causal witness is not device independent, requiring the assumption that the experimenter knows the correct quantum description of the experiment. Recently, progress has been made by relaxing the complete device independent approach, allowing the certification of causal non-separability under semi-device independent assumptions [46; 59; 60], Bell locality-like assumptions [15; 45], and additional device independent no-signalling assumptions [61; 62]. Figure 1: **The quantum SWITCH.** Panel **a)** (**b)** shows the causally ordered process with the control in state \(\ket{0}_{C}\) (\(\ket{1}_{C}\)), where Alice (Bob) acts before Bob (Alice) on the target system. **c)** A superposition of orders with the control qubit in the state \(\frac{1}{\sqrt{2}}\ket{0}_{C}+\ket{1}_{C}\). **d)** The principle of the process tomography on the quantum SWITCH. Alice and Bob perform projective measurements in three different basis and then prepare four different linearly independent states in their output. The same input states are prepared at the past-target. The past-control is fixed to the superposition state for generating the indefinite order. Finally, the future-control is measured in different basis, while the future-target is traced out (the two measurements shown are those that are implemented experimentally; to span the space, a third measurement is needed). In this work, we implement full experimental higher-order process tomography of the quantum SWITCH. For this goal, we generalise the ideas from quantum state and quantum process tomography [63; 64; 65; 66; 2; 67; 68; 69] to tackle tomography of arbitrary higher-order processes, including those without a definite causal order. 
In particular, we show that it is possible to construct tomographically complete measurement settings on arbitrary quantum processes by using a tomographically complete set of input states, spanning the input state space; a tomographically complete set of measurement-repreparation channels, spanning the input operation spaces; and a tomographically complete set of quantum measurements spanning the output state space (see Fig. 1d). We then employ these ideas to experimentally perform higher-order quantum process tomography on the quantum SWITCH. The rest of the paper is organized as follows. In Section II we introduce the theory of quantum process matrices, using the quantum SWITCH as a paradigmatic example, and we present our causal tomography protocol. In Section III, we discuss our new passively-stable architecture for the quantum SWITCH. Section IV presents our experimental results, and we conclude in Section V. ## II Theory ### Process matrices and the quantum switch The expression quantum process is a general term used to refer to the dynamics of quantum systems, and its precise meaning may depend on the context. For instance, when analysing transformations between quantum states, quantum process refers to a quantum channel, which may be unitary (associated with closed quantum systems and mathematically described by unitary operators) or non-unitary (associated with open quantum systems and mathematically described by Completely Positive Trace Preserving (CPTP) linear maps). In this scenario of transformations between states, one can experimentally determine the dynamics by means of what is known as quantum process tomography [64; 65; 66]. To do so, a complete set of known quantum states is fed into an unknown quantum process \(\mathcal{E}\) and a complete set of measurements is performed on the output of the underlying process for each input state [69]. When performing standard process tomography, one reconstructs, for example, the chi matrix \(\chi\), which takes quantum sates as inputs and returns quantum states as outputs [70]. The chi matrix is often called a process matrix; however, we stress that this chi or process matrix is different from the process matrices discussed in the field of HOQO and ICO, the case which we address here. In this work, we analyse transformations between quantum channels, hence we use the word process to describe the dynamics between quantum operations. An operation that transforms a quantum operation is sometimes referred to as a higher-order quantum operation. Such operations have found applications in several branches of quantum information processing [71; 72; 73; 74; 2; Then, the action of any linear map \(\mathcal{C}\) on a state \(\rho\) can be written in terms of the Choi operator \(C\) as \[\mathcal{C}(\rho)=\mathrm{Tr}_{\mathrm{in}}\left(\rho^{T\mathcal{H}_{\mathrm{in} }}\otimes\mathds{1}^{\mathcal{H}_{\mathrm{out}}}\,C^{\mathcal{H}_{\mathrm{in} }\mathcal{H}_{\mathrm{out}}}\right), \tag{5}\] where \(\rho^{T}\) is the transpose of \(\rho\) in the computational basis and \(\rho\) is an arbitrary density operator acting on \(\mathcal{H}_{\mathrm{in}}\). 
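Equation (5) gives a direct recipe for applying a map through its Choi operator. As a sanity check of the conventions used here (input space first, and assuming the unnormalised \(|\mathds{1}\rangle\!\rangle=\sum_{i}|ii\rangle\)), the following numpy sketch builds the Choi operator of a single-qubit unitary channel, verifies the trace-preserving condition \(\mathrm{Tr}_{\mathrm{out}}(C)=\mathds{1}\), and checks that Eq. (5) reproduces \(U\rho U^{\dagger}\). The helper names are ours and introduced only for illustration.

```python
import numpy as np

d = 2
I = np.eye(d, dtype=complex)

def choi_of_unitary(U):
    """Choi operator C = |U>><<U|, with |U>> = sum_i |i> (x) U|i>, ordering H_in (x) H_out."""
    ket = sum(np.kron(I[:, [i]], U[:, [i]]) for i in range(d))   # |U>> as a column vector
    return ket @ ket.conj().T

def apply_via_choi(C, rho):
    """C(rho) = Tr_in[(rho^T (x) 1) C], i.e. Eq. (5)."""
    M = (np.kron(rho.T, I) @ C).reshape(d, d, d, d)
    return np.einsum('ikil->kl', M)              # partial trace over the input factor

def ptrace_out(C):
    """Tr_out(C); equals the identity on H_in for a trace-preserving map."""
    return np.einsum('ikjk->ij', C.reshape(d, d, d, d))

# Example: a Hadamard channel acting on a pure state.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = np.array([[0.6], [0.8j]], dtype=complex)
rho = psi @ psi.conj().T

C = choi_of_unitary(H)
assert np.allclose(ptrace_out(C), I)                              # trace preserving
assert np.allclose(apply_via_choi(C, rho), H @ rho @ H.conj().T)  # Eq. (5) reproduces U rho U^dagger
```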
The Choi-Jamiolkowski isomorphism is very useful to represent quantum operations, due to the fact that a linear map \(\mathcal{C}:\mathcal{L}(\mathcal{H}_{\mathrm{in}})\to\mathcal{L}(\mathcal{H}_{ \mathrm{out}})\) is completely positive if and only if its Choi operator \(C\in\mathcal{L}(\mathcal{H}_{\mathrm{in}}\otimes\mathcal{H}_{\mathrm{out}})\) respects \(C\geq 0\), and the map \(\mathcal{C}\) is trace-preserving if and only if \(\mathrm{Tr}_{\mathrm{out}}(C)=\mathds{1}^{\mathcal{H}_{\mathrm{in}}}\). Since quantum channels are completely positive trace-preserving maps, all quantum channels have a simple and direct characterisation in terms of their Choi operators. Before finishing this subsection we also remark that if \(\mathcal{C}\) is a unitary channel, that is \(\mathcal{C}(\rho)=U\rho U^{\dagger}\) for some unitary operator \(U\), direct calculation shows that its Choi operator may be written as \(C=|U\rangle\!\langle U|\), where \(|U\rangle\!\rangle\) is defined in Eq. 2. A quantum instrument is a quantum operation which has a classical and a quantum output, and it formalises the concept of a quantum measurement which has a post-measurement quantum state. Mathematically, a set of linear maps \(\{\mathcal{C}_{i}\}_{i}\), \(\mathcal{C}_{i}:\mathcal{L}(\mathcal{H}_{\mathrm{in}})\to\mathcal{L}( \mathcal{H}_{\mathrm{out}})\) is a quantum instrument if all \(\mathcal{C}_{i}\) are completely positive and \(\mathcal{C}:=\ \sum_{i}\mathcal{C}_{i}\) is trace preserving. In the Choi operator picture, this is equivalent to having \(C_{i}\geq 0\) and \(\mathrm{Tr}_{\mathrm{out}}\left(\sum_{i}C_{i}\right)=\mathds{1}^{\mathcal{H}_ {\mathrm{in}}}\). A simple and useful class of quantum instrument is the class of _measure-and-reprepare_ instruments. In its most basic form, a measure and reprepare instrument simply performs a measurement described by the operators3\(\{M_{i}\}\), and prepares some fixed state \(\sigma\). Its linear map is described by \(\mathcal{R}_{i}(\rho):=\mathrm{Tr}(\rho\,M_{i})\sigma\), and its Choi operators are given by \(R_{i}\in\mathcal{L}\left(\mathcal{H}_{\mathrm{in}}\otimes\mathcal{H}_{\mathrm{ out}}\right)\), with \(R_{i}=M_{i}^{T}\otimes\sigma\). Footnote 3: A set of operators \(\{M_{i}\}\) represents a quantum measurement (that is, it is a POVM) if \(M_{i}\geq 0\) and \(\sum_{i}M_{i}=\mathds{1}\). ### The quantum switch as a process matrix We are now in position to present process matrices which describe transformations between the quantum channels of different parties. We start by presenting the process vector describing Fig 1a, which is simply a process where a system flows freely from a common past target space \(\mathcal{H}_{P_{\mathrm{t}}}\) to Alice's input space \(\mathcal{H}_{A_{\mathrm{in}}}\). Alice may perform an arbitrary operation as the state goes from \(\mathcal{H}_{A_{\mathrm{in}}}\) to \(\mathcal{H}_{A_{\mathrm{out}}}\). Later, the state goes freely from Alice's output space \(\mathcal{H}_{A_{\mathrm{out}}}\) to Bob's input space \(\mathcal{H}_{B_{\mathrm{in}}}\). Bob may then perform an arbitrary operation as the state goes from \(\mathcal{H}_{B_{\mathrm{in}}}\) to \(\mathcal{H}_{B_{\mathrm{out}}}\). Finally, the state goes freely from Bob's output space \(\mathcal{H}_{B_{\mathrm{out}}}\) to a common future target space \(\mathcal{H}_{F_{\mathrm{t}}}\). 
The process vector of this quantum process is \[|A\to B\rangle:=|\mathds{1}\rangle\!\rangle^{P_{\mathrm{t}},A_{\mathrm{in}}} \otimes|\mathds{1}\rangle\!\rangle^{A_{\mathrm{out}},B_{\mathrm{in}}}\otimes| \mathds{1}\rangle\!\rangle^{B_{\mathrm{out}},F_{\mathrm{t}}}\,, \tag{6}\] and its process matrix is given by \[W_{A\to B}:=|A\to B\rangle\!\langle A\to B|\,, \tag{7}\] where \(A\to B\) indicates that Alice acts before Bob. (Note that we have not included Alice or Bob's operations in this description; this will be introduced in Sec. IIE.) Analogously, we may define the process where Bob acts before Alice, which will lead to a process vector \[|B\to A\rangle:=|\mathds{1}\rangle\!\rangle^{\mathcal{H}_{\mathrm{P}}, \mathcal{B}_{\mathrm{in}}}\otimes|\mathds{1}\rangle\!\rangle^{\mathcal{B}_{ \mathrm{out}},\mathcal{A}_{\mathrm{in}}}\otimes|\mathds{1}\rangle\!\rangle^{ \mathcal{A}_{\mathrm{out}},F_{\mathrm{t}}}\,, \tag{8}\] and its process matrix is given by \[W_{B\to A}:=|B\to A\rangle\!\langle B\to A|\,. \tag{9}\] The quantum SWITCH is a process which allows one to coherently alternate between \(|A\to B\rangle\) and \(|B\to A\rangle\). For that, we allow the common past and common future to have another system, denoted as a control system, which will be able to coherently alternate between the ordered process. More formally, the common past and common future space are now described by \(\mathcal{H}_{P}=\mathcal{H}_{P_{\mathrm{c}}}\otimes\ \mathcal{H}_{P_{\mathrm{c}}}\) and \(\mathcal{H}_{F}=\mathcal{H}_{F_{\mathrm{c}}}\otimes\mathcal{H}_{F_{\mathrm{t}}}\) respectively, and the Choi vector of the quantum switch is given by \[|w_{\mathrm{switch}}\rangle:=|0\rangle^{P_{\mathrm{c}}} \otimes|A\to B\rangle\otimes|0\rangle^{F_{\mathrm{c}}}\] \[+|1\rangle^{P_{\mathrm{c}}}\otimes|B\to A\rangle\otimes|1\rangle^{F_ {\mathrm{c}}}\,, \tag{10}\] which corresponds to the process matrix \[W_{\mathrm{switch}}:=|w_{\mathrm{switch}}\rangle\!\langle w_{\mathrm{switch }}| \tag{11}\] Almost all known applications of the quantum SWITCH, _e.g._, computational advantages [17], channel discrimination [8], reducing communication complexity [21], semi-device-independent [59] and device-independent certification of indefinite causality [76; 61], do not require the general form of the quantum SWITCH is presented in Eq. (11). Rather, in such applications, one starts with the control qubit in the \(|+\rangle:=\frac{|0\rangle+|1\rangle}{\sqrt{2}}\), so that the process state corresponds to a coherent superposition of processes described by: \[\big{|}w_{\mathrm{switch}}^{+}\big{\rangle}:=\frac{|A\to B\rangle\otimes|0 \rangle^{F_{\mathrm{c}}}+|B\to A\rangle\otimes|1\rangle^{F_{\mathrm{c}}}}{ \sqrt{2}}. \tag{12}\] Additionally, for all such applications, one does not make use of the future target system, hence this qubit is often discarded. Mathematically, discarding a system corresponds to the partial trace. Hence, we construct the simplified version of the SWITCH as \[W_{\mathrm{s}}^{+}:=\mathrm{Tr}_{F_{\mathrm{t}}}\left(|w_{\mathrm{switch}}^{+} \big{\rangle}\!\big{\langle}w_{\mathrm{switch}}^{+}\big{|}\right) \tag{13}\] In this work, we focus on the simplified quantum SWITCH, and as is usual in the literature, we use "quantum SWITCH" to refer to the process described in Eq. (13). ### The general process matrix formalism The process matrix formalism allows one to assign a matrix which perfectly describes transformations between arbitrary quantum objects, in particular, to transform quantum channels into quantum channels. 
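As a concrete illustration of the construction in Eqs. (6)-(13) of the previous subsection, the following numpy sketch assembles \(|A\to B\rangle\), \(|B\to A\rangle\) and the reduced process matrix \(W_{\mathrm{s}}^{+}\) for qubit systems, using the unnormalised \(|\mathds{1}\rangle\!\rangle=\sum_{i}|ii\rangle\) and the (arbitrarily chosen) subsystem ordering \(P_{\mathrm{t}},A_{\mathrm{in}},A_{\mathrm{out}},B_{\mathrm{in}},B_{\mathrm{out}},F_{\mathrm{t}},F_{\mathrm{c}}\). The helper names are ours; this is only an illustrative reconstruction of the ideal object, not the code used for the data analysis [79].

```python
import numpy as np

d = 2
id_ket = np.eye(d).reshape(-1)              # |1>> = sum_i |ii> on a pair of subsystems

def reorder(vec, perm, n=6):
    """Permute the tensor factors of a vector of n qubit subsystems."""
    return vec.reshape((d,) * n).transpose(perm).reshape(-1)

# |A->B>, built directly in the order (Pt, Ain, Aout, Bin, Bout, Ft)
ab = np.kron(np.kron(id_ket, id_ket), id_ket)
# |B->A>, built in the order (Pt, Bin, Bout, Ain, Aout, Ft), then brought to the same ordering
ba = reorder(np.kron(np.kron(id_ket, id_ket), id_ket), (0, 3, 4, 1, 2, 5))

ket0, ket1 = np.eye(2)
# |w_switch^+> = (|A->B>|0>_Fc + |B->A>|1>_Fc)/sqrt(2), Eq. (12)
w_plus = (np.kron(ab, ket0) + np.kron(ba, ket1)) / np.sqrt(2)
W_full = np.outer(w_plus, w_plus.conj())     # 128 x 128

# Trace out the future target (sixth of the seven subsystems) to get W_s^+, Eq. (13)
W7 = W_full.reshape(2**5, d, d, 2**5, d, d)
W_s = np.einsum('atcbtd->acbd', W7).reshape(64, 64)

print(W_s.shape, np.trace(W_s).real)         # (64, 64)  8.0
print(np.allclose(W_s, W_s.conj().T), np.linalg.eigvalsh(W_s).min() >= -1e-12)
```

The resulting \(64\times 64\) operator is Hermitian and positive semidefinite, matching the dimension of the experimentally reconstructed matrix discussed in Sec. IV.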
The normalisation constraints from quantum channels (or more general quantum objects) and a generalised notion of completely positive inputs lead to constraints on valid process matrices. In a nutshell, when focused on quantum channels, a matrix \(W\) is a process matrix if it is positive semidefinite and respect a set of affine constraints arising from the channel normalisation conditions. These affine constraints are described in several Refs. such as [6; 17] and may be viewed as causal constraints (for instance, they prevent local loops or the possibility of obtaining negative probabilities via Born's rule). ### Measuring a process matrix One of the main applications of the process matrix formalism is to provide mathematical methods to analyse the dynamics of a quantum process and to predict the outcomes of measurements performed on a quantum process. Below, we describe the scenario considered in this work: 1. \(W\in\mathcal{L}\left(\mathcal{H}_{P}\otimes\mathcal{H}_{A_{\mathrm{in}}} \otimes\mathcal{H}_{A_{\mathrm{out}}}\otimes\mathcal{H}_{B_{\mathrm{in}}} \otimes\mathcal{H}_{B_{\mathrm{out}}}\otimes\mathcal{H}_{F}\right)\) is the process matrix which describes a bipartite scenario with a common past (target) and common future (control). 2. \(\rho\in\mathcal{L}(\mathcal{H}_{P})\) is a quantum state on the common past (target) space. 3. \(A_{a}\in\mathcal{L}(\mathcal{H}_{A_{\mathrm{in}}}\otimes\mathcal{H}_{A_{ \mathrm{out}}})\) are the Choi operators of an instrument on Alice's space. 4. \(B_{b}\in\mathcal{L}(\mathcal{H}_{B_{\mathrm{in}}}\otimes\mathcal{H}_{B_{ \mathrm{out}}})\) are the Choi operators of an instrument on Bob's space. 5. \(M_{c}\in\mathcal{L}(\mathcal{H}_{F})\) are the measurement operators on the common future (control) space. In the scenario described above, if \(W\) is the process matrix, one inputs the state \(\rho\) into the common past, Alice performs the instrument \(\{A_{a}\}\), Bob performs the instrument \(\{B_{b}\}\), the measurement \(\{M_{c}\}\) is performed in the future, the probability that Alice obtains the outcome \(a\), Bob obtains the outcome \(b\), and the future obtains the outcome \(c\) is given by \[p(a,b,c)=\mathrm{Tr}\left(W\,\left(\rho^{P}\otimes A_{a}^{A_{ \mathrm{in}}A_{\mathrm{out}}}\otimes B_{b}^{B_{\mathrm{in}}B_{\mathrm{out}}} \right)^{T}\otimes M_{c}^{F}\right). \tag{14}\] In practice, it is often convenient to have indices to label states, instruments, and measurements. In this work, we will then use \(\{\rho_{w}\}\) to denote a set of states acting in the common past, \(\{A_{a|x}\}\) for a set of instruments in Alice's space (\(a\) labels the classical outcome of the instrument and \(x\) the choice of instrument), \(\{B_{b|y}\}\) for a set of instruments in Bob's space (\(b\) labels the classical outcome of the instrument and \(y\) the choice of instrument), and \(\{M_{c|z}\}\) for a set of measurements in the future space (\(c\) labels the classical outcome and \(z\) the choice of measurement). We can then define the _setting operators_4 as Footnote 4: We use the phrase “setting operator”, since each of these operators will correspond to a single experimental setting. \[S_{abc|xyzw}:=\left(\rho_{w}^{P}\otimes A_{a|x}^{A_{\mathrm{in}}A_{\mathrm{out }}}\otimes B_{b|y}^{B_{\mathrm{in}}B_{\mathrm{out}}}\right)^{T}\otimes M_{c|z} ^{F} \tag{15}\] which leads us to the so-called "generalised Born's rule": \[p\left(abc|xyzw\right)=\mathrm{Tr}\left(W\,S_{abc|xyzw}\right). 
\tag{16}\]

### Process matrix tomography

The goal of quantum tomography is to completely characterise a quantum object by performing known measurements on it. Before discussing process matrix tomography, we revisit the standard case of quantum state tomography, where one aims to characterise an unknown state by analysing the outcomes obtained after performing known measurements on it. If \(M_{a|x}\in\mathcal{L}(\mathbb{C}_{d})\) are known measurement operators, one can make use of the probabilities \(p(a|x)=\mathrm{Tr}\big{(}\rho\,M_{a|x}\big{)}\) to uniquely reconstruct the unknown state \(\rho\). When the set of operators \(\{M_{a|x}\}\) spans the linear space \(\mathcal{L}(\mathbb{C}_{d})\), the operator \(\rho\) may be obtained from \(p(a|x)\) by standard linear inversion methods. For qubit states, a standard set of tomographically complete measurements is formed by the three Pauli observables \(X\), \(Y\), and \(Z\), which are associated with the measurement operators via their eigenprojections: \[\big{\{}\,|+\rangle\!\langle+|\,,|-\rangle\!\langle-|\,\big{\}},\,\big{\{}\,|y_{+}\rangle\!\langle y_{+}|\,,|y_{-}\rangle\!\langle y_{-}|\,\big{\}},\,\big{\{}\,|0\rangle\!\langle 0|\,,|1\rangle\!\langle 1|\,\big{\}},\] respectively, where \(\left|\pm\right\rangle=\frac{\left|0\right\rangle\pm\left|1\right\rangle}{\sqrt{2}}\) and \(\left|y_{\pm}\right\rangle=\frac{\left|0\right\rangle\pm i\left|1\right\rangle}{\sqrt{2}}\).

Figure 2: **Probing a quantum process \(W\).** Pictorial illustration of how to probe a bipartite quantum process \(W\) with a common past and common future. Here \(\rho_{w}\) are quantum states, \(\{A_{a|x}\}\) and \(\{B_{b|y}\}\) are quantum instruments, and \(\{M_{c|z}\}\) are quantum measurements.

In particular, a standard tomographically complete choice is the set \[\mathcal{S}:=\{\left|\psi_{i}\middle\rangle\!\middle\langle\psi_{i}\right|\}_{i=1}^{4}, \tag{17}\] where \[\left|\psi_{1}\middle\rangle\!\middle\langle\psi_{1}\right| :=\left|0\middle\rangle\!\middle\langle 0\right|,\ \ \left|\psi_{2}\middle\rangle\!\middle\langle\psi_{2}\right|:=\left|1\middle\rangle\!\middle\langle 1\right|, \tag{18}\] \[\left|\psi_{3}\middle\rangle\!\middle\langle\psi_{3}\right| :=\left|+\middle\rangle\!\middle\langle+\right|,\left|\psi_{4}\middle\rangle\!\middle\langle\psi_{4}\right|:=\left|y_{+}\middle\rangle\!\middle\langle y_{+}\right|. \tag{19}\] These operators are linearly independent, forming a (non-orthonormal) basis for \(\mathcal{L}(\mathbb{C}_{2})\). We now consider the task of performing tomography of a qubit channel. As discussed in section II.2, every quantum channel \(\mathcal{C}:\mathcal{L}(\mathcal{H}_{\text{in}})\to\mathcal{L}(\mathcal{H}_{\text{out}})\) can be represented by its Choi operator \(C\in\mathcal{L}(\mathcal{H}_{\text{in}}\otimes\mathcal{H}_{\text{out}})\). In this case, tomography can be carried out by preparing a set of states \(\{\rho_{w}\}_{w}\), \(\rho_{w}\in\mathcal{L}(\mathcal{H}_{\text{in}})\), and performing a complete set of measurements on each state. For qubits, the standard measurements are \[\mathcal{M}:=\{M_{i|j}\}_{i=1,j=1}^{i=2,j=3}, \tag{20}\] where \(M_{i|j}\) is a POVM element, the label \(i\) stands for the outcome, and \(j\) for the choice of measurement.
Hence, \(\mathcal{M}\) is a set with three dichotomic measurements with POVM elements given by \[M_{1\left|1\right\rangle} :=\left|0\middle\rangle\!\middle\langle 0\right|,\ \ \ M_{2\left|1\right\rangle}:=\left|1\middle\rangle\! \middle\langle 1\right| \tag{21}\] \[M_{1\left|2\right\rangle} :=\left|+\middle\rangle\!\middle\langle+\right|,\ \ M_{2\left|2\right\rangle}:=\left|-\middle\rangle\!\middle\langle-\right|\] (22) \[M_{1\left|3\right\rangle} :=\left|y_{+}\middle\rangle\!\middle\langle y_{+}\right|,M_{2\left|3 \right\rangle}:=\left|y_{-}\middle\rangle\!\middle\langle y_{-}\right|. \tag{23}\] Note that, due to the normalisation of probabilities, the measurements of some measurement operators are unnecessary. However, in practice, the orthogonal measurements \(M_{2\left|j\right\rangle}\) are often measured to aid in the data normalization. We will include them here, with a view to our experiment. From these input states and measurements, one estimates the probabilities \(p(a|x,w)=\operatorname{Tr}\left(\mathcal{C}(\rho_{w})\,M_{a\left|x\right\rangle }\right),\) which can also be written as \[p(a|x,w)= \operatorname{Tr}\left(C\,\left(\rho_{w}^{T}\otimes M_{a\left|x \right\rangle}\right)\right)\!, \tag{24}\] \[= \operatorname{Tr}\left(C\,S_{a\left|xw\right\rangle}\right) \tag{25}\] in the Choi formalism, where \(S_{a\left|xw\right\rangle}:=\rho_{w}^{T}\otimes M_{a\left|x\right\rangle}\), where \(S_{a\left|xw\right\rangle}\) is a setting operator for standard quantum process tomography. Now, one way to perform complete tomography is by ensuring that the setting operators \(S_{a\left|xw\right\rangle}\) span the space \(\mathcal{L}(\mathcal{H}_{\text{in}}\otimes\mathcal{H}_{\text{out}})\). Also, thanks to a property usually referred to as "local tomography" [77] if the set of operators \(\{\rho_{w}^{T}\}_{w}\) spans \(\mathcal{L}(\mathcal{H}_{\text{in}})\) and the set \(\{M_{a\left|x\right\rangle}a,x\) spans \(\mathcal{L}(\mathcal{H}_{\text{out}})\), then the set of setting operators \(\left\{\rho_{w}^{T}\otimes M_{a\left|x\right\rangle}\right\}_{w,a,x}\) spans \(\mathcal{L}(\mathcal{H}_{\text{in}}\otimes\ \mathcal{H}_{\text{out}})\). In other words, full quantum channel tomography is always possible if one measures a set of characterised setting operators \(\{S_{a\left|xw\right\rangle}a,x,w\) that span the space \(\mathcal{L}(\mathcal{H}_{\text{in}}\otimes\ \mathcal{H}_{\text{out}})\). In principle, measuring a set of setting operators \(\{S_{a\left|xw\right\rangle}\}_{a,x,w}\) which span \(\mathcal{L}(\mathcal{H}_{\text{in}}\otimes\mathcal{H}_{\text{out}})\), is actually "overkill". More specifically, due to the normalisation condition \(\operatorname{Tr}_{\text{out}}(C)=\mathds{1}_{\text{in}}\), respected by quantum channels, there are linear operators in \(\mathcal{L}(\mathcal{H}_{\text{in}}\otimes\mathcal{H}_{\text{out}})\) which cannot be written as linear combinations of quantum channels,\(e.g.\), \(\left|0\middle\rangle\!\middle\langle 0\right|_{\text{in}}\otimes\mathds{1}_{\text{out}}\). One can then consider a set of operators \(\{S_{a\left|xw\right\rangle}a,x,w\}\) which spans the set of quantum channels, a subspace with dimension strictly smaller than the dimension of \(\mathcal{L}(\mathcal{H}_{\text{in}}\otimes\mathcal{H}_{\text{out}})\). 
In particular, the linear space \(\mathcal{L}(\mathcal{H}_{\text{in}}\otimes\mathcal{H}_{\text{out}})\) has dimension of \(d_{\text{in}}^{2}\,d_{\text{out}}^{2}\), and the linear span of quantum channels in \(\mathcal{L}(\mathcal{H}_{\text{in}}\otimes\mathcal{H}_{\text{out}})\) has dimension of \(d_{\text{in}}^{2}(d_{\text{out}}^{2}-1)\). We emphasize, however, this does not represent a problem; in fact, in practice, more using an over-complete measurement set is known to minimise the experimental errors in standard quantum tomography [78]. Finally, we now consider tomography of process matrices \(W\in\ \mathcal{L}\left(\mathcal{H}_{P}\otimes\mathcal{H}_{A_{\text{in}}}\otimes \mathcal{H}_{A_{\text{out}}}\otimes\mathcal{H}_{B_{\text{in}}}\otimes \mathcal{H}_{B_{\text{out}}}\otimes\mathcal{H}_{F}\right)\), such as the quantum switch illustrated in Fig. 1d. As discussed before, one way to perform tomography is to measure setting operators \(S_{abc\left|xyzyw\right\rangle}\) which span the linear space \(\mathcal{L}\left(\mathcal{H}_{P}\otimes\mathcal{H}_{A_{\text{in}}}\otimes \mathcal{H}_{A_{\text{out}}}\otimes\mathcal{H}_{B_{\text{in}}}\otimes\mathcal{H}_ {B_{\text{out}}}\otimes\mathcal{H}_{F}\right)\). Also, thanks to local tomography, we may consider sets of states and measurements which span the local space individually. We then consider the set of states given by Eq. 17 and the set of measurements is given by Eq. 20. For tomography of a higher-order process matrix, we then consider the set of measure-and-reprepare instruments (to be used as inputs for Alice and Bob's channels) given by all combinations of the two sets above, that is \[\mathcal{R}:=\{R_{i\left|(j,k)\right\rangle}\}_{i=1,j=1,k=1}^{i=2,j=3,k=4}, \tag{26}\] where \[R_{i\left|(j,k)\right\rangle}:=M_{i\left|j\right\rangle}\otimes\left|\psi_{k} \middle\rangle\!\middle\langle\psi_{k}\right|^{T}. \tag{27}\] The interpretation of Eq. (27) is the following, first, the measurement \(j\) with POVM elements \(\{M_{i\left|j\right\rangle}\}_{i}\) is performed, then, the state \(\left|\psi_{k}\right\rangle\) is prepared. Notice that, in our measure-and-reprepare instruments, the prepared state \(\left|\psi_{k}\right\rangle\) is independent of the measurement choice \(j\) and the obtained outcome \(i\). One can then perform full tomography with the setting operators \[S_{abc\left|xyzw\right\rangle}:=\left|\psi_{w}\middle\rangle\! \middle\langle\psi_{w}\right|^{T^{R}}\otimes A_{a\left|x\right\rangle}^{A_{I}A_{O}} \otimes B_{b\left|y\right\rangle}^{B_{I}B_{O}}\otimes M_{c\left|z\right\rangle}^{F }, \tag{28}\] where \(\left|\psi_{w}\middle\rangle\!\middle\langle\psi_{w}\right|\) are the 4 different quantum states in \(\mathcal{S}\), \(A_{a\left|x\right\rangle}\) and \(B_{b\left|y\right\rangle}\) are each the \(2\times 3\times 4=24\) instrument elements5 of set \(\mathcal{R}\) defined in Eq. (26), and \(2\times 3=6\) measurement elements for the future control space are the measurements \(M_{c\left|z\right\rangle}\) of \(\mathcal{M}\) (Eq. (20)). In total, we then measure \(4\times 24\times 24\times 6=13824\) different settings. We remark that in this tomography approach, we do not make use of any constraints on the process matrices. One could also to reduce the number of required settings by imposing the assumption that valid process matrices necessarily belong to a particular linear subspace, as discussed in subsection II.4. 
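The counting argument above is straightforward to verify numerically. The sketch below builds the qubit sets \(\mathcal{S}\) (Eq. 17) and \(\mathcal{M}\) (Eq. 20) together with the measure-and-reprepare instruments \(\mathcal{R}\) (Eqs. 26-27), checks that each set spans its local operator space (which, by local tomography, implies that the global setting operators are tomographically complete), and reproduces the count of \(4\times24\times24\times6=13824\) settings. It is an illustrative check with our own helper names, not part of the reconstruction code.

```python
import numpy as np
from itertools import product

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
plus, minus = (ket0 + ket1) / np.sqrt(2), (ket0 - ket1) / np.sqrt(2)
yplus, yminus = (ket0 + 1j * ket1) / np.sqrt(2), (ket0 - 1j * ket1) / np.sqrt(2)
proj = lambda v: np.outer(v, v.conj())

# States S (Eq. 17) and measurement operators M (Eq. 20) for a single qubit
states = [proj(v) for v in (ket0, ket1, plus, yplus)]                    # 4 elements
povms  = [proj(v) for v in (ket0, ket1, plus, minus, yplus, yminus)]     # 6 elements

# Measure-and-reprepare instruments R (Eqs. 26-27): M_{i|j} (x) |psi_k><psi_k|^T
instruments = [np.kron(M, P.T) for M, P in product(povms, states)]       # 24 elements

def span_dim(ops):
    """Dimension of the linear span of a list of operators."""
    return np.linalg.matrix_rank(np.stack([op.reshape(-1) for op in ops]))

print(len(states), span_dim(states))            # 4 4   -> spans L(C^2)
print(len(povms), span_dim(povms))              # 6 4   -> spans L(C^2)
print(len(instruments), span_dim(instruments))  # 24 16 -> spans L(C^2 (x) C^2)
print("total settings:", len(states) * len(instruments) ** 2 * len(povms))  # 13824
```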
In an ideal theoretical scenario, if we obtain the probabilities \(p\left(abc|xyzw\right)=\mathrm{Tr}\left(W\,S_{abc|xyzw}\right)\) for a tomographically complete set of setting operators, standard linear inversion will uniquely identify the process \(W\). However, due to finite statistics, we never obtain the exact probability \(p\left(abc|xyzw\right)\), but an approximation from measured frequencies. Also, due to measurement precision and other possible sources of errors, we cannot expect to obtain an exact reconstruction of the process matrix. Indeed, performing direct linear inversion often results in unphysical quantum states or processes. Instead, we aim to estimate a physical process matrix that agrees best with the experimental data. In order to estimate our experimental process matrix \(W_{\mathrm{exp}}\), we perform a fitting routine to find the process matrix that best describes our measured data. We find that minimizing the least absolute residuals works quite well. To do this, we numerically search for a process matrix \(W_{\mathrm{exp}}\) that minimizes the following expression: \[r=\frac{1}{N_{\mathrm{Settings}}}\sum_{abcxyzw}\Big{|}p_{\mathrm{exp}}(abc|xyzw )-\mathrm{Tr}\big{(}W_{\mathrm{exp}}\,S_{abc|xyzw}\big{)}\Big{|}, \tag{29}\] where \(N_{\mathrm{Settings}}\) is the number of setting operators, and the minimisation is further subject to the constraint that \(W_{\mathrm{exp}}\) is a valid process matrix. This minimisation can be performed by means of semidefinite programming (SDP), and may be implemented with the help of numerical libraries such as MOSEK. Our MATLAB code implementing this, is available at [79]. The first term under the root are the experimentally measured probabilities, while the second term corresponds to what is predicted by quantum theory for the characterised settings \(S_{abc|xyzw}\). Since \(W_{\mathrm{exp}}\) is the only unknown quantity, the minimization of Eq. 29 delivers a process matrix that fits best to our experimental data, making no assumptions about the specific form of the process matrix. ## III Experiment ### Time-Bin Quantum SWITCH To date, most previous implementations of the quantum SWITCH were based on bulk optics. Since photonic quantum SWITCHs are essentially interferometers, inevitable phase drifts limit the measurement time or require active stabilization [35, 45]. Furthermore, since adding more parties means that the dimension of the control system must be increased, scaling up previous architectures requires more and more spatial modes to be transmitted through the same optic, making it difficult to create a SWITCH with more than two parties. Here, we present a passively-stable, fiber-based architec Figure 3: **Active generation and measurement of time-bin qubits.** a) In this row, an incident single photon is deterministically placed in a superposition of the “short” and “long” time bins, using an ultra-fast optical switch (UFOS). In order to achieve passive phase stability, the measurements, shown in rows b) and c), are actually implemented using the same device as in row a). However, for clarity, we have mirrored the device horizontally. b) This row indicates our measurements in the Z-basis. Here, the UFOS remains in the “bar state”. After traversing the device, the photon remains in a superposition of the two incident time bins (but spread over two paths). In this situation, simply resolving the arrival time of the photon projects into the Z-basis. c) A schematic of our deterministic measurement in the Y-basis. 
Here the UFOS alternate between the “cross” and bar states so that the short (long) time bin now takes the long (short) path. In this manner, the two time bins interfere on the beamsplitter, so that finding the photon in the upper (lower) path corresponds to projecting the time bin onto \(|y+\rangle\) (\(|y-\rangle\)). ture for the quantum SWITCH where the control system is encoded in a time degree of freedom of the photon. Thus, in our architecture, although the dimension of the control system must still be increased at the same rate, this can be done using additional time bins, but only one spatial mode must traverse each optical element. Furthermore, by using the same interferometer to prepare and measure the control system, all phase fluctuations cancel out, making our setup passively stable. This is important for process tomography, as we must perform many measurements, and the experiment must remain stable during this time. To create the time-bin qubit that we will use to control the order, we start by generating a photon pair, \(\lambda=1550\) nm, using spontaneous parametric down conversion (SPDC). One photon of the pair is directly detected to herald the other photon, setting a time reference for the experiment. The second photon is sent to a 50/50 beamsplitter--a fiber directional coupler (FDC)--which splits the incoming mode into two fibers of different lengths. We then deterministically recombine these two fiber paths using an ultra-fast fiber optical switch (UFOS), see Fig. 3a (and also the yellow section of Fig. 4a) [80]. To do so, we generate an electronic pulse, triggered off of the timing reference generated by detecting the first photon. This pulse is sent to the UFOS which change its state to first route the "photon component" from the short path, followed by the "photon component" in long path, into the upper output mode of the UFOS-MZI (Fig. 3a ii and iii). The result is that the second photon is left in an equal superposition of two time bins in a single fiber (Fig. 3a iv). Note that because the short time bin is transmitted through the FDC, while the long time bin is reflected, one mode picks up a reflection phase, while the other does not. Hence, in our experiment we prepare the control qubit in the state \(\left|y-\right\rangle_{\text{C}}=(\left|S\right\rangle_{\text{C}}-i\left|L \right\rangle_{\text{C}})/\sqrt{2}\), where we have labelled the modes as the "short" ("long") state \(\left|S\right\rangle_{\text{C}}\) (\(\left|L\right\rangle_{\text{C}}\)), when it has taken the short (long) fiber path of the interferometer. The spacing between these two time bins is 150 ns, which is set by the response time of our UFOS. The UFOS we use to route the photon are BATi 2x2 Nanona fiber switches. In addition to creating the time bin qubit, they allow us to route the photon in a controlled way through the quantum SWITCH. Our UFOSs have a response time of 60 ns, with a maximal duty-cycle of 1 MHz, and a cross-channel isolation greater than 20 dB for any polarisation (see [81, 82] for more details). Having created the time bin control qubit, we need to apply the quantum SWITCH operation to the target system, which we encode in the polarization degree-of-freedom of the same photon. To route the photon, we use two additional UFOSs and follow the protocol illustrated in Fig. 4b.i-vi. In particular, we send a voltage pulse train consisting of three low levels and two high levels to the UFOS's. 
During each low level, the fiber switches are in a "cross state" (output modes are swapped with respect to the input), while during a high level the switch state is set to the "bar state" (input modes transmitted to output modes). As the time-bins approach the quantum SWITCH (Fig. 4b.i) the UFOS's are initially in the cross-state, which routes the short time bin \(\left|S\right\rangle_{\text{C}}\) through Bob's quantum channel (Fig. 4b.ii). Then the UFOSs change to the bar state (Fig. 4b.ii), which sends \(\left|L\right\rangle_{\text{C}}\) through Alice's channel, while \(\left|S\right\rangle_{\text{C}}\) travels over a fiber from the RHS-UFOS to the LHS-UFOS. Then the UFOSs see a low voltage level, and their state is set to cross (Fig. 4b.iv). This sends \(\left|S\right\rangle_{\text{C}}\) through Alice local laboratory, while \(\left|L\right\rangle_{\text{C}}\) loops back to the LHS switch. In Fig. 4b.v the UFOS's are in bar state and hence, Figure 4: **Experimental setup.****a)** The complete experiment. The individual sections are indicated with colors. The red section shows the Sagnac SPDC source that generates heralded single photons. In the yellow section, we illustrate the asymmetric MZI to generate and measure the time-bin control qubit. The orange section shows the target qubit preparation stage, which consists of a PBS and two waveplates. The green area hosts the fiber-based quantum SWITCH. The heralded and heralding photons are detected using SNSPDs, shown in the blue area. By triggering off of the detection event of a heralding photon we use a pulse generator to control the optical switches in the setup. The sub-panels **i)** - **vi)** in panel **b)** show the functionality of the quantum SWITCH. By controlling the state of the optical switches, we route the two time-bins in different orders through Alice and Bob’s quantum channels. After the SWITCH operation, the target qubit has experienced the action of the quantum channels in a different order depending on the state of the time-bin qubit. passes through Bob's channel. At this point \(\left|S\right\rangle_{\text{C}}\) exits the quantum SWITCH. Finally, the fiber switches are set to the cross state (Fig. 4b.vi) so that \(\left|L\right\rangle_{\text{C}}\) leaves the quantum SWITCH. At this point, depending on the control state, the target system has experienced a different order of Alice and Bob's actions, which, as we will describe shortly, act on the polarization state of the photon. Note, that all the lengths of the fibers in the quantum SWITCH are set to ensure the correct routing of time-bins spaced by \(150\) ns. The time-bin quantum SWITCH from Fig. 4b is placed in the full fiber-based setup (Fig. 4a), in which the time-bins are prepared and measured. The quantum SWITCH itself is shown in the green section of panel a). The type-II SPDC photon source[83] is shown in the red section6. Here, a PBS reflects the heralding photon to a single photon detector (blue area), while the other photon is transmitted to the Mach-Zehnder-like time-bin generation interferometer explained above (yellow section). Following this, we have a photon encoding a time-bin qubit in the state \((\left|S\right\rangle_{\text{C}}-i\left|L\right\rangle_{\text{C}})/\sqrt{2}\) in the "upper" output of the interferometer (clockwise direction). The counter-clockwise path of the loop (lower UFOS-MZI output mode) hosts an optical circulator with an empty port to filter out misguided photons, which can arise from the imperfect extinction ratio of our UFOSs. 
Next, the target system is encoded in the photon's polarization. For this we use a polarizing beam splitter (PBS) and a set of a quarter- and a half-waveplate, shown in the orange section of Fig. 4a. Then we apply the 2-SWITCH operation to the target system described above (green section of Fig.4a and Fig.4b), where we implement Alice and Bob's instruments using short free-space sections containing waveplates and polarizers. Footnote 6: Our source is in a Sagnac configuration, although for this experiment we only pump the Sagnac loop in one direction so as not to generate polarization entanglement After exiting the SWITCH, the photon follows the fiber loop in clockwise direction and approaches the MZI used for time-bin generation (yellow section); now, from the opposite direction in the lower path. At this point, we can decide to measure the control qubit in the computational (\(Z\)) or a superposition (\(Y\)) basis. These measurements are illustrated in Fig. 3b and c, respectively. For measurements in the computational basis, both time-bins are routed by the UFOS along the lower path of the MZI, after which the two time bins split up at the FDC, and are then sent to detectors in the blue region. By measuring the arrival time, with respect to the herald detection, we can distinguish between the short and long time bins. To measure in a superposition basis, we use UFOS-MZI to send the time bins through the opposite paths of the interferometer \(\left(\left|S\right\rangle_{C}\right)\) takes the long path and \(\left|L\right\rangle_{C}\) takes the short path) so that they arrive at the FDC at the same time. In this case, interference occurs at the FDC, and detecting a photon at exiting the upper (lower) port corresponds to projecting the control qubit in \(\left|y+\right\rangle\) (\(\left|y-\right\rangle\)). With this in place, we collect the measurement statistics from different measurement settings by detecting coincidence events between the heralding photon and the FDC output or the circulator output. For each experimental configuration, we record \(\approx 1600\) coincidence counts (\(\approx 21,000\) total single photon counts) over \(10\) s at the FDC and circulator output. The photon source generates \(\approx 1,480,000\) single photons (\(\approx 116,000\) coincidence events) in \(10\) s before the experiment. Thus, our entire quantum SWITCH experiment has an overall insertion loss of \(\approx 18\) dB. All of our measurements are carried out with superconducting nanowire single photon detectors (SNSPD) from PhotonSpot Inc. The result, for a representative set of measurements, is shown in Fig. 5. Therein, the bars are the theory for an ideal quantum SWITCH with the control qubit in \(\left|y-\right\rangle\), described by the process matrix \(W^{y-}\), while the points are our experimentally measured data. Already, one can observe good agreement between theory and experiment. Using unitary operations (rather than measure and reprepare instruments), we can also play the anti-commuting/commuting gate discrimination game, as in Ref. [35]. We find a success probability of \(0.974\pm 0.018\), indicating a high-fidelity of our implementation (the full details of this measurement are presented in the Appendix B.1). ### Experimental Process Matrix Tomography We will now present our experimental reconstruction of the process matrix of our time-bin quantum SWITCH. Figure 5: **Experimentally Estimated Probabilities**. A small subset of the experimentally estimated probabilities. 
The bars are the theory for the ideal process matrix \(W^{y-}\), and the points are the experimental estimates. The upper (lower) panel shows measurements of the control qubit in the \(Z\) (\(Y\)) basis. The red bars are for outcomes \(\left|0\right\rangle\) and \(\left|y-\right\rangle\), and the blue for outcomes \(\left|1\right\rangle\) and \(\left|y+\right\rangle\). As discussed in Sec. II, to probe the underlying process, Alice and Bob must each implement a complete set of instruments. In our experiment, the target system is encoded in the polarization state of the photon, so Alice and Bob must act on this degree of freedom. Rather than the measurement-reperparation instruments defined in Eqs. 26 and 27, we use a slightly modified form \(\tilde{R}:=\tilde{R}_{i|(j,k)}\) presented in the Appendix Eqs. 10 and 11. In particular, Alice and Bob each have access to three different measurement bases \(\tilde{M}_{i|j}\) where \(j\in 1,2,3\) defines the measurement, and \(i\) defines the outcome. For each \(j\), \(\tilde{M}_{j}:=\tilde{M}_{1|j}-\tilde{M}_{2|j}\) is the observable associated with the POVM \(\{\tilde{M}_{1|j},\tilde{M}_{2|j}\}\). The specific operators we implement are defined in Eq. 12 and 13. For example, \(j=1\) corresponds to the \(Z\) basis: \(\tilde{M}_{1}:=\tilde{M}_{1|1}-\tilde{M}_{2|1}=Z\). Experimentally, we implement these measurements using a polarizer fixed to transmit horizontally polarized light \(|H\rangle\). We set the measurement basis using a quarter waveplate and a half waveplate before the polarizer to the angles given in Eq. 13. To implement the second part of the instrument--the repreparations--we must prepare one of four different states. We experimentally accomplish this using another quarter- and half-waveplate to rotate the photon's polarization if it is transmitted through the polarizer. This allows us to prepare one of the four \(\ket{\tilde{\phi}_{k}}\) states listed in Eq. 14. Thus, overall, both parties can implement the 24 different measurement-repreparation operators defined in Eqs. 10 and 11 (6 different measurement operators times 4 different repreparations) In addition to Alice and Bob's channels, we must send in a complete set of target states, and perform measurements on the control qubit after the SWITCH. To this end, we first prepare the target qubit in the four different input states \(\ket{\tilde{\psi}_{w}}\) given in Eq. 11. We set these states using the quarter-half waveplate pair mounted in the target preparation stage, shown in the orange area of Fig. 3(a) (the exact waveplate angles that we use are listed in Eq. 11). Finally, at the output of the SWITCH we must measure the state of the control qubit. This procedure is illustrated in Fig. 3 panels b) and c). As discussed in Sec. III.1, we use the same beamsplitter to measure and prepare the time-bin qubit, but from opposite directions. As a result, the phase of this measurement basis is fixed to the \(Y\) basis. In our notation in the Appendix Eqs. 9 and 10, this corresponds to a measurement \(\tilde{C}_{1|2}\) and \(\tilde{C}_{2|2}\). Experimentally, a \(\tilde{C}_{1|2}\) versus \(\tilde{C}_{2|2}\) result depends on which port of the FDC the photon exits. As described above, we can additionally measure in the \(Z\) basis by fixing the UFOS-MZI to the bar state on the return trip such that the short and long time-bins do not interfere at the TDC and observing the arrival time of the time-bins. 
If we find the photon arrives earlier, this is associated with an \(\tilde{C}_{1|1}\) detection event, while if it arrives later corresponds to \(\tilde{C}_{2|1}\). In order to be complete on the future control space, we would require an additional measurement of the control qubit in the \(X\) basis, i.e. we need the measurements \(M_{1|2}\) and \(M_{2|2}\) from Eq. 22. In our experiment, this could be achieved using a fast phase modulator to apply the appropriate phase between the short and long path only on the reverse direction. However, we do not implement this here. Instead, we impose an additional constraint on our tomographic reconstruction. We require that \(\mathrm{Tr}\left(W_{\mathrm{exp}}X^{F}\right)=0\); where \(X=|+\rangle\!\langle+|-|-\rangle\!\langle-|\). Given the passive phase stability of our experiment, this is a very good assumption. We verify this assumption, by comparing reconstructions with and without this constraint. In particular, we find that the fidelity between the process matrices reconstructed with and without this constrain is \(0.999982\), well below our experimental error. Overall, this results in \(24\times 24\times 4\times 4=9216\) setting operators of the form given in Eq. 10 (number of Alice's settings \(\times\) number of Bob's settings \(\times\) number of target states \(\times\) number of control measurements). However, for the control measurements, we have access to both ports of the FDC beamsplitter simultaneously (i.e there is a detector in each output port of the beamsplitter), giving rise to \(4608\) different experimental configurations. From these data we can calculate the probabilities for each given setting operator. Experimentally, we measure count rates associated with each setting operator, which we must then normalize to convert into the required probabilities. To do so, we make use of the normalization condition over the outcomes of all three measurements \[\sum_{abc}p(abc|xyzw)=1. \tag{30}\] Thus, we define a normalization constant for every value of \(x\), \(y\), \(z\), and \(w\) \[N_{xyzw}=\sum_{abc}C(abc|xyzw), \tag{31}\] where \(C(abc|xyzw)\) are the number of coincidences measured between the heralding detector and the detectors after the FDC, corresponding to the setting operator defined by \(a\), \(b\), \(c\), \(x\), \(y\), \(z\), and \(w\). Then our experimentally estimated probabilities are defined as \[p_{\mathrm{exp}}(abc|xyzw)=\frac{C(abc|xyzw)}{N_{xyzw}}. \tag{32}\] A small subset of the resulting probabilities are plotted in Fig. 5. Then, by minimizing Eq. 29 (with the setting operators \(S_{abc|xyzw}\) replaced by the experimental setting operators \(\tilde{S}_{abc|xyzw}\) from Eq. 10) we can reconstruct the process matrix \(W_{\mathrm{exp}}\). Our MATLAB code implementing this minimization is available at [79]. ## IV Results ### Fidelity The experimentally obtained \(64\times 64\) process matrix and the ideal matrix are plotted in Fig. 6 as a 3D-bar chart, where panel a) shows the real part, and panel b) the imaginary part. The solid bars are the experimentally reconstructed process matrix, while the transparent bars are the theoretical process matrix \(W_{s}^{y-}\). The x and y axis numerically label the basis elements. The relatively close agreement between the target process matrix and our experimental process matrix is already evident in this figure. 
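As an aside on the reconstruction step just described, the sketch below shows how the count normalisation of Eq. (32) and the least-absolute-residuals fit of Eq. (29) can be prototyped in Python with cvxpy. It is deliberately simplified: only positive semidefiniteness and a fixed trace are imposed on \(W\), whereas a faithful reconstruction must also enforce the affine causal-structure constraints (and, in our case, \(\mathrm{Tr}\left(W\,X^{F}\right)=0\)). The function and variable names are ours, and this is not the MATLAB code used for the results reported here [79].

```python
import numpy as np
import cvxpy as cp

def fit_process_matrix(counts, settings, dim, trace_value):
    """Least-absolute-residuals fit of a positive operator to measured counts.

    counts   : dict mapping a setting label to an array of raw counts, one per outcome
    settings : dict with the same keys, each a list of (dim x dim) setting operators
    NOTE: only W >= 0 and Tr(W) = trace_value are imposed; the affine constraints
    defining valid process matrices are omitted in this simplified sketch.
    """
    W = cp.Variable((dim, dim), hermitian=True)
    residuals = []
    for key, c in counts.items():
        p_exp = c / c.sum()                                   # Eq. (32)
        for prob, S in zip(p_exp, settings[key]):
            residuals.append(cp.abs(prob - cp.real(cp.trace(W @ S))))
    objective = cp.Minimize(cp.sum(cp.hstack(residuals)) / len(residuals))  # Eq. (29)
    constraints = [W >> 0, cp.real(cp.trace(W)) == trace_value]
    cp.Problem(objective, constraints).solve()
    return W.value

# Tiny demonstration: reconstruct a 2x2 operator from simulated Pauli-basis counts.
settings = {
    'Z': [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])],
    'X': [np.full((2, 2), 0.5), np.array([[0.5, -0.5], [-0.5, 0.5]])],
    'Y': [np.array([[0.5, -0.5j], [0.5j, 0.5]]), np.array([[0.5, 0.5j], [-0.5j, 0.5]])],
}
true_W = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
rng = np.random.default_rng(1)
counts = {k: rng.multinomial(2000, [np.trace(true_W @ S).real for S in Ss])
          for k, Ss in settings.items()}
print(np.round(fit_process_matrix(counts, settings, dim=2, trace_value=1.0), 3))
```

The same structure scales to the \(64\times 64\) case, with the toy settings replaced by the experimental setting operators \(\tilde{S}_{abc|xyzw}\) and the additional process-matrix constraints included.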
To further assess the agreement between our experiment and theory, we estimate the fidelity of the measured process matrix \(W_{\text{exp}}\) to the ideal matrix \(W_{s}^{y-}\). Since every valid process matrix normalized by its trace is a valid quantum state, we use the conventional expression for calculating the fidelity \(F(\sigma,\rho)=\text{Tr}\left(\sqrt{\sqrt{\sigma}\rho\sqrt{\sigma}}\right)\) with \(\sigma\) and \(\rho\) being different density matrices [70]. This results in a fidelity of \[F(W_{\text{exp}},W_{\text{SWITCH}})=0.920\pm 0.001\,, \tag{33}\] where the error arises is estimated using a Monte Carlo simulation of the entire reconstruction procedure accounting for finite measurement statistics and small waveplate errors of \(1^{\circ}\). Especially given the high-dimension of our process matrix, this fidelity indicates that our experiment is quite close to theory. To quantify the agreement between our experimental data and \(W_{\text{exp}}\) we compare the residuals of our fit \(r\) (defined in Eq. 29) to the average statistical error of our data \(\eta_{\text{stat}}\). The residuals \(r\) can be interpreted as the disagreement between the outcome predicted by \(W_{\text{exp}}\) and the measured experimental outcome, averaged over all measurement settings. For our fit \(r=0.0089\), indicating an excellent match to our experimental data. We estimate our statistical errors as follows. First, we treat the probability to obtain an outcome \(abc\) as a binomial variable: either we detect a photon or we do not. Then we estimate the variance of that setting as \(N_{xyzw}\)\(p(abc|xyzw)\times(1-p(abc|xyzw))\), where \(N_{xyzw}\) is the number of photons detected in all outcomes associated with \(xyzw\) (defined in Eq. 31). Finally, we compute the average error per setting as \[\eta_{\text{stat}}=\frac{1}{N_{\text{settings}}}\sum_{abc|xyzw}\frac{p(abc|xyzw)(1-p( abc|xyzw))}{\sqrt{C}}. \tag{34}\] This is simply the standard error of each setting operator averaged over all settings. Evaluating this for our data, we find \(\eta_{\text{stat}}=0.0056\). Given that \(\eta_{\text{stat}}\approx r\), we conclude that our process matrix fits our data well. ### Causal Non-Separability A bipartite process matrix without a common past is causally non-separable when it cannot be written as a classical mixture of causally ordered processes [13, 6]. When considering bipartite processes with a common past, such as the quantum switch considered in this work, there are different non-equivalent definitions of indefinite causality. In Ref. [13, 43], a bipartite process matrix \(W\in\mathcal{L}\left(\mathcal{H}_{P}\otimes\mathcal{H}_{A_{\text{in}}} \otimes\mathcal{H}_{A_{\text{out}}}\otimes\mathcal{H}_{B_{\text{in}}}\otimes \mathcal{H}_{B_{\text{out}}}\otimes\mathcal{H}_{F}\right)\) with common past and common future is said to be causally separable if it can be written as a convex sum of causally-ordered process matrices. That is, if we can write \[W=pW^{A>B}+(1-p)W^{B>A}, \tag{35}\] where \(p\in[0,1]\), and \(W^{A>B}\) and \(W^{B>A}\) are causally-ordered process (objects also referred to as quantum combs [4, 8], see Appendix C). Alternatively, Ref. [11] proposes the concept of extensible causally-separable processes. This leads to a definition which differs from the one in Eq. (35), but is equivalent to the definition of causal-separability presented in Ref. [84], which considers not only convex mixtures of causally-ordered processes, but also incoherent (hence, classical) control of causal order. 
Figure 6: **Process tomography data.** This figure shows the experimentally reconstructed process matrix of the quantum SWITCH. **a)** represents the real part of \(W_{\text{SWITCH}}\) and **b)** the imaginary part. The color gradient along the x-axis does not have a physical meaning; rather, it is color coded in order to identify the individual elements of this \(64\times 64\) matrix better. Additionally, the ideal process matrix is represented via the layer of semi-transparent bars. Calculating the fidelity between these two process matrices results in \(F(W_{\text{exp}},W_{s}^{y-})=0.920\pm 0.001\).

The analysis and the numbers presented in this section and in the main part of this paper were obtained via the definition of [43], which is the one presented in Eq. (35). However, we stress that the results of our work are not qualitatively affected by the different aforementioned definitions, in the sense that, in all cases, the process we obtain after tomography is not causally separable, and it is robust against different kinds of noise. In Appendix C, we present a more detailed discussion of such definitions and how they make small quantitative changes in the numbers presented here. One method to quantify the degree to which our quantum process is causally non-separable is by using a causal witness. A causal witness is a Hermitian operator \(G\) such that \(\operatorname{Tr}\left(GW_{\text{sep}}\right)\geq 0\) for all causally-separable processes \(W_{\text{sep}}\). However, for every causally-non-separable process \(W\) (such as the quantum SWITCH) one can always find a witness \(G\) such that \(\operatorname{Tr}\left(GW\right)<0\). Without additional constraints, the quantity \(\operatorname{Tr}\left(GW\right)\) does not have a physical meaning, and may be artificially inflated by multiplying the witness \(G\) by some constant. However, by setting additional normalisation constraints on the witness \(G\), one may identify the quantity \(\operatorname{Tr}\left(GW\right)\) with how much noise the process \(W\) can tolerate until it becomes causally separable. More concretely, let \(\mathds{1}_{W}:=\frac{\mathds{1}}{d_{P}d_{A_{O}}d_{B_{O}}}\) be the "white noise process", which simply consists of discarding everything and outputting white noise, and let \(W_{\text{sep}}\) be an arbitrary causally-separable process matrix. Ref. [13] shows that the problem \[\min\operatorname{Tr}(GW), \tag{36}\] \[\text{s.t.}\ \operatorname{Tr}\left(GW_{\text{sep}}\right)\geq 0,\quad\forall W_{\text{sep}} \tag{37}\] \[\operatorname{Tr}(G)\leq\operatorname{Tr}(\mathds{1}_{W}) \tag{38}\] is equivalent to its dual formulation \[\min-r, \tag{39}\] \[\text{s.t.}\ \frac{W+r\mathds{1}_{W}}{1+r}\text{ is causally separable }. \tag{40}\] Hence, under the normalisation constraint \(\operatorname{Tr}(G)\leq\operatorname{Tr}(\mathds{1}_{W})\), we ensure the identity \(\operatorname{Tr}(GW)=-r\), which allows us to interpret \(\operatorname{Tr}(GW)\) as how robust \(W\) is against white noise. Alternatively, one may also consider the normalisation \(\operatorname{Tr}(G\Omega)\leq 1\), where \(\Omega\) is an arbitrary process matrix. In this case, the equivalent problem reads as \[\min-r, \tag{41}\] \[\text{s.t.}\ \frac{W+r\Omega}{1+r}\text{ is causally separable }, \tag{42}\] where \(\Omega\) is an arbitrary process matrix.
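The dual formulations in Eqs. 39-42 suggest a simple way to picture these robustnesses: mix the process with a noise process and ask how much noise is needed before the mixture becomes causally separable. The bisection sketch below is purely schematic; `is_causally_separable` is a hypothetical placeholder for a membership test of the causally separable set, which in practice is itself a semidefinite program (our actual implementation is the MATLAB SDP code available at [79]).

```python
import numpy as np

def is_causally_separable(w):
    """Hypothetical oracle: decides membership in the causally separable set
    of Eq. (35). In practice this is a semidefinite feasibility problem;
    here it is only a placeholder."""
    raise NotImplementedError

def noise_robustness(w, noise_process, r_max=10.0, tol=1e-4):
    """Bisect on r such that (W + r*Omega)/(1 + r) becomes causally separable,
    following Eqs. (39)-(42). With Omega = 1_W this is the white-noise
    robustness; a generic process matrix Omega gives the generalised robustness."""
    lo, hi = 0.0, r_max
    while hi - lo > tol:
        r = 0.5 * (lo + hi)
        mixture = (w + r * noise_process) / (1.0 + r)
        if is_causally_separable(mixture):
            hi = r   # enough noise added: robustness is at most r
        else:
            lo = r   # still causally non-separable: more noise needed
    return 0.5 * (lo + hi)

def white_noise_process(dim, d_P, d_AO, d_BO):
    """The white noise process 1_W = identity / (d_P * d_A_out * d_B_out)."""
    return np.eye(dim) / (d_P * d_AO * d_BO)
```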
In this case, the value \(\operatorname{Tr}(GW)=-r\) is typically called the "generalised robustness"; it may be viewed as the amount of noise one needs to add to \(W\) to make it causally separable in the worst case scenario. In order for the witness \(G\) to fit the setting operators implemented in our experiment, we impose an additional structure on the witness \(G\) which is given by \[G=\sum_{abcxwyz}\alpha_{a,b,c,x,y,z,w}S_{abc|xyzw}, \tag{43}\] where \(\alpha_{a,b,c,x,y,z,w}\) are arbitrary real numbers and \(S_{abc|xyzw}\) are the setting operators of our experiment (see Sec. II.5). Additionally, for fixed setting operators, finding the maximal violation of a witness \(G\) with the normalization constraints related to white and generalised noise is a Semidefinite Program (SDP) [13], and can be efficiently solved numerically [85]. Our code doing so is also available at [79]. With these tools, we can construct a variety of witnesses. First, we can construct witnesses using the complete measurement set (Eq. 28) or our restricted measurement set (Eq. 111). We can further design witnesses for two different process matrices \(W_{y}^{y-}\) or \(W_{\text{exp}}\). This results in four witnesses: \(G_{\text{y-,all}}\), \(G_{\text{y-, res}}\), \(G_{\text{exp,all}}\) \begin{table} \begin{tabular}{|c||c|c|c|c|} \hline & \(G_{\text{y-,res}}^{\text{WN}}\) & \(G_{\text{exp,res}}^{\text{WN}}\) & \(G_{\text{y-,all}}^{\text{WN}}\) & \(G_{\text{exp,all}}^{\text{WN}}\) \\ \hline \hline \(W_{y}^{y-}\) & \(-2.296\) & \(-2.174\) & \(-2.767\) & \(-2.624\) \\ \hline \(W_{\text{exp}}\) & \(-1.64\pm 0.02\) & \(-1.76\pm 0.01\) & \(-1.96\pm 0.02\) & \(-2.112\pm 0.02\) \\ \hline \end{tabular} \end{table} Table 2: **White Noise Witness Analysis.** A summary of the different white noise witnesses constructed. The witnesses \(G_{i,j}\) and process matrix labels are labelled by two subscripts described in the caption of Tab. 1. \begin{table} \begin{tabular}{|c||c|c|c|c|} \hline & \(G_{\text{y-,res}}^{\text{GR}}\) & \(G_{\text{exp,res}}^{\text{GR}}\) & \(G_{\text{y-,all}}^{\text{GR}}\) & \(G_{\text{exp,all}}^{\text{GR}}\) \\ \hline \hline \(W_{y}^{y-}\) & \(-0.5834\) & \(-0.5525\) & \(-0.5834\) & \(-0.5512\) \\ \hline \(W_{\text{exp}}\) & \(-0.387\pm 0.003\) & \(-0.431\pm 0.003\) & \(-0.370\pm 0.003\) & \(-0.431\pm 0.002\) \\ \hline \end{tabular} \end{table} Table 1: **Generalized Robustness Witness Analysis.** A summary of the different generalized robustness witnesses constructed. The witnesses \(G_{i,j}\) are labelled by two subscripts. The first indicates if the witness was designed for the ideal process matrix \(W^{y-}\) (subscript “\(y-\)”) or the experimental process matrix \(W_{\text{exp}}\) (subscript “exp”). The second subscript indicates if the restricted measurement set (subscript ”res”) or the complete measurement set (subscript “all”) was used for the witness. The first row shows the value of the witness for \(W^{y-}\), and the second row shows the experimental values, which were evaluated as \(\operatorname{Tr}[G_{i,j}W_{\text{exp}}]\). \(G_{\text{exp, res}}\). Where the subscript \(y-\) (exp) indicates that the witness was optimized for the ideal (experimental) process matrix, and the subscript all (res) indicates that the witness was computed using the complete (restricted) measurement set. We can then further construct witness for either the generalized or white noise robustness. The results of the generalized robustness witnesses are summarized in Tab. 1. The first row of Tab. 
1 shows the value of the four witnesses evaluated using \(W_{s}^{y-}\), and the second row shows the experimental values, estimated using \(W_{\text{exp}}\). In this case, we see that \(\text{Tr}\big{(}W^{y-}G_{\text{y-,-}\text{res}}^{\text{GR}}\big{)}\approx\text {Tr}\Big{(}W^{y-}G_{\text{y-,all}}^{\text{GR}}\Big{)}\) and \(\text{Tr}\big{(}W^{\text{exp}}G_{\text{exp,res}}^{\text{GR}}\big{)}\approx \text{Tr}\Big{(}W^{\text{exp}}G_{\text{exp,all}}^{\text{GR}}\Big{)}\). In other words, the generalized robustness evaluated either with the complete or restricted setting operators is equal within experimental error. Evidently, the additional measurement on the future control system does not affect the generalized robustness. More interesting for the generalized robustness witnesses, is the performance of the witnesses optimized for our experimental process matrix \(W_{\text{exp}}\). Examining the performance of our experimentally estimated witnesses (the bottom row of Tab. 1), we see that the witnesses designed specifically for our experimental process matrix increase the generalized robustness. In particular, \(\text{Tr}\Big{(}W_{\text{exp}}G_{\text{exp,all}}^{\text{GR}}\Big{)}>\text{Tr} \Big{(}W_{\text{exp}}G_{\text{y-,all}}^{\text{GR}}\Big{)}\) and \(\text{Tr}\big{(}W_{\text{exp}}G_{\text{exp,res}}^{\text{GR}}\big{)}>\text{Tr} \big{(}W_{\text{exp}}G_{\text{y-,res}}^{\text{GR}}\big{)}\). This would not be readily possible without performing process matrix tomography. In Tab. 2 we summarize the results of our white noise witness analysis. In this case, we see a significant difference between the witnesses constructed with the restricted and complete measurement sets, with the complete measurements sets revealing a higher white noise robustness in all cases. Furthermore, in the second row, we can see that each step progressively improves the experimental white noise robustness. The first entry \(\text{Tr}\big{(}G_{\text{y-,res}}^{\text{WN}}W_{\text{exp}}\big{)}=-1.65\pm 0.02\) shows the value that one would obtain for our setup without performing process matrix tomography. i.e. the witness was designed for the ideal process and uses only the experimentally implementable measurement settings. In the next column, \(\text{Tr}\big{(}G_{\text{exp,res}}^{\text{WN}}W_{\text{exp}}\big{)}=-1.76\pm 0.01\) is improved by tailoring the witness for our experiment; however, still using only experimentally implementable measurements. In the next two columns, we improve both of these values further by computing the witness assuming the complete measurement set. The final entry, \(\text{Tr}\Big{(}G_{\text{exp,all}}^{\text{WN}}W_{\text{exp}}\Big{)}=-2.11\pm 0.02\) is significantly higher than the first entry, clearly showing the power of full process matrix tomography. Process matrix tomography allows us to compute properties of the experimental process, without having direct experimental access to them and we can precisely tailor our analysis to our experimental conditions. In the Appendix tables 3 and 4, we show the same analysis, for the alternative definition of causal non-separability. The trends observed therein are the same, although the absolute values of the robustnesses are smaller. ### Worst-Case Process Tomography For the process tomography results presented in Sec. 4, we found the process matrix that fit to our data best, by minimizing Eq. 29. As discussed there, this resulted in a process matrix that describes our data very well. 
However, one could ask "Are there other causally-separable process matrices that describe the data almost as well?" To answer this question, we perform a "worst-case" version of our process matrix tomography. To do so, rather than minimizing expression Eq. 29, we find the process matrix that maximizes the generalized or white noise robustness, \(\text{Tr}\Big{(}W_{\text{worst}}G_{\text{j,all}}^{\text{GR}}\Big{)}\) or \(\text{Tr}\Big{(}W_{\text{worst}}G_{\text{j,all}}^{\text{WN}}\Big{)}\), respectively. We do this using the witnesses designed for the ideal process matrix and the originally reconstructed experimental process matrix, but always with the complete measurement sets. This maximisation is subject to the constraints that \(W_{\text{worst}}\) is a physical process matrix and that the predictions of \(W_{\text{worst}}\) match the experiment within some error \(\epsilon\): \[\sum_{abcxyzw}\left|\mathrm{Tr}\big{(}S_{abc|xyzw}W\big{)}-p_{\mathrm{exp}}(abc|xyzw)\right|\leq\epsilon. \tag{44}\] Here \(\epsilon\) is closely related to the residuals defined in Eq. 29. In particular, if \(\epsilon<r\) the maximization will fail, as there is no physical process matrix compatible with this constraint. Thus, we perform worst-case process tomography for the four witnesses discussed above starting from \(\epsilon=r_{\mathrm{exp}}=0.0089\) and increasing to \(\epsilon=0.015\). The results of this analysis are plotted in Fig. 7a and b.

Figure 7: **Worst-case process tomography.** Plots of the worst-case generalized robustness (a) and white noise robustness (b) causal witnesses versus the allowed deviation from the experimental data. For these plots, the tomography routine attempted to find a process matrix which maximized the witness (i.e. it searched for the "most causally separable" process matrix), while still agreeing with our experimental data with an average error of \(\epsilon\). For all witnesses considered, the process which minimizes \(\epsilon\) is also the most causally non-separable. For \(\epsilon<0.0089\) no valid process matrix is found.

We find that the generalized robustness witnesses are more tolerant to \(\epsilon\), finding that our data are only consistent with causally non-separable process matrices for \(\epsilon\lesssim 0.012\), while the white noise witnesses require \(\epsilon\lesssim 0.0105\). Although this analysis suggests that the causal non-separability is rather fragile, we stress the worst-case nature of this treatment: if a single causally separable process matrix is compatible with our data within \(\epsilon\), it will be returned, even if a causally non-separable process matrix fits our data better. In any case, we see that for a range of experimentally relevant errors our data are only compatible with a causally non-separable process matrix.

## V Discussion

In this work, we have presented a protocol to perform process tomography on a higher-order quantum operation, the quantum SWITCH. We discussed how to construct a complete set of measurements. The requirements for this go beyond standard quantum process tomography, wherein one must "only" send a complete set of input states through the process, and perform a complete set of measurements after the process. In particular, because HOQOs take quantum channels as inputs, we must also implement a complete set of quantum channels for each input channel. This can be achieved using measure-and-reprepare instruments.
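As a concrete illustration of this last point, the Choi matrix of a single measure-and-reprepare instrument element \(\rho\to\mathrm{Tr}(M_{a}\rho)\,\xi_{a}\) takes a simple product form. The snippet below uses one common (unnormalized, input-system-first) Choi convention; this ordering is a choice on our part rather than something fixed by the text.

```python
import numpy as np

def measure_and_reprepare_choi(povm_element, reprepared_state):
    """Choi matrix of the instrument element rho -> Tr(M_a rho) * xi_a.

    With the unnormalized convention J = sum_ij |i><j| (x) Lambda(|i><j|)
    and the input system written first, one finds J_a = M_a^T (x) xi_a.
    """
    return np.kron(povm_element.T, reprepared_state)

# Example: measure |0><0| on a qubit and reprepare |+><+|.
M0 = np.array([[1, 0], [0, 0]], dtype=complex)
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
J0 = measure_and_reprepare_choi(M0, plus)
print(J0.shape)  # (4, 4)
```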
Since this procedure scales even worse than standard process tomography, we implement it using a new phase-stable architecture of the quantum SWITCH, allowing for long integration times. Our photonic quantum SWITCH uses a time-bin qubit as the control system. By recombining time-bin qubits using a passively-stable interferometer, we were able to keep our experiment stable indefinitely. We believe this technique will be beneficial for various time-bin quantum information experiments and may even be scaleable to high dimensional time-bin qudits. This would enable the construction of a multi-party quantum SWITCH. The results of performing quantum process matrix tomography on our experiment show that we have indeed implemented a high-fidelity quantum SWITCH, with a fidelity of \(F=0.920\pm 0.001\). We then used our results to verify the causal non-separability of our experiment, designing causal witnesses specifically for our experimental process matrix. Finally, we implemented a worst-case process tomography, searching for a causally-separable process that could also describe our measurement. To find such a process, we had to allow for a \(\approx 1.5\) times larger disagreement between our measurements and our causally separable model. Although our protocol was presented for the quantum SWITCH, it could be adapted to general HOQOs in a straight-forward manner. In our present work, we performed an over complete set of measurements, but it should be possible to implement a reduced set of measurements by taking into account the constraints on the space of physical process matrices. We also point out that many complexity-reducing techniques from standard state and process tomography, including compressed sensing [86], shadow tomography [87], adaptive tomography [88], _etc._, should apply to our protocol equally well. But we leave these as topics for future work. ## Data availability All the data that are necessary to replicate, verify, falsify and/or reuse this research is available online at [79]. ## Acknowledgements This project was funded in whole, or in part, from the European Union's Horizon 2020 research and innovation programme under grant agreement No 820474 (UNIQORN) and No 899368) (EPIQUS), the Marie Sklodowska-Curie grant agreement No 956071 (AppQInfo), and the QuantERA II Programme under Grant Agreement No 1 6002 - N (PhoMentor); the Austrian Science Fund (FWF) through [F7113] (BeyondC), and [FG5] (Research Group 5); the AFOSR via FA8655-20-1-7030 (PhoQuGraph), and FA9550-21- 1-0355 (QTRUST); the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development and the Christian Doppler Research Association. For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
2302.12274
Machine Learning Microscopic Form of Nematic Order in twisted double-bilayer graphene
Modern scanning probe techniques, like scanning tunneling microscopy (STM), provide access to a large amount of data encoding the underlying physics of quantum matter. In this work, we analyze how convolutional neural networks (CNN) can be employed to learn effective theoretical models from STM data on correlated moir\'e superlattices. These engineered systems are particularly well suited for this task as their enhanced lattice constant provides unprecedented access to intra-unit-cell physics and their tunability allows for high-dimensional data sets within a single sample. Using electronic nematic order in twisted double-bilayer graphene (TDBG) as an example, we show that including correlations between the local density of states (LDOS) at different energies allows CNNs not only to learn the microscopic nematic order parameter, but also to distinguish it from heterostrain. These results demonstrate that neural networks constitute a powerful methodology for investigating the microscopic details of correlated phenomena in moir\'e systems and beyond.
João Augusto Sobral, Stefan Obernauer, Simon Turkel, Abhay N. Pasupathy, Mathias S. Scheurer
2023-02-23T19:00:04Z
http://arxiv.org/abs/2302.12274v1
# Machine Learning Microscopic Form of Nematic Order ###### Abstract Modern scanning probe techniques, like scanning tunneling microscopy (STM), provide access to a large amount of data encoding the underlying physics of quantum matter. In this work, we analyze how convolutional neural networks (CNN) can be employed to learn effective theoretical models from STM data on correlated moire superlattices. These engineered systems are particularly well suited for this task as their enhanced lattice constant provides unprecedented access to intra-unit-cell physics and their tunability allows for high-dimensional data sets within a single sample. Using electronic nematic order in twisted double-bilayer graphene (TDBG) as an example, we show that including correlations between the local density of states (LDOS) at different energies allows CNNs not only to learn the microscopic nematic order parameter, but also to distinguish it from heterostrain. These results demonstrate that neural networks constitute a powerful methodology for investigating the microscopic details of correlated phenomena in moire systems and beyond. ## I Introduction Driven by the impressive improvements in machine learning (ML) in the last couple of years, exploring its potential for quantum many-body physics has recently become subject of intense research [1; 2]. For instance, ML provides powerful tools to solve inverse problems that occur frequently in physics [3; 4; 5; 6]: given a model, it is often straightforward with conventional many-body techniques to compute observables that can be measured experimentally, whereas the often needed inverse problem of extracting the model and underlying microscopic physics from observations is much more challenging and typically even formally ill-defined. A second example of a large class of applications of ML in physics is ML-assisted analysis of experiments, in particular of those yielding image-like data like scanning tunneling microscopy (STM) [7; 8; 9; 10], photoemission [11], and others [12; 13; 14; 15; 16; 17]. In the context of applying ML algorithms to data from imaging techniques like STM, van der Waals moire superlattices [18; 19] are particularly promising for three reasons: (i) they display a huge variety of correlated quantum-many-body phenomena, such as interaction-induced insulating phases [20], magnetism [21], superconductivity [22], electronic nematic order [23; 24; 25; 26], which can also coexist microscopically [27; 26]. Despite intense research on these phenomena over several decades, e.g., in the pnictides or cuprates, their origin and relations are still subject of ongoing debates. However, compared to these microscopic crystalline quantum materials, moire superlattices are (ii) highly tunable; for instance, the density of carriers can be varied within a single sample just by applying a gate voltage (as opposed to chemical doping) and even the interactions can be tuned [28]. This allows producing large data sets of measurements on a single sample, containing a lot of information on the microscopic physics. This aspect, which is crucial for data-driven approaches, is further enhanced by (iii) the large moire unit cells of these systems compared to that of microscopic crystals, increasing the relative spatial resolution of scanning-probe techniques significantly. This enables experiments to probe the structure of the wave functions within the unit cell, and thus provides unprecedented access to the microscopic physics compared to conventional quantum materials. 
For instance, in the extreme limit of only one degree of freedom (Wannier state or pixel) per unit cell, the broken rotational symmetry of the electron liquid--the defining property of electronic nematic order [29; 30]--is not visible as a consequence of translational symmetry and thus requires a careful analysis of the behavior around impurities [31]. In this work, we explore these advantages of moire superlattices for extracting or 'learning' effective field-theoretical descriptions of their correlated many-body physics from STM data. This can be viewed as an inverse problem and is also conceptually related to the goal of 'Hamiltonian learning' in quantum simulation [32; 33; 34; 35; 36; 37], albeit in rather different regimes and based on different measurement schemes. As a concrete example, we use electronic nematic order in twisted double-bilayer graphene (TDBG) [38; 39; 40; 41; 42; 43; 44]. This moire system consists of two AB-stacked bilayers of graphene that are twisted against each other; as one can see in Fig. 1(a), it exhibits the point group \(D_{3}\), generated by three-fold rotation \(C_{3}\) along the out-of-plane \(z\) axis and two-fold rotation \(C_{2x}\) along the in-plane \(x\) axis. Evidence of electronic nematic order has been observed in previous STM experiments [45; 41] which clearly exhibit stripe-like features breaking the \(C_{3}\) symmetry spontaneously for certain electron concentrations. While simple limiting cases have been compared with the data in Ref. [45], there is no systematic analysis of the microscopic form of nematicity in the system. To fill this gap, we consider the more general case in which all leading terms on the graphene and moire scale describing nematic order in a continuum-model description of TDBG are included. In addition, as it is common in graphene moire systems [46; 23; 24; 25; 41], we also allow for finite strain. The Hamiltonian defining the changes in TDBG resulting from nematic order and strain depends on a set of parameters \(\beta\), which we reconstruct from STM data using convolutional neural networks (CNN) in a supervised learning procedure. As such, our study differs significantly from recent works, which focused on detecting the presence or absence of nematic order [31] or performed a phenomenological data analysis of STM measurements [47] with ML, rather than extracting the underlying microscopic physics as we do here. ## II Results ### Nematic order in TDBG The non-interacting band structure of TDBG features two moire minibands per spin and valley close to charge neutrality, where a variety of correlation-driven phenomena can emerge [38; 39; 40; 41; 42; 43; 44]. In Fig. 1(b), these minibands are denoted as valence (VFB) and conduction flat bands (CFB). The band structure shown is obtained from continuum model calculations close to half filling of the CFB (band filling \(\nu=0.475\)), where electronic nematic order was observed to be the strongest [41], see Appendix A for more details. STM experiments probe the band structure and wave functions of a system by providing direct access to the spatial and energy dependence of the LDOS. Most commonly, the LDOS is studied either for a fixed position \(\mathbf{r}_{0}\) over a range of different energies, \(\mathcal{D}_{\mathbf{r}_{0}}(\omega)\), or for a fixed energy \(\omega_{0}\) covering a spatial region of the system, \(\mathcal{D}_{\omega_{0}}(\mathbf{r})\). 
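Both ways of slicing the LDOS can be generated from the same eigenstate data. The following generic, model-agnostic sketch (not the continuum-model code used for Fig. 1) produces either a fixed-energy map \(\mathcal{D}_{\omega_{0}}(\mathbf{r})\) or a fixed-position spectrum \(\mathcal{D}_{\mathbf{r}_{0}}(\omega)\), with the delta function replaced by a Lorentzian of width \(\eta\).

```python
import numpy as np

def ldos(energies, wavefunctions, omega, eta=1e-3):
    """LDOS D(r, omega) = sum_n |psi_n(r)|^2 * L_eta(omega - E_n).

    energies:      array of shape (n_states,)
    wavefunctions: array of shape (n_states, n_sites) holding psi_n(r)
    Returns an array of shape (n_sites,) for the chosen energy omega.
    """
    lorentz = (eta / np.pi) / ((omega - energies) ** 2 + eta ** 2)  # (n_states,)
    weights = np.abs(wavefunctions) ** 2                            # (n_states, n_sites)
    return lorentz @ weights

# D_{omega_0}(r): fixed-energy map over all positions -> ldos(E, psi, omega_0)
# D_{r_0}(omega): fixed-position spectrum, obtained by sweeping omega:
#   spectrum = np.array([ldos(E, psi, w)[site_index] for w in omega_grid])
```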
The behavior of \(\mathcal{D}_{\omega_{0}}(\mathbf{r})\) and \(\mathcal{D}_{\mathbf{r}_{0}}(\omega)\) following from the continuum model for TDBG for three different energies and high-symmetry positions in the moire unit cell is shown in Fig. 1(c). The \(C_{3}\) rotational and translational symmetry of the moire lattice can be clearly seen in \(\mathcal{D}_{\omega_{0}}(\mathbf{r})\). Meanwhile, \(C_{2x}\) is broken, albeit weakly, as a consequence of the electric field required to control the electron filling to be close to the middle of the CFB in an open-faced STM sample geometry [41]. In graphene moire systems, there are two fundamentally distinct sources of \(C_{3}\) symmetry breaking--strain and electronic nematic order. Postponing the discussion of the former to below, electronic nematic order [29; 30] refers to the spontaneous rotational symmetry breaking as a result of electronic correlations. While recent works also indicate the possibility of nematic charge-density wave states in TDBG [42; 48], where moire translational symmetry is simultaneously broken, we here focus on translationally symmetric nematic order since the STM data of Ref. [41] preserves moire translations. The underlying nematic order parameter we study is a time-reversal- and moire-translation-invariant vector \(\mathbf{\Phi}=\Phi\mathbf{\hat{\Phi}}_{\varphi}\), \(\mathbf{\hat{\Phi}}_{\varphi}=(\cos 2\varphi,\sin 2\varphi)\), transforming under the irreducible representation \(E\) of \(D_{3}\) (or of \(C_{3}\), taking into account the weak \(C_{2x}\) breaking); \(\Phi\) and \(\varphi\) stand for the intensity and orientation of the nematic director, respectively. The microscopic form of nematicity can be modeled by a coupling of \(\mathbf{\Phi}\) to a fermionic bilinear and reads in its most general form in a continuum-model description as [45] \[\begin{split}&\mathcal{H}_{\mathbf{\Phi}}=\int_{\mathbf{r}}\int_{\Delta \mathbf{r}}\mathbf{\Phi}\cdot\mathbf{\phi}_{\sigma,\ell,s,\eta;\;\sigma^{\prime},\ell^{ \prime},s^{\prime},\eta^{\prime}}\left(\mathbf{r},\Delta\mathbf{r}\right)\\ &\times c^{\dagger}_{\sigma,\ell,s,\eta}\left(\mathbf{r}+\Delta\mathbf{r} \right)c_{\sigma^{\prime},\ell^{\prime},s^{\prime},\eta^{\prime}}(\mathbf{r})+ \text{H.c.},\end{split} \tag{1}\] where \(c^{\dagger}\) and \(c\) are the electronic creation and annihilation operators. This general form encompasses couplings Figure 1: (a) Representation in real space of the TDBG heterostructure. Green highlighted domains emphasize the emerging moiré pattern due to the combination of two AB-stacks of graphene bilayers with a relative twist angle \(\theta\). (b) Band structure at \(\theta=1.05^{\circ}\) along highly symmetrical points from the moire Brillouin zone (inset). Solid lines represent conduction and valence flat bands (CFB/VFB) as well as remote bands (R) with valley \(\eta=+\). The chemical potential corresponds to roughly a half-filling fraction (\(\nu=0.475\)) of the CFB. (c) LDOS for three fixed energies [black dotted lines in (b)] as a function of position (top), and for varying energy at fixed high-symmetry positions (bottom) in the moire unit cell (white rhombus). (d) Schematic real-space illustration of two limiting cases of graphene and moire nematicity, along with two sample LDOS plots at a fixed energy in the VFB; both show clear \(C_{3}\) breaking. 
between the two sublattices \(s=A,B\) of the microscopic graphene sheets, the four graphene layers \(\ell=1,\dots,4\), the valley \(\eta=\pm\) and spin \(\sigma=\uparrow,\downarrow\) degrees of freedom in the tensorial form factor \(\mathbf{\phi}_{\sigma,\ell,s,\eta;\,\sigma^{\prime},\ell^{\prime},s^{\prime},\eta^{ \prime}}(\mathbf{r},\Delta\mathbf{r})\); its two components are required to transform in the same way as \(\mathbf{\Phi}\) under all symmetries of the system. In the following, we will take \(\mathbf{\phi}\) to be trivial in the spin and diagonal in the valley indices, \(\mathbf{\phi}_{\sigma,\ell,s,\eta;\,\sigma^{\prime},\ell^{\prime},s^{\prime},\eta^ {\prime}}=\delta_{\sigma,\sigma^{\prime}}\delta_{\eta,\eta^{\prime}}\mathbf{\phi}_{ \ell,s;\,\ell^{\prime},s^{\prime}}(\eta)\). This is motivated by the weak spin-orbit coupling in graphene [49, 50] and the lack of indications of interaction-induced spin-orbit coupling, which is also strongly constrained [51]. Furthermore, the intervalley-coherent nematicity is known to lead to stronger effects on the remote bands [45] that were not observed experimentally [41]. Since we are working with a continuum theory, the space of possible couplings \(\mathbf{\phi}\) in Eq. (1) is technically infinite dimensional. As such, a complete reconstruction of \(\mathbf{\phi}\) from experimental data is impossible given the finite resolution and energy range of the available data. On top of this, it is not required either as we are primarily interested in understanding the low-energy behavior of the system. In the spirit of gradient expansions commonly used in continuum low-energy field theories, we will therefore only keep the leading terms in \(\mathbf{\Phi}\). There is, however, a subtlety associated with the presence of an additional moire length scale. We will therefore have to consider two basic classes of nematic orders, referred to as graphene (GN) and moire (MN) nematicity [41, 45]. In the case of MN, nematic order is associated with the moire scale, i.e., we choose \(\Delta\mathbf{r}=\mathbf{R}_{m_{1},m_{2}}=m_{1}\mathbf{L}_{1}^{M}+m_{2}\mathbf{L} _{2}^{M}\) in Eq. (1), \(m_{j}\in\mathbb{Z}\), with moire lattice vectors \(\mathbf{L}_{j}^{M}\), to represent the non-trivial transformation behavior of \(\mathbf{\phi}\) under \(C_{3}\). We can thus take it to be diagonal in the remaining internal indices, yielding \[\begin{split}\mathcal{H}_{\mathbf{\Phi}}^{\text{MN}}&= \frac{1}{2}\Phi_{\text{MN}}\int_{\mathbf{r}}\sum_{m_{1},m_{2}\in\mathbb{Z}}\hat{\bm {\Phi}}_{\varphi_{\text{MN}}}\cdot\mathbf{\phi}_{m_{1},m_{2}}(\mathbf{r})\\ &\times c_{\alpha}^{\dagger}(\mathbf{r}+\mathbf{R}_{m_{1},m_{2}})\,c_{ \alpha}(\mathbf{r})+\text{H.c.},\end{split} \tag{2}\] with multi-index \(\alpha=(\sigma,\ell,s,\eta)\). We further focus on the lowest moire-lattice harmonic by setting \(\phi_{m_{1},m_{2}}(\mathbf{r})=\phi_{m_{1},m_{2}}\) and only keeping the terms with the shortest possible \(\mathbf{R}_{m_{1},m_{2}}\). Intuitively, MN order can be thought of as a distortion of the effective inter-moire-unit-cell hopping matrix elements, as illustrated schematically in the lower right panel of Fig. 1(d). Conversely, GN acts as a local order parameter, \(\Delta\mathbf{r}=0\) in Eq. 
(1), without any explicit reference to the moire scale, \[\mathcal{H}_{\mathbf{\Phi}}^{\text{GN}}=\Phi_{\text{GN}}\int_{\mathbf{r}}\hat{\mathbf{ \Phi}}_{\varphi_{\text{GN}}}\cdot\mathbf{\phi}_{\ell,s;\ell^{\prime},s^{\prime}}( \eta;\mathbf{r})\,c_{\ell,s}^{\dagger}(\mathbf{r})c_{\ell^{\prime},s^{\prime}}(\mathbf{r}). \tag{3}\] Here, the correct transformation properties of \(\mathbf{\phi}\) result from its structure in the internal indices. Focusing on the local intra-layer contributions and the leading (constant) basis function, the most general form reads as \[\mathbf{\phi}_{\ell,s;\ell^{\prime},s^{\prime}}(\eta;\mathbf{r})=\delta_{\ell,\ell^{ \prime}}\psi_{\ell}\begin{pmatrix}(e^{i\alpha_{\ell}\eta\rho_{\varepsilon}} \rho_{x})_{ss^{\prime}}\\ \eta(e^{i\alpha_{\ell}\eta\rho_{\varepsilon}}\rho_{y})_{ss^{\prime}}\end{pmatrix}, \tag{4}\] where Pauli matrices in sublattice space are represented by \(\rho_{j}\); \(\alpha_{l}\) and \(\psi_{l}\) are real-valued parameters. As shown schematically in the upper left panel of Fig. 1(d), one can think of GN as the nematic distortion of the bonds of the individual graphene layers in a way that preserves the graphene translational symmetry. We emphasize that GN and MN should not be viewed as distinct phases; they break the same symmetries and as such in general mix. We thus take \(\mathcal{H}_{\mathbf{\Phi}}^{\text{MN}}+\mathcal{H}_{\mathbf{\Phi}}^{\text{GN}}\) to describe nematicity in TDBG in the following, which depends on the set of parameters \(\beta=\{\alpha_{\ell},\psi_{\ell},\Phi_{\text{MN}},\Phi_{\text{GN}},\varphi_{ \text{MN}},\varphi_{\text{GN}}\}\). The computation of the LDOS for a specific set of parameters can be done straightforwardly from the continuum model. The resulting spatial dependence of the LDOS, \(\mathcal{D}_{\omega_{0}}(\mathbf{r})\), is also shown in Fig. 1(d) for two different values of \(\beta\). As opposed to the plots without nematic order, \(C_{3}\) is now broken, leading to stripes in the VFB, while translational symmetry is still preserved. The inverse problem--inferring the value of the parameters \(\beta\) from a given LDOS pattern--is a much more challenging task. Our goal in the following sections will be to use ML, in particular, CNNs to learn the set \(\beta\) directly from LDOS images. ### ML architecture Using CNNs to solve this inverse problem can be interpreted as a supervised learning task [2], i.e., a regression-like procedure using synthetic LDOS data labeled by their respective value of nematicity parameters \(\beta\). More specifically, our CNNs take as inputs \(65\times 65\) pixels of LDOS images and apply consecutive transformations (represented by a set of weights between each layer) in order to extract meaningful correlations that represent the set \(\beta\). One example of the CNN image inputs is shown in Fig. 2(a). The complete dataset consists of 12000 images which are divided into training (60%), validation (20%) and test (20%) subgroups. Each image is generated for a randomly sampled set of nematic parameters \(\beta\) and the intensities in the LDOS are modified with the addition of Gaussian noise (see Appendix A). The motivation for noise is twofold: to avoid overfitting [52] and to test the stability against and performance of the procedure with noise, which is inevitably present in experimental data. The ML architecture chosen for this task is represented in Fig. 2(a) and its implementation was done with the TensorFlow library [53]. 
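Before detailing the network layers, a minimal sketch of how such a labeled training set could be assembled is given below. The 60/20/20 split, the \(65\times 65\) image size and the additive Gaussian noise follow the description above, whereas the specific noise amplitude and the routine generating the clean LDOS images are placeholders (the actual procedure is detailed in Appendix A).

```python
import numpy as np

def build_dataset(clean_images, labels, noise_level=0.05, seed=0):
    """Add Gaussian noise to synthetic LDOS images and split 60/20/20.

    clean_images: array of shape (N, 65, 65), one LDOS map per parameter set beta
    labels:       array of shape (N, n_params) with the corresponding beta values
    noise_level:  relative noise amplitude (illustrative value, see Appendix A)
    """
    rng = np.random.default_rng(seed)
    noisy = clean_images + noise_level * clean_images.std() * rng.standard_normal(clean_images.shape)

    n = len(noisy)
    perm = rng.permutation(n)
    n_train, n_val = int(0.6 * n), int(0.2 * n)
    idx_train = perm[:n_train]
    idx_val = perm[n_train:n_train + n_val]
    idx_test = perm[n_train + n_val:]
    return ((noisy[idx_train], labels[idx_train]),
            (noisy[idx_val], labels[idx_val]),
            (noisy[idx_test], labels[idx_test]))
```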
Each convolutional layer is followed by batch normalization and max pooling layers (Conv-Batch-MaxPool). The batch normalization layers normalize the input weights in each stage, and also reduce the number of epochs necessary for convergence [54]. This process is repeated four times, with the convolutional layers having a kernel size of \(3\times 3\) and strides set to 1. The filters follow a sequence of \(16-32-32-16\) with rectified linear unit (ReLU) activation functions [55]. Padding is set to zero such that the reduction of dimensionality is performed only by the MaxPool layers. In turn, these have both strides and pool sizes set to \(2\times 2\). After a Flatten stage, dense layers lead to a dropout before the final layer with filters equal to the number of parameters in \(\beta\). The Flatten layer transforms the data to a one-dimensional shape, and the Dropout reduces overfitting by setting a percentage of 20% adjusted weights to zero [56]. Tests on variations of this architecture are briefly described in Appendix B. The learning procedure is then defined by the minimization of the loss function with respect to the CNN's weights in a backward propagation procedure [57]. The loss function can be represented as the mean squared error (MSE), which is defined as the difference between the true and expected set of parameters \(\beta\) in \(\text{MSE}=\sum_{j}^{N}\left(\beta_{j}^{\text{true}}-\beta_{j}^{\text{ predicted}}\right)^{2}/N\), with \(N\) representing the number of samples in the test dataset. Finally, we consider the adaptive moment estimation (ADAM) for the minimization of the loss function, with a learning rate of 0.001 and batch size equal to 64 [58]. After the completion of the training stage, the algorithm is ready to be deployed to previously unseen data, returning as outputs the parameters \(\beta^{\text{predicted}}\). ### Orientation of nematic director As a first investigation, we consider the task of predicting the orientation \(\varphi\) of the nematic director from \(\mathcal{D}_{\omega_{0}}(\mathbf{r})\) images at a single energy in the VFB [\(\omega_{0}=-15\) meV, see Fig. 1(b)]. For this, we consider a dataset with randomly generated MN and GN intensities \(\Phi^{\text{MN}},\Phi^{\text{GN}}\in[0.001,0.1]\) eV, and \(\varphi^{\text{MN}}=\varphi^{\text{GN}}=\varphi\in[0,\pi]\). Furthermore, \(\psi_{l}=1\) and \(\alpha_{l}=0\) for all layers. The relation between the shape of the LDOS at a single energy \(\mathcal{D}_{\omega_{0}}(\mathbf{r})\) and \(\varphi\) is highly non-trivial for two reasons: even for a given form of nematicity, changing \(\varphi\) generically not just merely rotates the LDOS pattern, due to the lattice, but leads to complex distortions of its structure. Additionally, by sampling \(\mathcal{H}_{\Phi}^{\text{MN}}+\mathcal{H}_{\Phi}^{\text{GN}}\), even if the same bond direction is favored over the \(C_{3}\)-related ones in the LDOS pattern of two samples, the underlying \(\varphi\) can be rather different. As can be seen in the three sample LDOS plots in Fig. 2(b) with different values of \(\varphi\), the correspondence between \(\varphi\) and \(\mathcal{D}_{\omega_{0}}(\mathbf{r})\) is complex and not apparent to the human eye. Using the angles \(\varphi\) as labels to the data is the most straightforward choice, but leads to inaccurate predictions around 0 and \(\pi\) due to the periodicity in the definition of the nematic order parameter, \(\hat{\mathbf{\Phi}}_{\varphi}=(\cos 2\varphi,\sin 2\varphi)=\hat{\mathbf{\Phi}}_{ \varphi+\pi}\). 
To circumvent this feature, we use the two-component label \(\hat{\mathbf{\Phi}}_{\varphi}\) instead of \(\varphi\) in the training process and then fold the network's prediction back to \(\varphi\) with the \(arctan2\) function [59]. The results, shown in Fig. 2(b), are consistent with the true labels, including at the boundaries of \(\varphi\)'s domain. This shows that even when the precise nature of nematicity (predominantly MN or GN or an admixture of the two) is not known, the director orientation \(\varphi\) can be accurately predicted with our CNN setup from \(\mathcal{D}_{\omega_{0}}(\mathbf{r})\) at a single energy. We have checked that the few outliers in Fig. 2(b) are directly related to small nematic intensities, where \(\varphi\) has virtually no impact on the LDOS and is, thus, impossible to predict. Figure 2: (a) Schematic figure of the CNN architecture used with only one \(\mathcal{D}_{\omega_{0}}(\mathbf{r})\) input channel at an energy \(\omega_{0}\) in the VFB, see Sec. II.2 for details on the architecture and training dataset. (b) Comparison between true and predicted nematic director angles \(\varphi\). Three samples of \(\mathcal{D}_{\omega_{0}}(\mathbf{r})\) (star, pentagon and triangle) are displayed to emphasize that the relation between the LDOS and \(\varphi\) is highly non-trivial as a result of the presence of different forms of nematicity. ### Form of nematicity After successfully learning the director orientation \(\varphi\) in the presence of different nematicities, we proceed into investigating finer details of these couplings by learning the parameters \(\beta=\{\Phi^{\text{MN}},\Phi^{\text{GN}},\alpha_{l}\}\) defined in Eqs. (2-4). To this end, we consider \(\psi_{l}=1\) and \(\alpha_{l}=\alpha\) for all layers. For concreteness, we set \(\varphi^{\text{MN}}=\varphi^{\text{GN}}=\varphi=2\pi/3\), which is one of the possible discrete orientations (\(\varphi^{\text{MN}}=\varphi^{\text{GN}}=2\pi/3,\pi/6\) and symmetry related) of the nematic director in presence of \(C_{2x}\). The dataset now consists of randomly generated MN and GN intensities \(\Phi^{\text{MN}},\Phi^{\text{GN}}\in[0.001,0.1]\) eV, and \(\alpha\in[0,\pi]\). The intensity values are chosen such that the stripes in the VFB resemble the experimental results [41]. As with \(\varphi\), instead of learning the angular variable \(\alpha\) directly, the \(arctan2\) mapping from Section II.3 is applied. Using only the LDOS at a single energy (i.e. one \(\mathcal{D}_{\omega_{0}}(\mathbf{r})\) channel) in the ML architecture for this task does not produce accurate predictions. Additionally, both hyper-parameter optimization and architecture modifications did not lead to any significant improvement, implying that nematic order impacts the electronic structure in complex ways that cascade across energy scales. In fact, this is also intuitively clear since, for example, the samples marked by a star and pentagon in Fig. 3(a) have fundamentally different nematic couplings and yet exhibit visually similar \(\mathcal{D}_{\omega_{0}}(\mathbf{r})\) images at the VFB energy. In experiments, one can typically obtain single point spectra \([\mathcal{D}_{\mathbf{r}_{0}}(\omega)]\) and real space LDOS images at fixed energies \([\mathcal{D}_{\omega_{0}}(\mathbf{r})]\). We can therefore include additional input channels corresponding to \(\mathcal{D}_{\omega_{0}}(\mathbf{r})\) and \(\mathcal{D}_{\mathbf{r}_{0}}(\omega)\) for different energies \(\omega_{0}\) and points \(\mathbf{r}_{0}\), respectively. 
In the second case, the individual point spectra are transformed to scalogram images for consistency with the input data for CNNs [5; 60], see upper left inset in Fig. 3(a) and Appendix A. The new architecture is then formed by four channels with \(\mathcal{D}_{\omega_{0}}(\mathbf{r})\) inputs at fixed energies \(\omega_{0}=(-35,-15,1,23)\) meV within the flat and remote bands, such that they visually resemble the corresponding ones in the experimental data of Ref. [41], and three channels for \(\mathcal{D}_{\mathbf{r}_{0}}(\omega)\) scalogram inputs at stacking positions \(\mathbf{r}_{0}=\{\text{ABAB, BAAC, ABCA}\}\), cf. Fig. 1(c). Each channel is passed through parallel Conv-Batch-MaxPool layers as in Fig. 2(a), but instead of flattening each channel separately, they are concatenated to a Dense-Dropout stage before the last layer [Fig. 3(a)]. In Fig. 3(b-d), predictions on the test data set are represented for (b) \(\alpha\), and (c) the moire and (d) graphene nematic intensities; as can be seen, very good agreement is found between the reconstructed and true parameters. The outliers in \(\alpha\) are related to small \(\Phi_{\text{GN}}\) (brighter colors). From Eqs. (3) and (4), it is clear that for small \(\Phi_{\text{GN}}\), minimal changes will be induced in the LDOS, irrespective of the true value of the phase governed by \(\alpha\). This behavior is similar to what was observed for the outliers in the nematic director prediction. If \(\alpha\) is kept fixed, we observed (not shown) that the predictions for \(\Phi_{\text{GN}}\) and \(\Phi_{\text{MN}}\) become more accurate. The results of Fig. 3 demonstrate that the microscopic form of nematicity can be extracted from the LDOS if significant energy dependence is included in the input data set.

Figure 3: (a) CNN architecture used for learning the nematic microscopic parameters. Each 'Conv2D-MaxPool-Dense' channel refers to the structure from Fig. 2(a). (b) Predicted versus true \(\alpha\) parameter, with outliers (brighter colors) being related to small graphene nematic intensity \(\Phi_{\text{GN}}\). (c-d) Predicted versus true parameters for graphene and moire intensities, with colorbars representing the mean absolute error (MAE) in the intensities. Two examples (star and pentagon) indicate that two very different forms of nematicity can lead to very similar LDOS patterns at a single energy, making the inclusion of several channels necessary.

### Including strain

As already alluded to above, another possible source of \(C_{3}\) breaking is strain [61; 62; 63; 46], which is believed to be a ubiquitous property of graphene moire superlattices at small twist angles. Breaking the same symmetries as nematic order, strain can obscure the experimental identification of nematic order, and their precise interplay is still under debate [23; 24; 25; 64]. Experiments indicate [23; 24; 41; 46] that the most relevant form of strain in graphene superlattices such as twisted bilayer graphene or TDBG is uniaxial heterostrain. In this case, the matrices \(\mathcal{E}_{j}\) describing the in-plane metric deformation of the coordinates in the \(j\)th rotated bilayer are of the form \[\mathcal{E}_{2}=-\mathcal{E}_{1}=\frac{1}{2}R(\theta_{\epsilon})^{-1}\begin{pmatrix}-\epsilon&0\\ 0&\nu\epsilon\end{pmatrix}R(\theta_{\epsilon}). \tag{5}\] Here \(\nu=0.16\) is the Poisson ratio for graphene and \(R(\theta_{\epsilon})\) is the \(2\times 2\) matrix describing rotations of 2D vectors by angle \(\theta_{\epsilon}\).
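For reference, Eq. (5) can be transcribed directly into code. The snippet below builds the heterostrain matrices for the two bilayers and applies the standard coordinate deformation \(\mathbf{r}\to(\mathds{1}+\mathcal{E}_{j})\mathbf{r}\); \(\epsilon\) and \(\theta_{\epsilon}\) are free inputs, and nothing beyond Eq. (5) is assumed.

```python
import numpy as np

POISSON_NU = 0.16  # Poisson ratio of graphene

def rotation(theta):
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def heterostrain_matrices(epsilon, theta_eps, nu=POISSON_NU):
    """E_2 = -E_1 = (1/2) R(theta)^(-1) diag(-eps, nu*eps) R(theta), cf. Eq. (5)."""
    R = rotation(theta_eps)
    E2 = 0.5 * np.linalg.inv(R) @ np.diag([-epsilon, nu * epsilon]) @ R
    return -E2, E2  # (E_1, E_2)

def deform(coords, E):
    """Apply the in-plane deformation r -> (1 + E) r to an (N, 2) array of coordinates."""
    return coords @ (np.eye(2) + E).T

E1, E2 = heterostrain_matrices(epsilon=0.005, theta_eps=np.pi / 6)
```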
We see that uniaxial heterostrain is characterized by two variables, the strain intensity \(\epsilon\) and the direction of strain, parameterized by the angle \(\theta_{\epsilon}\). In the following, we allow for the simultaneous presence of uniaxial heterostrain and nematic order, leading to two additional parameters, \(\epsilon\) and \(\theta_{\epsilon}\), in \(\beta\). We will study whether our ML approach is still able to extract the microscopic form of nematicity and also learn the relative strength and direction of strain. Note that the form of nematicity is still given by Eqs. (2-4), with the only difference that we replace \(\mathbf{L}_{j}^{M}\) in the definition of \(\mathbf{R}_{m_{1},m_{2}}\) by the strained moire lattice vectors. The data set for this task is built with nematic intensities \(\Phi^{\text{MN}},\Phi^{\text{GN}}\in[0.001,0.1]\) eV, with the addition of strain parameters \(\epsilon\in[0,0.8]\) % and \(\theta_{\epsilon}\in[0,\pi/3]\). Here, \(\alpha_{l}=0\), \(\psi_{l}=1\) and \(\varphi=\varphi_{\text{MN}}=\varphi_{\text{GN}}=2\pi/3\). The domain for the strain intensities is chosen based on typical values observed in TBG [23], and for \(\theta_{\epsilon}\) on the periodicity of the unstrained system as \(\theta_{\epsilon}\to\theta_{\epsilon}+\pi/3\)[63]. The ML architecture employed in this section is the same as in the previous investigation [Fig. 3(a)]. In Fig. 4(a-d), predictions on the test data set are shown for \(\epsilon\) (a), \(\theta_{\epsilon}\) (b), and the nematic intensities (c-d). At first sight, the result for the strain angle in Fig. 4(b) looks as if the procedure ceased to work, since there are many data points where the true and predicted value of \(\theta_{\epsilon}\) differ significantly. However, when indicating the true strain intensity label \(\epsilon\) for each prediction, it becomes clear that the outliers are related to small values of \(\epsilon\) (brighter colors). As such, this behavior is not a shortcoming of the learning procedure but actually a feature of strain: for small enough \(\epsilon\) in Eq. (5), the angle \(\theta_{\epsilon}\) has no meaning. We have checked that removing the samples with small strain \(\epsilon\) from the training and test data set will lead to accurate predictions of \(\theta_{\epsilon}\). The stability that we find for our learning procedure in the presence of virtually vanishing \(\epsilon\) is, however, important when applying it to experimental data, where the strength of strain is unknown. Most importantly, we see in Fig. 4(c-d) that the nematic couplings can still be accurately predicted when varying strain is present. The MAE is equally distributed in these cases, in contrast to the strain intensity prediction. This shows that not only nematic order can be identified when strain is present, but also its internal structure and the strength of strain that is present at the same time can be resolved when using different channels consisting of both \(\mathcal{D}_{\mathbf{r}_{0}}(\omega)\) and \(\mathcal{D}_{\omega_{0}}(\mathbf{r})\) as inputs. This allows the networks to take into account correlations between different energies in the STM data, which in turn conveys the crucial microscopic physics, enabling the model to disambiguate between lattice and electronic effects. 
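The multi-channel architecture of Fig. 3(a) translates directly into the Keras functional API. The sketch below follows the hyperparameters stated in Sec. II.2 (Conv-Batch-MaxPool blocks with 16-32-32-16 filters, \(3\times 3\) kernels, \(2\times 2\) pooling, 20% dropout, MSE loss, ADAM with learning rate 0.001); the seven inputs correspond to the four fixed-energy LDOS maps and the three point-spectrum scalograms. The width of the dense layer, the 'same' (zero) padding, and merging the branches by concatenating their flattened features are our assumptions about details not spelled out above.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_branch(x):
    """Conv-Batch-MaxPool stack applied to one input channel, then flattened."""
    for filters in (16, 32, 32, 16):
        x = layers.Conv2D(filters, 3, strides=1, padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D(pool_size=2, strides=2)(x)
    return layers.Flatten()(x)

def build_multichannel_cnn(n_params, n_channels=7, image_shape=(65, 65, 1)):
    inputs = [layers.Input(shape=image_shape) for _ in range(n_channels)]
    features = layers.Concatenate()([conv_branch(inp) for inp in inputs])
    x = layers.Dense(64, activation="relu")(features)  # dense width: assumption
    x = layers.Dropout(0.2)(x)
    outputs = layers.Dense(n_params)(x)                # linear output for regression
    model = models.Model(inputs=inputs, outputs=outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss="mse")
    return model

# model = build_multichannel_cnn(n_params=len(beta))
# model.fit(x_train_channels, y_train, batch_size=64, ...)
```

For angular parameters such as \(\varphi\), \(\alpha\) or \(\theta_{\epsilon}\), the regression targets would be the corresponding two-component labels, e.g. \((\cos 2\varphi,\sin 2\varphi)\), which are folded back to an angle with arctan2 after prediction as described in Sec. II.3.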
Figure 4: Predicted versus true values for the strain intensity \(\epsilon\) (a) and angle \(\theta_{\epsilon}\) (b). The predictions for the nematic intensities are depicted in panels (c) and (d). The CNN architecture used to produce these results is described in Fig. 3(a). Similarly to the prediction of the \(\alpha\) parameter in the presence of only nematicity, outliers in \(\theta_{\epsilon}\) are related to small \(\epsilon\).

### Experimental data

After demonstrating the effectiveness of CNNs on learning microscopic parameters \(\{\beta_{i}\}\) from a synthetic (theoretical) data set \(D_{\text{th}}\left(\beta_{1},\cdots,\beta_{N_{\text{th}}}\right)\) with \(N_{\text{th}}\) samples, we now apply the trained ML architecture to predict the _a priori_ unknown sets of parameters \(\{\beta_{i}^{\prime}\}\) in an experimental data set \(D_{\text{exp}}\left(\beta_{1}^{\prime},\cdots,\beta_{N_{\text{exp}}}^{\prime}\right)\). For concreteness, we use the same synthetic training data set as in Appendix B, where only the nematic and strain intensities are predicted, i.e., \(\beta=\{\Phi_{\text{MN}},\Phi_{\text{GN}},\epsilon\}\). The data set \(D_{\text{exp}}\) consists of both scalograms \(\mathcal{D}_{\mathbf{r}_{0}}(\omega)\) and \(\mathcal{D}_{\omega_{0}}(\mathbf{r})\) maps for different fillings of the CFB (\(n_{s}\)). More details about the preprocessing of the experimental data \(D_{\text{exp}}\) can be found in Appendix C. In Fig. 5, predictions of the trained CNN for the set \(\{\beta_{i}^{\prime}\}\) show non-zero values of nematicity (a) and strain (b) for all fillings of the CFB. For \(n_{s}\geq 0.47\) (gray region), the experimental data show the most pronounced signatures of broken rotational symmetry to the human eye, which were previously interpreted as electronic nematic order [41; 45]. Here the CNN predicts MN to dominate over GN, although both are finite (as expected by symmetry). As can be seen in Fig. 5(c), the parameters
This is likely a consequence of subtle differences between the continuum model calculations and the experimental spectroscopy, which the network attempts to accommodate by including finite strain. ## III Discussion We constructed and demonstrated a ML procedure that can extract the form of the nematic order parameter in TDBG from LDOS data. The key ingredient was the use of several channels that capture the correlations among different energies. Our work has several important implications. First, it shows that the presence and even the strength and internal structure of nematic order can be extracted when the sample exhibits significant heterostrain; this is a crucial aspect for moire systems where the issue of distinguishing between nematicity and strain has been subject of debate. Second, our analysis also shows which type of STM data is needed and most useful to extract information about nematicity: as we have seen, the LDOS maps at a single energy, \(\mathcal{D}_{\omega_{0}}(\mathbf{r})\), are not enough to deduce the form of the nematic order parameter and--contrary to what one might have expected--point spectra, i.e., \(\mathcal{D}_{\mathbf{r}_{0}}(\omega)\), contain a lot of helpful complementary information for that task (see also the second model discussed in Appendix D). We emphasize that this form of'solid-state Hamiltonian learning', i.e., of parameterizing the leading terms of a set of microscopic order parameters (like nematic order) or perturbations (such as strain) and extracting their form using multi-channel CNNs can be more broadly applied--to other systems, see Appendix D where we discuss a toy model for twisted-bilayer graphene, and other forms of instabilities. As such, this could open up completely new ways of revealing the form and role of nematic order and other phases for the physics of quantum materials. ###### Acknowledgements. J.A.S. and M.S.S. acknowledge funding by the European Union (ERC-2021-STG, Project 101040651--SuperCorr). Views and opinions expressed are however those of the authors only and do not necessarily reflect those of the European Union or the European Research Council Executive Agency. Neither the European Union nor the granting authority can be held responsible for them. Salary support is also provided by the National Science Foundation via grant DMR-2004691 (S.T.) and by the Office of Basic Energy Sciences, Materials Sciences and Engineering Division, U.S. Department of Energy under Contract No. DE-SC0012704 (A.N.P.). J.A.S. is grateful for discussions with J. P. Valeriano, Sayan Figure 5: Predicted values from the trained CNN to nematic intensities (a) and strain strength (b) as a function of the filling of the CFB (\(n_{s}\)). The gray region (\(n_{s}\geq 0.47\)) indicates the fillings where the continuum model showed more resemblance to the experimental data obtained in Ref. [41]. In panel (c) the experimental \(\mathcal{D}_{\mathbf{r}_{0}}\left(\omega\right)\) channels for \(n_{s}=0.67\) are shown for comparison with the ones obtained from the continuum model with the parameters \(\beta_{\rm exp}=\left\{\Phi_{\rm MN},\Phi_{\rm GN},\epsilon\right\}=\left\{0. 086\,{\rm eV},0.024\,{\rm eV},0.05\%\right\}\) predicted by the trained CNN. Banerjee, Patrick Wilhelm, Igor Reis and Pedro H. P. Cintra. M.S.S. also thanks R. Samajdar, R. Fernandes, and J. Venderbos on a previous collaboration on nematic order in TDBG [45].
2308.07144
Critical properties of the quantum Ashkin-Teller chain with chiral perturbations
We investigate the nature of the phase transitions in the quantum Ashkin-Teller chain in the presence of chiral perturbations. We locate the Lifshitz line separating a region of direct chiral transitions from the region where the transition is through an intermediate floating phase. Furthermore, we identify a small region in the vicinity of the four-state Potts point where chiral perturbations are irrelevant and where the transition remains conformal. Implications to Rydberg atoms experiments are briefly discussed.
Bernhard Lüscher, Frédéric Mila, Natalia Chepiga
2023-08-14T13:49:22Z
http://arxiv.org/abs/2308.07144v2
# Critical properties of the quantum Ashkin-Teller chain with chiral perturbations ###### Abstract We investigate the nature of the phase transitions in the quantum Ashkin-Teller chain in the presence of chiral perturbations. We locate the Lifshitz line separating a region of direct chiral transitions from the region where the transition is through an intermediate floating phase. Furthermore, we identify a small region in the vicinity of the four-state Potts point where chiral perturbations are irrelevant and where the transition remains conformal. Implications to Rydberg atoms experiments are briefly discussed. ## I Introduction The theory of quantum phase transitions in low-dimensional systems plays a central role in modern condensed matter physics [1; 2]. One of the most challenging and long-standing problems is the commensurate-incommensurate (C-IC) melting of a period-\(p\) phase originally formulated in the context of absorbed monolayers [3; 4; 5; 6; 7; 8]. Huse and Fisher noticed that domain walls between different ordered domains [3; 7], for instance \(A|B\) and \(B|A\), might have different free energy contributions, introducing chiral perturbations into the problem. For the period-2 phase these chiral perturbations are irrelevant and the transition is in the Ising universality class. For period-\(p\) phases with \(p\geq 5\) the transition is always through an intermediate critical phase separated from the ordered phase by a Pokrovsky-Talapov [9] and from the disordered phase by a Kosterlitz-Thouless [10] transition. The most interesting cases, however, are transitions out of the period-3 and period-4 phases. In the absence of chiral perturbations the transition out of the \(p=3\) phase is in the 3-state Potts universality class that, as the Ising transition, can be described by a corresponding minimal model of the conformal field theory [11]. For strong chiral perturbations, the transition is a two-step process via an intermediate critical phase, as for \(p\geq 5\). However, in the presence of weak chiral perturbations, the transition is believed to be a direct one in a new chiral universality class [7]. The melting of the \(p=4\) phase, encapsulated by the physics of the Ashkin-Teller model, is even more complicated. Again, in the absence of chiral perturbation, the transition belongs to the Ashkin-Teller universality class that can be described by a conformal field theory with central charge \(c=1\)[11]. However, in this case weak chiral perturbations may or may not give rise to a direct chiral transition [12] depending on the properties of the Ashkin-Teller point itself. For instance, in the limit of the Ashkin-Teller model equivalent to two decoupled Ising chains, a chiral perturbation immediately opens up an intermediate incommensurate Luttinger liquid phase with a diverging correlation length, a phase known as a floating phase [13; 14]. In the opposite limit equivalent to the symmetric four-state Potts model, field theory arguments predict that weak chiral perturbations are irrelevant and that the transition remains direct and conformal [5]. But what happens between these two limits? According to Huse and Fisher [7] there might be room for a chiral transition and numerical results in the context of Rydberg atoms have given the first evidence that this is indeed the case [15]. 
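For small chains, the Hamiltonian in Eq. (1) can be written down explicitly and diagonalized exactly, which provides a convenient cross-check for larger-scale numerics. The sketch below builds \(\hat{H}_{0}\) for an open chain of \(L\) sites (two Ising spins per site, hence a \(4^{L}\)-dimensional Hilbert space); the open boundary condition and the small system size are illustrative choices, not those used in our simulations.

```python
import numpy as np
from scipy.sparse import identity, kron, csr_matrix

sx = csr_matrix(np.array([[0., 1.], [1., 0.]]))
sz = csr_matrix(np.array([[1., 0.], [0., -1.]]))

def op_at(pauli, which, L):
    """Pauli operator on one of the 2L Ising spins.

    Spins are ordered as (sigma_0, tau_0, sigma_1, tau_1, ...);
    `which` indexes this chain of 2L two-level systems.
    """
    op = identity(1, format="csr")
    for k in range(2 * L):
        op = kron(op, pauli if k == which else identity(2, format="csr"), format="csr")
    return op

def ashkin_teller_h0(L, beta, lam):
    """H_0 of Eq. (1) on an open chain of L sites."""
    H = csr_matrix((4**L, 4**L))
    for i in range(L):                      # transverse-field terms
        sxi, txi = op_at(sx, 2 * i, L), op_at(sx, 2 * i + 1, L)
        H -= sxi + txi + lam * sxi @ txi
    for i in range(L - 1):                  # nearest-neighbour coupling terms
        szi, tzi = op_at(sz, 2 * i, L), op_at(sz, 2 * i + 1, L)
        szj, tzj = op_at(sz, 2 * i + 2, L), op_at(sz, 2 * i + 3, L)
        H -= beta * (szi @ szj + tzi @ tzj + lam * szi @ szj @ tzi @ tzj)
    return H

# Ground-state energy of a 4-site chain at the self-dual point beta = 1:
# from scipy.sparse.linalg import eigsh
# E0 = eigsh(ashkin_teller_h0(4, beta=1.0, lam=0.5), k=1, which="SA")[0][0]
```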
This scenario is further supported by numerical simulations of the classical symmetric two-dimensional (2D) Ashkin-Teller model on a square lattice where the Lifshitz line - the boundary of chiral transition - was accurately located with a Corner Transfer Matrix Renormalizationgroup (CTMRG) algorithm [16]. Yet another feature of the melting out of a \(p=4\) phase - the possibility of a conformal transition for a non-vanishing chiral perturbation - remains unexplored. If the transition remains conformal even in the presence of chiral perturbations, the dynamical critical exponent \(z\) must keep its value \(z=1\), and there must be a quantitative correspondence between the classical 2D chiral Ashkin-Teller model on a square lattice and its quantum 1D version on a chain. Recent progress in Rydberg atoms experiments [17; 18] brings this old problem into the main focus of theoretical research now in the context of quantum 1D chains. The phase diagram of a Rydberg atoms array is dominated by lobes of integer periodicities \(p=2,3,4,5...\)[18; 19] while the distance-dependent van der Waals interaction makes the disorder phase surrounding these ordered lobes incommensurate. This makes a 1D array of Rydberg atoms an ideal playground to probe quantum commensurate-incommensurate melting [15; 20; 21; 22; 23; 24; 25; 26]. Numerical simulations of quantum phase transitions out of the period-4 lobe in Rydberg chains show that they are qualitatively very similar to the transition out of the period-3 phase [15]: the conformal transition is realized at a single point and is surrounded by a direct chiral transition before the floating phase opens. The new developments in Rydberg atoms open the way to tune the conformal point within the Ashkin-Teller family and, in principle, can bring it to the point where the floating phase opens up immediately or to the point where the conformal transition extends to a finite interval. In the present paper we provide numerical evidence that a conformal transition can be realized even in the presence of weak chiral perturbations and define a scale on which interactions can be treated as _weak_ for the quantum Ashkin-Teller model. We also locate the Lifshitz line, i.e. the position in the phase diagram where the direct chiral transition is replaced by the floating phase. Our main results are summarized in the phase diagram presented in Fig.1. The rest of the paper is organized as follows. In Section II we define the Ashkin-Teller model and provide the details of the numerical algorithm. In Section III we present our results including the detailed overview of the phase diagram and we provide numerical evidence in favour of a conformal transition in the presence of non-zero chiral perturbations. In Section IV we summarized the results and put them into perspective. ## II Model & Methods #### ii.0.1 The Quantum Ashkin-Teller Model In the quantum Ashkin-Teller model two Ising spins per site, \(\hat{\sigma}_{i}\) and \(\hat{\tau}_{i}\), are introduced and embedded into a Hamiltonian of the form \[\begin{split}\hat{H}_{0}&=-\sum_{i}\left(\hat{ \sigma}_{i}^{x}+\hat{\tau}_{i}^{x}+\lambda\hat{\sigma}_{i}^{x}\hat{\tau}_{i}^{ x}\right)\\ &\quad\quad-\beta\sum_{i}\left(\hat{\sigma}_{i}^{z}\hat{\sigma}_{ i+1}^{z}+\hat{\tau}_{i}^{z}\hat{\tau}_{i+1}^{z}+\lambda\hat{\sigma}_{i}^{z} \hat{\sigma}_{i+1}^{z}\hat{\tau}_{i}^{z}\hat{\tau}_{i+1}^{z}\right).\end{split} \tag{1}\] This model displays Kramers-Wannier self-duality at \(\beta=1\). 
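As a concrete reference for the notation of Eq. (1) (and for the chiral term of Eq. (2) below), the Hamiltonian is easy to assemble explicitly for short open chains. The following is a minimal exact-diagonalization sketch in Python; the chain length, the function names and the brute-force construction are illustrative choices only, and have nothing to do with the DMRG setup actually used for the results in this paper.

```python
import numpy as np
from scipy.sparse import identity, kron, csr_matrix
from scipy.sparse.linalg import eigsh

# Pauli matrices; each site carries two Ising spins, sigma (first factor) and tau (second).
sx = csr_matrix(np.array([[0., 1.], [1., 0.]]))
sz = csr_matrix(np.array([[1., 0.], [0., -1.]]))
id2 = identity(2, format='csr')

sig_x, tau_x = kron(sx, id2, format='csr'), kron(id2, sx, format='csr')
sig_z, tau_z = kron(sz, id2, format='csr'), kron(id2, sz, format='csr')
id4 = identity(4, format='csr')

def embed(op, i, n_sites):
    """Embed a 4x4 single-site operator at site i of an n_sites chain."""
    out = csr_matrix(np.array([[1.]]))
    for j in range(n_sites):
        out = kron(out, op if j == i else id4, format='csr')
    return out

def ashkin_teller(n_sites, beta, lam, delta=0.0):
    """Quantum Ashkin-Teller chain of Eqs. (1)-(2) with open boundary conditions."""
    dim = 4 ** n_sites
    H = csr_matrix((dim, dim))
    for i in range(n_sites):
        H = H - embed(sig_x + tau_x + lam * sig_x @ tau_x, i, n_sites)
    for i in range(n_sites - 1):
        szi, szj = embed(sig_z, i, n_sites), embed(sig_z, i + 1, n_sites)
        tzi, tzj = embed(tau_z, i, n_sites), embed(tau_z, i + 1, n_sites)
        H = H - beta * (szi @ szj + tzi @ tzj + lam * szi @ szj @ tzi @ tzj)
        H = H - delta * (szi @ tzj - tzi @ szj)   # chiral perturbation of Eq. (2)
    return H

# Lowest levels of a 6-site chain at the self-dual point beta = 1 on the Potts line lam = 1
energies = eigsh(ashkin_teller(6, beta=1.0, lam=1.0), k=3, which='SA',
                 return_eigenvectors=False)
print(np.sort(energies))
```

The brute-force construction above is only meant to make the notation concrete; the system sizes needed to resolve the transitions discussed below require the MPS machinery of Section II. We now return to the self-dual point \(\beta=1\).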
This point corresponds to a quantum phase transition between an ordered phase at large \(\beta\) and a disordered phase at small \(\beta\)[27]. It is in the Ashkin-Teller universality class with exponents that vary continuously with \(\lambda\). A chiral perturbation can be introduced by complementing (1) with a term \(-\delta\sum_{i}\left(\hat{\sigma}_{i}^{z}\hat{\tau}_{i+1}^{z}-\hat{\tau}_{i}^ {z}\hat{\sigma}_{i+1}^{z}\right)^{5}\) to give \[\hat{H}=\hat{H}_{0}-\delta\sum_{i}\left(\hat{\sigma}_{i}^{z}\hat{\tau}_{i+1}^ {z}-\hat{\tau}_{i}^{z}\hat{\sigma}_{i+1}^{z}\right). \tag{2}\] At non-zero \(\delta\) the model is no longer self-dual, and the location of the quantum phase transition as a function of \(\beta\) is not known exactly. Regarding the nature of the transition, on theoretical grounds it is known that the crossover exponent \(\phi\) for the chiral perturbation is given by [5] \[\phi=\frac{3\nu}{2}+\frac{1}{4}-\frac{\nu^{2}}{2\nu-1}. \tag{3}\] Here, \(\nu\) is the correlation length critical exponent which, in the case of vanishing chiral perturbation \(\delta=0\), is known exactly as a function of the parameter \(\lambda\), \[\nu=\frac{1}{2-\frac{\pi}{2}\left[\arccos\left(-\lambda\right)\right]^{-1}}. \tag{4}\] The chiral perturbation \(\delta\) is relevant if \(\phi>0\), which is the case for \(\nu>\nu_{c}=\left(1+\sqrt{3}\right)/4\simeq 0.683\). This leads to a critical value of \(\lambda\)[27; 28], \[\lambda_{c}=-\cos\frac{\pi\left(\sqrt{3}+1\right)}{4\left(\sqrt{3}-1\right)} \simeq 0.9779, \tag{5}\] below which the chiral perturbation \(\delta\) is always relevant. In the aforementioned numerical simulation of the Rydberg chain, it was found that away from the commensurate line, the melting of the period-four phase immediately occurs in the chiral universality class [15] introduced by Huse and Fisher [3]. It is found that the conformal transition along the commensurate line is of the Ashkin-Teller type with a critical exponent of \(\nu\simeq 0.78\)[26; 15], i.e. with a value well into the regime where the chiral perturbation is relevant. Consistently it is found that away from the commensurate line, the chirality of the problem immediately drives the transition into the chiral universality class. Considering the Ashkin-Teller model however, there should be a region in the phase diagram (\(\lambda_{c}<\lambda\leq 1\)) where \(\delta\) is non-vanishing, but the C-IC transition remains in the Ashkin-Teller universality class. The phase diagram is presented in Figure 1. Originally, the Ashkin-Teller model was introduced as a classical model in statistical physics [29]. Via the well established 2D classical to 1D quantum correspondence, the Hamiltonian of the quantum Ashkin-Teller model can be obtained by considering the highly anisotropic limit of the 2D classical model on a square lattice [27]. Numerical Figure 1: Nature of the quantum phase transition between the ordered phase with a four-fold degenerate ground state and the disordered phase of the quantum Ashkin-Teller model defined by the Hamiltonian 2 as a function of \(\lambda\) and \(\delta\). Along \(\delta=0\) the transition is in the Ashkin-Teller universality class. For \(\lambda\lesssim 0.5\) we observe an intermediate floating phase (green) for any non-zero value of chiral coupling \(\delta\). For \(\lambda\gtrsim 0.5\) the transition is direct in the chiral universality class (light blue) below the Lifshitz line (red) and through the floating phase above it. 
For \(\lambda\gtrsim 0.977\) we find that the transition remains direct and conformal in the Ashkin-Teller universality class (dark blue) for \(\delta\lesssim 0.04\). results on the classical ferromagnetic chiral Ashkin-Teller model on the square lattice obtained with a CTMRG algorithm [16] are in qualitative agreement with the phase diagram presented in Fig. 1. In this paper we explore the vicinity of the four-state Potts point where the main field theory prediction that chiral perturbations are irrelevant is still waiting for numerical verification. #### ii.1.2 Algorithm We address the problem numerically with state-of-the art density matrix renormalization group algorithm [30; 31; 32; 33]. To obtain the ground state in MPS-form we use a two-site DMRG algorithm, see for instance Ref.[33]. The fact that we are dealing with a local Hilbert space of dimension \(d=4\) significantly increases the complexity of the algorithm and in turn limits the maximum bond dimension \(D\) that we can reach to \(D_{\text{max}}=200-500\). To ensure convergence both in bond dimension and in the number of sweeps, the bond dimension was increased after every half-sweep in steps of 20 to arrive at \(D_{\text{max}}\). We treat the wavefunction as converged when the ground-state energy does not change more than by \(10^{-6}\) during one sweep. A boundary term \(-\epsilon\left(\hat{\sigma}_{1}^{z}+\hat{\sigma}_{1}^{z}\hat{\tau}_{1}^{z}+ \hat{\sigma}_{N}^{z}+\hat{\sigma}_{N}^{z}\hat{\tau}_{N}^{z}\right)\), where \(N\) is the system size and epsilon some positive constant of the order of the ground state energy density per site, was introduced to break the 4-fold degeneracy of the ground state and to make sure one ends up in a state with non-vanishing local magnetization \(\left\langle\hat{\sigma_{i}^{z}}\right\rangle\). With this DMRG algorithm we could simulate quantum chains with sufficient accuracy to distinguish chiral and conformal transitions in the vicinity of the four-state Potts point and in the presence of chiral perturbations. #### ii.1.3 Distinguishing chiral transition and floating phase To distinguish the different types of phase transitions, one needs a quantity that reliably sets the transitions apart from one another. Huse and Fisher argue that such a quantity is provided by the product of the incommensurability wave vector \(q\) and the correlation length \(\xi\). Generally it is expected that the wave vector \(q\) approaches the commensurate value \(0\) with an exponent \(\bar{\beta}\). While this exponent is not exactly known in the case of the Ashkin-Teller model, it is argued that \(\bar{\beta}>\nu\) as soon as the transition is conformal. This implies that upon approaching the transition the product \(q\times\xi\) will decay to \(0\). In contrast, for a chiral transition \(\bar{\beta}=\nu\) and the product \(q\times\xi\) remains finite upon approaching the transition. Since the chiral transition is direct, the exponent \(\nu\) should be the same coming from either side of the transition. In the case of a floating phase, the system is critical, i.e. it has a diverging correlation length, while \(q\) is still non-vanishing. This implies that on the incommensurate side one will observe \(q\times\xi\) to be diverging towards the transition. Coming from the commensurate side, the transition will occur via a Pokrovsky-Talapov transition, implying a correlation length diverging with an exponent \(\nu=\frac{1}{2}\)[9]. 
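The relevance criterion contained in Eqs. (3)-(5) is straightforward to evaluate numerically. A short check, assuming nothing beyond numpy, reproduces the numbers quoted above:

```python
import numpy as np

def nu_AT(lam):
    """Correlation-length exponent along the delta = 0 Ashkin-Teller line, Eq. (4)."""
    return 1.0 / (2.0 - (np.pi / 2.0) / np.arccos(-lam))

def crossover(nu):
    """Crossover exponent of the chiral perturbation, Eq. (3)."""
    return 1.5 * nu + 0.25 - nu**2 / (2.0 * nu - 1.0)

nu_c = (1.0 + np.sqrt(3.0)) / 4.0                                              # ~0.683
lam_c = -np.cos(np.pi * (np.sqrt(3.0) + 1.0) / (4.0 * (np.sqrt(3.0) - 1.0)))   # ~0.9779

print(f"nu_c = {nu_c:.4f},  lam_c = {lam_c:.4f}")
print(f"phi at lam_c: {crossover(nu_AT(lam_c)):+.1e}   (vanishes on the boundary)")
for lam in (0.5, 0.8, 0.98, 1.0):
    print(f"lam = {lam}:  nu = {nu_AT(lam):.4f},  phi = {crossover(nu_AT(lam)):+.4f}")
```

In particular \(\phi<0\) for \(\lambda_{c}<\lambda\leq 1\) (with \(\nu=2/3\) at the four-state Potts point \(\lambda=1\)), which is exactly the window where the results of Section III are consistent with a conformal transition at small but finite \(\delta\).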
#### ii.1.4 Extraction of the correlation length and of the wave vector Using the notation introduced in Eq. (1), we consider the connected correlation function of the operators \(\sigma_{i}^{z}\): \[C_{ij}^{\sigma}=\left\langle\hat{\sigma}_{i}^{z}\hat{\sigma}_{j}^{z}\right\rangle -\left\langle\hat{\sigma}_{i}^{z}\right\rangle\left\langle\hat{\sigma}_{j}^{z }\right\rangle, \tag{6}\] where \(i\) and \(j\) denote the different sites. The correlation length \(\xi\) as well as the wave vector \(q\) are extracted by fitting the correlation function to the Ornstein-Zenicke form [34], \[C_{ij}^{\sigma}\propto\frac{\exp\left(-\frac{|i-j|}{\xi}\right)}{\sqrt{|i-j| }}\cos\left(q|i-j|+\varphi\right). \tag{7}\] This is attained by first fitting the correlation length, as is indicated in Figure (2a). When considering \(\log\left(C_{ij}^{\sigma}\right)\), one easily sees that within the bulk, the maxima of the oscillations decay linearly as a function of \(|i-j|\), with a slope of \(-\frac{1}{\xi}\). Making use of this, one can then rescale \(C_{ij}^{\sigma}\) with a factor \(\exp\left(\frac{|i-j|}{\xi}\right)\), which then allows to fit the wave vector \(q\), as indicated in Figure (2b). This procedure works very well for most of the points of the phase diagram. A problem arises when \(q\) is very small, which is the case at the points close to the Ashkin-Teller line. The problem is due to the fact that for small \(q\), the correlation function reaches machine precision before the elapse of a whole period, making it very difficult to properly fit the oscillations with a wave vector \(q\). The way this issue was dealt with was by approaching the transition from a direction in parameter space where the oscillation period was smaller. #### ii.1.5 Identifying the Lifshitz Line As already mentioned, the estimate of the Lifshitz line was obtained by scanning the phase diagram along the \(\delta\) Figure 2: Example of (a) the logarithm of \(C_{ij}^{\sigma}\) as a function of the distance of two sites \(|i-j|\) including a fit for the “crest” of the oscillations and of (b) the corresponding rescaled correlation function, which is then used to extract \(q\). axis for given values of \(\lambda\). The estimate was then made by considering the behaviour of \(\xi\) and \(q\times\xi\) at the transition for different values of \(\delta\). We know that at a chiral transition, the critical exponents for the correlation length coming from either the commensurate or the incommensurate phase (\(\nu_{left}\) and \(\nu_{right}\)) should match. Also if there is an intermediate floating phase there should be a change in curvature on the \(\frac{1}{\xi}\)-plot due to the scaling of the correlation length \(\xi\sim\exp(\frac{c}{\sqrt{\beta-\beta_{c}}})\), as well a diverging behaviour in \(q\times\xi\) close to criticality. Figure (3) shows two plots along the \(\delta\)-axis for \(\lambda=1\). Whereas in the Figure (3a) and (3b) everything points toward a critical floating phase at \(\lambda=1\), \(\delta=1.1\), Figures (3e) and (3f) point towards a chiral transition at \(\lambda=1\), \(\delta=0.7\). Considerations like these enable one to map out the different phases and put an estimate, albeit a very approximate one, for the Lifshitz point. ## III Results #### iii.0.1 Phase Diagram The qualitative phase diagram we found is presented in Figure 1. This diagram should be understood as follows. 
For every pair of parameters \((\lambda,\delta)\) displayed in the two-dimensional plot there is a particular value of the parameter \(\beta\), introduced in (1), where the C-IC transition occurs. One can understand Figure 1 as the depiction of the nature of phase transitions occurring on a two-dimensional critical surface embedded in a three-dimensional parameter space. This is further illustrated in Figure (4). Around \(\lambda\simeq 0.5\) we find that a non-vanishing value of \(\delta\) immediately causes a floating phase to open up, whereas for \(0.5\lesssim\lambda\lesssim 0.98\) the transition remains direct but in the chiral universality class once a non-vanishing chiral perturbation is introduced. Furthermore, we identify a small region \(\lambda_{c}<\lambda\leq 1\) in which the transition occurs in the Ashkin-Teller universality class for small but non-zero values of \(\delta\). #### iii.0.2 \(\lambda=1\) At \(\lambda=1\) it is conjectured that with increasing \(\delta\), different phase transitions can be observed. As mentioned above, for \(0.9779\leq\lambda\leq 1\) it is expected that when introducing a small but finite perturbation \(\delta\), the incommensurate-commensurate transition still occurs in the Ashkin-Teller universality class. To examine this, simulations were carried out at \(\lambda=1\) for different values of \(\delta\). For a given pair of \(\lambda\) and \(\delta\), the critical point \(\beta_{c}\) was identified, and in its proximity the incommensurability wave vector \(q\) as well as the correlation length \(\xi\) were extracted in order to consider the quantity \(q\times\xi\) close to the transition. For \(\lambda=1\) simulations across the critical surface were carried out for a multitude of values of \(\delta\) Figure 4: Qualitative sketch of the nature of the quantum phase transitions between the ordered phase with a four-fold degenerate ground state and the disordered phase of the quantum Ashkin-Teller model defined by the Hamiltonian of Eq.(2) as a function of \(\lambda\), \(\beta\), and \(\delta\). Figure 3: Plots of \(\frac{1}{\xi}\) and \(q\times\xi\) around criticality at \(\lambda=1\) and \(\delta=1.1\) (a and b), as well as at \(\delta=1.0\) (c and d) and \(\delta=0.7\) (e and f). The simulations were carried out on three different system sizes \(N=400\), \(N=700\) and \(N=1000\). In (a) and (b) one can see indications of an intermediate floating phase, namely the mismatch of the critical exponents \(\nu_{left}\) and \(\nu_{right}\) as well as a tendency to a convex behaviour close to the criticality coming from the left. There is also a clear increase in \(q\times\xi\) close to the critical point. In (c) and (d) the results can be seen to be inconclusive as to whether the transition occurs in the chiral universality class or via a floating phase. In (e) and (f) everything points towards a chiral transition. Each critical point was approached at 3 different angles and the scaling of \(q\times\xi\) of the different cuts was compared. The value of \(q\times\xi\) for each cut at the transition (_intercept_) was extracted. The average intercept for the three cuts was then computed. As can be seen in Figure (5a), as \(\delta\) decreases, so does the value of the intercepts. Taking the errors into consideration, this suggests that \(q\times\xi\) goes to zero around \(\delta\simeq 0.05\). This would suggest that for \(\delta\) smaller than this value, the incommensurate-commensurate transition actually occurs via an Ashkin-Teller type transition. 
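The last step of this analysis, reading off the value of \(\delta\) at which the intercept of \(q\times\xi\) vanishes, is a simple linear extrapolation (the dotted line of Fig. 5). A sketch of it is given below; the intercept values are placeholders that only mimic the trend of Fig. 5(a), they are not the measured data.

```python
import numpy as np

# Hypothetical averaged q*xi intercepts at the transition for several delta
# (placeholders mimicking the trend of Fig. 5a, not the measured values):
delta = np.array([0.10, 0.20, 0.30, 0.40])
intercepts = np.array([0.04, 0.12, 0.19, 0.27])

# Linear fit and extrapolation to the delta at which q*xi would vanish,
# i.e. the estimated upper boundary of the conformal region.
slope, offset = np.polyfit(delta, intercepts, 1)
delta_conformal = -offset / slope
print(f"q*xi extrapolates to zero at delta ~ {delta_conformal:.3f}")
```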
In Figure (6b) we can see \(q\times\xi\) going to zero at the transition at \(\lambda=1\) and \(\delta=0.04\). #### iii.2.3 \(\lambda=0.98\) The same procedure was carried out for \(\lambda=0.98\). As for \(\lambda=1\), we find \(q\times\xi\) going to zero around \(\delta\simeq 0.05\), Figure (5b). Figure (6a) seems to confirm that the incommensurate-commensurate transition for \(\lambda=0.98\) and \(\delta=0.04\) occurs in the Ashkin-Teller universality class. As a comparison, we carried out the same procedure for \(\lambda=0.8\) where we know the chiral perturbation to be immediately relevant. In Figure (5c) we see that, by contrast to \(\lambda=1\) and \(\lambda=0.98\), the quantity \(q\times\xi\) at the transition only vanishes for \(\delta=0\) #### iii.2.4 Chiral Transition It is conjectured that for \(\lambda\) and \(\delta\) large enough there will be a region in the phase diagram where the IC-C transition occurs via a chiral transition, characterized by the convergence of \(q\times\xi\) upon approaching the transition. Indeed, the results of Section 1 seem to confirm this conjecture in the region of the phase diagram with \(\lambda\simeq 1\) and \(\delta\geq 0.05\). This can be examplified by considering the value \(\delta=0.4\) at \(\lambda=1\). This \(\delta\)-value is considerably larger than \(\delta=0.05\), but still small enough that the transition does not occur via a floating phase. The results of the simulation for \(q\times\xi\) of three different cuts are shown in Figure (7a). Independent of the angle at which the transition was approached in parameter space, \(q\times\xi\) approaches the same value. To further substantiate the nature of this transition as Figure 5: \(q\times\xi\) at the C-IC transition as a function of chiral coupling \(\delta\) for (a) \(\lambda=1\), (b) \(\lambda=0.98\), and (c) \(\lambda=0.8\). The dashed lines in (a) and (b) indicate the value of \(\delta^{c}\) associated with the point where \(q\times\xi\) vanishes and below which the transition is conformal. The dotted line is a linear fit. Figure 6: The scaling of the quantity \(q\times\xi\) with the distance to the C-IC transition at (a) \(\lambda=0.98\), \(\delta_{crit}=0.04\) and at (b) \(\lambda=1\), \(\delta_{crit}=0.04\) Figure 7: (a) \(q\times\xi\) as the IC-C transition is approached at \(\lambda=1\), \(\delta=0.4\). Every cut stands for a different direction in the plane spanned by the parameters \((\delta,\beta)\) at which the transition is approached. (b) The inverse of the correlation length across the IC-C transition at \(\lambda=1\), \(\delta=0.4\). For the fit, the two exponents \(\nu_{left}\) and \(\nu_{right}\) as well as the point \(\beta_{critical}\), at which the transition occurs, were fitted simultaneously. chiral, the behaviour of the correlation length was considered on both sides of the transition. Figure (7b) shows that the critical exponents \(\nu\) of the correlation length for both sides of the transition match, as predicted by Huse and Fisher. #### iii.2.5 Lifshitz Line For any value of \(\lambda\) there is a \(\delta\) large enough to open up a floating phase. Since every point where the floating phase opens up stands at the boundary of 3 different phases, we refer to the set of those points as Lifshitz line. An accurate location of the Lifshitz line is an extremely challenging task and the Lifshitz line presented in the Fig.1 is defined up to an error \(\Delta\delta\pm 0.05\). 
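The simultaneous fit quoted in the caption of Fig. 7(b), in which \(\nu_{left}\), \(\nu_{right}\) and \(\beta_{critical}\) are adjusted together, can be organised as in the following sketch. The piecewise power law, the synthetic data and the value \(\beta_{c}=1.2\) are illustrative assumptions, not the fit actually used for the figure.

```python
import numpy as np
from scipy.optimize import least_squares

def inv_xi_model(params, beta):
    """Piecewise power law for 1/xi on the two sides of beta_c (cf. Fig. 7b)."""
    beta_c, nu_l, nu_r, a_l, a_r = params
    out = np.zeros_like(beta)
    left, right = beta < beta_c, beta >= beta_c
    out[left] = a_l * np.maximum(beta_c - beta[left], 1e-12) ** nu_l
    out[right] = a_r * np.maximum(beta[right] - beta_c, 1e-12) ** nu_r
    return out

def fit_two_sided(beta, inv_xi, beta_c_guess):
    """Fit nu_left, nu_right and beta_critical simultaneously."""
    p0 = [beta_c_guess, 0.8, 0.8, 1.0, 1.0]
    res = least_squares(lambda p: inv_xi_model(p, beta) - inv_xi, p0)
    return res.x[:3]   # beta_c, nu_left, nu_right

# Synthetic check: generate 1/xi data with nu = 0.8 on both sides and recover it
rng = np.random.default_rng(0)
beta = np.linspace(1.10, 1.30, 41)
clean = inv_xi_model([1.20, 0.8, 0.8, 2.0, 2.0], beta)
noisy = clean * (1.0 + 0.02 * rng.standard_normal(beta.size))
print(fit_two_sided(beta, noisy, beta_c_guess=1.19))
```

A mismatch between the two fitted exponents, or a systematic change of curvature of \(1/\xi\) on one side, is then the signal used above to exclude a direct chiral transition in favour of the floating phase.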
In agreement with classical results, we observe that for \(\lambda\lesssim 0.5\) a floating phase opens up immediately once a chiral perturbation \(\delta\) is introduced. Also, the diagram indicates the small region close to \(\lambda=1\) where we observe an Ashkin-Teller transition for finite \(\delta\). To obtain a qualitative idea of the phase diagram including the Lifshitz-line, simulations were also carried out for values of \(\lambda\) smaller than 1. The exact determination of the Lifshitz point for a given \(\lambda\) is virtually impossible with the methods we used. One can however end up with a reasonable estimate by simply scanning the phase diagram. In Figure 1 one can see a qualitative diagram indicating the range in parameter space of the different types of transitions. ## IV Discussion In the present paper we have investigated the critical properties of the quantum Ashkin-Teller model on a 1D chain with chiral perturbations. The resulting phase diagram is in qualitative agreement with the previously reported phase diagram of the classical symmetric 2D chiral Ashkin-Teller model on the square lattice. We find clear evidence of the presence of a region where the C-IC transition occurs in the chiral universality class predicted by Huse and Fisher. Consistent with the underlying theory we find that for \(\lambda\lesssim 0.98\) a chiral perturbation immediately drives the C-IC transition out of the Ashkin-Teller universality class into either the chiral universality class (\(0.5\lesssim\lambda\)) or the melting via a critical floating phase (\(\lambda\lesssim 0.5\)). We map out the location of the Lifshitz line indicating the boundary between the chiral transition and the floating phase. Furthermore, our methods allowed us to examine the effect of a chiral perturbation in the region \(0.98\lesssim\lambda\lesssim 1\), where, on theoretical grounds, it is known that the perturbation is irrelevant. Indeed we are able to identify a small domain in the phase diagram where we observe a C-IC transition in the Ashkin-Teller universality class for finite perturbations up to the value \(\delta\simeq 0.04\). This region has so far not been observed in the 2D classical chiral Ashkin-Teller model. It would be interesting to see if the boundary of this phase indeed corresponds to the critical value \(\nu=\nu_{c}=\left(1+\sqrt{3}\right)/4\simeq 0.683\), but with our current algorithm the precision on \(\nu\) is not sufficient to check this prediction, and this point is left for future investigation. Finally, let us briefly comment on possible experimental realizations of this phase diagram with arrays of Rydberg atoms trapped in optical tweezers. As stated above, the conformal point at the boundary of the period-4 lobe in the simplest array of Rydberg atoms is characterized by the critical exponent \(\nu\approx 0.78\)[26; 15]. However, recent proposal on multi-component and multi-species Rydberg arrays [35; 36; 37; 38; 39], add new independent parameters that can be individually controlled in experiments. This opens a way to tune the conformal critical point and the Ashkin-Teller asymmetry parameter \(\lambda\) and in turn to manipulate the appearance of the chiral transition at the boundary of the period-4 phase. A priori, it should be possible to realize the Ashkin-Teller critical point with \(\lambda\gtrsim 0.98\) followed by a finite interval of conformal transitions. 
Kibble-Zurek experiments should in principle be able to check this scenario, but this will require to reach a much higher accuracy than that currently available. ###### Acknowledgements. The authors acknowledge Samuel Nyckees for useful discussions. The work has been supported by the Swiss National Science Foundation (FM) grant 182179 and by the Delft Technology Fellowship (NC). Numerical simulations have been performed on the Dutch national e-infrastructure with the support of the SURF Cooperative, the facilities of the Scientific IT and Application Support Center of EPFL and at the DelftBlue HPC. * ## Appendix A Approaching a Transition from Different Angles in Parameter Space The behaviour of the order parameters across the transition should not depend on the direction in parameter space along which the transition is approached. An illustrative example is given by the transition at \(\lambda=1,\delta=0.12\). First, the point of the C-IC transition in \(\beta\)-space, labeled \(\beta_{c}\), is determined by performing simulations at constant \(\lambda\) and \(\delta\) on the commensurate side and then fitting the diverging correlation length, Figure (8b). Once \(\beta_{c}\) has been determined, the transition is then approached from different angles in the \(\beta-\delta\)-plane. Table 1 shows the three different cuts that were carried out in this particular example. The corresponding plots of \(q\) are shown in Figure (8a).
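The values of \(\xi\) and \(q\) entering these plots are obtained with the two-step Ornstein-Zernike fit described in Section II. A self-contained sketch of one way to organise that fit is given below; the synthetic correlator and all numbers are purely illustrative, and the \(1/\sqrt{|i-j|}\) prefactor is divided out before the crest fit so that the decay of the crests is strictly linear.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.signal import argrelmax

def fit_oz(r, corr):
    """Two-step fit of the Ornstein-Zernike form, Eq. (7): xi from the linear decay
    of the oscillation crests, then q from the rescaled correlator."""
    logc = np.log(np.abs(corr) * np.sqrt(r))       # remove the 1/sqrt(r) prefactor
    crests = argrelmax(logc)[0]                    # maxima of the oscillations
    slope, _ = np.polyfit(r[crests], logc[crests], 1)
    xi = -1.0 / slope

    rescaled = corr * np.sqrt(r) * np.exp(r / xi)
    q0 = np.pi / np.mean(np.diff(r[crests]))       # crude q guess from the crest spacing
    cosine = lambda x, a, q, phi: a * np.cos(q * x + phi)
    (a, q, phi), _ = curve_fit(cosine, r, rescaled, p0=[1.0, q0, 0.0])
    return xi, abs(q)

# Sanity check on a synthetic correlator with xi = 40 and q = 0.15
r = np.arange(1.0, 200.0)
corr = np.exp(-r / 40.0) / np.sqrt(r) * np.cos(0.15 * r + 0.3)
print(fit_oz(r, corr))   # ~ (40, 0.15)
```

The small-\(q\) caveat of Section II applies to this fit as well.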
2306.00387
Power Set of Some Quasinilpotent Weighted shifts on $l^p$
For a quasinilpotent operator $T$ on a Banach space $X$, Douglas and Yang defined $k_x=\limsup\limits_{z\rightarrow 0}\frac{\ln\|(z-T)^{-1}x\|}{\ln\|(z-T)^{-1}\|}$ for each nonzero vector $x\in X$, and called $\Lambda(T)=\{k_x: x\ne 0\}$ the power set of $T$. They proved that the power set has a close link with $T$'s lattice of hyperinvariant subspaces. This paper computes the power set of quasinilpotent weighted shifts on $l^p$ for $1\leq p< \infty$. We obtain the following results: (1) If $T$ is an injective quasinilpotent forward unilateral weighted shift on $l^p(\mathbb{N})$, then $\Lambda(T)=\{1\}$ when $k_{e_0}=1$, where $\{e_n\}_{n=0}^{\infty}$ is the canonical basis for $l^p(\mathbb{N})$; (2) There is a class of backward unilateral weighted shifts on $l^p(\mathbb{N})$ whose power set is $[0,1]$; (3) There exists a bilateral weighted shift on $l^p(\mathbb{Z})$ with power set $[\frac{1}{2},1]$.
Chaolong Hu, Youqing Ji
2023-06-01T06:37:33Z
http://arxiv.org/abs/2306.00387v2
# Power set of some quasinilpotent weighted shifts on \(l^{p}\) ###### Abstract. For a quasinilpotent operator \(T\) on a Banach space \(X\), Douglas and Yang defined \(k_{x}=\limsup\limits_{z\to 0}\frac{\ln\|(z-T)^{-1}x\|}{\ln\|(z-T)^{-1}\|}\) for each nonzero vector \(x\in X\), and call \(\Lambda(T)=\{k_{x}:x\neq 0\}\) the _power set_ of \(T\). \(\Lambda(T)\) have a close link with \(T\)'s lattice of hyperinvariant subspaces. This paper computes the power set of quasinilpotent weighted shifts on \(l^{p}\) for \(1\leq p<\infty\). We obtain the following results: (1) If \(T\) is an injective quasinilpotent forward unilateral weighted shift on \(l^{p}(\mathbb{N})\), then \(\Lambda(T)=\{1\}\) when \(k_{e_{0}}=1\), where \(\{e_{n}\}_{n=0}^{\infty}\) be the canonical basis for \(l^{p}(\mathbb{N})\); (2) There is a class of backward unilateral weighted shifts on \(l^{p}(\mathbb{N})\) whose power set is \([0,1]\); (3) There exists a bilateral weighted shift on \(l^{p}(\mathbb{Z})\) with power set \([\frac{1}{2},1]\) for \(1<p<\infty\). Key words and phrases:Quasinilpotent operator, Power set, Weighted shift, Hyperinvariant subspace 2010 Mathematics Subject Classification: Primary 47A10; Secondary 47B37 Supported by NNSF of China (Grant No.12271202 and No.12031002). ## 1. Introduction Let \(X\) be a complex Banach space, and \(\mathcal{B}(X)\) the algebra of bounded linear operators on \(X\). For \(T\in\mathcal{B}(X)\), let \(\sigma(T)\) be the spectrum of \(T\). Say \(T\) is _quasinilpotent_ if \(\sigma(T)=\{0\}\). The hyperinvariant subspace problem asks: Does every bounded operator \(T\) on an infinite dimensional separable Hilbert space have a non-trivial closed hyperinvariant (i.e., invariant for all the operators commuting with \(T\)) subspace? This question is still open, especially for quasinilpotent operators (see [3, 8, 11, 12]). The _power set_\(\Lambda(T)\) of a quasinilpotent operator \(T\) was introduced by Douglas and Yang (see [1, 2]). For reader's convenience, we recall it below. **Definition 1.1**.: Suppose that \(T\in\mathcal{B}(X)\) is quasinilpotent and \(x\in X\setminus\{0\}\). Let \[k_{(T,x)}=\limsup\limits_{z\to 0}\frac{\ln\|(z-T)^{-1}x\|}{\ln\|(z-T)^{-1}\|}.\] Usually, we briefly write \(k_{x}\) instead of \(k_{(T,x)}\) if there is no confusion possible. Set \(\Lambda(T)=\{k_{x}:x\neq 0\}\), and call it the _power set_ of \(T\). Since \(\lim\limits_{z\to 0}\|(z-T)^{-1}\|=+\infty\) when \(\sigma(T)=\{0\}\), it holds that \(k_{(\alpha x)}=k_{x}\) for all \(\alpha\neq 0\) and \(x\neq 0\). From \[\frac{\|x\|}{|z|+\|T\|}\leq\|(z-T)^{-1}x\|\leq\|(z-T)^{-1}\|\|x\|,\ \forall z\neq 0,\] it follows that \(\Lambda(T)\subset[0,1]\). Moreover, \(\Lambda(A)=\Lambda(T)\) if \(A\) is similar to \(T\) (see [2, Proposition 5.2]). Meanwhile, the following result of Douglas and Yang established a link between the power set and the hyperinvariant subspace problem as follows. **Proposition 1.2** (see [2, Proposition 7.1, Corollary 7.2]).: _Let \(T\in\mathcal{B}(X)\) be a quasinilpotent operator. For \(\tau\in[0,1]\), write \(M_{\tau}=\{x:k_{x}\leq\tau\}\bigcup\{0\}\). Then, \(M_{\tau}\) is a linear subspace of \(X\), and \(A(M_{\tau})\subset M_{\tau}\) for every \(A\in\mathcal{A}^{\prime}(T)\)._ _In particular, when \(\Lambda(T)\) contains two different points \(\tau\) with closed \(M_{\tau}\), \(T\) has a nontrivial closed hyperinvariant subspace._ In fact, \(M_{\tau}\) is not always closed. 
In [7], Ji and Liu constructed a quasinilpotent operator \(T\) with power set \([0,1]\), for which \(M_{\tau}\) is not closed for each \(\tau\in[0,1)\). But they proved that there exists a subset \(\mathcal{N}\) of \(\operatorname{Lat}T\), which is order isomorphic to \(\Lambda(T)\). So, the following question was raised in [7]. **Question 1**.: For any quasinilpotent operator \(T\in\mathcal{B}(X)\), can one find a subset \(\mathcal{N}\) of \(\operatorname{Lat}T\) which is order isomorphic to \(\Lambda(T)\)? From the reference [5, 7], we observe that the subsets \(\{1\}\) and \([0,1]\) can be the power set of some quasinilpotent operators. Can we find other subsets of \([0,1]\) that correspond to power set of a quasinilpotent operator? Thus, we may naturally consider the following question. **Question 2**.: Which subsets of \([0,1]\) could be the power set of a quasinilpotent operator \(T\in\mathcal{B}(X)\)? Given a Banach space \(X\), we let \(X^{\prime}\) denote its dual space. Suppose that \(T\in\mathcal{B}(X)\), we denote by \(T^{\prime}\) the adjoint of acting on \(X^{\prime}\). Comparing with the fact that \(\sigma(T)=\sigma(T^{\prime})\), one may naturally consider the following question. **Question 3**.: Are \(\Lambda(T)\) and \(\Lambda(T^{\prime})\) intrinsically linked? In light of these questions, this paper is devoted to the calculations of the power set of quasinilpotent weighted shifts on \(l^{p}\). To proceed, we recall some terminology and notation. For \(1\leq p<\infty\), let \(l^{p}(\mathbb{Z})\) be the Banach space of all two-sided \(p\)-summable sequences of complex numbers, and \(l^{p}(\mathbb{N})\) be the Banach space of all sequences \(\{x_{n}\}_{n\in\mathbb{Z}}\in l^{p}(\mathbb{Z})\) for which \(x_{n}=0\) for all \(n\leq 0\). Let \(\{e_{k}\}_{k\in\mathbb{K}}\) be the canonical basis for \(l^{p}(\mathbb{K})\), where the set \(\mathbb{K}\) will be the set \(\mathbb{N}\) or \(\mathbb{Z}\). Say \(T\) is a forward unilateral weighted shift on \(l^{p}(\mathbb{N})\) with weight sequence \(\{w_{n}\}_{n=0}^{\infty}\) if \(T\in\mathcal{B}(l^{p}(\mathbb{N}))\) and if \[Te_{n}=w_{n}e_{n+1},\quad\forall n\in\mathbb{N}.\] Say \(T\) is a backward unilateral weighted shift on \(l^{p}(\mathbb{N})\) with weight sequence \(\{w_{n}\}_{n=1}^{\infty}\) if \(T\in\mathcal{B}(l^{p}(\mathbb{N}))\) and if \[Te_{0}=0,\quad Te_{n}=w_{n}e_{n-1},\quad\forall n\geq 1.\] Say \(T\) is a bilateral weighted shift on \(l^{p}(\mathbb{Z})\) with weight sequence \(\{w_{n}\}_{n=-\infty}^{+\infty}\) if \(T\in\mathcal{B}(l^{p}(\mathbb{Z}))\) and if \[Te_{n}=w_{n}e_{n+1},\quad\forall n\in\mathbb{Z}.\] Now we review some known results of the power set of quasinilpotent weighted shifts. Let \(V\) be the Volterra operator on the Hardy space \(H^{2}(\mathbb{D})\), defined by \((Vf)(z)=\int_{0}^{z}f(\xi)d\xi,f\in H^{2}(\mathbb{D})\). Liang and Yang [10] first observed that \(\Lambda(V)=\{1\}\). Actually, \(V\) is a strongly strictly cyclic quasinilpotent forward unilateral weighted shift on \(H^{2}(\mathbb{D})\) with weight sequence \(\{\frac{1}{n+1}\}_{n=0}^{\infty}\). Then Ji and Liu [7] showed that if \(T\) is a strongly strictly cyclic quasinilpotent forward unilateral weighted shift on \(l^{2}(\mathbb{N})\), the power set \(\Lambda(T)=\{1\}.\) Later, He and Zhu [5] proved that if \(T\) is a strictly cyclic quasinilpotent forward unilateral weighted shift on \(l^{2}(\mathbb{N})\), then \(\Lambda(T)=\{1\}\). 
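Before turning to general \(p\), the quantity \(k_{x}\) can at least be probed numerically on finite truncations of a weighted shift. The sketch below is a heuristic only: it truncates \(T\) to a \(200\times 200\) matrix, works with the Euclidean norm (so \(p=2\)), and cannot push \(|z|\) arbitrarily close to \(0\); the weights \(w_{n}=\frac{1}{n+1}\) are those of the Volterra example above, for which the ratio defining \(k_{e_{0}}\) should approach \(1\).

```python
import numpy as np

def log_ratio(weights, z):
    """ln||(z-T)^{-1} e_0|| / ln||(z-T)^{-1}|| for a finite truncation of the
    forward weighted shift T e_n = w_n e_{n+1}, using Euclidean norms (p = 2)."""
    N = len(weights) + 1
    T = np.zeros((N, N))
    T[np.arange(1, N), np.arange(N - 1)] = weights   # subdiagonal
    R = np.linalg.inv(z * np.eye(N) - T)             # truncated resolvent
    return np.log(np.linalg.norm(R[:, 0])) / np.log(np.linalg.norm(R, 2))

weights = 1.0 / np.arange(1, 200)                    # w_n = 1/(n+1)
for z in (0.5, 0.2, 0.1, 0.05):
    print(f"|z| = {z}:  ratio = {log_ratio(weights, z):.4f}")
```

The printed ratios creep towards \(1\) as \(|z|\) decreases, consistent with \(\Lambda(T)=\{1\}\) for strictly cyclic shifts; of course a finite matrix cannot capture the \(\limsup\) itself, so this is an illustration of the definition rather than evidence.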
Given a weight sequence \(\{w_{n}\}_{n}\), for \(1\leq p<\infty\), let \(T_{p}\) be the weighted shift with weight sequence \(\{w_{n}\}_{n}\) on \(l^{p}\). A natural question is: **Question 4**.: Is the power set \(\Lambda(T_{p})\) independent of \(p\)? In Section 2, we prove that if \(T_{p}\) is an injective quasinilpotent forward unilateral weighted shift on \(l^{p}(\mathbb{N})\), then \(\Lambda(T_{p})=\{1\}\) when \(k_{(T_{p},e_{0})}=1\). As an application, we show that if \(T_{p}\) is a strictly cyclic quasinilpotent forward unilateral weighted shift on \(l^{p}(\mathbb{N})\), then \(\Lambda(T_{p})\) are the same for different \(p\) and \(\Lambda(T_{p})=\{1\}\). This also gives a partial answer to Question 4. Given a forward weighted shift \(T\) on \(l^{p}(\mathbb{N})\) with weight sequence \(\{w_{n}\}_{n=0}^{\infty}\), it is well-known that \(T^{\prime}\) is the backward weighted shift on \((l^{p}(\mathbb{N}))^{\prime}\) with weight sequence \(\{w_{n}\}_{n=0}^{\infty}\). Now we suppose that \(A\) is an injective quasinilpotent backward unilateral weighted shift on \(l^{p}(\mathbb{N})\). Since \(\|(z-A)^{-1}e_{n}\|_{p}^{p}\) is a polynomial in \(\frac{1}{z}\) with degree \(n+1\). So it is not difficult to check that \(k_{e_{n}}=0\) for each \(n\geq 0\). In view of this, one can also obtain that \(k_{x}=0\) when \(x\in\operatorname{span}\{e_{n}:n\geq 0\}\). Thus, it is interesting to consider the following question. **Question 5**.: For any quasinilpotent backward unilateral weighted shift \(A\) on \(l^{p}(\mathbb{N})\), is it true that \(\Lambda(A)=\{0\}\)? In Section 3, we first prove that the power set of a class of backward unilateral weighted shifts contains \(1\). That provides a negative answer to Question 5. Moreover, we show that the power set of some backward unilateral weighted shifts is \([0,1]\). In particular, they include the backward unilateral weighted shift \(A\) with weight sequence \(\{\frac{1}{n+1}\}_{n=0}^{\infty}\). In fact, \(A\) is unicellular and has only countable many invariant subspaces (see [14]). So one can not find a subset \(\mathcal{N}\) of \(\operatorname{Lat}A\) which is order isomorphic to \(\Lambda(A)\). That also provides a negative answer to Question 1. In addition, in view of these results, it also provides some information for Question 3. In Section 4, we construct a quasinilpotent bilateral weighted shift on \(l^{p}(\mathbb{Z})\) with power set \([\frac{1}{2},1]\) for \(1<p<\infty\). This provides a new method to construct a quasinilpotent operator whose power set is an interval segment on \([0,1]\), and brings a new insight into the Question 2. ## 2. Power set of forward unilateral weighted shifts In this section, we aim to investigate the power set of forward unilateral weighted shifts on \(l^{p}(\mathbb{N})\) for \(1\leq p<\infty\). Before proceeding, let us recall a basic property for weighted shifts. Given a weighted shift \(T\) with the weight sequence \(\{w_{n}\}_{n=0}^{\infty}\), there exists a surjective isometry \(U\) on \(l^{p}(\mathbb{N})\) such that \(U^{-1}TU\) is the weighted shift with weight sequence \(\{|w_{n}|\}_{n=0}^{\infty}\) (see [14]). In view of this, say \(T\) have circular symmetry. So without loss of generality, we may assume that \(w_{n}\geq 0\) for all \(n\). Next, we first give the following lemmas. **Lemma 2.1**.: _Let \(T\) be an injective forward unilateral weighted shift on \(l^{p}(\mathbb{N})\) with weight sequence \(\{w_{n}\}_{n=0}^{\infty}\). Suppose \(T\) is quasinilpotent. 
Then for any polynomial \(\varphi\),_ \[\lim_{z\to 0}\frac{\ln|\varphi(|z|)|}{\ln\|(z-T)^{-1}e_{0}\|_{p}}=0.\] **Proof.** We first prove that \(\lim\limits_{z\to 0}\frac{\ln|z|}{\ln\|(z-T)^{-1}e_{0}\|_{p}}=0\). For \(z\neq 0\), \[\|(z-T)^{-1}e_{0}\|_{p}^{p} =\|\sum\limits_{n=0}^{\infty}\frac{T^{n}}{z^{n+1}}e_{0}\|_{p}^{p}= \|\frac{e_{0}}{z}+\sum\limits_{n=1}^{\infty}\frac{\prod\limits_{j=0}^{n-1}w_{j }}{z^{n+1}}e_{n}\|_{p}^{p}\] \[=\frac{1}{|z|^{p}}+\frac{1}{|z|^{p}}\sum\limits_{n=1}^{\infty} \frac{\big{(}\prod\limits_{j=0}^{n-1}w_{j}\big{)}^{p}}{|z|^{pn}}.\] Given \(\varepsilon>0\), we can pick a natural number \(N\) so that \(\frac{1}{N}<\varepsilon\). Set \(\delta_{1}=\bigg{(}\prod\limits_{j=0}^{N-1}w_{j}\bigg{)}^{\frac{1}{N}}\). Since \[\lim\limits_{n\to\infty}\Big{(}\sup\limits_{k}\prod\limits_{j=k}^{k+n-1}w_{j} \bigg{)}^{\frac{1}{n}}=\lim\limits_{n\to\infty}\|T^{n}\|^{\frac{1}{n}}=0,\] we have \(\delta_{1}<1\). Thus for \(0<|z|<\delta_{1}\), it holds that \(\frac{\big{(}\prod\limits_{j=0}^{N-1}w_{j}\big{)}^{p}}{|z|^{pN}}>1\). Then \[\ln\|(z-T)^{-1}e_{0}\|_{p}^{p}\geq\ln\bigg{(}\frac{\big{(}\prod\limits_{j=0} ^{N-1}w_{j}\big{)}^{p}}{|z|^{pN}}\bigg{)}>0.\] It follows that \(p\sum\limits_{j=0}^{N-1}\ln w_{j}-pN\ln|z|>0\) and \[\frac{\sum\limits_{j=0}^{N-1}\ln w_{j}}{\ln|z|}-N<0.\] Set \(\delta_{2}=e^{\frac{\varepsilon\big{(}\sum\limits_{j=0}^{N-1}\ln w_{j}\big{)} }{\varepsilon N-1}}\). It is not difficult to check that \(\frac{\varepsilon\big{(}\sum\limits_{j=0}^{N-1}\ln w_{j}\big{)}}{\varepsilon N -1}<0\) and hence \(\delta_{2}<1\). Let \(\delta=\min\{\delta_{1},\delta_{2}\}\). Then, when \(0<|z|<\delta\), we can deduce that \[\bigg{|}\frac{\ln|z|}{\ln\|(z-T)^{-1}e_{0}\|_{p}}\bigg{|} =\frac{|\ln|z||}{\frac{1}{|z|^{p}}+\frac{1}{|z|^{p}}\sum\limits_{n =1}^{\infty}\Big{(}\frac{\prod\limits_{j=0}^{n-1}w_{j}}{|z|^{n}}\Big{)}^{p}}\] \[\leq\frac{|\ln|z||}{\frac{N-1}{\frac{1}{p}\ln\Big{(}\frac{\prod \limits_{j=0}^{N-1}w_{j}}{|z|^{N}}\Big{)}^{p}}}=\bigg{|}\frac{\ln|z|}{\sum \limits_{j=0}^{N-1}\ln w_{j}-N\ln|z|}\bigg{|}\] \[=\bigg{|}\frac{1}{\frac{\sum\limits_{j=0}^{N-1}\ln w_{j}}{\ln|z| }-N}\bigg{|}=\frac{1}{N-\frac{\sum\limits_{j=0}^{N-1}\ln w_{j}}{\ln|z|}}.\] Notice that, for \(|z|<\delta_{2}<1\), \[\frac{\sum\limits_{j=0}^{N-1}\ln w_{j}}{\ln|z|} \leq\frac{\sum\limits_{j=0}^{N-1}\ln w_{j}}{\ln\delta_{2}}\] \[=\big{(}\sum\limits_{j=0}^{N-1}\ln w_{j}\big{)}\bigg{(}\frac{ \varepsilon\big{(}\sum\limits_{j=0}^{N-1}\ln w_{j}\big{)}}{\varepsilon N-1} \bigg{)}^{-1}=N-\frac{1}{\varepsilon}.\] It follows that \[N-\frac{\sum\limits_{j=0}^{N-1}\ln w_{j}}{\ln|z|}\geq\frac{1}{\varepsilon}\] and \(\big{|}\frac{\ln|z|}{\ln\|(z-T)^{-1}e_{0}\|_{p}}\big{|}<\varepsilon\). Hence, \[\lim\limits_{z\to 0}\frac{\ln|z|}{\ln\|(z-T)^{-1}e_{0}\|_{p}}=0.\] In view of this, it is easy to see that \[\lim\limits_{z\to 0}\frac{\ln|z|^{k}}{\ln\|(z-T)^{-1}e_{0}\|_{p}}=\lim \limits_{z\to 0}\frac{k\ln|z|}{\ln\|(z-T)^{-1}e_{0}\|_{p}}=0,\ \forall k\geq 0.\] Without loss of generality, assume that \(\varphi(z)=z^{k}(\sum\limits_{j=0}^{n}c_{j}z^{j})\) and \(c_{0}\neq 0\). 
Then \(|\varphi(|z|)|=|z|^{k}\big{|}\sum\limits_{j=0}^{n}c_{j}|z|^{j}\big{|}\) and \[\lim\limits_{z\to 0}\frac{\ln|\varphi(|z|)|}{\ln\|(z-T)^{-1}e_{0}\|_{p}} =\lim\limits_{z\to 0}\frac{k\ln|z|+\ln\big{|}\sum\limits_{j=0}^{n}c_{j}|z|^{ j}|}{\ln\|(z-T)^{-1}e_{0}\|_{p}}\] \[=\lim\limits_{z\to 0}\Big{(}\frac{k\ln|z|}{\ln\|(z-T)^{-1}e_{0}\|_{p}} +\frac{\ln\big{|}\sum\limits_{j=0}^{n}c_{j}|z|^{j}\big{|}}{\ln\|(z-T)^{-1}e_{0 }\|_{p}}\Big{)}.\] Therefore \[\lim\limits_{z\to 0}\frac{\ln|\varphi(|z|)|}{\ln\|(z-T)^{-1}e_{0}\|_{p}}=0.\] The proof is completed. **Lemma 2.2**.: _Let \(T\) be an injective forward unilateral weighted shift on \(l^{p}(\mathbb{N})\) with weight sequence \(\{w_{n}\}_{n=0}^{\infty}\). Suppose \(T\) is quasinilpotent. Then \(k_{e_{n}}=k_{e_{0}}\) for all \(n\geq 1\)._ **Proof.** Let \(\beta_{0}=1\) and \(\beta_{k}=w_{0}w_{1}\cdots w_{k-1}\) for \(k\geq 1\). For every \(n\geq 1\), \(T^{n}e_{0}=\beta_{n}e_{n}\). So, \[\|(z-T)^{-1}e_{n}\|_{p}^{p} =\|(\beta_{n})^{-1}(z-T)^{-1}T^{n}e_{0}\|_{p}^{p}\] \[=(\beta_{n})^{-p}\|\sum_{k=0}^{\infty}\frac{T^{n+k}}{z^{k+1}}e_{0} \|_{p}^{p}\] \[=(\beta_{n})^{-p}|z|^{pn}\|\sum_{k=0}^{\infty}\frac{T^{n+k}}{z^{n+ k+1}}e_{0}\|_{p}^{p}\] \[=(\beta_{n})^{-p}|z|^{pn}\|\sum_{k=n}^{\infty}\frac{T^{k}}{z^{k+1} }e_{0}\|_{p}^{p}\] \[=(\beta_{n})^{-p}|z|^{pn}\Big{(}\|\sum_{k=0}^{\infty}\frac{T^{k}}{ z^{k+1}}e_{0}\|_{p}^{p}-\|\sum_{k=0}^{n-1}\frac{T^{k}}{z^{k+1}}e_{0}\|_{p}^{p} \Big{)}\] \[=(\beta_{n})^{-p}|z|^{pn}\Big{(}\|(z-T)^{-1}e_{0}\|_{p}^{p}-\|\sum _{k=0}^{n-1}\frac{\beta_{k}}{z^{k+1}}e_{k}\|_{p}^{p}\Big{)}\] \[=(\beta_{n})^{-p}|z|^{pn}\|(z-T)^{-1}e_{0}\|_{p}^{p}\Big{(}1-\|(z- T)^{-1}e_{0}\|_{p}^{-p}\sum_{k=0}^{n-1}\frac{(\beta_{k})^{p}}{|z|^{p(k+1)}} \Big{)}.\] Denote \(\varphi_{n}(\lambda)=(\beta_{n})^{-p}\lambda^{pn}\), \(\psi_{n}(\lambda)=\sum\limits_{k=0}^{n-1}(\beta_{k})^{p}\lambda^{-p(k+1)}\). Then both \(\varphi_{n}(\lambda)\) and \(\lambda^{pn}\psi_{n}(\lambda)\) are polynomials in \(\lambda\). Thus, \[p\ln\|(z-T)^{-1}e_{n}\|_{p}=\ln\varphi_{n}(|z|)+p\ln\|(z-T)^{-1}e_{0}\|_{p}+ \ln\Big{(}1-\|(z-T)^{-1}e_{0}\|_{p}^{-p}\psi_{n}(|z|)\Big{)}.\] Note that \(\lim\limits_{z\to 0}\frac{\ln|z|^{k}}{\ln\|(z-T)^{-1}e_{0}\|_{p}}=0\), \(\forall k\). It follows that \[\lim\limits_{z\to 0}\|(z-T)^{-1}e_{0}\|_{p}^{-p}\psi_{n}(|z|)=0\] and \[\lim\limits_{z\to 0}\ln\Big{(}1-\|(z-T)^{-1}e_{0}\|_{p}^{-p}\psi_{n}(|z|)\Big{)}= \ln 1=0.\] By Lemma 2.1, we have \[\lim\limits_{z\to 0}\frac{\ln\varphi_{n}(|z|)}{\ln\|(z-T)^{-1}e_{0}\|_{p}}=0\] and \(\lim\limits_{z\to 0}\frac{\ln\|(z-T)^{-1}e_{n}\|_{p}}{\ln\|(z-T)^{-1}e_{0}\|_{p}}=1\). Then \[k_{e_{n}} =\limsup\limits_{z\to 0}\frac{\ln\|(z-T)^{-1}e_{n}\|_{p}}{\ln\|(z-T)^{- 1}\|}\] \[=\lim\limits_{z\to 0}\frac{\ln\|(z-T)^{-1}e_{n}\|_{p}}{\ln\|(z-T)^{- 1}e_{0}\|_{p}}\limsup\limits_{z\to 0}\frac{\ln\|(z-T)^{-1}e_{0}\|_{p}}{\ln\|(z-T)^{- 1}\|}\] \[=k_{e_{0}}.\] The proof is completed. For \(k\geq 0\), set \(f_{k}(x)=x_{k}\), \(\forall x=\{x_{n}\}_{n=0}^{\infty}\in l^{p}(\mathbb{N})\). Then \(f_{k}\in(l^{p}(\mathbb{N}))^{\prime}\), the dual space of \(l^{p}(\mathbb{N})\), and \(\|f_{k}\|=1\). If \(1<p<\infty\), for \(f\in(l^{p}(\mathbb{N}))^{\prime}\), there is unique \(\xi=\{\xi_{k}\}_{k=0}^{\infty}\in l^{q}(\mathbb{N})\) such that \(f=\sum\limits_{k=0}^{\infty}\xi_{k}f_{k}\), where \(q=\frac{p}{p-1}\). In this case, \(\|f\|=(\sum\limits_{k=0}^{\infty}|\xi_{k}|^{q})^{\frac{1}{q}}\). 
If \(p=1\), for \(f\in(l^{p}(\mathbb{N}))^{\prime}\), there is unique \(\xi=\{\xi_{k}\}_{k=0}^{\infty}\in l^{\infty}(\mathbb{N})\) such that \(f=\sum\limits_{k=0}^{\infty}\xi_{k}f_{k}\) in weak-star topology, and \(\|f\|=\sup\limits_{k}|\xi_{k}|\). For convenience, we write \[\|f\|_{q}=\begin{cases}\big{(}\sum\limits_{k=0}^{\infty}|\xi_{k}|^{q}\big{)}^{ \frac{1}{q}}&\text{ if }q\neq\infty;\\ \sup\limits_{k}|\xi_{k}|&\text{ if }q=\infty.\end{cases}\] **Lemma 2.3**.: _Let \(T\) be an injective forward unilateral weighted shift on \(l^{p}(\mathbb{N})\) with weight sequence \(\{w_{n}\}_{n=0}^{\infty}\). Suppose that \(T\) is quasinilpotent. Then \(k_{x}\geq k_{e_{0}}\) for any non-zero \(x\in l^{p}(\mathbb{N})\)._ **Proof.** For \(n\geq 0\), we have \[(z-T)^{-1}e_{n} =\sum\limits_{k=0}^{\infty}\frac{1}{z^{k+1}}T^{k}e_{n}=\sum \limits_{k=0}^{\infty}\frac{w_{n}w_{n+1}\cdots w_{n+k-1}}{z^{k+1}}e_{n+k}\] \[=\sum\limits_{k=0}^{\infty}\frac{\beta_{n+k}}{\beta_{n}z^{k+1}} e_{n+k}=\beta_{n}^{-1}z^{n-1}\sum\limits_{k=n}^{\infty}\frac{\beta_{k}}{z^{k}}e_ {k}.\] Suppose that \(x=\sum\limits_{n=0}^{\infty}\hat{x}_{n}e_{n}\in l^{p}(\mathbb{N})\) and \(x\neq 0\). Then there exists a non-negative integer \(n_{0}\) such that \(\hat{x}_{n_{0}}\neq 0\) and \(\hat{x}_{j}=0,\forall j<n_{0}\). So \[(z-T)^{-1}x =\sum\limits_{n=0}^{\infty}\hat{x}_{n}(z-T)^{-1}e_{n}=\sum \limits_{n=n_{0}}^{\infty}\hat{x}_{n}(z-T)^{-1}e_{n}\] \[=\sum\limits_{n=n_{0}}^{\infty}\hat{x}_{n}\beta_{n}^{-1}z^{n-1} \sum\limits_{k=n}^{\infty}\frac{\beta_{k}}{z^{k}}e_{k}=\sum\limits_{k=n_{0}}^{ \infty}\frac{\beta_{k}}{z^{k}}(\sum\limits_{n=n_{0}}^{k}\hat{x}_{n}\beta_{n}^ {-1}z^{n-1})e_{k}.\] It follows that \[\|(z-T)^{-1}x\|_{p} =\sup\Big{\{}|f((z-T)^{-1}x)|:f\in(l^{p}(\mathbb{N}))^{\prime},\| f\|=1\Big{\}}\] \[=\sup\Big{\{}\sum\limits_{k=n_{0}}^{\infty}\big{|}\frac{\xi_{k} \beta_{k}}{z^{k}}\big{|}\big{|}\sum\limits_{n=n_{0}}^{k}\hat{x}_{n}\beta_{n}^{ -1}z^{n-1}\big{|}:f=\{\xi_{k}\}_{k=0}^{\infty}\in l^{q}(\mathbb{N}),\|f\|_{q}=1 \Big{\}}\] \[\geq\sum\limits_{k=n_{0}}^{\infty}\big{|}\frac{\xi_{k}\beta_{k}}{ z^{k}}\big{|}\big{|}\sum\limits_{n=n_{0}}^{k}\hat{x}_{n}\beta_{n}^{-1}z^{n-1} \big{|}.\] We next compute the integral mean value of \(\|(z-T)^{-1}x\|_{p}\). Let \(z=re^{i\theta}\) with \(r=|z|\). Then \[\frac{1}{2\pi}\int_{0}^{2\pi}\|(re^{i\theta}-T)^{-1}x\|_{p}d\theta \geq\frac{1}{2\pi}\int_{0}^{2\pi}\sum_{k=n_{0}}^{\infty}\big{|} \frac{\xi_{k}\beta_{k}}{(re^{i\theta})^{k}}\big{|}\big{|}\sum_{n=n_{0}}^{k} \hat{x}_{n}\beta_{n}^{-1}(re^{i\theta})^{n-1}\big{|}d\theta\] \[=\sum_{k=n_{0}}^{\infty}\big{|}\frac{\xi_{k}\beta_{k}}{r^{k}} \big{|}\frac{1}{2\pi}\int_{0}^{2\pi}\big{|}\sum_{n=n_{0}}^{k}\hat{x}_{n}\beta_ {n}^{-1}r^{n-1}e^{i(n-n_{0})\theta}\big{|}d\theta\] \[\geq\sum_{k=n_{0}}^{\infty}\big{|}\frac{\xi_{k}\beta_{k}}{r^{k}} \big{|}\frac{1}{2\pi}\big{|}\int_{0}^{2\pi}\sum_{n=n_{0}}^{k}\hat{x}_{n}\beta_ {n}^{-1}r^{n-1}e^{i(n-n_{0})\theta}d\theta\big{|}\] \[=\sum_{k=n_{0}}^{\infty}\big{|}\frac{\xi_{k}\beta_{k}}{r^{k}} \big{|}\big{|}\hat{x}_{n_{0}}\beta_{n_{0}}^{-1}r^{n_{0}-1}\big{|}\] \[\geq\big{|}\hat{x}_{n_{0}}\big{|}\big{|}\sum_{k=n_{0}}^{\infty} \frac{\xi_{k}\beta_{k}}{r^{k}}\beta_{n_{0}}^{-1}r^{n_{0}-1}\big{|}\] \[=\big{|}\hat{x}_{n_{0}}\big{|}\big{|}f((z-T)^{-1}e_{n_{0}})\big{|},\] where \(f=\{\xi_{k}\}_{k=0}^{\infty}\in(l^{p}(\mathbb{N}))^{\prime}\) and \(\|f\|_{q}=1\). 
Thus for any \(|z|=r\), we have \[\sup_{\theta\in[0,2\pi]}\|(re^{i\theta}-T)^{-1}x\|_{p} \geq\frac{1}{2\pi}\int_{0}^{2\pi}\|(re^{i\theta}-T)^{-1}x\|_{p}d\theta\] \[\geq|x_{n_{0}}|\|(z-T)^{-1}e_{n_{0}}\|_{p}.\] Pick \(z=re^{i\theta}\) so that \[\|(z-T)^{-1}x\|_{p}=\sup_{\theta\in[0,2\pi]}\|(re^{i\theta}-T)^{-1}x\|_{p}.\] Thereby, \[\|(re^{i\theta}-T)^{-1}x\|_{p}\geq|\hat{x}_{n_{0}}|\|(re^{i\theta}-T)^{-1}e_{n_ {0}}\|_{p}. \tag{2.1}\] Note that for \(z\neq 0\), \[\|(z-T)^{-1}e_{n_{0}}\|_{p}=\|(|z|-T)^{-1}e_{n_{0}}\|_{p}.\] And since \(T\) have circular symmetry, we have that \[\|(z-T)^{-1}\|=\|(|z|-T)^{-1}\|,\ \forall z\neq 0.\] So \[k_{e_{n_{0}}}=\limsup_{|z|\to 0}\frac{\ln\|(|z|-T)^{-1}e_{n_{0}}\|_{p}}{\ln\|(|z |-T)^{-1}\|}.\] We can choose a sequence \(\{r_{j}\}_{j=0}^{\infty}\subset[0,+\infty)\) with \(\lim_{j\to\infty}r_{j}=0\) such that \[k_{e_{n_{0}}}=\lim_{j\to\infty}\frac{\ln\|(r_{j}-T)^{-1}e_{n_{0}}\|_{p}}{\ln\|( r_{j}-T)^{-1}\|}. \tag{2.2}\] By (2.1), one can pick a sequence \(\{z_{j}\}_{j=0}^{\infty}\subset\mathbb{C}\) with \(|z_{j}|=r_{j}\) such that \[\liminf_{j\to\infty}\frac{\ln\|(z_{j}-T)^{-1}x\|_{p}}{\ln\|(z_{j}-T)^{-1}e_{n_ {0}}\|_{p}}\geq 1. \tag{2.3}\] Combining (2.2) and (2.3), we can deduce that \[k_{x} =\limsup_{z\to 0}\frac{\ln\|(z-T)^{-1}x\|_{p}}{\ln\|(z-T)^{-1}\|}\] \[\geq\liminf_{j\to\infty}\frac{\ln\|(z_{j}-T)^{-1}x\|_{p}}{\ln\|(z_ {j}-T)^{-1}\|}\] \[=\liminf_{j\to\infty}\frac{\ln\|(z_{j}-T)^{-1}x\|_{p}}{\ln\|(z_{ j}-T)^{-1}e_{n_{0}}\|_{p}}\lim_{j\to\infty}\frac{\ln\|(z_{j}-T)^{-1}e_{n_{0}}\|_{p} }{\ln\|(z_{j}-T)^{-1}\|}\] \[=\liminf_{j\to\infty}\frac{\ln\|(z_{j}-T)^{-1}x\|_{p}}{\ln\|(z_{ j}-T)^{-1}e_{n_{0}}\|_{p}}\lim_{j\to\infty}\frac{\ln\|(r_{j}-T)^{-1}e_{n_{0}}\|_{p} }{\ln\|(r_{j}-T)^{-1}\|}\] \[\geq k_{e_{n_{0}}}.\] By Lemma 2.2, it holds that \(k_{x}\geq k_{e_{0}}\) for any non-zero \(x\in l^{p}(\mathbb{N})\). From the above lemma, we can get the following theorem. **Theorem 2.4**.: _Suppose that \(T\) is an injective quasinilpotent forward unilateral weighted shift on \(l^{p}(\mathbb{N})\). If \(k_{e_{0}}=1\), then \(\Lambda(T)=\{1\}\)._ **Proof.** By Lemma 2.3, we have \(k_{x}\geq k_{e_{0}}=1\) for any non-zero \(x\in l^{p}(\mathbb{N})\). Then by the fact \(0\leq k_{x}\leq 1\), it holds that \(k_{x}=1\). Hence, \(\Lambda(T)=\{1\}\). Recall that \(T\in\mathcal{B}(X)\) is called strictly cyclic if there exists a vector \(x_{0}\in X\) such that \(\{Ax_{0}:A\in\mathcal{A}(T)\}=X\), where \(\mathcal{A}(T)\) denote the closed subalgebra generated by the identity \(I\) and \(T\) in the weak operator topology. Such vector \(x_{0}\) is called a strictly cyclic vector for \(T\) (see [14]). **Corollary 2.5**.: _Let \(T\) be a strictly cyclic quasinilpotent forward unilateral weighted shift on \(l^{p}(\mathbb{N})\) for \(1\leq p<\infty\), then \(\Lambda(T)=\{1\}\)._ **Proof.** Obviously, \(e_{0}\) is a cyclic vector for \(T\). Let \(\Phi\) be the map from \(\mathcal{A}(T)\) to \(X\) such that \(\Phi(A)=Ae_{0}\) for \(A\in\mathcal{A}(T)\). Then \(\Phi\) is invertible (see [14, Page 93]). So, there exists a constant \(c>0\) such that \[c\|A\|\leq\|Ae_{0}\|\leq\|A\|,\quad\forall A\in\mathcal{A}(T).\] Since \(T\) quasinilpotent, for \(z\neq 0\), \((z-T)^{-1}\in\mathcal{A}(T)\). Thus \[\frac{\ln\|(z-T)^{-1}e_{0}\|_{p}}{\ln\|(z-T)^{-1}\|}\geq\frac{\ln C\|(z-T)^{-1 }\|}{\ln\|(z-T)^{-1}\|}=\frac{\ln C}{\ln\|(z-T)^{-1}\|}+1\] and \(\liminf_{z\to 0}\frac{\ln\|(z-T)^{-1}e_{0}\|_{p}}{\ln\|(z-T)^{-1}\|}\geq 1\). 
It follows from \(0\leq k_{e_{0}}\leq 1\) that \[k_{e_{0}}=\lim_{z\to 0}\frac{\ln\|(z-T)^{-1}e_{0}\|_{p}}{\ln\|(z-T)^{-1}\|}=1.\] By Theorem 2.4, have \(\Lambda(T)=\{1\}\). **Corollary 2.6**.: _Let \(T\) be an injective forward unilateral weighted shift on \(l^{1}(\mathbb{N})\) with weight sequence \(\{w_{n}\}_{n=0}^{\infty}\). If \(\{w_{n}\}_{n=0}^{\infty}\) is decreasing to \(0\), then \(\Lambda(T)=\{1\}\)._ **Proof.** It is not difficult to check that \(\|T^{n}\|=\|T^{n}e_{0}\|=\beta_{n}\) and \(\sigma(T)=\{0\}\). So for \(z\neq 0\), \[\|(z-T)^{-1}\|=\big{\|}\sum_{n=0}^{\infty}\frac{T^{n}}{z^{n+1}}\big{\|} \leq\sum_{n=0}^{\infty}\frac{\|T^{n}\|}{|z|^{n+1}}=\sum_{n=0}^{ \infty}\frac{\beta_{n}}{|z|^{n+1}}\] \[=\|(z-T)^{-1}e_{0}\|_{1}\leq\|(z-T)^{-1}\|.\] Then \[k_{e_{0}}=\limsup_{z\to 0}\frac{\ln\|(z-T)^{-1}e_{0}\|_{1}}{\ln\|(z-T)^{-1}\|}= \limsup_{z\to 0}\frac{\ln\|(z-T)^{-1}\|}{\ln\|(z-T)^{-1}\|}=1.\] By Theorem 2.5, have \(\Lambda(T)=\{1\}\). ## 3. Some backward weighted shifts with power set \([0,1]\) For \(1\leq p<\infty\), let \(T\) be an injective quasinilpotent backward unilateral weighted shift on \(l^{p}(\mathbb{N})\) with weight sequence \(\{w_{n}\}_{n=1}^{\infty}\). Given \(m\geq 1\), for \(z\neq 0\), \[\|(z-T)^{-1}e_{m}\|_{p}^{p}=\|\sum_{i=0}^{m}\frac{1}{z^{i+1}}T^{i }e_{m}\|_{p}^{p} =\|\frac{e_{m}}{z}+\sum_{i=1}^{m}\big{(}\frac{1}{z^{i+1}}\prod_{j= 0}^{i-1}w_{m-j}\big{)}e_{m-i}\|_{p}^{p}\] \[=\frac{1}{|z|^{p}}+\sum_{i=1}^{m}\Big{|}\frac{1}{z^{i+1}}\prod_{ j=0}^{i-1}w_{m-j}\Big{|}^{p}.\] Since \(\|(z-T)^{-1}e_{N}\|_{p}\leq\|(z-T)^{-1}\|\) for any \(N\in\mathbb{N}\), it follows that \[k_{e_{m}} =\limsup_{z\to 0}\frac{\ln\|(z-T)^{-1}e_{m}\|_{p}}{\ln\|(z-T)^{-1}\|}\] \[\leq\limsup_{z\to 0}\frac{\ln\|(z-T)^{-1}e_{m}\|_{p}}{\ln\|(z-T)^{-1 }e_{N}\|_{p}}=\frac{m+1}{N+1}.\] Because of the above formula holds for any \(N\in\mathbb{N}\), it holds that \(k_{e_{m}}\leq 0\). By the fact \(0\leq k_{e_{m}}\leq 1\), we have \(k_{e_{m}}=0\). Further, it is obvious that \(k_{e_{0}}=0\). Therefore, we conclude that \(k_{e_{m}}=0\) for every \(m\geq 0\). In view of this, suppose that \(x\in\operatorname{span}\{e_{n}:n\geq 0\}\), it is not difficult to check that \(k_{x}=0\). So, this gives people a false impression that \(\Lambda(T)=\{0\}\). Actually, that is not the case. In this section, we aim to prove that the power set of a class of backward unilateral weighted shifts is \([0,1]\). Throughout this section, for \(k\geq 0\), we set \(f_{k}(x)=x_{k}\) for all \(x=\{x_{n}\}_{n=0}^{\infty}\in l^{p}(\mathbb{N})\). We first give the following Theorem. **Theorem 3.1**.: _Let \(T\) be an injective quasinilpotent backward unilateral weighted shift on \(l^{p}(\mathbb{N})\) with decreasing weight sequence \(\{w_{n}\}_{n=1}^{\infty}\). If \(\{w_{n}\}_{n=1}^{\infty}\) is \(p^{\prime}\)-summable for some positive number \(p^{\prime}\), then \(1\in\Lambda(T)\)._ **Proof.** Since \(\{w_{n}\}_{n=1}^{\infty}\) is \(p^{\prime}\)-summable for some positive number \(p^{\prime}\), we can pick \(m\geq 1\) so that the sequence \(\{\prod\limits_{j=1}^{m}w_{n+j}\}_{n=0}^{\infty}\) is summable. Now, we set \(x=\{\alpha_{n}\}_{n=0}^{\infty}\), where \(\alpha_{n}=\prod\limits_{j=1}^{m}w_{n+j}\). Then \(x\in l^{p}(\mathbb{N})\) for any \(p\in[1,+\infty)\). 
We can first observe that \[\|(z-T)^{-1}x\|_{p}\geq\big{|}f_{0}\big{(}(z-T)^{-1}x\big{)}\big{|},\quad\forall z\neq 0\] \[k_{x}=\limsup_{z\to 0}\frac{\ln\|(z-T)^{-1}x\|_{p}}{\ln\|(z-T)^{-1}\|}\geq \limsup_{z\to 0}\frac{\ln\big{|}f_{0}\big{(}(z-T)^{-1}x\big{)}\big{|}}{\ln\|(z-T)^{-1} \|}.\] Next, we compute the value of \(f_{0}\big{(}(z-T)^{-1}x\big{)}\). For \(z\neq 0\), \[\begin{split} f_{0}\big{(}(z-T)^{-1}x\big{)}&=\sum \limits_{n=0}^{\infty}\frac{f_{0}(T^{n}x)}{z^{n+1}}=\frac{\alpha_{0}}{z}+\sum \limits_{n=1}^{\infty}\frac{\prod\limits_{j=1}^{n}w_{j}}{z^{n+1}}\alpha_{n}\\ &=\frac{\alpha_{0}}{z}+\sum\limits_{n=1}^{\infty}\frac{\prod \limits_{j=1}^{n+m}w_{j}}{z^{n+1}}=\frac{\alpha_{0}}{z}+z^{m}\sum\limits_{n=1} ^{\infty}\frac{\prod\limits_{j=1}^{n+m}w_{j}}{z^{n+m+1}}\\ &=\frac{\alpha_{0}}{z}+z^{m}\sum\limits_{k=m+1}^{\infty}\frac{ \prod\limits_{j=1}^{k}w_{j}}{z^{k+1}}\\ &=\frac{\alpha_{0}}{z}+z^{m}\big{(}\sum\limits_{k=1}^{\infty} \frac{\prod\limits_{j=1}^{k}w_{j}}{z^{k+1}}-\sum\limits_{k=1}^{m}\frac{\prod \limits_{j=1}^{k}w_{j}}{z^{k+1}}\big{)}.\end{split} \tag{3.1}\] Since the weight sequence \(\{w_{n}\}_{n=1}^{\infty}\) is decreasing, it follows that \(\|T^{k}\|=\prod\limits_{j=1}^{k}w_{j}\) and \[\|(z-T)^{-1}\|\leq\frac{1}{|z|}+\sum\limits_{k=1}^{\infty}\frac{\|T^{k}\|}{|z |^{k+1}}=\frac{1}{|z|}+\sum\limits_{k=1}^{\infty}\frac{\prod\limits_{j=1}^{k}w _{j}}{|z|^{k+1}}. \tag{3.2}\] Combining (3.1) and (3.2), we have that \[\begin{split} k_{x}&\geq\limsup_{z\to 0}\frac{\ln\big{|}f_{0} \big{(}(z-T)^{-1}x\big{)}\big{|}}{\ln\|(z-T)^{-1}\|}\\ &\geq\limsup_{z\to 0}\frac{\ln\big{|}\frac{\alpha_{0}}{z}+z^{m} \big{(}\sum\limits_{k=1}^{\infty}\frac{\prod\limits_{j=1}^{k}w_{j}}{z^{k+1}}- \sum\limits_{k=1}^{m}\frac{\prod\limits_{j=1}^{k}w_{j}}{|z^{k+1}}\big{)}\big{|} }{\ln\Big{(}\frac{1}{|z|}+\sum\limits_{k=1}^{\infty}\frac{\prod\limits_{j=1}^{ k}w_{j}}{|z|^{k+1}}\Big{)}}\\ &\geq\limsup_{|z|\to 0}\frac{\ln\Big{(}\frac{\alpha_{0}}{|z|}+|z |^{m}\big{(}\sum\limits_{k=1}^{\infty}\frac{\prod\limits_{j=1}^{k}w_{j}}{|z|^{ k+1}}-\sum\limits_{k=1}^{m}\frac{\prod\limits_{j=1}^{k}w_{j}}{|z|^{k+1}}\big{)} \Big{)}}{\ln\Big{(}\frac{1}{|z|}+\sum\limits_{k=1}^{\infty}\frac{\prod\limits_ {j=1}^{k}w_{j}}{|z|^{k+1}}\Big{)}}=1.\end{split} \tag{3.3}\] By the fact \(0\leq k_{x}\leq 1\), it yields that \(k_{x}=1\). Hence \(1\in\Lambda(T)\). **Remark 3.2**.: By Theorem 3.1, one can see that the power set of an injective quasinilpotent backward weighted shift can also contain \(1\). That provides a negative answer to Question 5. Therefore, it will be very interesting to calculate its power set. Next, we will show that the power set of some injective quasinilpotent backward weighted shifts is \([0,1]\). **Lemma 3.3**.: _Suppose that \(T\) is an injective quasinilpotent backward unilateral weighted shift on \(l^{p}(\mathbb{N})\) with weight sequence \(\{w_{n}\}_{n=1}^{\infty}\). If there is a natural number \(m\) such that \(x=\{\alpha_{n}\}_{n=0}^{\infty}\) in \(l^{p}(\mathbb{N})\), where \(\alpha_{0}=1\) and \(\alpha_{n}=\big{(}(n+m)!\prod\limits_{j=1}^{n}w_{j}\big{)}^{-1}\) for \(n\geq 1\), then \([0,k_{x}]\subset\Lambda(T)\)._ **Proof.** For \(0<r\leq 1\), let \(x_{r}=\big{\{}a_{n}\big{\}}_{n=0}^{\infty}\), where \(a_{0}=1\) and \(a_{n}=r^{n}\big{(}(n+m)!\prod\limits_{j=1}^{n}w_{j}\big{)}^{-1}\) for \(n\geq 1\). Then \(x_{1}=x\) and \(x_{r}\in l^{p}(\mathbb{N})\). 
For \(z\neq 0\), \[\begin{split} f_{0}\big{(}(|z|-T)^{-1}x_{r}\big{)}& =\sum_{k=0}^{\infty}\frac{f_{0}(T^{k}x_{r})}{|z|^{k+1}}=\frac{1}{|z| }\Big{(}1+\sum_{k=1}^{\infty}\frac{1}{(k+m)!}\frac{r^{k}}{|z|^{k}}\Big{)}\\ &=\frac{1}{|z|}\Big{(}1+\frac{|z|^{m}}{r^{m}}\sum_{k=1}^{\infty} \frac{1}{(k+m)!}\frac{r^{(k+m)}}{|z|^{(k+m)}}\Big{)}\\ &=\frac{1}{|z|}+\frac{|z|^{m-1}}{r^{m}}\sum_{n=m+1}^{\infty} \frac{1}{n!}\frac{r^{n}}{|z|^{n}}\\ &=\frac{1}{|z|}+\frac{|z|^{m-1}}{r^{m}}\Big{(}\sum_{n=1}^{\infty }\frac{1}{n!}\frac{r^{n}}{|z|^{n}}-\sum_{n=1}^{m}\frac{1}{n!}\frac{r^{n}}{|z| ^{n}}\Big{)}\\ &=\frac{1}{|z|}+\frac{|z|^{m-1}}{r^{m}}\Big{(}e^{\frac{r}{|z|}}- \sum_{n=0}^{m}\frac{1}{n!}\frac{r^{n}}{|z|^{n}}\Big{)}.\end{split} \tag{3.4}\] It follows that \[\lim_{z\to 0}\frac{\ln\big{|}f_{0}\big{(}(|z|-T)^{-1}x_{r}\big{)} \big{|}}{\ln\big{|}f_{0}\big{(}(|z|-T)^{-1}x\big{)}\big{|}}=\lim_{|z|\to 0}\frac{\ln \Big{(}\frac{1}{|z|}+\frac{|z|^{m-1}}{r^{m}}\big{(}e^{\frac{r}{|z|}}-\sum_{n= 0}^{m}\frac{1}{n!}\frac{r^{n}}{|z|^{n}}\big{)}\Big{)}}{\ln\Big{(}\frac{1}{|z|} +|z|^{m-1}\big{(}e^{\frac{1}{|z|}}-\sum_{n=0}^{m}\frac{1}{n!}\frac{1}{|z|^{n} }\big{)}\Big{)}}=r. \tag{3.5}\] Moreover, for any \(n\geq 1\) we have \[\begin{split} f_{n}\big{(}(z-T)^{-1}& x_{r}\big{)} =\sum_{k=0}^{\infty}\frac{f_{n}(T^{k}x_{r})}{z^{k+1}}\\ &=\frac{1}{(n+m)!\prod\limits_{j=1}^{n}w_{j}}\frac{r^{n}}{z}+ \frac{w_{n+1}}{(n+m+1)!\prod\limits_{j=1}^{n+1}w_{j}}\frac{r^{n+1}}{z^{2}}+ \frac{w_{n+1}w_{n+2}}{(n+m+2)!\prod\limits_{j=1}^{n+2}w_{j}}\frac{r^{n+2}}{ z^{3}}+\cdots\\ &=\frac{1}{(n+m)!\prod\limits_{j=1}^{n}w_{j}}\frac{r^{n}}{z} \Big{(}1+\frac{1}{n+m+1}\frac{r}{z}+\frac{1}{(n+m+1)(n+m+2)}\frac{r^{2}}{z^{2 }}+\cdots\Big{)}\end{split}\] and \[|f_{n}\big{(}(z-T)^{-1}x_{r}\big{)}| \leq\frac{1}{(n+m)!\prod\limits_{j=1}^{n}w_{j}}\frac{r^{n}}{|z|} \Big{(}1+\frac{1}{n+m+1}\frac{r}{|z|}+\frac{1}{(n+m+1)(n+m+2)}\frac{r^{2}}{|z|^ {2}}+\cdots\Big{)}\] \[\leq\frac{1}{(n+m)!\prod\limits_{j=1}^{n}w_{j}}\frac{r^{n}}{|z|} \Big{(}1+\frac{1}{(m+1)}\frac{r}{|z|}+\frac{1}{(m+1)(m+2)}\frac{r^{2}}{|z|^{2}} +\cdots\Big{)}\] \[=\frac{r^{n}}{(n+m)!\prod\limits_{j=1}^{n}w_{j}}|f_{0}\big{(}(|z| -T)^{-1}x_{r}\big{)}|.\] It follows that \[\begin{split}\|(z-T)^{-1}x_{r}\|_{p}&=\Big{(}\sum \limits_{n=0}^{\infty}\big{|}f_{n}\big{(}(z-T)^{-1}x_{r}\big{)}\big{|}^{p} \Big{)}^{\frac{1}{p}}\\ &\leq\big{|}f_{0}\big{(}(|z|-T)^{-1}x_{r}\big{)}\big{|}\bigg{(}1+ \sum\limits_{n=1}^{\infty}\Big{(}\frac{r^{n}}{(n+m)!\prod\limits_{j=1}^{n}w_{ j}}\Big{)}^{p}\bigg{)}^{\frac{1}{p}}\\ &=\big{|}f_{0}\big{(}(|z|-T)^{-1}x_{r}\big{)}\big{|}\|x_{r}\|_{p}.\end{split} \tag{3.6}\] Moreover, it is obvious that \(\|(z-T)^{-1}x_{r}\|_{p}\geq\big{|}f_{0}\big{(}(z-T)^{-1}x_{r}\big{)}\big{|},\, \forall z\neq 0\). So \[\frac{\ln\big{|}f_{0}\big{(}(|z|-T)^{-1}x_{r}\big{)}\big{|}}{\ln\|(z-T)^{-1} \|}\leq\frac{\ln\|(z-T)^{-1}x_{r}\|_{p}}{\ln\|(z-T)^{-1}\|}\leq\frac{\ln\big{|} f_{0}\big{(}(|z|-T)^{-1}x_{r}\big{)}\big{|}\|x_{r}\|_{p}}{\ln\|(z-T)^{-1}\|}.\] On the other hand, since \(T\) have circular symmetry, we have \[\|(z-T)^{-1}\|=\|(|z|-T)^{-1}\|,\,\,\forall z\neq 0.\] Then, we can conclude that \[k_{x_{r}}=\limsup_{|z|\to 0}\frac{\ln\big{|}f_{0}\big{(}(|z|-T)^{-1}x_{r} \big{)}\big{|}}{\ln\|(|z|-T)^{-1}\|},\,\,\forall r\in(0,1]. 
\tag{3.7}\] Combining (3.5) and (3.7), we deduce \[k_{x_{r}} =\limsup_{z\to 0}\frac{\ln\|(z-T)^{-1}x_{r}\|_{p}}{\ln\|(z-T)^{-1}\|}\] \[=\lim_{|z|\to 0}\frac{\ln\big{|}f_{0}\big{(}(|z|-T)^{-1}x_{r} \big{)}\big{|}}{\ln\big{|}f_{0}\big{(}(|z|-T)^{-1}x\big{)}\big{|}}\limsup_{|z| \to 0}\frac{\ln\big{|}f_{0}\big{(}(|z|-T)^{-1}x\big{)}\big{|}}{\ln\|(|z|-T)^{- 1}\|}\] \[=r\cdot\limsup_{|z|\to 0}\frac{\ln\big{|}f_{0}\big{(}(|z|-T)^{-1}x \big{)}\big{|}}{\ln\|(|z|-T)^{-1}\|}=rk_{x}.\] Since \(0<r\leq 1\) and \(0\in\Lambda(T)\), it holds that \([0,k_{x}]\subset\Lambda(T)\). **Theorem 3.4**.: _Let \(T\) be a backward unilateral weighted shift on \(l^{p}(\mathbb{N})\) with decreasing weight sequence \(\{w_{n}\}_{n=1}^{\infty}\). If there is a positive number \(m_{0}\) such that \(\frac{1}{n+2m_{0}}\leq w_{n+m_{0}}\leq\frac{1}{n}\) for \(n\geq 1\), then \(\Lambda(T)=[0,1]\)._ **Proof.** Pick a positive number \(m>2m_{0}+2\), then for \(n>m_{0}\) we have \[\big{(}(n+m)!\prod_{j=1}^{n}w_{j}\big{)}^{-1} =\big{(}(n+m)!\prod_{j=1}^{m_{0}}w_{j}\prod_{j=m_{0}+1}^{n}w_{j} \big{)}^{-1}\] \[\leq(\prod_{j=1}^{m_{0}}w_{j})^{-1}\frac{(n+2m_{0})!}{(n+m)!}\] \[\leq(\prod_{j=1}^{m_{0}}w_{j})^{-1}\frac{1}{(n+m)(n+m+2)}.\] It follows that \[\begin{split}&\sum_{n=1}^{\infty}\big{(}(n+m)!\prod_{j=1}^{n}w_{j} \big{)}^{-1}\\ &\qquad\leq\sum_{n=1}^{m_{0}}\big{(}(n+m)!\prod_{j=1}^{n}w_{j} \big{)}^{-1}+(\prod_{j=1}^{m_{0}}w_{j})^{-1}\sum_{n=m_{0}+1}^{\infty}\frac{1}{ (n+m)(n+m+2)}<\infty.\end{split} \tag{3.8}\] Let \(x=\{\alpha_{n}\}_{n=0}^{\infty}\), where \(\alpha_{0}=1\) and \(\alpha_{n}=\big{(}(n+m)!\prod\limits_{j=1}^{n}w_{j}\big{)}^{-1}\) for \(n\geq 1\). Then \(x\in l^{p}(\mathbb{N})\) for any \(p\in[1,+\infty)\). By Lemma 3.3, have \([0,k_{x}]\subset\Lambda(T)\). Next, we prove \(k_{x}=1\). From \(w_{n+m_{0}}\leq\frac{1}{n}\), we can deduce that \[\|(z-T)^{-1}\| \leq\sum_{k=0}^{\infty}\frac{\|T^{k}\|}{|z|^{k+1}}=\frac{1}{|z|}+ \sum_{k=1}^{\infty}\frac{\prod\limits_{j=1}^{k}w_{j}}{|z|^{k+1}}\] \[=\frac{1}{|z|}+\sum_{k=1}^{m_{0}}\frac{\prod\limits_{j=1}^{k}w_{j }}{|z|^{k+1}}+\sum_{k=m_{0}+1}^{\infty}\frac{\prod\limits_{j=1}^{k}w_{j}}{|z|^ {k+1}}\] \[\leq\frac{1}{|z|}+\sum_{k=1}^{m_{0}}\frac{\prod\limits_{j=1}^{k}w _{j}}{|z|^{k+1}}+\sum_{k=m_{0}+1}^{\infty}\frac{1}{k!|z|^{k+1}}\] \[=\frac{1}{|z|}+\sum_{k=1}^{m_{0}}\frac{\prod\limits_{j=1}^{k}w_{j }}{|z|^{k+1}}+\frac{1}{|z|}\big{(}\sum_{k=1}^{\infty}\frac{1}{k!|z|^{k}}-\sum_ {k=1}^{m_{0}}\frac{1}{k!|z|^{k}}\big{)}\] \[=\frac{1}{|z|}+\sum_{k=1}^{m_{0}}\frac{\prod\limits_{j=1}^{k}w_{j }}{|z|^{k+1}}+\frac{1}{|z|}\big{(}e^{\frac{1}{|z|}}-\sum_{k=0}^{m_{0}}\frac{1 }{k!|z|^{k}}\big{)}.\] Combining (3.4), we can obtain that \[k_{x} =\limsup_{z\to 0}\frac{\ln\|(z-T)^{-1}x\|_{p}}{\ln\|(z-T)^{-1}\|}\] \[\geq\limsup_{z\to 0}\frac{\ln|f_{0}\big{(}(z-T)^{-1}x\big{)}|}{\ln\|(z- T)^{-1}\|}\] \[\geq\limsup_{|z|\to 0}\frac{\ln\Big{(}\frac{1}{|z|}+|z|^{m-1} \big{(}e^{\frac{1}{|z|}}-\sum\limits_{n=0}^{m}\frac{1}{n!}\frac{1}{|z|^{n}} \big{)}\Big{)}}{\ln\Big{(}\frac{1}{|z|}+\sum\limits_{k=1}^{m_{0}}\frac{\prod \limits_{j=1}^{k}w_{j}}{|z|^{k+1}}+\frac{1}{|z|}\big{(}e^{\frac{1}{|z|}}-\sum \limits_{k=0}^{m_{0}}\frac{1}{k!|z|^{k}}\big{)}\Big{)}}=1.\] Since \(0\leq k_{x}\leq 1\), it yields that \(k_{x}=1\). Hence \(\Lambda(T)=[0,1]\). **Corollary 3.5**.: _Let \(T\) be the backward unilateral weighted shift with weight sequence \(\{\frac{1}{n+1}\}_{n=0}^{\infty}\) on \(l^{p}(\mathbb{N})\) for \(1\leq p<\infty\). Then \(\Lambda(T)=[0,1]\)._ **Proof.** It immediately follows from Theorem 3.4. ## 4. 
A bilateral weighted shift with power set \([\frac{1}{2},1]\) For \(1<p<\infty\), let \(A\) be an injective forward unilateral weighted shift on \(l^{p}(\mathbb{N})\) with weight sequence \(\{w_{n}\}_{n=1}^{\infty}\). If \(\{w_{n}\}_{n=1}^{\infty}\) is monotone decreasing and \(p^{\prime}\)-summable for some positive number \(p^{\prime}\), then \(A\) is strictly cyclic (see [9, Corollary 3.5]). So, by Corollary 2.5, have \(\Lambda(A)=\{1\}\). In view of this, the aim of this section is to construct an injective quasinilpotent bilateral weighted shift with power set \([\frac{1}{2},1]\) on \(l^{p}(\mathbb{Z})\) for \(1<p<\infty\). For \(k\in\mathbb{Z}\), set \(f_{k}(x)=x_{k}\), \(\forall x=\{x_{n}\}_{n=-\infty}^{+\infty}\in l^{p}(\mathbb{Z})\). Let \(e_{1}\otimes f_{0}\) denote the rank-one operator on \(l^{p}(\mathbb{Z})\) defined as \((e_{1}\otimes f_{0})(x)=f_{0}(x)e_{1},\forall x\in l^{p}(\mathbb{Z})\). Now, we defined an injective quasinilpotent bilateral weighted shift on \(l^{p}(\mathbb{Z})\) as follows: \[T=\begin{bmatrix}B&0\\ e_{1}\otimes f_{0}&A\end{bmatrix} \tag{4.1}\] where \[A=\begin{bmatrix}0\\ w_{1}&0\\ &w_{2}&0\\ &&w_{3}&0\\ &&&\ddots&\ddots\end{bmatrix}\begin{bmatrix}e_{1}\\ e_{2}\\ e_{3},\\ e_{4}\end{bmatrix}\quad B=\begin{bmatrix}0&w_{1}\\ &0&w_{2}\\ &&0&w_{3}\\ &&&0&w_{4}\\ &&&\ddots&\ddots\end{bmatrix}\begin{bmatrix}e_{0}\\ e_{-1}\\ e_{-2}.\\ e_{-3}\\ \vdots\end{bmatrix}\] Note that \(A=B^{tr}\), where \(B^{tr}\) is the transpose of \(B\). And it is not difficult to prove that \(\|(z-A)^{-1}\|=\|(z-B)^{-1}\|\) for \(z\neq 0\). Then we have the following lemmas. **Lemma 4.1**.: _Let \(T\) be as in \((\ref{eq:1})\) above. If \(\{w_{n}\}_{n=1}^{\infty}\) is monotone decreasing and \(p^{\prime}\)-summable for some positive number \(p^{\prime}\), then \(\Lambda(T)\subset[\frac{1}{2},1]\)._ **Proof.** We first compute the value of \(\|(z-T)^{-1}\|\). For \(z\neq 0\), \[(z-T)^{-1}=\begin{bmatrix}(z-B)^{-1}&0\\ -(z-A)^{-1}e_{1}\otimes(z-B^{\prime})^{-1}f_{0}&(z-A)^{-1}\end{bmatrix}, \tag{4.2}\] where \(B^{\prime}\) is the Banach conjugate of \(B\). It follows that \[\|(z-T)^{-1}\|\leq\|(z-A)^{-1}\|+\|(z-B)^{-1}\|+\|(z-A)^{-1}e_{1}\|_{p}\|(z-B^{ \prime})^{-1}f_{0}\|_{q}\] and \[\|(z-A)^{-1}e_{1}\|_{p}\|(z-B^{\prime})^{-1}f_{0}\|_{q}\leq\|(z-T)^{-1}\|,\] where \(q=\frac{p}{p-1}\). Noting that \(e_{1}\) is a strictly cyclic vector of \(A\). So, there exists a constant \(c>0\) such that \[c\|(z-A)^{-1}\|\leq\|(z-A)^{-1}e_{1}\|_{p}\leq\|(z-A)^{-1}\|.\] In view of this, we have \[k_{(A,e_{1})}=\lim_{z\to 0}\frac{\ln\|(z-A)^{-1}e_{1}\|_{p}}{\ln\|(z-A)^{-1}\|}=1. \tag{4.3}\] For convenience, given two functions \(X(z),Y(z):\mathbb{C}\backslash\{0\}\to[0,\infty)\), we write \(X(z)\approx Y(z)\) if \(\lim_{z\to 0}\frac{X(z)}{Y(z)}=1\). Hence, when \(z\) tends to \(0\), we have that \[\ln\|(z-A)^{-1}\|\approx\ln\|(z-A)^{-1}e_{1}\|_{p}.\] Similarly, we can also obtain \[\ln\|(z-A)^{-1}\|=\ln\|(z-B)^{-1}\|=\ln\|(z-B^{\prime})^{-1}\|\approx\ln\|(z-B ^{\prime})^{-1}f_{0}\|_{q}.\] Therefore, as \(z\) tends to \(0\), we have \[\begin{split}\ln\|(z-T)^{-1}\|&\approx\ln\|(z-A)^{-1 }e_{1}\|_{p}\|(z-B^{\prime})^{-1}f_{0}\|_{q}\\ &\approx\ln\|(z-A)^{-1}\|^{2}.\end{split} \tag{4.4}\] Next, we prove that \(k_{(T,x)}\geq\frac{1}{2}\) holds for any non-zero \(x\in l^{p}(\mathbb{Z})\). Let \(x=\left[\begin{array}{c}x_{1}\\ x_{2}\end{array}\right]\in l^{p}(\mathbb{Z})\) and \(x\neq 0\). 
Then \[(z-T)^{-1}x=\begin{bmatrix}(z-B)^{-1}x_{1}\\ -\big{(}(z-B^{\prime})^{-1}f_{0}\big{)}(x_{1})(z-A)^{-1}e_{1}+(z-A)^{-1}x_{2} \end{bmatrix}. \tag{4.5}\] Now the rest proof is divided into three cases. **Case 1.**\(x_{1}=0,x_{2}\neq 0\). It is clear that \(\|(z-T)^{-1}x\|_{p}=\|(z-A)^{-1}x_{2}\|_{p}\). Since \(k_{(A,x_{2})}=1\), it follows that \[\begin{split} k_{(T,x)}=\limsup_{z\to 0}\frac{\ln\|(z-T)^{-1}x\|_{p}}{ \ln\|(z-T)^{-1}\|}&=\limsup_{z\to 0}\frac{\ln\|(z-A)^{-1}x_{2}\|_{p}}{ \ln\|(z-T)^{-1}\|}\\ &\approx\limsup_{z\to 0}\frac{\ln\|(z-A)^{-1}x_{2}\|_{p}}{ \ln\|(z-A)^{-1}\|^{2}}=\frac{1}{2}.\end{split}\] **Case 2.**\(x_{1}\neq 0,x_{2}=0\). In this case, we can observe that \[\|(z-T)^{-1}x\|_{p}\geq\big{|}\big{(}(z-B^{\prime})^{-1}f_{0}\big{)}(x_{1}) \big{|}\|(z-A)^{-1}e_{1}\|_{p}.\] Notice that \[\big{(}(z-B^{\prime})^{-1}f_{0}\big{)}(x_{1})=f_{0}\big{(}(z-B)^{-1}x_{1}\big{)} =\sum_{k=0}^{\infty}\frac{f_{0}(B^{k}x_{1})}{z^{k+1}}.\] Since \(x_{1}\neq 0\), we have that \[\limsup_{z\to 0}\big{|}\big{(}(z-B^{\prime})^{-1}f_{0}\big{)}(x_{1})\big{|}=+\infty.\] We can pick a sequence \(\{z_{j}\}_{j=0}^{\infty}\subset\mathbb{C}\) with \(\lim\limits_{j\to\infty}|z_{j}|=0\) such that \[|\big{(}(z_{j}-B^{\prime})^{-1}f_{0}\big{)}(x_{1})|>1,\quad\forall j\geq 0. \tag{4.6}\] Combining (4.3) and (4.6), we have \[k_{(T,x)} =\limsup_{z\to 0}\frac{\ln\|(z-T)^{-1}x\|_{p}}{\ln\|(z-T)^{-1}\|}\] \[\approx\limsup_{z\to 0}\frac{\ln\|(z-T)^{-1}x\|_{p}}{\ln\|(z-A)^{-1} \|^{2}}\] \[\geq\limsup_{z\to 0}\frac{\ln\big{|}\big{(}(z-B^{\prime})^{-1}f_{0} \big{)}(x_{1})\big{|}+\ln\|(z-A)^{-1}e_{1}\|_{p}}{\ln\|(z-A)^{-1}\|^{2}}\] \[\geq\limsup_{j\to\infty}\frac{\ln\big{|}\big{(}(z_{j}-B^{\prime}) ^{-1}f_{0}\big{)}(x_{1})\big{|}+\ln\|(z_{j}-A)^{-1}e_{1}\|_{p}}{\ln\|(z_{j}-A )^{-1}\|^{2}}\geq\frac{1}{2}.\] **Case 3.**\(x_{i}\neq 0,i=1,2\). Let \(x_{2}=\sum\limits_{n=1}^{\infty}\alpha_{n}e_{n}\) and \(\widetilde{x}_{2}=\sum\limits_{n=2}^{\infty}\alpha_{n}e_{n}\). We have \[-\big{(}(z-B^{\prime})^{-1}f_{0}\big{)}(x_{1})(z-A)^{-1}e_{1}+(z- A)^{-1}x_{2}\] \[=\big{(}-\big{(}(z-B^{\prime})^{-1}f_{0}\big{)}(x_{1})+\alpha_{1} \big{)}(z-A)^{-1}e_{1}+(z-A)^{-1}\widetilde{x}_{2}\] and \[\|(z-T)^{-1}x\|_{p} \geq\big{\|}\big{(}\alpha_{1}-\big{(}(z-B^{\prime})^{-1}f_{0} \big{)}(x_{1})\big{)}(z-A)^{-1}e_{1}+(z-A)^{-1}\widetilde{x}_{2}\big{\|}_{p}\] \[\geq\big{|}\big{(}\alpha_{1}-\big{(}(z-B^{\prime})^{-1}f_{0} \big{)}(x_{1})\big{)}\big{|}\|(z-A)^{-1}e_{1}\|_{p}-\|(z-A)^{-1}\widetilde{x}_ {2}\|_{p}.\] Notice that \[\limsup_{z\to 0}\big{|}\big{(}\alpha_{1}-\big{(}(z-B^{\prime})^{-1}f_{0} \big{)}(x_{1})\big{)}\big{|}=+\infty.\] We can pick a sequence \(\{\lambda_{j}\}_{j=0}^{\infty}\subset\mathbb{C}\) with \(\lim\limits_{j\to\infty}|\lambda_{j}|=0\) such that \[\big{|}\big{(}\alpha_{1}-((\lambda_{j}-B^{\prime})^{-1}f_{0})(x_{1})\big{)} \big{|}>1,\quad\forall j\geq 0. \tag{4.7}\] Moreover, since \(k_{(A,\widetilde{x}_{2})}=1\) and \(k_{(A,e_{1})}=1\), it follows that \[\ln\Big{(}\big{|}\big{(}\alpha_{1}-\big{(}(z-B^{\prime})^{-1}f_{ 0}\big{)}(x_{1})\big{)}\big{|}\|(z-A)^{-1}e_{1}\|_{p}-\|(z-A)^{-1}\widetilde{x} _{2}\|_{p}\Big{)}\] \[\qquad\approx\ln\Big{(}\big{|}\big{(}\alpha_{1}-\big{(}(z-B^{ \prime})^{-1}f_{0}\big{)}(x_{1})\big{)}\big{|}\|(z-A)^{-1}e_{1}\|_{p}\Big{)}\] when \(z\) tends to \(0\). 
Then combining (4.3) and (4.7), we have \[k_{(T,x)} =\limsup_{z\to 0}\frac{\ln\|(z-T)^{-1}x\|_{p}}{\ln\|(z-T)^{-1}\|}\] \[\approx\limsup_{z\to 0}\frac{\ln\|(z-T)^{-1}x\|_{p}}{\ln\|(z-A)^{-1}\| ^{2}}\] \[\geq\limsup_{z\to 0}\frac{\ln\big{|}\big{(}\alpha_{1}-\big{(}(z-B^ {\prime})^{-1}f_{0}\big{)}(x_{1})\big{)}\big{|}\|(z-A)^{-1}e_{1}\|_{p}}{\ln\|(z -A)\|^{2}}\] \[\geq\limsup_{j\to\infty}\frac{\ln\big{|}\big{(}\alpha_{1}-\big{(} (\lambda_{j}-B^{\prime})^{-1}f_{0}\big{)}(x_{1})\big{)}\big{|}\|(\lambda_{j}- A)^{-1}e_{1}\|_{p}}{\ln\|(\lambda_{j}-A)^{-1}\|^{2}}\geq\frac{1}{2}.\] In summary, \(k_{(T,x)}\geq\frac{1}{2}\) for all non-zero \(x\in l^{p}(\mathbb{Z})\). Hence, \(\Lambda(T)\subset[\frac{1}{2},1]\). **Lemma 4.2**.: _Let \(T\) be as in \((\ref{eq:T})\) above. If \(\{w_{n}\}_{n=1}^{\infty}\) is monotone decreasing and \(p^{\prime}\)-summable for some positive number \(p^{\prime}\), then \(1\in\Lambda(T)\)._ **Proof.** Pick \(m\geq 1\) such that the sequence \(\{\prod\limits_{j=1}^{m}w_{n+j}\}_{n=0}^{\infty}\) is summable. Write \(\alpha_{n}=\prod\limits_{j=1}^{m}w_{n+j}\) for \(n\geq 0\). And let \(y=\left[\begin{array}{c}\xi_{1}\\ \xi_{2}\end{array}\right]\), where \[\xi_{1}=\left[\begin{array}{c}\alpha_{0}\\ \alpha_{1}\\ \alpha_{2}\\ \vdots\end{array}\right]\begin{array}{c}e_{0}\\ e_{-1}\\ e_{-2}\\ \vdots\end{array}\right]=\left[\begin{array}{c}0\\ 0\\ 0\\ \vdots\end{array}\right]\begin{array}{c}e_{1}\\ e_{2}\\ e_{3}.\\ \vdots\end{array}\] Then \(y\in l^{p}(\mathbb{Z})\) for any \(p\in[1,+\infty)\) and \[(z-T)^{-1}y=\begin{bmatrix}(z-B)^{-1}\xi_{1}\\ -\big{(}(z-B^{\prime})^{-1}f_{0}\big{)}(\xi_{1})(z-A)^{-1}e_{1}+(z-A)^{-1}\xi_ {2}\end{bmatrix}.\] It follows that \[\|(z-T)^{-1}y\|_{p}\geq\big{|}\big{(}(z-B^{\prime})^{-1}f_{0}\big{)}(\xi_{1}) \big{|}\|(z-A)^{-1}e_{1}\|_{p}.\] Notice that \(\big{|}\big{(}(z-B^{\prime})^{-1}f_{0}\big{)}(\xi_{1})\big{|}=\big{|}f_{0} \big{(}(z-B)^{-1}\xi_{1}\big{)}\big{|}\). By (3.3), one can observe that \[\limsup_{z\to 0}\frac{\ln\big{|}\big{(}(z-B^{\prime})^{-1}f_{0}\big{)}(\xi_{1}) \big{|}}{\ln\|(z-B^{\prime})^{-1}\|}\geq 1.\] Since \(\|(z-A)^{-1}\|=\|(z-B)^{-1}\|=\|(z-B)^{-1}\|\), we have \[\limsup_{z\to 0}\frac{\ln\big{|}\big{(}(z-B^{\prime})^{-1}f_{0}\big{)}(\xi_{1}) \big{|}}{\ln\|(z-B^{\prime})^{-1}\|}=\limsup_{z\to 0}\frac{\ln\big{|}\big{(}(z-B^{ \prime})^{-1}f_{0}\big{)}(\xi_{1})\big{|}}{\ln\|(z-A)^{-1}\|}\geq 1.\] Combining (4.3), we can obtain that \[k_{(T,y)} =\limsup_{z\to 0}\frac{\ln\|(z-T)^{-1}y\|_{p}}{\ln\|(z-T)^{-1}\|}\] \[\approx\limsup_{z\to 0}\frac{\ln\|(z-T)^{-1}y\|_{p}}{\ln\|(z-A)^{-1}\|^{ 2}}\] \[\geq\limsup_{z\to 0}\frac{\ln\big{|}\big{(}(z-B^{\prime})^{-1}f_{0} \big{)}(\xi_{1})\big{|}+\ln\|(z-A)^{-1}e_{1}\|_{p}}{\ln\|(z-A)^{-1}\|^{2}}\geq 1.\] By the fact \(0\leq k_{y}\leq 1\), it holds that \(k_{y}=1\). Hence \(1\in\Lambda(T)\). **Theorem 4.3**.: _Let \(T\) be as in \((\ref{eq:1})\) above and \(\{w_{n}\}_{n=1}^{\infty}\) be a decreasing sequence. If there is a positive number \(m_{0}\) such that \(\frac{1}{n+2m_{0}}\leq w_{n+m_{0}}\leq\frac{1}{n}\) for \(n\geq 1\), then \(\Lambda(T)=[\frac{1}{2},1]\)._ **Proof.** By Lemmas 4.1 and 4.2, have \(\Lambda(T)\subset[\frac{1}{2},1]\) and \(1\in\Lambda(T)\). Next, we need only prove \([\frac{1}{2},1)\subset\Lambda(T)\). 
Pick a natural number \(m>2m_{0}+2\), for \(0<r\leq 1\), set \(x_{r}=\left[\begin{array}{c}\xi_{r}\\ \xi_{0}\end{array}\right]\), where \[\xi_{r}=\left[\begin{array}{c}1\\ r\big{(}(1+m)!w_{1}\big{)}^{-1}\\ r^{2}\big{(}(2+m)!\prod\limits_{j=1}^{2}w_{j}\big{)}^{-1}\\ \vdots\end{array}\right]\begin{array}{c}e_{0}\\ e_{-1}\\ e_{-2}\\ \vdots\end{array}\right]\begin{array}{c}0\\ e_{1}\\ e_{2}\\ \vdots\end{array}\] By (3.8), we have \(x_{r}\in l^{p}(\mathbb{Z})\) for any \(p\in[1,+\infty)\). Then \[(z-T)^{-1}x_{r}=\left[\begin{array}{c}(z-B)^{-1}\xi_{r}\\ -\big{(}(z-B^{\prime})^{-1}f_{0}\big{)}(\xi_{r})(z-A)^{-1}e_{1}+(z-A)^{-1}\xi_ {0}\end{array}\right].\] It follows that \[\|(z-T)^{-1}x_{r}\|_{p}=\|(z-B)^{-1}\xi_{r}\|_{p}+\big{|}\big{(}(z-B^{\prime}) ^{-1}f_{0}\big{)}(\xi_{r})\big{|}\|(z-A)^{-1}e_{1}\|_{p}.\] For \(\|(z-B)^{-1}\xi_{r}\|_{p}\), by (3.6), one can observe that \[\big{|}f_{0}\big{(}(z-B)^{-1}\xi_{r}\big{)}\big{|}\leq\|(z-B)^{-1}\xi_{r}\|_{ p}\leq\big{|}f_{0}\big{(}(z-B)^{-1}\xi_{r}\big{)}\big{|}\|\xi_{r}\|_{p}.\] So, when \(z\) tends to \(0\), we have \[\ln\|(z-B)^{-1}\xi_{r}\|_{p}\approx\ln\big{|}f_{0}\big{(}(z-B)^{-1}\xi_{r} \big{)}\big{|}.\] For \(\|(z-A)^{-1}e_{1}\|_{p}\), by (4.3), we have that \[\ln\|(z-A)^{-1}e_{1}\|_{p}\approx\ln\|(z-A)^{-1}\|.\] For \(\big{|}\big{(}(z-B^{\prime})^{-1}f_{0}\big{)}(\xi_{r})\big{|}\), by (3.4), it follows that Hence, as \(z\) tends to \(0\), we have \[\ln\|(z-T)^{-1}x_{r}\|_{p}\approx\ln\|(z-A)^{-1}e_{1}\|_{p}\big{|}\big{(}(z-B ^{\prime})^{-1}f_{0}\big{)}(\xi_{r})\big{|}.\] Notice that \[\|(z-A)^{-1}\| =\|(z-B)^{-1}\|\geq\frac{1}{\|\xi_{1}\|_{p}}\|(z-B)^{-1}\xi_{1}\|_{p}\] \[\geq\frac{1}{\|\xi_{1}\|_{p}}|f_{0}((z-B)^{-1}\xi_{1})|\] \[=\frac{1}{\|\xi_{1}\|_{p}}\big{|}\frac{1}{z}+z^{m-1}\big{(}e^{ \frac{1}{z}}-\sum_{n=0}^{m}\frac{1}{n!}\frac{1}{z^{n}}\big{)}\big{|}\] and \[\|(z-A)^{-1}\|\leq\frac{1}{|z|}+\sum_{k=1}^{m_{0}}\frac{\prod\limits_{j=1}^{k}w _{j}}{|z|^{k+1}}+\frac{1}{|z|}\big{(}e^{\frac{1}{|z|}}-\sum_{k=0}^{m_{0}}\frac {1}{k!|z|^{k}}\big{)}.\] So, as \(z\) tends to \(0\), we have \(\ln\|(z-A)^{-1}\|\approx\ln e^{\frac{1}{|z|}}\) and \[\ln\|(z-T)^{-1}x_{r}\|_{p}\approx\ln e^{\frac{1}{|z|}}+\ln\big{|}\frac{1}{z}+ \frac{z^{m-1}}{r^{m}}\big{(}e^{\frac{r}{z}}-\sum_{n=0}^{m}\frac{1}{n!}\frac{r ^{n}}{z^{n}}\big{)}\big{|}.\] It follows that \[k_{(T,x_{r})} =\limsup_{z\to 0}\frac{\ln\|(z-T)^{-1}x_{r}\|_{p}}{\ln\|(z-T)^{-1}\|}\] \[\approx\limsup_{z\to 0}\frac{\ln\|(z-A)^{-1}e_{1}\|_{p}\big{|} \big{(}(z-B^{\prime})^{-1}f_{0}\big{)}(\xi_{r})\big{|}}{\ln\|(z-A)^{-1}\|^{2}}\] \[\approx\limsup_{z\to 0}\frac{\ln e^{\frac{1}{|z|}}+\ln\big{|} \frac{1}{z}+\frac{z^{m-1}}{r^{m}}\big{(}e^{\frac{r}{z}}-\sum\limits_{n=0}^{m} \frac{1}{n!}\frac{r^{n}}{z^{n}}\big{)}\big{|}}{\ln(e^{\frac{1}{|z|}})^{2}}\] \[=\limsup_{|z|\to 0}\frac{\ln e^{\frac{1}{|z|}}+\ln e^{\frac{r}{|z|} }\Big{(}\frac{(e^{\frac{r}{|z|}})^{-1}}{|z|}+\frac{|z|^{m-1}}{r^{m}}\big{(}1- (e^{\frac{r}{|z|}})^{-1}\sum\limits_{n=0}^{m}\frac{1}{n!}\frac{r^{n}}{|z|^{n} }\big{)}\Big{)}}{\ln(e^{\frac{1}{|z|}})^{2}}\] \[=\limsup_{|z|\to 0}\frac{\ln e^{\frac{1}{|z|}}+\ln e^{\frac{r}{|z|} }\Big{(}\frac{(e^{\frac{r}{|z|}})^{-1}}{|z|}+\frac{|z|^{m-1}}{r^{m}}\big{(}1- (e^{\frac{r}{|z|}})^{-1}\sum\limits_{n=0}^{m}\frac{1}{n!}\frac{r^{n}}{|z|^{n} }\big{)}\Big{)}}{\ln(e^{\frac{1}{|z|}})^{2}}\] \[=\frac{1+r}{2}.\] Since \(0<r\leq 1\), it yields that \((\frac{1}{2},1]\subset\Lambda(T)\). Finally, we prove that \(\frac{1}{2}\in\Lambda(T)\). 
Suppose that \(\widetilde{x}=\left[\begin{array}{c}\beta_{1}\\ \beta_{2}\end{array}\right]\), where \[\beta_{1}=\left[\begin{array}{c}1\\ 0\\ 0\\ \vdots\end{array}\right]\begin{array}{c}e_{0}\\ e_{-1}\\ e_{-2}\\ \vdots\end{array}\qquad\text{and}\qquad\beta_{2}=\left[\begin{array}{c}0\\ 0\\ 0\\ \vdots\end{array}\right]\begin{array}{c}e_{1}\\ e_{2}\\ e_{3}\\ \vdots\end{array}\] Then \(\widetilde{x}\in l^{p}(\mathbb{Z})\) and \[(z-T)^{-1}\widetilde{x}=\begin{bmatrix}(z-B)^{-1}\beta_{1}\\ -\big{(}(z-B^{\prime})^{-1}f_{0}\big{)}(\beta_{1})(z-A)^{-1}e_{1}+(z-A)^{-1}\beta_{2}\end{bmatrix}.\] It follows that \[\|(z-T)^{-1}\widetilde{x}\|_{p}=\|(z-B)^{-1}\beta_{1}\|_{p}+\big{|}\big{(}(z-B^{\prime})^{-1}f_{0}\big{)}(\beta_{1})\big{|}\|(z-A)^{-1}e_{1}\|_{p}.\] Notice that \[\big{|}\big{(}(z-B^{\prime})^{-1}f_{0}\big{)}(\beta_{1})\big{|}=\big{|}f_{0}((z-B)^{-1}\beta_{1})\big{|}=\frac{1}{|z|}\] and \(\|(z-B)^{-1}\beta_{1}\|_{p}=\frac{1}{|z|}\). So, when \(z\) tends to \(0\), we have \[\ln\|(z-T)^{-1}\widetilde{x}\|_{p}\approx\ln\big{|}\big{(}(z-B^{\prime})^{-1}f_{0}\big{)}(\beta_{1})\big{|}\|(z-A)^{-1}e_{1}\|_{p}\approx\ln\frac{1}{|z|}+\ln\|(z-A)^{-1}\|.\] Then \[k_{(T,\widetilde{x})}=\limsup_{z\to 0}\frac{\ln\|(z-T)^{-1}\widetilde{x}\|_{p}}{\ln\|(z-T)^{-1}\|}\approx\limsup_{z\to 0}\frac{\ln\frac{1}{|z|}+\ln\|(z-A)^{-1}\|}{\ln\|(z-A)^{-1}\|^{2}}=\frac{1}{2}.\] Therefore, \(\frac{1}{2}\in\Lambda(T)\). In summary, \(\Lambda(T)=[\frac{1}{2},1]\), completing the proof. We close this paper with an interesting question. **Question 6**.: Given a weighted shift \(T\) on \(l^{\infty}\), does it have the same power set as \(T\) on \(l^{p}\) for \(p<\infty\)?
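A purely numerical aside (our own illustrative sketch, not part of the results above): for the backward unilateral weighted shift of Theorem 3.4 with weights \(w_{n}=1/n\) (so \(m_{0}=1\)), one can watch the log-ratio defining \(k_{y}\) on a truncated matrix. It drifts toward \(0\) for a finitely supported vector such as \(e_{2}\) and toward \(1\) for the vector \(x\) of Theorem 3.4, in line with \(\Lambda(T)=[0,1]\). The truncation size, the sample points \(z\), and the use of the Euclidean norm are arbitrary conveniences, and the convergence in \(z\) is slow.

```python
import numpy as np

# Sketch: compare ln||(z-T)^{-1}y|| / ln||(z-T)^{-1}|| for y = e_2 (finitely
# supported) and for the vector x of Theorem 3.4, on a truncated matrix.
N = 300                                   # truncation size
w = 1.0 / np.arange(1, N)                 # weights w_1, ..., w_{N-1}

T = np.zeros((N, N))                      # truncated backward shift: T e_n = w_n e_{n-1}
for n in range(1, N):
    T[n - 1, n] = w[n - 1]

m = 5                                     # any m > 2*m_0 + 2 works
# For w_n = 1/n the coordinates of x simplify to alpha_n = 1/((n+1)(n+2)...(n+m)).
alpha = np.array([1.0] + [1.0 / np.prod(np.arange(n + 1, n + m + 1, dtype=float))
                          for n in range(1, N)])

def log_ratio(y, z):
    R = np.linalg.inv(z * np.eye(N) - T)  # truncated resolvent (z - T)^{-1}
    return np.log(np.linalg.norm(R @ y)) / np.log(np.linalg.norm(R, 2))

e2 = np.zeros(N)
e2[2] = 1.0
for z in (0.05, 0.02, 0.01):
    print(f"z = {z:5.2f}:  e_2 -> {log_ratio(e2, z):.2f},   x -> {log_ratio(alpha, z):.2f}")
```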
2310.12065
A Persuasive Approach to Combating Misinformation
Bayesian Persuasion is proposed as a tool for social media platforms to combat the spread of misinformation. Since platforms can use machine learning to predict the popularity and misinformation features of to-be-shared posts, and users are largely motivated to share popular content, platforms can strategically signal this informational advantage to change user beliefs and persuade them not to share misinformation. We characterize the optimal signaling scheme with imperfect predictions as a linear program and give sufficient and necessary conditions on the classifier to ensure optimal platform utility is non-decreasing and continuous. Next, this interaction is considered under a performative model, wherein platform intervention affects the user's future behaviour. The convergence and stability of optimal signaling under this performative process are fully characterized. Lastly, we experimentally validate that our approach significantly reduces misinformation in both the single round and performative setting and discuss the broader scope of using information design to combat misinformation.
Safwan Hossain, Andjela Mladenovic, Yiling Chen, Gauthier Gidel
2023-10-18T15:57:36Z
http://arxiv.org/abs/2310.12065v2
# A Persuasive Approach to Combating Misinformation ###### Abstract We propose using Bayesian Persuasion as a tool for social media platforms to combat the spread of online misinformation. As platforms can predict the popularity and misinformation features of to-be-shared posts, and users are motivated to only share popular content, platforms can strategically reveal this informational advantage to persuade users to not share misinformed content. Our work mathematically characterizes the optimal information design scheme and the resulting utility when the underlying state is not perfectly observed but predicted by an imperfect classifier. Framing the optimization problem as a linear program, we give sufficient and necessary conditions on the classifier accuracy to ensure platform utility under optimal signaling is monotonically increasing and continuous. We next consider this interaction under a performative model, wherein platform intervention through signaling affects the content distribution in the future. We fully characterize the convergence and stability of optimal signaling under this performative process. Lastly, the broader scope of using information design to combat misinformation is discussed throughout. ## 1 Introduction Spreading misinformation has been one of the major critiques of social media platforms. The structure of these platforms, and social media interaction more broadly, rewards users for sharing content to gain popularity (e.g. number of likes and future re-shares), irrespective of its veracity. This has prompted debates on how platforms should police their content. The most common approach for combating misinformation is fact-checking, followed by tagging or censoring untruthful content. However, such approaches can hardly keep up with the speed and scale of today's content generation, and the validity of content may not be a binary true or false. Censorship moreover leads to thorny debates around the platform regulating freedom of speech. While most Americans support taking steps to restrict misinformation, half of Americans in 2021 agreed that "freedom of information should be prioritized over... restricting false information online."1 Footnote 1: Pew Research Study on Online Misinformation In this paper, we present information design as a viable approach, either as an alternative or complement, for addressing misinformation on platforms. This approach leverages the information asymmetry between a platform and its users. The platform strategically reveals some information that users care about to "persuade" them to not share misinformed content. The users act according to their best interest given the information. This approach does not require the platform to be the arbiter of truth for already shared posts but instead discourages the underlying sharing of misinformed content. Specifically, we model the interaction between a platform and a user under the general framework of Bayesian persuasion [1]. A user-generated post has a two-dimensional hidden state: its popularity if shared and its degree of misinformation. While the user has some prior belief over these states, she does not know the true state of her to-be-shared content. The user's unawareness of her own post's level of misinformation captures the understanding that misinformation is unintentionally introduced, usually driven by social-psychological factors such as a sense of belonging [2], confirmation bias [3], and habitual sharing [4]. 
The user's utility on the platform depends on her post's true popularity and her chosen action of sharing or not sharing this post. While the user may be indifferent or unaware of the degree of misinformation in the post, the platform's utility depends on both dimensions of the state -- the post's popularity and degree of misinformation, as well as the action taken by the user. This misalignment of the user's and the platform's utility poses a challenge for the platform: the platform does not want the user to share misinformed posts, but the user derives high utility by sharing popular content, which may not be truthful. The platform, however, possesses an informational advantage due to its access to vast troves of historical user and content data. So while the platform, like the user, does not know with certainty the true state of the post, it can leverage its scale and data to build classification models to predict (with certain error rates) the popularity and degree of misinformation of the post. Based on the predictions, the platform can strategically reveal some stochastic information about the post's popularity to the user, hoping to influence the user's action. The user, knowing the platform's strategy and receiving the revealed information, can decide to share or not share the post to maximize her expected utility. The interaction of the platform and the user forms a Bayesian Stackelberg game, with the platform as the leader choosing an information revelation (signaling) scheme first, with the user being the follower who decides on an optimal action for herself based on the received information. The goal of the platform is to choose a signaling scheme to optimize the platform's expected utility at equilibrium and, by doing so, reduce misinformed content on the platform. Our model differs from standard Bayesian persuasion (BP) in that the leader (the platform) does not have perfect knowledge of the state of a post and needs to design a signaling scheme based on predictions of the state. The special characteristic of the misinformation setting is captured by the user's utility only depends on one dimension of the state, popularity, and not on the entire state. Our analysis focuses on understanding the platform's optimal signaling scheme and how the accuracy of the predictor that the platform uses affects the platform's goal of defeating misinformation. In addition to analyzing a single-round model, we also extend our model to a dynamic setting to model the long-term effects of implementing such a policy. In this _performative_ setting, platform interactions over time affect the sharing behaviour of users, and thus the underlying prior distribution of content on the platform changes. We analyze the convergence properties of this dynamic process and its underlying impact on the content quality on the platform. **Our contributions:** We propose an information design approach for social media platforms to address the spread of misinformed content. Platforms use their informational advantage to predict the misinformation and validation states of a user's to-be-shared post, and look to strategically reveal this to users to reduce misinformation on the platform. We formally define this _noisy persuasion_ setting in section 3, and note that it differs from the standard BP framework as platforms can only predict the underlying state and not directly observe it. In section 4, we provide a set of technical and operational results that parallel and generalize known results for the BP setting. 
Crucially, even in the noisy persuasion setting, signals can be interpreted as action recommendations without loss of generality, and signaling can never decrease user utility. The properties of optimal signaling under noisy predictions are analyzed in section 5. Specifically, we formulate this problem as a linear program and provide sufficient and necessary conditions for the classifier to ensure the platform utility under optimal signaling is weakly monotone and continuous. These results may be of broader interest. In section 6, we view this platform-user interaction through a performative lens, wherein optimal signaling affects the user's sharing behaviour and correspondingly the content on the platform. We give a complete characterization here, proving stability and convergence properties of this performative process, and outline conditions wherein this leads to a platform with reduced misinformation content. Lastly, in section 7 we comment on technical extensions and, more holistically, on the broader scope of using information design approaches for combating misinformation. ## 2 Related Works This work proposes a soft approach, which doesn't involve direct censorship or tagging, for combating online misinformation. Jackson et al. [5], Howe et al. [6] and Pennycook and Rand [7] similarly advocate for softer approaches. Howe et al. [6] proposed to cap the depth (limit how many times messages can be relayed) or limit width (make it more difficult for a user to forward a post to many people) of a social network to improve information fidelity on the platform. Jackson et al. [5] observed that allowing people to share only posts that they have indicated are true, what they called self-censoring, reduces the spread of misinformation. Pennycook and Rand [7] meta-analyzed 20 experiments and concluded that interventions that shifted users' attention toward the concept of accuracy could help reduce misinformation sharing online. These works, together with ours, all focus on reducing the sharing of misinformation. Yang et al. [8] and Candogan and Drakopoulos [9] respectively studied reducing the creation and the consumption of inaccurate information through signaling. While also taking an information design approach like ours, they assumed that the platform had perfect knowledge of the content accuracy. Instead of considering intervention, Acemoglu et al. [10] modeled the propagation of misinformation as a game and analyzed its equilibrium. Our technical approach falls under the framework of information design in sender-receiver models, for which an extensive literature has recently emerged. The seminal work on Bayesian persuasion [1] has led to many follow-up studies, exploring, for example, the computational complexity of the sender's optimization problem [11], impacts of restrictions on available signaling schemes [12] and exogenous information distortions [13] on the informativeness and welfare of equilibrium, and robust equilibrium concepts when the sender has ambiguity over outside information that the receiver may have [14]. The information design approach has also been used for studying price discrimination [15], multiplayer games [16], and revenue management [17], just to name a few. Bergemann and Morris [18], Kamenica [19] and Candogan [20] offer comprehensive reviews on work in this area. Our work brings information design as a tool to reduce the spreading of misinformation by rational users. 
Yea [21], while also considering both misinformation and persuasion, went to the other direction, studying the impact of fake news on persuasion. Our dynamic setting in Section 6 took inspiration from the broader literature of performative prediction [22, 23, 24]. Performative prediction refers to the phenomenon that when predictions are used to support decisions, decisions will affect what is predicted in the future, explaining the change of data distribution over time. In our online misinformation setting, it is not the predictions but rather the signaling process that leads to this dynamic. Our performative model here parallels the stateful one proposed by Brown et al. [25] and naturally captures the phenomenon that both the past distribution and the user's present sharing decisions (influenced by signaling) will affect content distribution on the platform. The platform should take into consideration such distribution change when trying to influence user decisions. From a technical perspective, our results here are novel and are not captured by the existing suite of results in the literature. ## 3 Model Formulation Consider the interaction between a social media _platform_, and a single _user_ on this platform. Without any platform intervention, the user lacks additional information when they draft posts for submission; thus, they take their default action to share this post. Our model considers a platform that can predict features of this draft post and commits to revealing this information to the user according to a probabilistic scheme. The revealed information affects the user's belief about their post's content, affecting their decision to share this post on the platform. The platform wishes to strategically design this information revelation scheme to minimize misinformation on the platform. We now precisely define each aspect of this setting, with the observation that this single-user setting is without loss of generality since the platform could independently leverage such an interaction with any number of users. States and Predictions:Let the sets \(\mathcal{M}=\{1,\ldots,m_{max}\}\) and \(\mathcal{V}=\{1,\ldots,v_{max}\}\) denote the possible misinformation and validation/popularity features state respectively. A post drafted by a user has some true joint feature state \(\theta=(m,v)\) drawn from a prior distribution \(\mu\). The prior encapsulates the distribution of user's shared posts. The platform knows this prior distribution since it is simply composed of past statistics. It is also natural that the user knows the marginal distribution \(\mu(v)\) since this is simply the popularity of their previously shared posts.2 However, the user does not observe the true state \(\theta\) for their drafted content - i.e., they do not know the true popularity of any drafted post a priori. On the other hand, the platform can leverage their data and scale to predict these states of the draft content, with predictions denoted by \(\widehat{\theta}=(\widehat{m},\widehat{v})\). Unlike the canonical setting of [1], we do not assume these predictions to be perfect and capture the inaccuracy of the prediction models used in our _noisy persuasion_ problem as follows: Footnote 2: We do not make any assumptions on whether the user knows the joint distribution \(\mu(m,v)\) or the marginal over misinformation \(\mu(m)\) as it doesn’t affect our analysis. 
**Definition 1**.: _The uncertainty of the platform's misinformation and validation classifier is captured by the multi-class confusion matrices \(Q^{\mathcal{V}}\in[0,1]^{|\mathcal{V}|\times|\mathcal{V}|}\) and \(Q^{\mathcal{M}}\in[0,1]^{|\mathcal{M}|\times|\mathcal{M}|}\). To simplify notation, let \(Q^{\Theta}=Q^{\mathcal{M}}\otimes Q^{\mathcal{V}}\) denote the combined confusion matrix indexed by \(Q^{\Theta}_{\widehat{\theta},\theta}\), with \(\otimes\) representing the Kronecker product. An element at row \(\widehat{\theta}\) and column \(\theta\) in \(Q^{\Theta}\), denoted by \(Q^{\Theta}_{\widehat{\theta},\theta}\), records \(P(\widehat{\theta}|\theta)\)._ We assume the predictors for \(m\) and \(v\) to be independent and assume confusion matrices \(Q^{\mathcal{M}}\) and \(Q^{\mathcal{V}}\) to be invertible (implying \(Q^{\Theta}\) is invertible). This assumption is quite mild since any matrix that is diagonally dominant, regular, or close enough to the identity matrix \(I_{d}\) is invertible. For instance, for a "good enough" classifier, we have that \(P(\widehat{m}=i|m=i)\gg P(\widehat{m}=j|m=i)\), which implies that its confusion matrix is diagonally dominant and close to the identity matrix.3 In the following, we write \([Q^{\mathcal{M}}]_{:j}\) for the \(j^{th}\) column and \([Q^{\mathcal{M}}]_{i:}\) for the \(i^{th}\) row of \(Q^{\mathcal{M}}\). Footnote 3: Moreover, for matrices close enough to the identity matrix we can uniformly bound the norm of their inverse by noticing \(\|A^{-1}\|\leq\frac{1}{1-r}\,,\,\forall A\) such that \(\|A-I_{d}\|\leq r<1\). Actions and Utilities: Upon drafting content, the user can choose between one of two actions: \(\mathcal{A}=\{0,1\}\). The action \(0\) corresponds to the user _not sharing_ the content, and \(1\) corresponds to them _sharing_. As discussed, we model the user to be largely motivated by the popularity of the content they post. Accordingly, user utility for their action depends only on the \(v\) feature 4. Platform utility, however, depends on both the misinformation and validation state. We formally define them below: Footnote 4: This is largely a modeling choice. All our technical results still hold if user utility depends on both features. **Definition 2**.: _We denote the user utility function as \(w:\mathcal{A}\times\mathcal{V}\to\mathbb{R}\) and the platform utility function as \(u:\mathcal{A}\times\mathcal{M}\times\mathcal{V}\to\mathbb{R}\). We assume both utilities are bounded._ We assume that for both actions, there is at least one feature state \(\theta\) wherein both the user and platform prefer that action. This is a natural assumption in our setting: for example, when content is true and popular, both prefer to share, and if false and unpopular, both prefer to not share. Similar to the standard persuasion settings, we assume the platform to have knowledge of the user's utility. This allows them to anticipate the user's best action and design a scheme accordingly. Lastly, from a modeling perspective, we consider platform utility to be a proxy for the veracity and quality of content on the platform. Signaling Scheme: The social media platform maintains a set of signals \(\mathcal{S}\) and reveals a signaling scheme before the user drafts any content. This is simply a set of conditional distributions, denoted by \(\pi(s|\widehat{m},\widehat{v})\), which specifies the probability of sending signal \(s\) when the platform predicts the draft content to have state \(\widehat{\theta}=(\widehat{m},\widehat{v})\). 
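For concreteness, the objects introduced so far (the confusion matrices of Definition 1 and a signaling scheme \(\pi(s|\widehat{\theta})\)) can be written down in a few lines of NumPy; this is a minimal sketch, and the state-space sizes, error rates, and variable names below are our own toy choices rather than values from the paper.

```python
import numpy as np

# Toy state spaces: two misinformation levels and two popularity levels.
# Columns are true states and rows are predicted states, so each column sums
# to 1 (entry [i, j] is P(predicted = i | true = j), as in Definition 1).
Q_M = np.array([[0.9, 0.2],   # P(m_hat | m)
                [0.1, 0.8]])
Q_V = np.array([[0.85, 0.1],  # P(v_hat | v)
                [0.15, 0.9]])

# Joint confusion matrix over theta = (m, v) via the Kronecker product.
Q_Theta = np.kron(Q_M, Q_V)

# The analysis assumes Q_M and Q_V (hence Q_Theta) are invertible.
assert np.linalg.matrix_rank(Q_Theta) == Q_Theta.shape[0]

# A signaling scheme is a conditional distribution pi(s | theta_hat):
# one row per predicted state, one column per signal, rows summing to 1.
num_signals = 2
rng = np.random.default_rng(0)
pi = rng.dirichlet(np.ones(num_signals), size=Q_Theta.shape[0])
print(pi.sum(axis=1))  # each row sums to 1
```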
Since the scheme \(\pi\) must be committed to a priori, the platform goal is to design \(\pi\) to maximize their expected ex-ante utility, which is formally defined as: **Definition 3**.: _The platform's ex-ante utility for a signaling scheme \(\pi(s|\widehat{\theta})\) is \(\sum_{s}P(s)\operatorname{\mathbb{E}}_{\rho^{s}}[u(a^{*},\theta)]\), where \(\rho^{s}(\theta)=P(\theta|s)\) is the posterior distribution induced by signal \(s\) and scheme \(\pi\), and \(a^{*}=\operatorname*{arg\,max}_{a}\operatorname{\mathbb{E}}_{\rho^{s}}\left[w(a,v)\right]\) is the optimal receiver action for that posterior belief._ We now summarize the platform-user interaction for an _instance_ \(\mathcal{I}=(u,w,\mu)\) as follows: * Platform reveals a signaling scheme \(\pi(s|\widehat{\theta})\) * User drafts content to post on the platform. The true state of this post \(\theta=(m,v)\sim\mu\) is unknown to the user. * Platform uses learning models with joint confusion matrix \(Q^{\Theta}\) to obtain state predictions for this post, \(\widehat{\theta}=(\widehat{m},\widehat{v})\), and then samples a signal \(s\) according to their published scheme \(\pi(s|\widehat{\theta})\). * User observes the signal, computes their posterior belief and takes their optimal action \(a^{*}\). * User attains utility \(w(a^{*},v)\) and platform attains utility \(u(a^{*},m,v)\). Observe that from a game theoretic perspective, this interaction outlines a Stackelberg game with the platform and user taking on the leader and follower roles respectively. The optimal signaling scheme that maximizes the platform's ex-ante utility (definition 3) is the platform's strategy at the Stackelberg equilibrium. A key thrust of our work is understanding how properties of the prediction accuracy (i.e. confusion matrix \(Q^{\Theta}\)) affect this optimal signaling scheme and the resulting platform utility. Performative Model: When the platform does not adopt persuasion, the user simply takes the action that maximizes their utility according to the prior: \(a^{*}=\operatorname*{arg\,max}_{a}\operatorname{\mathbb{E}}_{\mu_{0}}w(a,v)\). We naturally assume this default action is to share and as such, the distribution of content shared over time matches the original prior. Applying persuasion affects this, however, and the user may now share or not share depending on the scheme and the signal received. In other words, implementing persuasion directly affects sharing decisions and thus the distribution of a user's posts published on the platform over time. We model this through a performative angle. At round \(t\), \(\mu_{t}\) represents the distribution of the user's currently published content. The platform deploys a signaling scheme \(\pi_{t}(s|\widehat{\theta})\), and the user observes recommendations from this whenever she drafts content to be posted. Observe that draft posts that were persuaded to be not shared do not end up on the platform, and older content loses its relevance. As such, we model the initial distribution for the next round \(t+1\) as a convex combination of the present prior \(\mu_{t}\) and the distribution of content for those persuaded to share, \(P_{t}(m,v|s=1)\). We formally define the transition dynamics in Section 6. ## 4 Preliminaries ### Simplifying the Signaling Scheme Two practical questions about signaling emerge when one looks to implement the persuasion model we described within the social media context. 
First, since users only care about validation states and may not agree on the platform's characterization of misinformation, how should platforms reveal their signaling scheme which depends on both \((\widehat{m},\widehat{v})\)? Second, in the presence of predicted states, how large a signal space is needed to attain good results? Platforms can reasonably only use a limited number of signals and ideally these should be clearly interpretable by the user. We show that both issues can be elegantly addressed within our noisy persuasion model. **Lemma 1**.: _For a signalling scheme \(\pi(s|\widehat{m},\widehat{v})\), the probability of observing a signal \(s\) given true realizations \((m,v)\) is given by: \(P(s|m,v)=\sum_{\widehat{m},\widehat{v}}\pi(s|\widehat{m},\widehat{v})Q^{ \mathcal{M}}_{\widehat{m},m}Q^{\mathcal{V}}_{\widehat{v},v}\)._ Proof.: The following is due to total probability law: \(P(s|m,v)=\sum_{\widehat{m},\widehat{v}}P(s|m,v,\widehat{m},\widehat{v})P( \widehat{m},\widehat{v}|m,v)\). We note that given \(\widehat{m}\) and \(\widehat{v}\), signal \(s\) is conditionally independent of \(m,v\) since signaling is directly specified by the former. We also note that the model classifiers are independent. Thus, \(P(s|m,v)=\sum_{\widehat{m},\widehat{v}}\pi(s|\widehat{m},\widehat{v})P( \widehat{m}|m)P(\widehat{v}|v)\). Observe that this lemma immediately implies that users can compute their posterior belief \(P(v|s)\) needed for them to behave optimally insofar as they have access to just the confusion matrix \(Q^{\mathcal{V}}\). We can in fact claim something stronger and settle the first question. It suffices for the platform to reveal the marginal signaling scheme \(\pi(s|\widehat{v})\) for the user to compute her posterior over true states \(P(v|s)\). We formalize this in the following corollary: **Corollary 1**.: _It suffices for the platform to reveal their marginal signaling scheme \(\pi(s|\widehat{v})\) and \(Q^{\mathcal{V}}\) to the user, for them to compute their true posterior \(P(v|s)=\mu(v)\sum_{\widehat{v}}\pi(s|\widehat{v})Q^{\mathcal{V}}_{\widehat{v},v}\)._ Proof.: Consider the platform revealing the marginal scheme \(\pi(s|\widehat{v})=\sum_{\widehat{m}}\pi(s|\widehat{m},\widehat{v})\). Since the signal only depends on \(\widehat{v}\), users can compute \(P(s|v)=\sum_{\widehat{v}}P(s|v,\widehat{v})Q^{\mathcal{V}}_{\widehat{v},v}= \sum_{\widehat{v}}\pi(s|\widehat{v})Q^{\mathcal{V}}_{\widehat{v},v}\), and then compute the desired posterior using the prior: \(P(v|s)=P(s|v)\mu(v)\). This result greatly simplifies the operational aspects of persuasion in this setting. Platforms can reveal a signaling scheme only over feature \(v\) that users agree and care about without any loss of generality. The dependence on the misinformation state becomes implicit and associated metadata like confusion matrix \(Q^{\mathcal{M}}\) can be kept private. Conversely, we find platforms revealing \(Q^{\mathcal{V}}\) to be quite natural and recommended from a transparency perspective as it represents the inaccuracies of a classifier explicitly impacting users. We next consider our second question and show that in fact only \(|\mathcal{A}|\) signals are needed to attain the platform utility under optimal signaling (hereinafter we refer to this as _optimal platform utility_). Thus, signals can simply be interpreted as action recommendations, providing significant operational simplicity. 
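As a small worked sketch of Corollary 1 (the two-state setup, the numbers, and the normalization by \(P(s)\) to obtain a proper distribution are our own illustrative choices), the user-side posterior computation looks as follows in NumPy:

```python
import numpy as np

# The user recovers her posterior over the true popularity state v from the
# revealed marginal scheme pi(s | v_hat) and the validation confusion matrix Q_V.
Q_V = np.array([[0.85, 0.1],   # rows: predicted v_hat, columns: true v
                [0.15, 0.9]])
mu_v = np.array([0.6, 0.4])    # user's prior over true v (marginal of mu)

# Marginal signaling scheme: pi_marg[v_hat, s] = pi(s | v_hat).
pi_marg = np.array([[0.9, 0.1],
                    [0.3, 0.7]])

def posterior_over_v(s):
    # P(s | v) = sum_{v_hat} pi(s | v_hat) * Q_V[v_hat, v]   (Lemma 1)
    p_s_given_v = pi_marg[:, s] @ Q_V
    unnormalized = mu_v * p_s_given_v          # proportional to P(v | s)
    return unnormalized / unnormalized.sum()   # normalize by P(s)

for s in range(2):
    print(f"signal {s}: posterior over v =", posterior_over_v(s))
```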
From a technical lens, this can be seen as generalizing the revelation principle style arguments made in the canonical persuasion setting with perfect observations [1, 11]. **Lemma 2**.: _For instance \(\mathcal{I}=(u,w,\mu)\) and joint confusion matrix \(Q^{\Theta}\), let \(u^{*}_{\mathcal{I}}(Q^{\Theta})\) represent the maximal platform utility achievable with an arbitrary number of signals. Then it is also possible for the platform to achieve \(u^{*}_{\mathcal{I}}(Q^{\Theta})\) utility using \(|\mathcal{A}|\) signals (i.e. \(|\mathcal{S}|=|\mathcal{A}|\))._ Proof.: Let \(\pi^{*}\) denote the optimal unrestricted signaling scheme with a total of \(\ell\) signals. Observe that by lemma 1, we can state the posterior over true states \(\theta=(m,v)\) for signal realization \(s\) as: \(P(\theta|s)=\frac{1}{P(s)}\mu(\theta)\sum_{\widehat{\theta}}\pi^{*}(s|\widehat {\theta})Q^{\Theta}_{\widehat{\theta},\theta}\). For each \(a\), let \(S_{a}\) denote the set of signals whose induced posterior under \(\pi^{*}\) leads to optimal action \(a\). Next, consider a signaling scheme that directly recommends an action, satisfying \(|\mathcal{S}|=|\mathcal{A}|\). Define this as follows: \(\pi^{\prime}(a|\widehat{\theta})=\sum_{s\in S_{a}}\pi^{*}(s|\widehat{\theta})\). Next, observe that utility at this optimal scheme, denoted by \(u^{*}_{\mathcal{I}}(Q^{\Theta})=\sum_{a}\sum_{s\in S_{a}}P(s)\sum_{\theta}u( a,\theta)P(\theta|s)=\sum_{a,\theta}\sum_{s\in S_{a}}u(a,\theta)\mu(\theta)P(s| \theta)\). We next use lemma 1 and write this as equal to: \[\sum_{a,\theta,\widehat{\theta}}\mu(\theta)u(a,\theta)Q^{\Theta}_{\widehat{ \theta},\theta}\sum_{s\in S_{a}}\pi^{*}(s|\widehat{\theta})=\sum_{a,\theta} \mu(\theta)u(a,\theta)\sum_{\widehat{\theta}}Q^{\Theta}_{\widehat{\theta}, \theta}\pi^{\prime}(a|\widehat{\theta})\] We note that the inner summand (over \(\widehat{\theta}\)) is equivalent to \(P(s=a|\theta)\) under the new action-recommending signaling scheme. Thus we can write the expected utility under the new signaling is: \(\sum_{a}P(s=a)\sum_{\theta}u(a,m,v)P(\theta|a)=u^{*}_{\mathcal{I}}(Q^{\Theta})\) completing the proof. Lastly, we provide in appendix A a simple worked example outlining the benefits of persuasion in this misinformation setting, under the given simplification. ### Geometry of Noisy Persuasion Kamenica and Gentzkow [1] illustrate an interesting geometric interpretation of optimal signaling in the standard persuasion setting. They note that for any scheme, the posterior distributions induced by the signal realization \(s\), denoted by \(\rho^{s}\), must always satisfy \(\sum_{s}P(s)\rho^{s}=\mu\). Alternatively, the expectation of the induced posteriors must equal the prior, a by-product of Bayes rule known as _Bayes Plausibility_. Using this intuition, the platform aims at maximizing their utility by creating an advantageous set of Bayes plausible beliefs \(\{\rho^{s}\}\) using the signaling scheme. Kamenica and Gentzkow [1] formalize this by constructing a mapping from belief to expected sender utility and show that due to Bayes plausibility, the optimal sender utility is equivalent to evaluating the concave closure of this function at the prior. We now show that this observation can, in fact, be generalized to our setting with predicted states and build on this geometric insight. In the following, we note \(\Delta^{n}\), as the probability simplex embedded in \(\mathbb{R}^{n}\). 
**Definition 4**.: _Let \(\widehat{\rho}(\widehat{\theta})\) denote a belief over predicted states \(\widehat{\theta}\), and \(\rho(\theta)\) a belief over true states \(\theta\). Then for instance \(\mathcal{I}\), define \(\overline{u}(\widehat{\rho}):\Delta^{|\Theta|}\to\mathbb{R}\) as the mapping from \(\widehat{\rho}\) to the corresponding platform expected utility (w.r.t. the corresponding true belief \(\rho\)) for the optimal user action at that belief, \(\mathbb{E}_{\rho(\theta)}[u(a^{*}(\widehat{\rho}),\theta)]\). Further, let \(co(\widehat{\rho})\) denote the convex hull of the graph of \(\overline{u}(\widehat{\rho})\) and \(cl(\widehat{\rho})=\sup\{z\,|\,(\widehat{\rho},z)\in co(\widehat{\rho})\}\) denote the concave closure of this graph._ To interpret this definition, note that for a belief over predicted states \(\widehat{\rho}\), one can compute the corresponding belief over true states \(\rho\) using the confusion matrix (lemma 3). Then for an instance \(\mathcal{I}\), \(a^{*}(\widehat{\rho})\) denotes the optimal user action for the corresponding true belief \(\rho(\theta)\). With \(a^{*}(\widehat{\rho})\), one can compute the platform's expected utility \(\overline{u}(\widehat{\rho})\) over the corresponding true state belief \(\rho(\theta)\). \(co(\widehat{\rho})\) denotes the convex hull of all such \((\widehat{\rho},\overline{u}(\widehat{\rho}))\) points. The concave closure \(cl(\widehat{\rho})\) can be geometrically interpreted as the boundary of this convex hull, or the tightest concave function enveloping \(\overline{u}(\widehat{\rho})\). Lastly, Bayes Plausibility in our setting implies \(\sum_{s}P(s)\widehat{\rho}^{s}=\widehat{\mu}\), where \(\widehat{\rho}^{s}\) is the posterior over predicted states induced by signal \(s\) and \(\widehat{\mu}\) is the corresponding belief over predicted states for prior \(\mu\). Proposition 1 now relates the optimal platform utility to the concave closure. **Lemma 3**.: _Given a belief over true states \(\rho(\theta)\in\Delta^{|\Theta|}\), the corresponding belief over predicted states is \(\widehat{\rho}(\widehat{\theta})=\sum_{\theta}\rho(\theta)Q^{\Theta}_{\widehat{\theta},\theta}\). Similarly, for belief \(\widehat{\rho}(\widehat{\theta})\in\Delta^{|\Theta|}\), the corresponding belief over true states is: \(\rho(\theta)=\sum_{\widehat{\theta}}\widehat{\rho}(\widehat{\theta})V^{\Theta}_{\theta,\widehat{\theta}}\), where \(V^{\Theta}\) denotes the inverse of matrix \(Q^{\Theta}\)._ Proof.: First, it is easy to see that \(\widehat{\rho}(\widehat{\theta})=\sum_{\theta}P(\widehat{\theta}|\theta)\rho(\theta)=\sum_{\theta}\rho(\theta)Q^{\Theta}_{\widehat{\theta},\theta}\). For the other direction, note that \(\widehat{\rho}=Q^{\Theta}\rho\), where \(\rho\) and \(\widehat{\rho}\) are probability vectors over all \(\theta\) and \(\widehat{\theta}\). Since \(Q^{\Theta}\) is invertible, we note that \(V^{\Theta}\widehat{\rho}=\rho\). Thus, we can write \(\rho(\theta)=\sum_{\widehat{\theta}}V^{\Theta}_{\theta,\widehat{\theta}}\widehat{\rho}(\widehat{\theta})\). **Proposition 1**.: _The platform utility achieved by optimal signaling on instance \(\mathcal{I}\) is equal to \(cl(\widehat{\mu})\), where \(\widehat{\mu}=\sum_{\theta}\mu(\theta)Q^{\Theta}_{\widehat{\theta},\theta}\) and \(\mu(\theta)\) is the prior._ Proof.: Given a prior \(\mu(\theta)\) over true states \(\theta\), \(\widehat{\mu}\), which we call the noisy prior, can be interpreted as the corresponding belief over predicted states. 
The Bayes plausibility condition, which immediately follows from Bayes rule, implies that for any signaling scheme \(\pi\), \(\widehat{\mu}(\widehat{\theta})=\sum_{s}P(s)\widehat{\rho}(\widehat{\theta}|s)\), where \(\widehat{\rho}(\widehat{\theta}|s)\) (also denoted by \(\widehat{\rho}^{s}\)) is the induced belief over predicted states upon receiving signal \(s\). Thus, the expected utility of any signaling scheme must be in the set \(\{\overline{u}(\widehat{\mu})|(\widehat{\mu},\overline{u}(\widehat{\mu}))\in co (\widehat{P})\}\), since this includes all convex combinations of induced beliefs that equal the noisy prior, and their corresponding expected platform utility (which is simply the same convex combination of expected utilities at those beliefs). Thus \(z^{*}=\sup\{\overline{u}(\widehat{\mu})|(\widehat{\mu},\overline{u}(\widehat {\mu}))\in co(\widehat{P})\}=cl(\widehat{\mu})\) is the maximum utility achievable by any signaling scheme with an arbitrary number of signals. However, by lemma 2 we know there exists a signaling scheme with \(|S|=|A|\) that can also achieve this utility. Theorem 1 can be seen as generalizing the geometric result of Kamenica and Gentzkow [1] to the predicted state setting. Indeed, when \(Q^{\Theta}\) is identity, our result is equivalent to theirs. We next look to build on this. Consider a function \(w_{a}(\widehat{\rho})\) which represents the user's expected utility (over true states) for taking action \(a\) at belief \(\widehat{\rho}\). Observe that this is a linear function (i.e. a hyperplane) since the mapping from \(\widehat{\rho}(\widehat{\theta})\) to \(\rho(\theta)\) is linear, and expectation is a linear operator. We make two observations. First, if we analogously define \(\overline{w}(\widehat{\rho})\) for the user (their expected utility at a belief due to optimal action), we note this is equivalent to \(\max(w_{0}(\widehat{\rho}),w_{1}(\widehat{\rho}))\) which is convex everywhere. Thus, the expected user utility due to Bayes plausible signaling can never decrease, which is operationally beneficial. We formally state this in corollary 2. Second, if for a belief \(\widehat{\rho}\), \(w_{0}(\widehat{\rho})=w_{1}(\widehat{\rho})\), then \(\widehat{\rho}\) represents the threshold where the receiver's optimal action changes. This threshold is a hyperplane since it is the intersection of two hyperplanes, and we call this the _indifference plane_ since the user is indifferent to both actions at this point5. Since the \(\overline{u}(\widehat{\rho})\) function is based on the optimal user action at belief \(\widehat{\rho}\), \(\overline{u}(\widehat{\rho})\) is possibly discontinuous over this indifference plane since this is where the optimal action for the user changes. We now show in lemma 4 that posteriors induced by a strictly optimal signaling scheme (strictly improves upon the platform utility at the prior belief) are inextricably linked to the indifference plane. **Corollary 2**.: _The expected user utility can never decrease due to any signaling scheme._ **Lemma 4**.: _The concave closure \(cl(\widehat{\rho})\) is continuous, piecewise linear, and possibly non-differentiable over the indifference plane. 
Further, the posteriors \(\widehat{\rho}^{\mathrm{s}}\) induced by strictly optimal signaling is either on the simplex boundary or the indifference plane, and \(cl(\widehat{\rho}^{\mathrm{s}})=\overline{u}(\widehat{\rho}^{\mathrm{s}})\)._ Proof.: We observe that \(\overline{u}(\widehat{\rho})\) is piece-wise linear due to the mapping from \(\widehat{\rho}(\widehat{\theta})\) to \(\rho(\theta)\) being linear and expectation being a linear operator, with possible discontinuities at beliefs wherein the receiver is indifferent. Since \(co(\widehat{\rho})\) is the convex hull of this piecewise linear function defined for all \(\widehat{\rho}\), and \(cl(\widehat{\mu})\) corresponds to the boundary of this convex hull, this must be continuous, piecewise linear and possibly non-differentiable over the indifference planes. Next, invoking lemma 4 in Kamenica and Gentzkow [1] implies that at any induced posterior \(\widehat{\rho}^{\mathrm{s}}\) for an optimal scheme, either (1) the belief is on the boundary, (2) the receiver must be indifferent to multiple actions at this belief, or (3) for any other belief wherein the receiver optimal action is not \(a^{*}(\widehat{\rho}^{\mathrm{s}})\), it is strictly better for the platform that \(a^{*}(\widehat{\rho}^{\mathrm{s}})\) is taken. Consider the posterior induced by signal \(1\), \(\widehat{\rho}^{1}\), and suppose the user optimal action is \(a\). Condition (3) implies that for all beliefs wherein the optimal receiver action is _not share_, the platform would strictly prefer the share action. However, recall that in section 3, we assumed that for each action \(a\), there is a state (and thus a corresponding belief) where the user and platform both prefer this action. Thus, only the first two can hold. Lastly, to show \(cl(\widehat{\rho}^{\mathrm{s}})=\overline{u}(\widehat{\rho}^{\mathrm{s}})\), consider the line segment \(\ell_{1}\) connecting the induced posteriors of the optimal signaling scheme: \((\widehat{\rho}^{0},\overline{u}(\widehat{\rho}^{0}))\) and \((\widehat{\rho}^{1},\overline{u}(\widehat{\rho}^{1}))\). Observe that the optimal sender utility is obtained by evaluating this line segment at the noisy prior \(\widehat{\mu}\). If the line connecting these two points (including the end-points) is part of \(cl(\widehat{\rho})\), then our claim holds. If not, then this line must be in the interior of the convex hull since \(cl(\widehat{\rho})\) represents the boundary of this convex hull. Then, there exist points \((\widehat{\rho}^{\mathrm{s}}_{0},\overline{u}(\widehat{\rho}^{\mathrm{s}}_{0}))\) and \((\widehat{\rho}^{\mathrm{s}}_{1},\overline{u}(\widehat{\rho}^{\mathrm{s}}_{1}))\) such that the line between these two is strictly above \(\ell_{1}\). Evaluating this line segment at the prior \(\widehat{\mu}\) (and thus satisfying Bayes plausibility) yields a strictly higher value than at \(\ell_{1}\), contradicting our original claim that we start with an optimal signaling scheme. ## 5 Optimal Noisy Persuasion Properties While the geometric perspective gives us insight into the properties of optimal signaling, it does not readily provide an algorithm to compute this. In this section, we first resolve this algorithmic question by providing a linear program (LP) that computes the optimal signaling scheme under inaccurate predictions. We then characterize how the resulting optimal platform utility (definition 5) is affected by the joint confusion matrix \(Q^{\Theta}\). 
This is of operational significance since classifier accuracy is not an exogenous quantity, but something platforms can modify and develop. Leveraging our LP, we provide necessary and sufficient conditions on \(Q^{\Theta}\) for optimal platform utility to be weakly monotone, and show that for any instance \(\mathcal{I}\), optimal platform utility changes continuously as a function of \(Q^{\Theta}\). **Definition 5**.: _We denote \(u^{*}_{\mathcal{I}}(Q^{\Theta})\) as the optimal platform utility for an instance \(\mathcal{I}=(u,w,\mu)\) as a function of the confusion matrices._ Dughmi and Xu [11] show that the optimal signaling scheme can be expressed as the solution to a linear program. Intuitively, this approach perceives signals as action recommendations and encodes the receiver's best response at a belief from an incentive compatibility perspective. Since we show in Lemma 2 that using \(|\mathcal{A}|\) signals suffices in our setting to achieve optimal utility, we propose a modified version of their LP for our noisy persuasion setting. **Theorem 1**.: _For two actions \(a_{i},a_{j}\), and state \(v\), define \(\Delta w(a_{i},a_{j},v)=w(a_{i},v)-w(a_{j},v)\). Then interpreting signals as action recommendations, the optimal signaling scheme for the platform in the noisy persuasion setting is given by the following LP:_ \[\text{maximize:}\qquad\sum_{a_{i}\in\mathcal{A}}\sum_{\theta=(m,v)}u(a_{i},\theta)\mu(\theta)\sum_{\widehat{\theta}}\pi(s=a_{i}|\widehat{\theta})Q^{\Theta}_{\widehat{\theta},\theta}\] \[\text{subject to:}\ \forall(\widehat{\theta}):\sum_{a_{i}}\pi(a_{i}|\widehat{\theta})=1\] \[\forall(a_{i},a_{j}):\sum_{\theta=(m,v)}\Delta w(a_{i},a_{j},v)\mu(\theta)\sum_{\widehat{\theta}}\pi(s=a_{i}|\widehat{\theta})Q^{\Theta}_{\widehat{\theta},\theta}\geq 0\] Proof.: The user chooses action \(a_{i}\) over action \(a_{j}\) if going with the former leads to a higher expected utility under the induced posterior. The platform then must choose the signaling scheme that maximizes its utility given this constraint. Since we are working with \(|\mathcal{A}|\) signals, each signal can be interpreted as an action recommendation, and the user constraint can be cast through an incentive-compatibility (IC) lens. Thus, when signal \(a_{i}\) is recommended, the user chooses it over \(a_{j}\) if and only if \(\sum_{\theta}\Delta w(a_{i},a_{j},v)\rho(\theta|s=a_{i})\geq 0\); multiplying by \(P(s=a_{i})\) and applying Bayes rule, this is \(\sum_{\theta}\Delta w(a_{i},a_{j},v)\rho(\theta|s=a_{i})P(s=a_{i})=\sum_{\theta}\Delta w(a_{i},a_{j},v)P(s=a_{i}|\theta)\mu(\theta)\geq 0\), at which point we invoke lemma 1 and attain the constraint expression in the statement. Since this constraint implies rational users always take the recommended action, the platform's expected ex-ante utility achieved can be expressed as \(\sum_{a_{i}}P(s=a_{i})\sum_{\theta}\rho(\theta|s=a_{i})u(a_{i},\theta)\). We write \(P(s=a_{i})\rho(m,v|s=a_{i})=\mu(m,v)P(s=a_{i}|m,v)\) and appeal again to lemma 1. Since the feasible region is clearly bounded, we now show this to be always non-empty as well, implying strong duality. Let \(a^{*}\) denote the strictly optimal action for the user at the prior belief. That is: \(\forall a_{j}\neq a^{*}\), \(\sum_{v}\mu(v)w(a^{*},v)>\sum_{v}\mu(v)w(a_{j},v)\). Next, consider the following signaling scheme: \(\pi(s=a^{*}|\widehat{\theta})=1,\,\forall\widehat{\theta}\). Clearly, this satisfies the simplex constraints. To see that it strictly satisfies the IC constraints, observe that since the probability of recommending any action aside from \(a^{*}\) is \(0\), we only need to consider constraints corresponding to \((a^{*},a_{j})\). 
Our signaling scheme implies: \(\sum_{\widehat{\theta}}\pi(a^{*}|\widehat{\theta})Q^{\Theta}_{\widehat{\theta},\theta}=\sum_{\widehat{\theta}}Q^{\Theta}_{\widehat{\theta},\theta}=1\). Thus the IC constraint for \((a^{*},a_{j})\) becomes \(\sum_{m,v}\Delta w(a^{*},a_{j},v)\mu(m,v)\geq 0\), which holds since \(a^{*}\) is strictly optimal for the user at the prior belief. The solution to this LP also has a nice interpretation. Since signals are interpreted as action recommendations, the IC constraints imply that when action \(a\in\{0,1\}\) is recommended, the optimal action for the user at their signal-induced posterior \(P(v|s=a)\) is the action \(a\). We refer to this phenomenon as _optimal signaling is persuasive_. Not only does this have operational advantages (i.e. platforms can guarantee that following the recommended action is in the user's best interest), it also becomes pertinent when we study the performative model and must reason about the distribution of users who share at a given round. Before providing our main result on monotonicity, we first state a key lemma that leverages the structure of the LP in Theorem 1 to disentangle the impact of the confusion matrix \(Q^{\Theta}\) from the objective and the IC constraints. **Lemma 5**.: _The optimal signaling LP can be reformulated as follows, with \(c\) and \(B\) only depending on \(\mathcal{I}\) (and not on \(Q^{\Theta}\))._ \[\text{maximize:}\langle c,\boldsymbol{\tilde{\pi}}\rangle \tag{1}\] \[\text{subject to:}\;\;B\boldsymbol{\tilde{\pi}}\geq\mathbf{0}\;\;\text{and}\;\;Q^{\Theta}\boldsymbol{\pi}=\boldsymbol{\tilde{\pi}}\;\;\text{and}\;\;\mathbf{0}\leq\boldsymbol{\pi}\leq\mathbf{1} \tag{2}\] Proof.: We start with the LP formulation of Theorem 1 and express this as follows with the observation that we have 2 actions and 2 signals: \[\text{maximize} \sum_{\theta}\mu(\theta)u(0,\theta)+\sum_{\theta}\,(u(1,\theta)-u(0,\theta))\mu(\theta)\sum_{\widehat{\theta}}\pi(1|\widehat{\theta})Q^{\Theta}_{\widehat{\theta},\theta}\] \[\text{subject to:}\;\;\;\sum_{\theta}\Delta w(1,0,\theta)\mu(\theta)\sum_{\widehat{\theta}}\pi(1|\widehat{\theta})Q^{\Theta}_{\widehat{\theta},\theta}\geq 0\] \[\sum_{\theta}\Delta w(0,1,\theta)\mu(\theta)\sum_{\widehat{\theta}}(1-\pi(1|\widehat{\theta}))Q^{\Theta}_{\widehat{\theta},\theta}\geq 0\] \[\forall\widehat{\theta},\;\;0\leq\pi(1|\widehat{\theta})\leq 1\] Note that the first term in the objective is a constant which can be ignored from an optimization perspective. Thus, we can rewrite the above expression in the following matrix form: \[\text{maximize} \sum_{\theta}\,(u(1,\theta)-u(0,\theta))\mu(\theta)\tilde{\pi}(1|\theta)\] \[\text{subject to:} \sum_{\theta}\Delta w(1,0,\theta)\mu(\theta)\tilde{\pi}(1|\theta)\geq 0\] \[\sum_{\theta}\Delta w(0,1,\theta)\mu(\theta)(1-\tilde{\pi}(1|\theta))\geq 0\] \[Q^{\Theta}\boldsymbol{\pi}=\boldsymbol{\tilde{\pi}}\text{ and }\mathbf{0}\leq\boldsymbol{\pi}\leq\mathbf{1}\] where \(\boldsymbol{\pi}\) is the vector whose coordinates are \(\pi(1|\widehat{\theta})\) and \(\boldsymbol{\tilde{\pi}}\) is the vector whose coordinates are \(\tilde{\pi}(1|\theta)\). Lastly, observe that the last two constraints imply \(\boldsymbol{\tilde{\pi}}\in[\mathbf{0},\mathbf{1}]\). The set \(\{Q^{\Theta}\boldsymbol{\pi}\mid\mathbf{0}\leq\boldsymbol{\pi}\leq\mathbf{1}\}\) can be interpreted geometrically. This set corresponds to the _parallelepiped_ induced by the columns of \(Q^{\Theta}\). 
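For readers who want to see the mechanics, here is a minimal numerical sketch of the two-action LP above using `scipy.optimize.linprog`. The instance (states, utilities, confusion matrix) is a hypothetical one of our own making, and the variable names are ours; this is an illustration of the reformulation, not the paper's implementation. Note that \(\tilde{\pi}(1|\theta)=\sum_{\widehat{\theta}}\pi(1|\widehat{\theta})Q^{\Theta}_{\widehat{\theta},\theta}\), which in code is a contraction over the first index of \(Q^{\Theta}\).

```python
# Minimal sketch of the two-action signaling LP (Lemma 5), hypothetical numbers.
import numpy as np
from scipy.optimize import linprog

mu = np.array([0.3, 0.7])                  # prior over true states
u0, u1 = np.zeros(2), np.ones(2)           # platform utility: not share / share
w0, w1 = np.zeros(2), np.array([1.0, -1.0])  # user utility: not share / share
Q = np.array([[0.9, 0.2],                  # Q[pred, true] = P(pred | true);
              [0.1, 0.8]])                 # each column sums to one

c_obj = (u1 - u0) * mu                     # objective weights on tilde_pi(1|theta)
dw = (w1 - w0) * mu                        # IC weights on tilde_pi(1|theta)

# Decision variable pi = pi(1 | predicted state); tilde_pi = Q.T @ pi,
# so a linear form v . tilde_pi equals (Q @ v) . pi.
c = -(Q @ c_obj)                           # linprog minimizes, hence the sign flip
ic_form = -(Q @ dw)                        # -(dw . tilde_pi) as a function of pi
A_ub = np.vstack([ic_form, ic_form])       # both ICs share this form, constants differ
b_ub = np.array([0.0, -dw.sum()])          # share IC >= 0; not-share IC >= dw . 1
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * Q.shape[0])

pi_opt = res.x                             # optimal pi(1 | predicted state)
value = mu @ u0 - res.fun                  # optimal expected platform utility
print(pi_opt, value)
```

With these illustrative numbers the user's default action at the prior is "not share", and the solver returns an interior recommendation probability for the unfavourable predicted state, mirroring the classic persuasion logic.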
Returning to the geometric picture: for a set of vectors \(\{\boldsymbol{v}_{1},\ldots,\boldsymbol{v}_{k}\}\), we define \(paral(\boldsymbol{v}_{1},\ldots,\boldsymbol{v}_{k})\) as the set of vectors that can be expressed as \(\beta_{1}\boldsymbol{v}_{1}+\cdots+\beta_{k}\boldsymbol{v}_{k}\), with \(\beta_{i}\in[0,1]\). One interpretation of Lemma 5 is that Bayesian persuasion with noisy labels can be seen as standard Bayesian persuasion with an additional constraint that the signaling scheme over the _true labels_\(\boldsymbol{\tilde{\pi}}\) has to belong to the parallelepiped of the columns of the confusion matrix \(Q^{\Theta}\). This idea is at the core of the proof of our main result, which claims that the utility is monotone with respect to set inclusion of the convex hull of the columns \([Q^{\Theta}]_{:1},\ldots,[Q^{\Theta}]_{:|\Theta|}\) of \(Q^{\Theta}\). **Theorem 2**.: _Given two symmetric confusion matrices \(Q^{\Theta}_{1}\) and \(Q^{\Theta}_{2}\), the optimal sender utility is guaranteed to be weakly monotone, \(u^{\star}_{\mathcal{I}}(Q^{\Theta}_{2})\geq u^{\star}_{\mathcal{I}}(Q^{\Theta}_{1})\) for any instance \(\mathcal{I}\), if and only if for each column \(i\in\{1,\ldots,|\Theta|\}\), \([Q^{\Theta}_{1}]_{:i}\in co([Q^{\Theta}_{2}]_{:1},\ldots,[Q^{\Theta}_{2}]_{:|\Theta|})\), where \(co([Q^{\Theta}_{2}]_{:1},\ldots,[Q^{\Theta}_{2}]_{:|\Theta|})\) is the convex hull of the columns of \(Q^{\Theta}_{2}\)._ Proof.: \(\Rightarrow\) We start by considering the sufficient condition for the optimal platform utility to increase, wherein the columns of \(Q^{\Theta}_{1}\) can be written as a convex combination of the columns of \(Q^{\Theta}_{2}\). Denote the convex weights by vectors \(\boldsymbol{\lambda_{1}},\ldots,\boldsymbol{\lambda_{|\Theta|}}\). Then the following holds: \(Q^{\Theta}_{2}L=Q^{\Theta}_{1}\), where \(L\) is defined as \(L=\begin{bmatrix}\boldsymbol{\lambda_{1}}&\cdots&\boldsymbol{\lambda_{|\Theta|}}\end{bmatrix}\). Notice that \(L\) is a doubly-stochastic matrix: its columns are convex weights (nonnegative and summing to one), and for the rows observe that \(Q_{2}^{\Theta}L\mathbf{1}=Q_{1}^{\Theta}\mathbf{1}=\mathbf{1}\), which implies \(L\mathbf{1}=({Q_{2}^{\Theta}}^{-1})\mathbf{1}=\mathbf{1}\). Let \((\boldsymbol{\pi}_{1}^{*},\boldsymbol{\tilde{\pi}}_{1}^{*})\) denote an optimal solution for \(LP(Q_{1}^{\Theta})\). We will show that \((\boldsymbol{\pi}_{2},\boldsymbol{\tilde{\pi}}_{2})=(L\boldsymbol{\pi}_{1}^{*},\boldsymbol{\tilde{\pi}}_{1}^{*})\) is a feasible solution for \(\text{LP}(Q_{2}^{\Theta})\). Notice that all constraints and the objective of \(\text{LP}(Q_{2}^{\Theta})\) depend only on \(\boldsymbol{\tilde{\pi}}\), except the following constraints: \(Q_{2}^{\Theta}\boldsymbol{\pi}_{2}=\boldsymbol{\tilde{\pi}}_{2}\) and \(\mathbf{0}\leq\boldsymbol{\pi}_{2}\leq\mathbf{1}\). The first constraint is satisfied since \(Q_{2}^{\Theta}\boldsymbol{\pi}_{2}=Q_{2}^{\Theta}L\boldsymbol{\pi}_{1}^{*}=Q_{1}^{\Theta}\boldsymbol{\pi}_{1}^{*}=\boldsymbol{\tilde{\pi}}_{2}\), while the second constraint, \[\boldsymbol{\pi}_{2}=L\boldsymbol{\pi}_{1}^{*}=\begin{pmatrix}\langle[L]_{1:},\boldsymbol{\pi}_{1}^{*}\rangle\\ \vdots\\ \langle[L]_{|\Theta|:},\boldsymbol{\pi}_{1}^{*}\rangle\end{pmatrix} \tag{3}\] is satisfied by the fact that \(L\) is a doubly stochastic matrix (each row is nonnegative and sums to one, so every coordinate of \(L\boldsymbol{\pi}_{1}^{*}\) lies in \([0,1]\)). \(\Leftarrow\) We now show the stated condition is necessary. That is, if the condition is not satisfied, there always exist instances wherein the utility is not monotone. 
For symmetric confusion matrices, we show that a column \([Q_{1}^{\Theta}]_{:i}\) belonging to \(paral([Q_{2}^{\Theta}]_{:1},\ldots,[Q_{2}^{\Theta}]_{:|\Theta|})\) is equivalent to \([Q_{1}^{\Theta}]_{:i}\in co([Q_{2}^{\Theta}]_{:1},\ldots,[Q_{2}^{\Theta}]_{:|\Theta|})\). By definition of the parallelepiped: \([Q_{1}^{\Theta}]_{:i}=\beta_{1}[Q_{2}^{\Theta}]_{:1}+\cdots+\beta_{|\Theta|}[Q_{2}^{\Theta}]_{:|\Theta|}\), where \(\beta_{\ell}\in[0,1]\,\forall\ell\). Next observe that \(\sum_{j}\,[Q_{1}^{\Theta}]_{j,i}=1\) and further, \(\sum_{j}\sum_{\ell}\beta_{\ell}[Q_{2}^{\Theta}]_{j,\ell}=\sum_{\ell}\beta_{\ell}\sum_{j}\,[Q_{2}^{\Theta}]_{j,\ell}=\sum_{\ell}\beta_{\ell}=1\), which is exactly the convex hull condition. Thus, the convex hull structure is equivalent to the parallelepiped structure when \(Q_{1}^{\Theta}\) and \(Q_{2}^{\Theta}\) are symmetric stochastic matrices. Now, assume there exists a column of matrix \(Q_{1}^{\Theta}\) such that \([Q_{1}^{\Theta}]_{:i}\notin paral([Q_{2}^{\Theta}]_{:1},\ldots,[Q_{2}^{\Theta}]_{:|\Theta|})\); we construct an instance \(\mathcal{I}=(u,w,\mu)\) such that \(u_{\mathcal{I}}^{*}(Q_{1}^{\Theta})>u_{\mathcal{I}}^{*}(Q_{2}^{\Theta})\), violating the monotone condition. Consider an instance wherein the receiver is indifferent to both actions at all states. Thus, the first two constraints of \(LP(Q^{\Theta})\) are always satisfied. Then for \(Q_{1}^{\Theta}\), let \(\phi_{1}\) denote the set of all \(\boldsymbol{\tilde{\pi}}\) such that \(Q_{1}^{\Theta}\boldsymbol{\pi}=\boldsymbol{\tilde{\pi}}\) for some \(\mathbf{0}\leq\boldsymbol{\pi}\leq\mathbf{1}\), and define \(\phi_{2}\) similarly for \(Q_{2}^{\Theta}\). Since there exists a column of matrix \(Q_{1}^{\Theta}\) such that \([Q_{1}^{\Theta}]_{:i}\notin paral([Q_{2}^{\Theta}]_{:1},\ldots,[Q_{2}^{\Theta}]_{:|\Theta|})\), we know that there exists a point, denoted by \(\boldsymbol{\tilde{\pi}}_{1}\), which belongs to \(paral([Q_{1}^{\Theta}]_{:1},\ldots,[Q_{1}^{\Theta}]_{:|\Theta|})\) but not to \(paral([Q_{2}^{\Theta}]_{:1},\ldots,[Q_{2}^{\Theta}]_{:|\Theta|})\). Since \(\boldsymbol{\tilde{\pi}}_{1}\in paral([Q_{1}^{\Theta}]_{:1},\ldots,[Q_{1}^{\Theta}]_{:|\Theta|})\), there exists \(\boldsymbol{\pi}_{1}\) with entries between \(0\) and \(1\) such that \(Q_{1}^{\Theta}\boldsymbol{\pi}_{1}=\boldsymbol{\tilde{\pi}}_{1}\). On the contrary, since \(\boldsymbol{\tilde{\pi}}_{1}\notin paral([Q_{2}^{\Theta}]_{:1},\ldots,[Q_{2}^{\Theta}]_{:|\Theta|})\), there exists no \(\boldsymbol{\pi}_{2}\) with entries between \(0\) and \(1\) such that \(Q_{2}^{\Theta}\boldsymbol{\pi}_{2}=\boldsymbol{\tilde{\pi}}_{1}\). Therefore, \(\boldsymbol{\tilde{\pi}}_{1}\in\phi_{1}\) and \(\boldsymbol{\tilde{\pi}}_{1}\notin\phi_{2}\). Now, by the Hyperplane Separation Theorem [26, Exercise 2.22], there exist \(\mathbf{b}\) and constants \(c_{1}>c_{2}\) such that \(\langle\mathbf{b},\boldsymbol{\tilde{\pi}}_{1}\rangle>c_{1}\) and \(\langle\mathbf{b},\boldsymbol{\tilde{\pi}}_{2}\rangle<c_{2}\) for all \(\boldsymbol{\tilde{\pi}}_{2}\in\phi_{2}\). Now notice that we can realize \(\mathbf{b}\) as the objective of the LP by choosing utilities with \((u(1,\theta)-u(0,\theta))\mu(\theta)=b_{\theta}\), where \(\theta=1,\ldots,|\Theta|\) and \(b_{\theta}\) is the \(\theta\)-th coordinate of vector \(\mathbf{b}\). Notice that in the edge case where \(\mu(\theta)=0\) for some \(\theta\) values, the initial LP problem reduces to the same LP structure with a smaller dimension, and the proof is exactly the same. Therefore, we see that we can find examples where \(u_{\mathcal{I}}^{*}(Q_{1}^{\Theta})>u_{\mathcal{I}}^{*}(Q_{2}^{\Theta})\). The result above is quite insightful as the conditions for optimal platform utility to monotonically increase can be easily parsed from the \(Q^{\Theta}\) matrix, noting that inclusion in the convex hull is very easy to verify computationally (for instance, via the small feasibility check sketched below). 
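As a purely illustrative sketch (our own construction, not the paper's code), checking whether a column of \(Q_{1}^{\Theta}\) lies in the convex hull of the columns of \(Q_{2}^{\Theta}\) can be phrased as a small linear feasibility problem; the matrices below are hypothetical.

```python
# Check the Theorem 2 condition: is each column of Q1 a convex
# combination of the columns of Q2?  (Hypothetical matrices below.)
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(col, cols):
    """Feasibility LP: find weights lam >= 0 with sum(lam) = 1 and cols @ lam = col."""
    k = cols.shape[1]
    A_eq = np.vstack([cols, np.ones((1, k))])
    b_eq = np.append(col, 1.0)
    res = linprog(np.zeros(k), A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * k)
    return res.status == 0                 # status 0 means a feasible optimum was found

Q1 = np.array([[0.8, 0.2],
               [0.2, 0.8]])
Q2 = np.array([[0.9, 0.1],
               [0.1, 0.9]])                # a "more accurate" symmetric classifier

monotone = all(in_convex_hull(Q1[:, i], Q2) for i in range(Q1.shape[1]))
print(monotone)                            # True: Theorem 2 then guarantees weak improvement
```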
We next present the second key result of this section, which shows that the change in utility due to the confusion matrix \(Q^{\Theta}\) is continuous. This is advantageous operationally since making slight modifications to the underlying classifier will not drastically affect the persuasion outcome. The proof of this result is technical and relies on the geometric insights developed in section 4. As a sketch, we show that the concave closure function \(cl(\widehat{\rho})\) is Lipschitz in \(\widehat{\rho}\), and that the change from \(Q_{1}^{\Theta}\) to \(Q_{2}^{\Theta}\) leads to a bounded vertical and lateral shift in the closure function due to this Lipschitz property, giving rise to Lipschitz continuity in \(Q^{\Theta}\). **Theorem 3**.: _For instance \(\mathcal{I}\), the maximum platform utility, \(u_{\mathcal{I}}^{*}(Q^{\Theta})\), is Lipschitz continuous in the confusion matrix \(Q^{\Theta}\)._ Proof.: We wish to show that for any instance \(\mathcal{I}\) and any pair \((Q_{1}^{\Theta},Q_{2}^{\Theta})\), there exists a constant \(L\) such that: \(|u_{\mathcal{I}}^{*}(Q_{1}^{\Theta})-u_{\mathcal{I}}^{*}(Q_{2}^{\Theta})|\leq L||Q_{1}^{\Theta}-Q_{2}^{\Theta}||_{\infty}\), where we use the \(\ell_{\infty}\) matrix norm. We first show that the concave closure function \(cl(\widehat{\rho})\) is Lipschitz in \(\widehat{\rho}\). Next, we show that the change from \(Q_{1}^{\Theta}\) to \(Q_{2}^{\Theta}\) leads to a bounded vertical shift in the closure function due to this Lipschitz property. Lastly, the point at which we evaluate the concave closure function to determine the optimal sender utility also changes in a bounded manner. The combined effects are all bounded and give rise to our Lipschitz constant. From lemma 4, we know that on either side of the plane of indifference, \(cl(\widehat{\rho})\) is a continuous linear function. Since we have 2 actions, there is a single plane of indifference. We aim to tightly upper bound, for each joint state \(\widehat{\theta}=(\widehat{m},\widehat{v})\), the directional derivative of the linear regions on either side of this indifference plane. Since the concave closure function is linear on either side of the indifference plane, it suffices to consider the maximum value of \(\frac{\Delta u}{\Delta\widehat{\rho}_{\widehat{\theta}}}\) from the indifference plane, where \(\widehat{\rho}_{\widehat{\theta}}\) denotes the coordinate corresponding to \(\widehat{\theta}\). Pick an arbitrary belief \(\widehat{\rho}\) on the indifference plane. The slope along a direction \(\widehat{\theta}\) is naturally upper bounded by \(\frac{u^{max}-u^{min}}{\min(\widehat{\rho}_{\widehat{\theta}},1-\widehat{\rho}_{\widehat{\theta}})}\). We now look to tighten this. 
Indeed, if there exists another belief \(\widehat{\rho}^{\prime}\) on the indifference plane such that \(\widehat{\rho}_{\widehat{\theta}}^{\prime}\) is closer to \(0.5\), this would imply the change in utility is larger than \(u^{max}-u^{min}\), which is not possible. To tighten this, let \(\widehat{\rho}_{\widehat{\theta}}^{min}\) and \(\widehat{\rho}_{\widehat{\theta}}^{max}\) denote the smallest and largest values of coordinate \(\widehat{\theta}\) for beliefs \(\widehat{\rho}\) that lie on the indifference plane. Then the largest value of the directional derivative of the closure function along the \(\widehat{\theta}\) direction is given by \(c_{\widehat{\theta}}=\max\left(\frac{u^{max}-u^{min}}{1-\widehat{\rho}_{\widehat{\theta}}^{min}},\frac{u^{max}-u^{min}}{\widehat{\rho}_{\widehat{\theta}}^{max}}\right)\). Thus, when the belief changes from \(\widehat{\rho}_{1}\) to \(\widehat{\rho}_{2}\), the maximum change in the concave closure function is upper-bounded by \(\|\widehat{\rho}_{1}-\widehat{\rho}_{2}\|_{\infty}\sum_{\widehat{\theta}}c_{\widehat{\theta}}\), with \(c=\sum_{\widehat{\theta}}c_{\widehat{\theta}}\) being the Lipschitz constant. For a belief over predicted states \(P(\widehat{\theta})=\widehat{\rho}\), the corresponding belief over true states \(P(m,v)\) changes under the different confusion matrices. This affects the closure graph in two ways. First, it vertically shifts the underlying \(\overline{u}\) function since \(\overline{u}(\widehat{\rho})=\mathbb{E}_{\rho(\theta)}[u(a^{*},m,v)]\). More formally, for a given belief \(\widehat{\rho}\) the change due to the shift from \(Q_{1}^{\Theta}\) to \(Q_{2}^{\Theta}\) is given by: \(\overline{u}(\widehat{\rho};Q_{1}^{\Theta})-\overline{u}(\widehat{\rho};Q_{2}^{\Theta})\) \[=\sum_{\theta}u(a^{*},\theta)\sum_{\widehat{\theta}}\widehat{\rho}(\widehat{\theta})(V_{1}^{\Theta}(\theta,\widehat{\theta})-V_{2}^{\Theta}(\theta,\widehat{\theta}))\leq|\Theta|u^{max}||V_{1}^{\Theta}-V_{2}^{\Theta}||\] which follows due to lemma 1. Note that \(V_{1}^{\Theta}\) and \(V_{2}^{\Theta}\) are the inverses of the confusion matrices \(Q_{1}^{\Theta}\) and \(Q_{2}^{\Theta}\). Since we consider non-degenerate confusion matrices \(Q^{\Theta}\) whose inverses satisfy \(\|V^{\Theta}\|\leq M\),6 we can appeal to the sub-multiplicativity of the matrix norm and state: \(||V_{1}^{\Theta}-V_{2}^{\Theta}||\leq M^{2}||Q_{1}^{\Theta}-Q_{2}^{\Theta}||\). Footnote 6: For instance if \(\|Q^{\Theta}-I_{d}\|\leq r<1\), we would have \(M\leq\frac{1}{1-r}\). Second, let \(\rho(\theta)\) denote a belief wherein the user has equal expected utility for both actions. In other words, the user is indifferent. The predicted state belief that corresponds to this true belief also shifts due to the change from \(Q_{1}^{\Theta}\) to \(Q_{2}^{\Theta}\). In other words, the indifference plane is laterally shifted. To precisely characterize this, observe that for a belief over true states \(P(\theta)\), the change in the corresponding predicted belief is: \(\widehat{\rho}(\widehat{\theta};Q_{1}^{\Theta})-\widehat{\rho}(\widehat{\theta};Q_{2}^{\Theta})\leq|\Theta|||Q_{1}^{\Theta}-Q_{2}^{\Theta}||\), which also follows from lemma 1. There is thus a region of length \(|\Theta|||Q_{1}^{\Theta}-Q_{2}^{\Theta}||\) where the optimal action is different under \(cl(\widehat{\rho};Q_{1}^{\Theta})\) and \(cl(\widehat{\rho};Q_{2}^{\Theta})\). 
The largest difference between the two closure functions for a belief \(\widehat{\rho}\) occurs in this region, since one function could be increasing and the other decreasing (for a belief outside of this region, the optimal action is the same, so both functions are increasing or both are decreasing). However, since the closure function is Lipschitz with constant \(c\), the maximum difference at any belief \(\widehat{\rho}\) is given by: \(|cl(\widehat{\rho};Q_{1}^{\Theta})-cl(\widehat{\rho};Q_{2}^{\Theta})|\leq 2c|\Theta|||Q_{1}^{\Theta}-Q_{2}^{\Theta}||+|\Theta|M^{2}u^{max}||Q_{1}^{\Theta}-Q_{2}^{\Theta}||\). Lastly, recall that by theorem 1, the optimal platform utility is equal to evaluating the closure function at the noisy prior \(\widehat{\mu}(\widehat{\theta})\). For a prior \(\mu(\theta)\), we have established that the predicted belief shifts at most \(|\Theta|||Q_{1}^{\Theta}-Q_{2}^{\Theta}||_{1}\). Since the closure function is Lipschitz, the change in the evaluation point from \(\widehat{\mu}(\widehat{\theta};Q_{1}^{\Theta})\) to \(\widehat{\mu}(\widehat{\theta};Q_{2}^{\Theta})\) leads to a difference of at most \(c|\Theta|||Q_{1}^{\Theta}-Q_{2}^{\Theta}||_{1}\). Combining this with the fact that the closure functions are vertically shifted by at most \((2c+M^{2}u^{max})|\Theta|||Q_{1}^{\Theta}-Q_{2}^{\Theta}||_{1}\) at any belief implies that the total change in optimal utility due to the change from \(Q_{1}^{\Theta}\) to \(Q_{2}^{\Theta}\) is at most \(|\Theta|(3c+M^{2}u^{max})||Q_{1}^{\Theta}-Q_{2}^{\Theta}||_{1}\). Thus, the optimal sender utility for an instance \(\mathcal{I}\) as a function of confusion matrix \(Q^{\Theta}\) is \(|\Theta|(3c+M^{2}u^{max})\)-Lipschitz. ## 6 Performative Perspectives We now consider the dynamics of Bayesian persuasion over time as signaling affects the distribution of content on the platform. Recall that without persuasion, users simply take the best action with respect to their prior, which we naturally assume to be "share". As such, the distribution of content in the platform is unchanged from the starting prior. With persuasive signaling, we now stochastically induce different beliefs in the user, and their decision to share is affected accordingly. This means that after some time, the content distribution of this user will skew toward the type of content they were recommended to share. In other words, the transition of the prior distribution from \(\mu_{t}\) to \(\mu_{t+1}\) is affected by the signaling strategy employed by the platform.7 Correspondingly, the platform's optimal signaling at \(t+1\) will change from that at time \(t\). We consider the utility function and prediction models fixed and look to model this interaction between optimal signaling and the resulting prior. Footnote 7: The actual time between \(t\) and \(t+1\) is unimportant for our technical analysis and we leave it for practitioners and domain experts to further shed light on. The nascent literature around performative prediction captures a similar tension in the classification setting [22, 23, 24]. Therein, an optimal classifier is deployed for a given distribution, which is then affected by the chosen classifier. While sharing a number of parallels, these works generally require the underlying optimization problem to be unconstrained and strongly convex, which may be appropriate for classification settings but does not hold for our optimal signaling linear program. 
Further, they generally model the distribution update to be purely based on the optimization variable, where the past has no effect. Transitions in our social media setting need not be so stateless, however, as older posts still exist on the platform, although with diminished relevance. Inspired by Brown et al. [25], we consider a stateful performative model where we interpolate between the earlier prior and the new distribution affected by signaling, precisely described below: **Definition 6**.: _The performative persuasion process for an instance \(\mathcal{I}=(u,w,\mu_{0})\) with joint confusion matrices \(Q^{\Theta}\) is defined as follows:_
* _At each round \(t\), with prior \(\mu_{t}(\theta)\), the platform chooses an optimal signaling scheme \(\pi_{t}^{*}(s|\widehat{\theta})\), \(s\in\{0,1\}\) (Theorem 1).8_
Footnote 8: If there are multiple optimal signaling schemes, we assume ties are broken by choosing the scheme whose posteriors are closest to the earlier round's posteriors.
* _Since optimal signaling is persuasive, users follow the recommendation (to share or not to share)._
* _The next round's prior distribution is given by \(\mu_{t+1}=\lambda\mu_{t}+(1-\lambda)\rho_{t}(\theta|s=1)\), where \(\rho_{t}(\theta|s=1)\) is the distribution of content that was shared this round, and \(\lambda\in[0,1]\) is a hyper-parameter._
The key question in this setting is whether such a process converges, and what the convergence point is. Parallel to this, another important notion is _stability_, the standard convergence concept in the performative literature. This resembles a Nash equilibrium notion, ensuring optimal signaling does not change the underlying distribution. We precisely define both for our setting below: **Definition 7**.: _We say the performative process converges to distribution \(\mu^{*}\) if for any \(\varepsilon>0\), there exists a \(T_{c}\) such that for \(t>T_{c}\), \(\|\mu_{t}-\mu^{*}\|<\varepsilon\). We say a distribution \(\mu^{s}\) is a stable point if there exists a \(T_{s}\) such that for \(t>T_{s}\), \(\mu_{t}=\mu^{s}\)._ We now prove the first key result in this section, outlining that the performative process always converges to the optimal posterior belief induced by the share signal (referred to as the share posterior) at round \(0\) with prior \(\mu_{0}\). This proof relies on the geometric insight developed in section 4.2. We leverage the fact that the performative process does not affect the concave closure function, and show that Bayes plausibility implies \(\mu_{t}\) always lies between the two optimal posteriors induced at the first round. **Theorem 4**.: _For \(\lambda\in(0,1)\), the performative process always converges to the best optimal posterior induced by the share signal in the first round: \(\rho_{0}^{1}(\theta)=\sum_{\widehat{\theta}}\widehat{\rho}_{0}^{1}(\widehat{\theta})V^{\Theta}_{\theta,\widehat{\theta}}\), which has utility \(\overline{u}(\widehat{\rho}_{0}^{1})\).9_ Footnote 9: \(\widehat{\rho}_{0}^{1}\) is the posterior over predicted states induced by signal \(1\) (share) at round \(0\). Proof.: We first express the performative dynamics with respect to the observed states \(\widehat{\theta}=(\widehat{m},\widehat{v})\). Observe the following: \(\lambda\widehat{\mu}_{t}(\widehat{\theta})+(1-\lambda)\widehat{\rho}_{t}(\widehat{\theta}|s=1)=\sum_{\theta}Q^{\Theta}_{\widehat{\theta},\theta}[\lambda\mu_{t}(\theta)+(1-\lambda)\rho_{t}(\theta|s=1)]\), which follows due to lemma 3. 
Next, by invoking the performative dynamics and lemma 3 again, we have that this is equivalent to \(\sum_{\theta}\mu_{t+1}(\theta)Q^{\Theta}_{\widehat{\theta},\theta}=\widehat{ \mu}_{t+1}(\widehat{\theta})\). In other words: \(\widehat{\mu}_{t+1}(\widehat{\theta})=\lambda\widehat{\mu}_{t}(\widehat{ \theta})+(1-\lambda)\widehat{\rho}_{t}(\widehat{\theta}|s=1)\). Next, observe that the performative process only changes the prior (and thus the optimal signaling scheme), but does not affect the platform belief to expected utility function \(\overline{u}(\widehat{\rho})\) nor its concave closure \(cl(\widehat{\rho})\). Both of these depend only on the platform and user utility. Next, consider the scenario at \(t=0\) with noisy prior \(\widehat{\mu}_{0}(\widehat{\theta})\), whereupon the platform commits to an optimal signaling scheme \(\pi_{0}^{*}(s|\widehat{\theta})\). If there are multiple optimal schemes (thus multiple optimal posteriors that can be induced) and since this is the first round, let the platform break the tie by choosing the pair for whom the corresponding \(\overline{u}(\widehat{\rho}_{0}(\widehat{\theta}|s=1))\) is the largest. Denote this pair of optimal signaling schemes by \(\{\widehat{\rho}_{0}^{0},\widehat{\rho}_{0}^{1}\}\). Next, for \(\alpha\in[0,1]\), consider the line segment \(\ell(\alpha)=(1-\alpha)z_{0}+\alpha z_{1}\) which connects the points \(z_{0}=[\widehat{\rho}_{0}^{0},cl(\widehat{\rho}_{0}^{0})]\) and \(z_{1}=[\widehat{\rho}_{0}^{1},cl(\widehat{\rho}_{0}^{1})]\). By lemma 4, we know that the endpoints of this line segment, corresponding to the optimal induced posteriors, also lie on the \(\overline{u}(\widehat{\rho})\). In other words, \([\widehat{\rho}_{0}^{x},cl(\widehat{\rho}_{0}^{x})]=[\widehat{\rho}_{0}^{x}, \overline{u}(\widehat{\rho}_{0}^{x})]\) for \(x\in\{0,1\}\). By Bayes plausibility, there exists an \(\alpha_{0}\) such that the first element of \(\ell(\alpha_{0})\), denoted by \(\ell(\alpha_{0})_{0}=\widehat{\mu}_{0}\). Theorem 1 further states that the expected platform utility under the optimal scheme is given by \(cl(\widehat{\mu}_{0})=\sum_{x\in 0,1}P_{0}(x)\overline{u}(\widehat{\rho}_{0}^{x})\), where \(P_{0}(x)\) is the probability of signal \(x\) at round \(0\). Observe that this point is on the line segment \(\ell\) - i.e. \(\ell(\alpha_{0})=[\widehat{\mu}_{0},cl(\widehat{\mu}_{0})]\). Thus, we have that the line segment \(\ell\) matches the \(cl(\widehat{\rho})\) at both the endpoints and one interior point. Since the \(cl(\widehat{\rho})\) function is concave piece-wise linear continuous, it must imply \(cl(\widehat{\rho})\) coincides with the line segment \(\ell\) between beliefs \(\widehat{\rho}_{0}^{0}\) and \(\widehat{\rho}_{0}^{1}\). The performative dynamic implies the next prior, \(\widehat{\mu}_{1}\) is a convex combination of \(\widehat{\mu}_{0}\) and \(\widehat{\rho}_{0}^{1}\). Thus, there exists an \(\alpha_{1}\) such that \(\ell(\alpha_{1})_{0}=\widehat{\mu}_{1}\), since the performative process moves to a point between the earlier prior and the \(s=1\) posterior, both of which are on the \(\ell\) line segment. More specifically, \(\alpha_{1}=(1-\lambda)+\lambda\alpha_{0}\). The value of optimal signaling at \(\widehat{\mu}_{1}\) is again equivalent to \(cl(\widehat{\mu}_{1})\) by theorem 1. 
Since the curve \(\ell(\cdot)\) coincides with \(cl(\cdot)\) between \(\widehat{\rho}_{0}^{0}\) and \(\widehat{\rho}_{0}^{1}\), and \(\widehat{\mu}_{1}\) lies in this region, the value of optimal signaling is equal to \(\ell(\alpha_{1})_{1}=cl(\widehat{\mu}_{1})\). Thus, \(\widehat{\rho}_{0}^{0}\) and \(\widehat{\rho}_{0}^{1}\) are still optimal induced posteriors for the prior \(\widehat{\mu}_{1}\), and recall we break ties by choosing posteriors closest to the last round. In other words, \(\widehat{\rho}_{1}^{x}=\widehat{\rho}_{0}^{x}\), with \(P_{1}(s=0)=1-\alpha_{1}\) and \(P_{1}(s=1)=\alpha_{1}\), is an optimal scheme for the platform at \(\widehat{\mu}_{1}\). It is evident that this invariant will be maintained throughout this process. Precisely, for each \(\mu_{t}\) in this process, there exists an \(\alpha_{t}\in[0,1]\) such that \(\ell(\alpha_{t})_{0}=\widehat{\mu}_{t}\), which implies \(cl(\widehat{\mu}_{t})=\ell(\alpha_{t})_{1}\), which implies signaling such that \(\widehat{\rho}_{t}^{x}=\widehat{\rho}_{0}^{x}\), with \(P_{t}(s=0)=1-\alpha_{t}\) and \(P_{t}(s=1)=\alpha_{t}\), is optimal for the platform at \(\widehat{\mu}_{t}\). This in turn means \(\widehat{\mu}_{t+1}\) can be expressed as \(\ell(\alpha_{t+1})_{0}\), with \(\alpha_{t+1}=(1-\lambda)+\lambda\alpha_{t}\). Since \(\alpha_{t}\in[0,1)\) and \(\lambda\in(0,1)\), the following always holds: \(\alpha_{t}<\alpha_{t+1}<1\); thus, as \(t\to\infty\), \(\alpha_{t}\to 1\) and \(\widehat{\mu}_{t}\to\widehat{\rho}_{0}^{1}\). The convergence of this process has some nice properties. First, it is easy for the platform to anticipate the optimal utility at convergence since it is simply the utility of the share posterior induced by optimal signaling in the first round. Thus, given the distribution history and the platform's prediction models, it is operationally simple for the platform to evaluate whether persuasion over time is beneficial for a user. Furthermore, the result implies the optimal platform utility throughout the process is monotone. However, observe that this result implies nothing about stability, since \(\mu_{t}\) is always changing, albeit slightly for large \(t\). This leads us to answer in Theorem 5 the remaining technical questions: (1) does this process admit a stable point, and (2) what is the convergence behavior when \(\lambda=0\)? This proof also relies on the geometric insights of optimal noisy persuasion. **Theorem 5**.: _If \(\lambda=0\), the performative process converges in at most \(2(|\Theta|-1)\) iterations to a stable point where the platform always recommends to share._ Proof.: Observe that any belief \(\widehat{\rho}\in\Delta^{|\Theta|}\) can be equivalently represented as a vector in \((|\Theta|-1)\)-dimensional space. As such, hereinafter, we will refer to \(\widehat{\rho}\in\mathbb{R}^{|\Theta|-1}\). Recall that \(w_{a}(\widehat{\rho})\) is a linear function representing the expected user utility at belief \(\widehat{\rho}\) for taking action \(a\), and the plane of indifference, which we denote by \(\mathcal{D}\), is the intersection of these two planes, with \(dim(\mathcal{D})\leq|\Theta|-1\). If \(w_{0}(\widehat{\rho})\) and \(w_{1}(\widehat{\rho})\) are co-planar (thus, like \(\widehat{\rho}\), \(dim(\mathcal{D})=|\Theta|-1\)), the user is indifferent at _all_ beliefs \(\widehat{\rho}\). Since ties break in favour of the platform, \(\overline{u}(\widehat{\rho})=\max(u_{0}(\widehat{\rho}),u_{1}(\widehat{\rho}))\), where \(u_{a}(\widehat{\rho})\) denotes the expected platform utility when action \(a\) is taken at belief \(\widehat{\rho}\). 
Since the mapping from \(P(\widehat{\theta})\) to \(P(\theta)\) is continuous (lemma 1) and expectation is linear, \(u(\widehat{\rho},x)\) is linear and \(\overline{u}(\widehat{\rho})\) is continuous and piecewise linear. Further, since this is a max over two linear functions, \(\overline{u}(\widehat{\rho})\) is either convex or concave everywhere. As observed by Kamenica and Gentzkow [1], when this function is concave, signaling cannot strictly improve utility, and when convex, the optimal induced posterior is at the simplex boundary. Note that when signaling cannot improve utility, uninformative signaling that does not change the prior is optimal, and we reach a stable point. We now prove our claim inductively. We start with the base case of \(|\Theta|=2\) with \(\widehat{\rho}\in[0,1]\). If the user is indifferent everywhere, the above implies that either signaling cannot improve utility and we are at a stable point or signaling moves the share posterior to the simplex boundary, addressed below. Otherwise, \(\mathcal{D}\) is simply a point on this interval. Starting at any noisy prior \(\widehat{\mu}_{t}\), the first optimal signaling step will result in induced posteriors either at point \(\mathcal{D}\), or on the boundary \([0,1]\) due to lemma 4. Note that since \(\lambda=0\), the prior at the next round is the "share" induced posterior, \(\widehat{\rho}_{0}^{1}\). Thus, the next round's prior will be either on the boundary (i.e. \(0\) or \(1\)) or at \(\mathcal{D}\). If the prior is at the boundary, observe that there exists no signaling scheme that can strictly improve upon the utility at the prior, since there is no convex combination of distinct posteriors that can equal the prior. Thus, we have stability at round \(t+1\). If the next round prior is at \(\mathcal{D}\), optimal signaling at round \(t+1\) must (due to lemma 4) either stay at \(\mathcal{D}\) (and thus achieve convergence), or move the boundary and achieve stability in the next round. Thus, when \(|\Theta|=2\), a stable point is achieved in at most \(2\) steps. Our inductive hypothesis is the following: For a state size \(|\Theta|=n\geq 2\), the performative process converges in at most \(2(|\Theta|-1)\) in the \(\lambda=0\) regime. Now consider an instance with \(|\Theta|=n+1\), and accordingly, \(dim(\mathcal{D})\leq n\). Suppose we are at time \(t\) with prior \(\widehat{\mu}_{t}\). If \(dim(\mathcal{D})=n\), the user is indifferent everywhere, and we know that either signaling cannot improve utility and we have reached a stable point, or signaling will move the share posterior (and thus \(\mu_{t+1}\)) to the simplex boundary, a case addressed below. Otherwise, \(dim(\mathcal{D})\leq n-1\), and after the first optimal signaling, lemma 4 established that the "share" posterior, \(\widehat{\rho}_{t}^{1}\) (and thus the next round prior \(\widehat{\mu}_{t+1}\)) must lie either on \(\mathcal{D}\) or on the simplex boundary. Consider the latter scenario. A simplex boundary is defined by a set of states \(\mathcal{B}\) such that for each \(\widehat{\theta}\in\mathcal{B}\), \(\widehat{\rho}(\widehat{\theta})\in\{0,1\}\). Observe that once the prior is on the boundary, any signaling thereafter must also lie on the same boundary. 
Otherwise, there is an induced posterior for which a \(\widehat{\theta}^{\prime}\in\mathcal{B}\) satisfies \(\widehat{\rho}_{t}(\widehat{\theta}^{\prime}|s)\notin\{0,1\}\), and no convex combination involving this posterior can equal the prior, which has \(\widehat{\mu}_{t+1}(\widehat{\theta}^{\prime})\in\{0,1\}\). Observe also that the simplex boundary is itself a lower dimensional simplex. In other words, if optimal signaling moves the next round's prior to be on the boundary, the problem reduces to a problem over a smaller state-space of size at most \(|\Theta|-1=n\), for which we can appeal to the inductive hypothesis. For the other scenario where the prior at round \(t+1\) lies on the surface \(\mathcal{D}\), we note that the induced posteriors of strictly optimal signaling at round \(t+1\) must now be two points that lie either on \(\mathcal{D}\) or on the boundary. As we have already addressed the boundary case, observe that if the share posterior is on \(\mathcal{D}\) and \(dim(\mathcal{D})=0\), then whenever round \(t+2\) can strictly improve utility, the induced posteriors must be on the boundary. If \(dim(\mathcal{D})\in[1,n-1]\), then both posteriors are either (1) on \(\mathcal{D}\), or (2) on the boundary. For (1), observe that if one posterior lies in \(\mathcal{D}\), the other must also, since their convex combination must equal \(\widehat{\mu}_{t+1}\in\mathcal{D}\). Since \(\mathcal{D}\) has dimension \(\leq n-1\) and all induced posteriors thereafter must stay on this lower dimensional plane, we can reformulate this as a lower dimensional problem and appeal to the induction hypothesis. Thus, in at most \(2\) steps, we arrive at a lower dimensional problem. Hence, when the number of states is \(|\Theta|=n+1\), we require at most \(2+2(n-1)=2n=2(|\Theta|-1)\) iterations, confirming the induction hypothesis. While having exactly \(\lambda=0\) is unlikely, this result can be interpreted as follows: the convergence process in Theorem 4 gets infinitely close to some distribution (at a rate depending on \(\lambda\)) but never reaches it. Since, realistically, there is a finite resolution and practical agents have bounded rationality, it may be reasonable to assume that \(\mu_{t}\) effectively reaches \(\rho_{0}^{1}\). Such a phenomenon is inextricably linked to the \(\lambda=0\) case, in which the process moves directly to this \(\rho_{0}^{1}\). Thus, even for \(\lambda\neq 0\), the two results can be seen from a hierarchical perspective. To sum up, Theorem 5 can be seen as charting the high-level behavior of the performative process until reaching a stable point, while Theorem 4 provides evidence that such a phenomenon also occurs in the more realistic case where performativity is a smoother process and \(\mu_{t}\) depends on the previous priors. Lastly, we comment on the properties of the stable point. Observe that since \(\mu_{t+1}=\mu_{t}\), optimal signaling at a stable point is the completely uninformative scheme that almost surely recommends users take their optimal action based on belief \(\mu_{t}\). Alternatively, we can see the \(\lambda=0\) process as changing the user's priors with persuasion until reaching a point wherein optimal signaling can no longer achieve any benefit. This leads to our last technical question: what are the conditions wherein the stable point has higher platform utility than at the original prior \(\mu_{0}\)? Before turning to this, the short numerical sketch below illustrates the performative process of Definition 6. 
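The sketch plays out Definition 6 on a hypothetical two-state instance, re-solving the signaling LP of Lemma 5 each round and mixing the prior with the share posterior. All numbers, the choice of \(\lambda\), and the variable names are ours and purely illustrative.

```python
# Simulate the performative persuasion process of Definition 6 on a
# hypothetical two-state instance (all numbers are illustrative).
import numpy as np
from scipy.optimize import linprog

Q = np.array([[0.9, 0.2],                    # Q[pred, true] = P(pred | true)
              [0.1, 0.8]])
u0, u1 = np.zeros(2), np.ones(2)             # platform utilities (not share / share)
w0, w1 = np.zeros(2), np.array([1.0, -1.0])  # user utilities

def optimal_scheme(mu):
    """Solve the two-action LP of Lemma 5; returns pi(1|pred) and platform value."""
    c_obj, dw = (u1 - u0) * mu, (w1 - w0) * mu
    ic_form = -(Q @ dw)                      # -(dw . tilde_pi) as a function of pi
    A_ub = np.vstack([ic_form, ic_form])     # share IC and not-share IC
    b_ub = np.array([0.0, -dw.sum()])
    res = linprog(-(Q @ c_obj), A_ub=A_ub, b_ub=b_ub, bounds=[(0.0, 1.0)] * 2)
    return res.x, mu @ u0 - res.fun

lam, mu_t = 0.5, np.array([0.3, 0.7])
for t in range(30):
    pi, value = optimal_scheme(mu_t)
    p_share = (Q @ mu_t) @ pi                # probability the share signal is sent
    if p_share <= 0:                         # uninformative stable point
        break
    post_share = mu_t * (Q.T @ pi) / p_share  # Bayes posterior over true states given "share"
    mu_t = lam * mu_t + (1 - lam) * post_share

print(mu_t, value)   # mu_t approaches the round-0 share posterior (Theorem 4)
```

Setting `lam = 0.0` instead reproduces the behaviour of Theorem 5: the prior jumps to the share posterior and the process reaches a stable point within a couple of rounds.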
**Theorem 6**.: _For \(c_{n}=\max_{m,v}u(0,\theta)\), define a normalized platform utility \(u^{\prime}(a,\theta)=u(a,\theta)-c_{n}\).10 Then the performative process with \(\lambda=0\) always converges to a stable point in a monotonically increasing fashion if the sender's normalized utility at \(\mu_{0}\) is positive._ Footnote 10: Recall the \(0\) action is to _not share_ Proof.: First, note that scaling the platform utility by an additive constant does not affect optimal signaling, and thus we can solve the normalized instance. We will show that the \(\lambda=0\) process under the stated conditions always leads to strictly increasing platform utility until reaching stability. Consider an arbitrary round \(t\geq 0\), with the optimal user action being \(a_{t}\in\{0,1\}\) for prior \(\mu_{t}\). If we are not at a stable point, platform utility can always improve due to signaling. Thus: \[\mathbb{E}_{\mu_{t}}[u^{\prime}(a_{t},\theta)]<P_{t}(s=0)\,\mathbb{E}_{\rho_{t }^{0}}[u^{\prime}(0,\theta)]+P_{t}(s=1)\,\mathbb{E}_{\rho_{t}^{1}}[u^{\prime} (1,\theta)]\] where we use the fact that optimal signaling is persuasive. Next, inductively assume that the user's best action at belief \(\mu_{t}\) is \(1\) (to share) which yields positive platform utility (we will see why this always holds). We know this is true for the first round by the theorem condition and the fact that user default action at \(\mu_{0}\) is assumed to be share. Since platform utility for action \(0\) (not share) is always less than or equal to \(0\) for any state under the theorem conditions: \(\mathbb{E}_{\mu_{t}}[u^{\prime}(a_{t},\theta)]<P_{t}(s=1)\,\mathbb{E}_{\rho_{t }^{1}}[u^{\prime}(1,\theta)]\), which implies \[\frac{1}{P_{t}(s=1)}\,\mathbb{E}_{\mu_{t}}[u^{\prime}(a_{t},\theta)]<\mathbb{ E}_{\rho_{t}^{1}}[u^{\prime}(1,\theta)]=\mathbb{E}_{\mu_{t+1}}[u^{\prime}(1, \theta)]\] Lastly, we note that the next round's prior is equal to the posterior induced by the share/\(1\) signal. Since signaling is always persuasive, the optimal action of the receiver at belief \(\rho_{t}^{1}=\mu_{t+1}\) (wherein they receive the share signal) is to share, which has positive utility for the platform by the above. Thus the assumption we made earlier always holds for any round \(t+1\). Note that since \(P_{t}(s=1)\in[0,1]\), \(\mathbb{E}_{\mu_{t}}[u^{\prime}(a_{t},\theta)]<\mathbb{E}_{\mu_{t+1}}[u^{ \prime}(a_{t+1},\theta)]=\mathbb{E}_{\mu_{t+1}}[u^{\prime}(1,\theta)]\), and this process monotonically converges to a stable point. ## 7 Discussion This work takes a softer approach toward addressing misinformation on social media platforms by leveraging an information design framework based on Bayesian persuasion. Our setting, wherein underlying states are not perfectly observed but predicted, generalizes this classical framework to a noisy and realistic setting. We rigorously characterize how the prediction accuracy affects optimal signaling and platform utility, providing operationally useful results to platforms, while noting that user utility can never decrease due to this. Further, the techniques used provide significant insights from a geometric and optimization perspective which may be of broader interest. We also consider the long-term implications of such an intervention and model the platform-user interactions from a performative angle. 
We rigorously characterize the convergence and stability properties of this process, with these results illustrating that persuasion can have a long-term positive impact on content quality and veracity on social media platforms. Our work leaves open a number of technical and conceptual questions. Providing necessary conditions that ensure increased platform utility at a stable point would complement Theorem 6 and be an insightful result. So would running experiments on real user data and interactions to evaluate the real-world effectiveness of such an approach. Along a similar line, our model assumes a perfectly rational user with the sender not having any exogenous restrictions; generalizing this to consider a bounded rationality model [27, 28] or designing robust signaling under exogenous restrictions [14, 12] would be an intriguing research direction. For example, in standard Bayesian persuasion, when the sender doesn't have a fully-revealing signaling scheme available for use, the equilibrium significantly changes [12]; it will be interesting to explore whether the noise added by the predictions in our model helps stabilize the equilibrium reached. Another interesting direction is how persuasion impacts influence propagation and network effects within the platform [29, 30]. Lastly, developing broader socio-technical guidelines around information design for online interactions is a pressing and necessary direction.
2301.04146
Hilbert space of Quantum Field Theory in de Sitter spacetime
We study the decomposition of the Hilbert space of quantum field theory in $(d+1)$ dimensional de Sitter spacetime into Unitary Irreducible Representations (UIRs) of its isometry group \SO$(1,d+1)$. Firstly, we consider multi-particle states in free theories starting from the tensor product of single-particle UIRs. Secondly, we study conformal multiplets of a bulk Conformal Field Theory with symmetry group \SO$(2,d+1)$. Our main tools are the Harish-Chandra characters and the numerical diagonalization of the (truncated) quadratic Casimir of \SO$(1,d+1)$. We introduce a continuous density that encodes the spectrum of irreducible representations contained in a reducible one of $\SO(1,d+1)$. Our results are complete for $d=1$ and $d=2$. In higher dimensions, we rederive and extend several results previously known in the literature. Our work provides the foundation for future nonperturbative bootstrap studies of Quantum Field Theory in de Sitter spacetime.
Joao Penedones, Kamran Salehi Vaziri, Zimo Sun
2023-01-10T19:00:00Z
http://arxiv.org/abs/2301.04146v2
# Hilbert space of Quantum Field Theory ###### Abstract We study the decomposition of the Hilbert space of quantum field theory in \((d+1)\) dimensional de Sitter spacetime into Unitary Irreducible Representations (UIRs) of its isometry group \(\mathrm{SO}(1,d+1)\). Firstly, we consider multi-particle states in free theories starting from the tensor product of single-particle UIRs. Secondly, we study conformal multiplets of a bulk Conformal Field Theory with symmetry group \(\mathrm{SO}(2,d+1)\). Our main tools are the Harish-Chandra characters and the numerical diagonalization of the (truncated) quadratic Casimir of \(\mathrm{SO}(1,d+1)\). We introduce a continuous density that generalizes the notion of multiplicity in the decomposition of reducible representations into irreducible ones. Our results are complete for \(d=1\) and \(d=2\). In higher dimensions, we rederive and extend several results previously known in the literature. Our work provides the foundation for future nonperturbative bootstrap studies of Quantum Field Theory in de Sitter spacetime. ## 1 Introduction and summary * 2 A brief review of UIRs of SO\((1,d+1)\) * 2.1 Classification of UIRs * 2.2 Harish-Chandra characters * 3 Tensor products in SO\((1,2)\) * 3.1 Some details regarding SO\((1,2)\) UIRs * 3.2 \({\cal F}_{\Delta_{1}}\otimes{\cal F}_{\Delta_{2}}\) * 3.2.1 The discrete part * 3.2.2 The continuous part * 3.3 \({\cal F}_{\Delta}\otimes{\cal D}_{p}\) * 3.3.1 The discrete part * 3.3.2 The continuous part * 3.4 \({\cal D}_{p_{1}}\otimes{\cal D}_{p_{2}}\) * 3.4.1 The discrete part * 3.4.2 The continuous part * 3.5 Identical particles * 3.5.1 Two-particle Hilbert space * 3.5.2 Counting complementary series in multi-particle Hilbert space * 4 Tensor products in SO\((1,d+1)\) * 4.1 The relative density * 4.2 Truncated model of the relative density * 4.3 Analytical continuation to complementary series * 4.4 Photon from massless scalars * 4.5 Some remarks on nonzero spin * 4.5.1 The case of SO\((1,3)\) * 5 Conformal field theory in de Sitter * 5.1 SO\((2,d+1)\) group characters and noncompact generators * 5.2 From SO\((2,2)\) to SO\((1,2)\) * 5.2.1 Character analysis * 5.2.2 Numerical check * 5.3 From SO\((2,3)\) to SO\((1,3)\) * 5.4 From SO\((2,d+1)\) to SO\((1,d+1)\): scalar primary representations * 5.4.1 Character analysis * 5.4.2 Numerical check * 5.5 From SO\((2,d+1)\) to SO\((1,d+1)\): Some remarks on spinning primary representations A Fourier transform of \(\psi\) functionsB Numerical analysis of \(L_{0}\neq 0\) sector of SO\((1,2)\)C Summing over SO\((d)\) Weyl characterD Matrix elements of the noncompact generators in SO\((1,d+1)\)E Absence of exceptional series in \(\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\)F A review of SO\((2,d+1)\)G Various properties of \(|\phi_{n}^{s}\rangle\)H Various properties of \(|s,n\rangle_{a_{1}\cdots a_{s},b_{1}\cdots b_{s}}\)H.1 \(s=0\)H.2 \(s=1\)H.3 Matrix elements ## 1 Introduction and summary Our universe is under accelerated expansion. The best known description of the dynamics of elementary particles is Quantum Field Theory (QFT). Therefore, it is worth understanding QFT in the most symmetric expanding universe, namely de Sitter (dS) spacetime [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13]. More precisely, we would like to set up a non-perturbative bootstrap approach to QFT in dS spacetime [14]. This requires a detailed understanding of how the spacetime symmetries are realized in the Hilbert space of the theory. 
In a quantum theory, symmetry generators are realized as (anti-)hermitian operators acting on the Hilbert space. For QFT in \((d+1)\)-dimensional dS, the symmetry group is SO\((1,d+1)\). Therefore, the Hilbert space must decompose into Unitary Irreducible Representations (UIRs) of this non-compact group. Such UIRs have been classified long ago in various dimensions [15; 16; 17; 18; 19; 20; 21]. Namely, there are principal series \(\mathcal{P}\), complementary series \(\mathcal{C}\), discrete series \(\mathcal{D}\), exceptional series \(\mathcal{V}\) (of type I) and \(\mathcal{U}\) (of type II), as we partially review in section 2. The question we address in this paper is: _How does the Hilbert space of a QFT in dS decompose into UIRs of SO\((1,d+1)\)?_ Clearly, this question is too difficult to solve for a general interacting QFT. In this paper, we consider two simple cases: free theories (in sections 3 and 4) and Conformal Field Theories (CFT) (in section 5). The Hilbert space of a **free QFT** is a multi-particle Fock space, whose decomposition into UIRs amounts to the study of tensor products of single-particle UIRs. Remarkably, this simple group theory question has not been completely solved. We summarize our results in tables 1.1, 1.2 and 1.3. In addition, below we give a brief historical review of the previous literature. In the present work, we rederive and extend previous results by writing the product of two Harish-Chandra characters (reviewed in section 2) as a sum of characters of UIRs. The sum over the principal series is actually an integral over its continuous label involving a density that generalizes the notion of multiplicity from compact groups to non-compact groups. In fact, the analytic structure of this density also encodes information about the complementary series. In addition, we check our predictions by numerically diagonalizing (a truncated version) of the quadratic Casimir of \(\mathrm{SO}(1,d+1)\) acting on the tensor product of two UIRs. An example of a novel result is the decomposition of the tensor product of two exceptional series \(\mathcal{V}_{1,0}\) described in section 4.4. For QFT in dS, this result implies that there is a photon UIR inside the two-particle states of two different massless scalars. The Hilbert space of a **CFT** in dS is equivalent to the well understood Hilbert space of a CFT in radial quantization. Therefore, we are led to another simple group theoretical task: decompose the well known unitary conformal multiplets of \(\mathrm{SO}(2,d+1)\) into UIRs of the subgroup \(\mathrm{SO}(1,d+1)\). Surprisingly, this is also an unsolved problem. We summarize our results in table 1. To the best of our knowledge, these are new results building on [14] that studied the \(d=1\) case. As you can see from the tables, the results are complete in \(d=1\) and \(d=2\) but there are many open questions in \(d\geq 3\). In \(d=1\), both free massive QFT and interacting CFT give rise to Hilbert spaces containing all principal and discrete series with infinite degeneracy.1 On the contrary, the complementary series UIRs appear in finite number (possibly zero). This is reminiscent of single-particle asymptotic states in Minkowski spacetime. By continuity, we expect this structure of the Hilbert space to hold for generic interacting QFTs in dS\({}_{2}\). The situation is very similar for \(d=2\) with the difference that there are no discrete series and there are principal series of all spins \(s\in\mathbb{Z}\). Footnote 1: The exception being chiral theories. 
For example, if the single-particle states correspond to \(\mathcal{D}_{k}^{+}\), then the full multi-particle Hilbert space will decompose into \(\mathcal{D}_{q}^{+}\) with finite degeneracy for each \(q\geq 1\). Similarly, a chiral CFT in dS\({}_{2}\) only gives rise to discrete series UIRs (see the last paragraph of section 5.2.1).

It would be very interesting to complete this analysis in \(d=3\). There are many results known in the literature. For example, Martin in [22] studied the tensor product of principal series times other UIRs (including principal, complementary, exceptional and discrete). To complete the analysis, the decomposition of the tensor product of exceptional and discrete series is needed. It would be interesting to revisit and extend this work using Harish-Chandra characters. Similarly, it would be great to complete the analysis of CFT in dS\({}_{4}\). We leave these tasks for the future.

\begin{tabular}{|c|c|c|c|c|}
\hline
\(\otimes\) & \({\cal P}_{\Delta_{1}}\) & \({\cal C}_{\Delta_{1}}\) & \({\cal D}_{k_{1}}^{+}\) & \({\cal D}_{k_{1}}^{-}\) \\ \hline
\({\cal P}_{\Delta_{2}}\) & \(\int_{\Delta}{\cal P}_{\Delta}\oplus\sum_{k}{\cal D}_{k}^{\pm}\) & & & \\ \hline
\({\cal C}_{\Delta_{2}}\) & \(\int_{\Delta}{\cal P}_{\Delta}\oplus\sum_{k}{\cal D}_{k}^{\pm}\) & \(\int_{\Delta}{\cal P}_{\Delta}\oplus\sum_{k}{\cal D}_{k}^{\pm}\oplus{\cal C}_{\Delta_{1}+\Delta_{2}-1}\) & & \\ \hline
\({\cal D}_{k_{2}}^{+}\) & \(\int_{\Delta}{\cal P}_{\Delta}\oplus\sum_{k}{\cal D}_{k}^{+}\) & \(\int_{\Delta}{\cal P}_{\Delta}\oplus\sum_{k}{\cal D}_{k}^{+}\) & \(\sum_{k\geq k_{1}+k_{2}}{\cal D}_{k}^{+}\) & \\ \hline
\({\cal D}_{k_{2}}^{-}\) & \(\int_{\Delta}{\cal P}_{\Delta}\oplus\sum_{k}{\cal D}_{k}^{-}\) & \(\int_{\Delta}{\cal P}_{\Delta}\oplus\sum_{k}{\cal D}_{k}^{-}\) & \(\int_{\Delta}{\cal P}_{\Delta}\oplus\sum_{k=1}^{|k_{1}-k_{2}|}{\cal D}_{k}^{\text{sign}(k_{1}-k_{2})}\) & \(\sum_{k\geq k_{1}+k_{2}}{\cal D}_{k}^{-}\) \\ \hline
\end{tabular}

**Table 1.1**: Decomposition of the tensor products of UIRs of \(\mathrm{SO}(1,2)\). These are the principal series \({\cal P}_{\Delta}\) with \(\Delta\in\frac{1}{2}+i\mathbb{R}_{\geq 0}\), the complementary series \({\cal C}_{\Delta}\) with \(\frac{1}{2}<\Delta<1\), and the discrete series \({\cal D}_{k}^{\pm}\) with \(k\in\mathbb{Z}_{+}\). The complementary series \({\cal C}_{\Delta_{1}+\Delta_{2}-1}\) in the \({\cal C}_{\Delta_{1}}\otimes{\cal C}_{\Delta_{2}}\) entry is present only when \(\Delta_{1}+\Delta_{2}>\frac{3}{2}\) (see section 3.2.2). Since the tensor product is symmetric we did not fill in the upper triangle of the table to avoid cluttering. The sum \(\sum_{k}\equiv\oplus_{k}\) runs over \(k\in\mathbb{Z}_{+}\) unless otherwise specified and \(\int_{\Delta}\) is a sum over all principal series with \(\Delta=\frac{1}{2}+i\lambda\) and \(\lambda\in\mathbb{R}_{\geq 0}\).

#### Brief literature review

For \(\mathrm{SO}(1,2)\) (or its double covering group \(\mathrm{SL}(2,\mathbb{R})\)), the decomposition of the tensor product of two principal series or complementary series representations was first carried out by Pukanszky [23]. The full list of tensor product decompositions involving all UIRs of \(\mathrm{SO}(1,2)\) was later solved by [24]. The decomposition of the tensor product of principal series and complementary series of \(\mathrm{SO}(1,3)\) (or its double covering group \(\mathrm{SL}(2,\mathbb{C})\)) was studied by Naimark in a series of papers [25; 26; 27]. In particular, in the last paper [27], he found that \({\cal C}_{\Delta_{1}}\otimes{\cal C}_{\Delta_{2}}\) can contain one complementary series representation \({\cal C}_{\Delta_{1}+\Delta_{2}-2}\) when \(\Delta_{1}+\Delta_{2}>3\).
For higher dimensional \(\mathrm{SO}(1,d+1)\), it was argued that UIRs contained in \({\cal P}_{\Delta_{1},s_{1}}\otimes{\cal P}_{\Delta_{2},s_{2}}\) should also appear in the decomposition of the regular representation of \(\mathrm{SO}(1,d+1)\)[28; 29]. Based on Hirai's result [30; 31] on the latter problem, it is inferred that \({\cal P}_{\Delta_{1},s_{1}}\otimes{\cal P}_{\Delta_{2},s_{2}}\) contains only principal series, and when \(d\) is odd, also discrete series. For the special case of \(\mathrm{SO}(1,4)\), the discrete series part of this tensor product was solved by Martin [22]. When \(s_{1}=s_{2}=0\), the discrete series should also disappear [28; 29]. Our analysis of the \(\mathrm{SO}(d+1)\) content in section 4 can also be used to exclude discrete series when \(d\) is sufficiently large. A complete understanding of the complementary series contained in \(\mathcal{C}_{\Delta_{1},s_{1}}\otimes\mathcal{C}_{\Delta_{2},s_{2}}\) is still lacking. Some partial results for \(s_{1}=s_{2}=0\) were derived recently in [32]. Another open question is to identify which tensor products contain complementary series. See also [33; 34; 35; 36; 37; 38; 39] for other works that studied \(\mathrm{SO}(1,d+1)\) representation theory motivated by QFT in de Sitter.

\begin{table}
\begin{tabular}{|c|c|c|c|c|}
\hline
\(\otimes\) & \(\mathcal{P}_{\Delta_{1},0}\) & \(\mathcal{C}_{\Delta_{1},0}\) & \(\mathcal{V}_{1,0}\) & \(\ldots\) \\ \hline
\(\mathcal{P}_{\Delta_{2},0}\) & \(\sum_{s}\int_{\Delta}\mathcal{P}_{\Delta,s}\) & & & \\ \hline
\(\mathcal{C}_{\Delta_{2},0}\) & \(\sum_{s}\int_{\Delta}\mathcal{P}_{\Delta,s}\) & \(\sum_{s}\int_{\Delta}\mathcal{P}_{\Delta,s}\oplus\sum_{n,s}\mathcal{C}_{\Delta_{1}+\Delta_{2}-d-s-2n,s}\) & & \\ \hline
\(\mathcal{V}_{1,0}\) & ? & ? & \(\sum_{s}\int_{\Delta}\mathcal{P}_{\Delta,s}\oplus\mathcal{U}_{1,0}\) & \\ \hline
\(\ldots\) & ? & ? & ? & ? \\ \hline
\end{tabular}
\end{table}

Table 1.3: Decomposition of the tensor product of some UIRs of \(\mathrm{SO}(1,d+1)\) for \(d\geq 3\). We consider the principal series \(\mathcal{P}_{\Delta,s}\) with dimension \(\Delta\in\frac{d}{2}+i\mathbb{R}_{\geq 0}\) and integer spin \(s\geq 0\), the complementary series \(\mathcal{C}_{\Delta,s}\) with \(\frac{d}{2}<\Delta<d\), and the exceptional series \(\mathcal{V}_{p,0}\) with \(p\in\mathbb{N}\) and \(\mathcal{U}_{s,t}\) with \(s\in\mathbb{N}\) and \(t=0,1,\ldots,s-1\), described in section 2. In section 4.5, we make some comments about tensor products of UIRs with non-zero spin but the current knowledge is very incomplete. In addition, there are several more UIRs of \(\mathrm{SO}(1,d+1)\) that we do not study in this paper. Since the tensor product is symmetric we did not fill in the upper triangle of the table to avoid cluttering. The sum \(\sum_{s}\equiv\oplus_{s}\) runs over all integer spins \(s\geq 0\) and \(\int_{\Delta}\) is a sum over all principal series with \(\Delta=\frac{d}{2}+i\lambda\) and \(\lambda\in\mathbb{R}_{\geq 0}\).

## 2 A brief review of UIRs of SO\((1,d+1)\)

The \((d+1)\) dimensional de Sitter spacetime is a hypersurface in the ambient space \(\mathbb{R}^{1,d+1}\)

\[-X_{0}^{2}+X_{1}^{2}+\cdots+X_{d+1}^{2}=1 \tag{1}\]

where we take the de Sitter radius to be 1.
The embedding (1) manifests the isometry group SO\((1,d+1)\) of dS\({}_{d+1}\), which is generated by \(L_{AB}=-L_{BA},0\leq A,B\leq d+1\) satisfying commutation relations \[[L_{AB},L_{CD}]=\eta_{BC}L_{AD}-\eta_{AC}L_{BD}+\eta_{AD}L_{BC}- \eta_{BD}L_{AC} \tag{2}\] where \(\eta_{AB}=\text{diag}(-1,1,\cdots,1)\) is the metric on \(\mathbb{R}^{1,d+1}\). In a unitary representation, \(L_{AB}\) are realized as anti-hermitian operators on some Hilbert space. The isomorphism between \(\mathfrak{so}(1,d+1)\) and the \(d\)-dimensional Euclidean conformal algebra is realized as \[L_{ij}=M_{ij}\,\ \ L_{0,d+1}=D\,\ \ L_{d+1,i}=\frac{1}{2}(P_{i}+K_{i})\,\ \ L_{0,i}=\frac{1}{2}(P_{i}-K_{i}) \tag{3}\] where \(D\) is the dilatation, \(P_{i}\) (\(i=1,2,\cdots d\)) are translations, \(K_{i}\) are special conformal transformations and \(M_{ij}=-M_{ji}\) are rotations. The commutation relations of the conformal algebra following from (2) and (3) are \[[D,P_{i}]=P_{i}\,\ \ \ [D,K_{i}]=-K_{i}\,\ \ [K_{i},P_{j}]=2 \delta_{ij}D-2M_{ij}\,\] \[[M_{ij},P_{k}]=\delta_{jk}P_{i}-\delta_{ik}P_{j}\,\ \ [M_{ij},K_{k}]= \delta_{jk}K_{i}-\delta_{ik}K_{j}\,\] \[[M_{ij},M_{k\ell}]=\delta_{jk}M_{i\ell}-\delta_{ik}M_{j\ell}+ \delta_{i\ell}M_{jk}-\delta_{j\ell}M_{ik}. \tag{4}\] The quadratic Casimir of SO\((1,d+1)\), which commutes with all \(L_{AB}\), is chosen to be \[C^{\text{SO}(1,d+1)}=\frac{1}{2}L_{AB}L^{AB}=D(d-D)+P_{i}K_{i}+ \frac{1}{2}M_{ij}^{2}. \tag{5}\] Here \(\frac{1}{2}M_{ij}^{2}\equiv\frac{1}{2}M_{ij}M^{ij}\) is the quadratic Casimir of SO\((d)\) and it is negative-definite for a unitary representation since \(M_{ij}\) are anti-hermitian. For example, for a spin-\(s\) representation of SO\((d)\), it takes the value of \(-s(s+d-2)\). ### Classification of UIRs An infinite dimensional representation of SO\((1,d+1)\) is fixed by a scaling dimension \(\Delta\in\mathbb{C}\) and a highest-weight vector \(\lambda\) of SO\((d)\). In the present paper, we only consider \(\lambda=(s,0,\cdots,0)\) labelled by a nonnegative integer \(s\)8, in other words, we will focus on symmetric traceless tensors of SO\((d)\). \(s\) labels the spin of a field in dS\({}_{d+1}\). More general \(\lambda\) corresponds to fields of mixed symmetry, including form fields, spinors, tensor spinors, etc. See [36, 40, 41, 42] for recent discussions on these fields. Given a representation labelled by \(\Delta\) and \(s\), the quadratic Casimir is equal to \(\Delta(d-\Delta)-s(d+s-2)\). For any \(d\geq 3\), there are four types of UIRs apart from the trivial representation [29, 36, 43]: * **Principal series**\(\mathcal{P}_{\Delta,s}\): \(\Delta\in\frac{d}{2}+i\mathbb{R}\) and \(s\geq 0\). The restriction of \(\mathcal{P}_{\Delta,s}\) to the maximal compact subgroup \(\mathrm{SO}(d+1)\) of \(\mathrm{SO}(1,d+1)\) is given by \[\mathcal{P}_{\Delta,s}|_{\mathrm{SO}(d+1)}=\bigoplus_{n=s}^{\infty}\bigoplus_{m =0}^{s}\mathbb{Y}_{n,m}\] (6) where \(\mathbb{Y}_{n,m}\) denotes a two-row Young diagram with \(n\) boxes in the first row and \(m\) boxes in the second row 9. Footnote 9: When \(d=3\), \(m\) can be negative, corresponding to anti-self-dual tensors of \(\mathrm{SO}(4)\). In this case, it is more appropriate to understand the symbol \(\mathbb{Y}_{n,m}\) as a highest-weight vector \((n,m)\). * **Complementary series**\(\mathcal{C}_{\Delta,s}\): \(0<\Delta<d\) when \(s=0\) and \(1<\Delta<d-1\) when \(s\geq 1\). It has the same \(\mathrm{SO}(d+1)\) content as \(\mathcal{P}_{\Delta,s}\). 
* **Type I exceptional series**\(\mathcal{V}_{p,0}\): \(\Delta=d+p-1\) and \(s=0\) for \(p\geq 1\). The \(\mathrm{SO}(d+1)\) content of \(\mathcal{V}_{p,0}\) only consists of single-row Young diagrams: \[\left.\mathcal{V}_{p,0}\right|_{\mathrm{SO}(d+1)}=\bigoplus_{n=p}^{\infty} \mathbb{Y}_{n}\.\] (7) These representations can be roughly thought as analytical continuation of complementary series beyond its unitarity regime, with certain \(\mathrm{SO}(d+1)\) UIRs removed to maintain irreducibility and unitarity. The precise meaning of this analytical continuation is clarified in appendix D. * **Type II exceptional series**\(\mathcal{U}_{s,t}\): \(\Delta=d+t-1\) and \(s\geq 1\) with \(t=0,1,2\cdots,s-1\). The \(\mathrm{SO}(d+1)\) content of \(\mathcal{U}_{s,t}\) is \[\left.\mathcal{U}_{s,t}\right|_{\mathrm{SO}(d+1)}=\bigoplus_{n=s}^{\infty} \bigoplus_{m=t+1}^{s}\mathbb{Y}_{n,m}\.\] (8) The \([\Delta,s]\) and \([d-\Delta,s]\) representations in principal series and complementary series are actually isomorphic. Therefore, we only consider \(\Delta\) with nonnegative imaginary part in principal series, and \(\Delta>\frac{d}{2}\) in complementary series. We will also use the notation \(\mathcal{F}_{\Delta,s}\) for both principal series and complementary series. Whether \(\mathcal{F}_{\Delta,s}\) belongs to principal series or complementary series will be clear once we specify the scaling dimension \(\Delta\). \(\mathcal{F}_{\Delta,s}\) describes spin-\(s\) massive fields in \(\mathrm{dS}_{d+1}\) of mass \(m^{2}=\Delta(d-\Delta)\) when \(s=0\) or \(m^{2}=(\Delta+s-2)(d+s-2-\Delta)\) when \(s\geq 1\). \(\mathcal{U}_{s,t}\) describes partially massless gauge fields of spin \(s\) and depth \(t\)[44, 45, 46, 47, 48, 49, 50, 51, 52]. In particular, \(t=s-1\) corresponds to massless gauge fields, e.g. photon, linearized graviton, etc. \(\mathcal{V}_{p,0}\) is expected to describe scalar fields of mass \(m^{2}=(1-p)(d+p-1)\) with some shift symmetry being gauged [43, 53, 54]. In particular, when \(p=1\), the shift symmetry is simply \(\phi(x)\to\phi(x)+c\), where \(c\) is an arbitrary real number. **Remark 1**.: _When \(d=3\), \(\mathcal{U}_{s,t}\) is actually the direct sum of two discrete series representations \(\mathcal{U}_{s,t}^{\pm}\), which consist of \(\text{SO}(4)\) representations \(\bigoplus_{n=s}^{\infty}\bigoplus_{m=t+1}^{s}\mathbb{Y}_{n,\pm m}\) respectively. Since discrete series is essentially the same as exceptional series for \(d=3\), we will only consider the latter in dS\({}_{4}\). In general, discrete series (which only exists when \(d\) is odd) requires an \(\text{SO}(d)\) highest-weight vector \(\lambda\) with \(\frac{d-1}{2}\) nonzero entries. It implies that the \(\text{SO}(d+1)\) content of a discrete series with \(d\geq 5\) has at least three rows in terms of Young diagram, which is completely different from exceptional series._ **Remark 2**.: _When \(d=2\), there are only principal series and complementary series up to isomorphism [15; 16; 17; 55]. The complementary series representations always have \(s=0\). There exists an isomorphism between \(\mathcal{F}_{\Delta,s}\) and \(\mathcal{F}_{\bar{\Delta},-s}\), where \(s\) is an arbitrary integer labelling the one dimensional representation of \(\text{SO}(2)\). A generic massive spinning field in dS\({}_{3}\) is described by \(\mathcal{F}_{\Delta,s}\oplus\mathcal{F}_{\Delta,-s}\), which is a UIR of \(\text{O}(1,3)\). Gauge fields are described by principal series representations \(\mathcal{P}_{1,s}\cong\mathcal{P}_{1,-s}\). 
For example, photons in dS\({}_{3}\) correspond to \(\mathcal{P}_{1,1}\)._

When \(d=1\), the infinite dimensional representations of \(\text{SO}(1,2)\) are only labelled by the scaling dimension \(\Delta\). So the classification of UIRs is different:

* **Principal series**\(\mathcal{P}_{\Delta}\): \(\Delta\in\frac{1}{2}+i\mathbb{R}\). Its restriction to \(\text{SO}(2)\) yields \[\mathcal{P}_{\Delta}|_{\text{SO}(2)}=\bigoplus_{n\in\mathbb{Z}}(n)\] (9) where \((n)\) denotes the (one-dimensional) spin \(n\) representation of \(\text{SO}(2)\).
* **Complementary series**\(\mathcal{C}_{\Delta}\): \(0<\Delta<1\). It has the same \(\text{SO}(2)\) content as \(\mathcal{P}_{\Delta}\).
* **Lowest-weight discrete series**\(\mathcal{D}_{p}^{+}\): \(\Delta=p\in\mathbb{Z}_{>0}\).
* **Highest-weight discrete series**\(\mathcal{D}_{p}^{-}\): \(\Delta=-p\in\mathbb{Z}_{<0}\).

The \(\text{SO}(2)\) spectrum of the discrete series is

\[\mathcal{D}_{p}^{+}\big{|}_{\text{SO}(2)}=\bigoplus_{n\geq p}(n)\,\qquad\qquad\mathcal{D}_{p}^{-}\big{|}_{\text{SO}(2)}=\bigoplus_{n\leq-p}(n). \tag{10}\]

Again, the principal series and complementary series of \(\text{SO}(1,2)\) have the \(\Delta\leftrightarrow 1-\Delta\) symmetry. We will use \(\mathcal{D}_{p}\) to denote the unitary reducible representation \(\mathcal{D}_{p}^{+}\oplus\mathcal{D}_{p}^{-}\). A massless scalar in dS\({}_{2}\), with the constant mode being removed, is described by \(\mathcal{D}_{1}\). In particular, its right-moving modes (along the global circle of dS\({}_{2}\)) correspond to \(\mathcal{D}_{1}^{+}\) and the left-moving modes correspond to \(\mathcal{D}_{1}^{-}\). In general, the \(\mathcal{D}_{p}^{+}\) (\(\mathcal{D}_{p}^{-}\)) describes the right (left) movers of a scalar field with mass \(m^{2}=p(1-p)\) in dS\({}_{2}\).

### Harish-Chandra characters

For all the UIRs reviewed above, one can define a group character (known as Harish-Chandra character or global character):

\[\Theta_{R}(g)\equiv\operatorname{tr}_{\mathcal{H}}(g)\,,\ \ g\in\text{SO}(1,d+1) \tag{11}\]

where \(\mathcal{H}\) denotes the infinite dimensional Hilbert space of a given UIR \(R\). Such a character exists as a distribution on \(\mathrm{SO}(1,d+1)\). We will take \(g\) to be \(e^{tD}\) for \(\mathrm{SO}(1,2)\), and \(e^{tD}x_{1}^{J_{1}}\cdots x_{r}^{J_{r}}\) for \(\mathrm{SO}(1,d+1)\), where \(t\in\mathbb{R}\), \(r=\left\lfloor\frac{d}{2}\right\rfloor\) is the rank of \(\mathrm{SO}(d)\), \(x_{j}\in\mathrm{U}(1)\) and \(J_{j}=iM_{2j-1,2j}\) are Cartan generators of \(\mathrm{SO}(d)\). The \(t\) dependence is always via \(q\equiv e^{-|t|}\), except for the spinning principal series of \(\mathrm{SO}(1,3)\), which will be clarified in remark 3 below.
In the \(\mathrm{SO}(1,2)\) case, the Harish-Chandra characters are [43; 56] * **Principal and complementary series**: \[\Theta_{\Delta}(q)=\frac{q^{\Delta}+q^{\bar{\Delta}}}{1-q}\,\ \ \bar{\Delta}\equiv 1-\Delta\.\] (12) * **Highest and lowest weight Discrete series**: \[\Theta_{p}^{\pm}(q)=\frac{q^{p}}{1-q}\.\] (13) For generic \(\mathrm{SO}(1,d+1)\), the explicit expression of Harish-Chandra characters depends on the parity of \(d\)[36; 43]: * **Principal and complementary series**: \[\Theta_{\Delta,s}(q,\mathbf{x})=\chi_{\mathbb{Y}_{s}}^{\mathrm{SO}(d)}(\mathbf{x}) \frac{q^{\Delta}+q^{\bar{\Delta}}}{P_{d}(q,\mathbf{x})},\ \ \ \bar{\Delta}\equiv d-\Delta\] (14) where \(\chi_{\mathbb{Y}_{s}}^{\mathrm{SO}(d)}(\mathbf{x})\) is the \(\mathrm{SO}(d)\) Weyl character 10 corresponding to the spin-\(s\) representation \(\mathbb{Y}_{s}\) and \[P_{d}(q,\mathbf{x})=\prod_{i=1}^{r}(1-x_{i}q)(1-x_{i}^{-1}q)\times\begin{cases}1,&d =2r\\ 1-q,&d=2r+1\end{cases}\] (15) The presence of both \(q^{\Delta}\) and \(q^{\bar{\Delta}}\) in (14) manifests the \(\Delta\leftrightarrow d-\Delta\) symmetry. Footnote 10: In this paper, we will use \(\Theta\) for Harish-Chandra characters of noncompact Lie groups and use \(\chi\) for Weyl characters of compact Lie groups. * **Type I exceptional series \(\mathcal{V}_{p,0}\)**: \[\Theta_{\mathcal{V}_{p,0}}(q,\mathbf{x})=\frac{q^{1-p}+q^{d+p-1}}{P_{d}(q,\mathbf{x}) }-\chi_{\mathbb{Y}_{p-1}}^{\mathrm{SO}(d+2)}(q,\mathbf{x})\] (16) The small \(q\) expansion of \(\Theta_{\mathcal{V}_{p,0}}(q,\mathbf{x})\) starts with \(\chi_{\mathbb{Y}_{p}}^{\mathrm{SO}(d)}(\mathbf{x})q\). For example, when \(p=1\) and \(d=3\), eq. (16) becomes \[d=3:\ \Theta_{\mathcal{V}_{1,0}}(q,x)=\frac{2\,q^{3}}{(1-q)(1-xq)(1-x^{-1}q)}+ \frac{\chi_{\mathbb{Y}_{1}}^{\mathrm{SO}(3)}(x)\,q}{(1-xq)(1-x^{-1}q)}\.\] (17) * **Type II exceptional series**\(\mathcal{U}_{s,t}\): \[\Theta_{\mathcal{U}_{s,t}}(q,\mathbf{x})=\chi^{\mathrm{SO}(d)}_{\mathbb{Y }_{s}}(\mathbf{x})\frac{q^{1-t}+q^{d+t-1}}{P_{d}(q,\mathbf{x})}-\chi^{\mathrm{SO}(d)}_{ \mathbb{Y}_{t}}(\mathbf{x})\frac{q^{1-s}+q^{d+s-1}}{P_{d}(q,\mathbf{x})}+\chi^{\mathrm{ SO}(d+2)}_{\mathbb{Y}_{s-1,t}}(q,\mathbf{x})\] (2.18) where \(\chi^{\mathrm{SO}(d+2)}_{\mathbb{Y}_{s-1,t}}(q,\mathbf{x})\) is the \(\mathrm{SO}(d+2)\) Weyl character corresponding to the two-row Young diagram \(\mathbb{Y}_{s-1,t}\). When \(d=3\), eq. (2.18) reduces to \[d=3:\quad\Theta_{\mathcal{U}_{s,t}}(q,x)=2\left(\chi^{\mathrm{SO}(3)}_{ \mathbb{Y}_{s}}(x)\frac{q^{t+2}}{P_{3}(q,x)}-\chi^{\mathrm{SO}(3)}_{\mathbb{Y }_{t}}(x)\frac{q^{s+2}}{P_{3}(q,x)}\right)\.\] (2.19) The overall factor \(2\) is related to the reducibility of \(\mathcal{U}_{s,t}\) in the \(d=3\) case, discussed in remark 1. When \(d\geq 4\), the small \(q\) expansion of \(\Theta_{\mathcal{U}_{s,t}}(q,\mathbf{x})\) has a universal leading term 11 Footnote 11: When \(d=4\), the character \(\chi^{\mathrm{SO}(d)}_{\mathbb{Y}_{s,t+1}}(\mathbf{x})\) in eq.(2.20) should be understood as the sum of \(\chi^{\mathrm{SO}(4)}_{\mathbb{Y}_{s,t+1}}(\mathbf{x})\) and \(\chi^{\mathrm{SO}(4)}_{\mathbb{Y}_{s,-(t+1)}}(\mathbf{x})\). \[d\geq 4:\quad\Theta_{\mathcal{U}_{s,t}}(q,\mathbf{x})=\chi^{\mathrm{SO}(d)}_{ \mathbb{Y}_{s,t+1}}(\mathbf{x})q^{2}+\mathcal{O}(q^{3})\.\] (2.20) whose physical origin is explained in [57]. 
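As a concrete illustration of these character formulas, the short sympy sketch below (our own illustrative check, not part of the analysis above) verifies that the general expression (16) reduces to (17) for \(p=1\), \(d=3\), and that its small \(q\) expansion starts with \(\chi^{\mathrm{SO}(3)}_{\mathbb{Y}_{1}}(x)\,q\); the numerical value \(x=2\) used in the expansion is an arbitrary choice.

```python
# Sympy sketch: check eq. (16) against eq. (17) for d = 3, p = 1, and the claimed
# leading small-q behavior of the exceptional series character Theta_{V_{1,0}}.
import sympy as sp

q, x = sp.symbols('q x', positive=True)
P3 = (1 - q) * (1 - x*q) * (1 - q/x)              # P_d(q,x) for d = 3, eq. (15)
chi_SO3_Y1 = x + 1 + 1/x                          # SO(3) Weyl character of Y_1
chi_SO5_Y0 = 1                                    # trivial SO(5) character (p = 1)

theta_16 = (q**0 + q**3) / P3 - chi_SO5_Y0        # eq. (16) with p = 1, d = 3
theta_17 = 2*q**3 / P3 + chi_SO3_Y1 * q / ((1 - x*q)*(1 - q/x))   # eq. (17)

print(sp.cancel(theta_16 - theta_17))             # expect 0
print(sp.series(theta_16.subs(x, 2), q, 0, 2))    # leading term (2 + 1 + 1/2) q
```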
**Remark 3**.: _For spinning principal series of \(\mathrm{SO}(1,3)\), the corresponding Harish-Chandra character is given by_ \[d=2:\ \Theta_{\Delta,s}(q,x)=\frac{x^{s}q^{\Delta}+x^{-s}q^{\bar{\Delta}}}{(1-xq)(1-x^{-1}q)},\ \ q=e^{-t} \tag{2.21}\] _which manifests the isomorphism between \(\mathcal{F}_{\Delta,s}\) and \(\mathcal{F}_{\bar{\Delta},-s}\). Because \(q\) is equal to \(e^{-t}\) rather than \(e^{-|t|}\) here, the character does not have \(t\to-t\) symmetry. Instead, it satisfies \(\Theta_{\Delta,s}(q^{-1},x^{-1})=\Theta_{\Delta,s}(q,x)\)._

**Remark 4**.: _There exist more general principal series representations \(\mathcal{P}_{\Delta,\mathbb{Y}}\), labelled by a complex scaling dimension \(\Delta\in\frac{d}{2}+i\mathbb{R}\) and an arbitrary UIR of \(\mathrm{SO}(d)\) corresponding to the Young diagram \(\mathbb{Y}\). The Harish-Chandra character of \(\mathcal{P}_{\Delta,\mathbb{Y}}\) is given by_

\[\Theta_{\mathcal{P}_{\Delta,\mathbb{Y}}}(q,\mathbf{x})=\chi^{SO(d)}_{\mathbb{Y}}(\mathbf{x})\frac{q^{\Delta}+q^{\bar{\Delta}}}{P_{d}(q,\mathbf{x})}. \tag{2.22}\]

## 3 Tensor products in \(\mathrm{SO}(1,2)\)

The tensor product decomposition of \(\mathrm{SO}(1,2)\) (or its covering group \(\mathrm{SL}(2,\mathbb{R})\)) UIRs has been solved by Pukanszky [23] and Repka [24]. For instance, the tensor product of two principal series representations contains all discrete series representations and the full principal series. In this section, we will revisit this problem with the aid of Harish-Chandra characters, focusing on defining and computing a proper density of principal series representations in any tensor product. As a byproduct, we will also show that such a density contains information about the complementary series after a simple analytic continuation.

### Some details regarding \(\text{SO}(1,2)\) UIRs

A particularly useful basis of \(\mathfrak{so}(1,2)\) is given by

\[L_{0}=-\frac{i}{2}(P+K)\,\quad L_{\pm}=-\frac{i}{2}(P-K)\mp D \tag{3.1}\]

and satisfies the commutation relations \([L_{0},L_{\pm}]=\pm L_{\pm},[L_{-},L_{+}]=2L_{0}\). In this basis, the \(\text{SO}(1,2)\) Casimir can be expressed as

\[C^{\text{SO}(1,2)}=L_{0}(1-L_{0})+L_{+}L_{-}. \tag{3.2}\]

\(L_{0}\) is the generator of the \(\text{SO}(2)\) subgroup, corresponding to rotations along the global circle of \(\text{dS}_{2}\). Since we only consider representations of \(\text{SO}(1,2)\), its eigenvalues are integers12. Denote the eigenstates of \(L_{0}\) by \(|n\rangle\). In principal or complementary series of scaling dimension \(\Delta\), the label \(n\) runs over all integers. Following the conventions in [43], the action of \(\mathfrak{so}(1,2)\) on \(|n\rangle\) is

Footnote 12: The eigenvalues can be half-integers if we consider the covering group \(\text{SL}(2,\mathbb{R})\) which can describe spinors in \(\text{dS}_{2}\).

\[L_{0}|n\rangle=n|n\rangle,\ \ L_{\pm}|n\rangle=(n\pm\Delta)|n\pm 1\rangle. \tag{3.3}\]

The norm of \(|n\rangle\) compatible with eq. (3.3) is given by

\[\text{Principal series }\mathcal{P}_{\Delta}:\quad(n|n)=1\]
\[\text{Complementary series }\mathcal{C}_{\Delta}:\quad(n|n)=\frac{\Gamma(n+\bar{\Delta})}{\Gamma(n+\Delta)}=\frac{\Gamma(-n+\bar{\Delta})}{\Gamma(-n+\Delta)}. \tag{3.4}\]

In lowest-weight discrete series, e.g. \(\mathcal{D}_{p}^{+}\), the label \(n\) has a lower bound \(p\), corresponding to a lowest-weight state \(|p\rangle\) that is annihilated by \(L_{-}\). In highest-weight discrete series, e.g.
\(\mathcal{D}_{p}^{-}\), the label \(n\) has an upper bound \(-p\), corresponding to a highest-weight state \(|-p\rangle\) that is annihilated by \(L_{+}\). In both \(\mathcal{D}_{p}^{\pm}\), we also have the action (3.3), with \(\Delta\) being replaced by \(p\). The corresponding norm of \(|n\rangle\) is

\[\text{Discrete series }\mathcal{D}_{p}^{\pm}:\quad(n|n)=\frac{\Gamma(\pm n+1-p)}{\Gamma(\pm n+p)}. \tag{3.5}\]

### \(\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\)

In general, the tensor product \(\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\) contains both discrete and continuous series UIRs. Since these two types of representations are very different in nature, we will discuss them separately.

#### 3.2.1 The discrete part

Let \(|n,m\rangle\equiv|n\rangle\otimes|m\rangle\) be a basis of the tensor product of two continuous series \(\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\) on which the action of \(\mathfrak{so}(1,2)\) follows from eq. (3.3)

\[L_{\pm}|n,m\rangle=(n\pm\Delta_{1})|n\pm 1,m\rangle+(m\pm\Delta_{2})|n,m\pm 1\rangle,\ \ L_{0}|n,m\rangle=(n+m)|n,m\rangle. \tag{3.6}\]

In order to identify a discrete series representation in \({\cal F}_{\Delta_{1}}\otimes{\cal F}_{\Delta_{2}}\), say \({\cal D}^{+}_{k}\), it suffices to build the corresponding lowest-weight state \(|k\rangle_{k}\), which is characterized by the following first order relations (we also include the trivial representation by allowing \(k\) to be \(0\))

\[L_{-}|k\rangle_{k}=0,\ \ \ L_{0}|k\rangle_{k}=k|k\rangle_{k}. \tag{3.7}\]

Being an eigenstate of \(L_{0}\), the state \(|k\rangle_{k}\) should take the following form

\[|k\rangle_{k}=\sum_{n\in\mathbb{Z}}a_{n}|n,k-n\rangle \tag{3.8}\]

where \(a_{n}\) are coefficients to be fixed. Applying the lowest-weight condition to (3.8), we obtain a recurrence relation for \(a_{n}\)

\[a_{n}(n-\Delta_{1})+a_{n-1}(k-n+\bar{\Delta}_{2})=0 \tag{3.9}\]

which can be solved up to an overall normalization factor, denoted by \(c\)

\[a_{n}=c\frac{\Gamma(n+\Delta_{2}-k)}{\Gamma(n+\bar{\Delta}_{1})}. \tag{3.10}\]

With the lowest-weight state \(|k\rangle_{k}\) being constructed, the existence of \({\cal D}^{+}_{k}\) in \({\cal F}_{\Delta_{1}}\otimes{\cal F}_{\Delta_{2}}\) is equivalent to the normalizability of \(|k\rangle_{k}\). It is straightforward to check that the explicit expression of \({}_{k}(k|k)_{k}\) is independent of whether \({\cal F}_{\Delta_{i}}\) is a principal series or complementary series:

\[{}_{k}(k|k)_{k}=|c|^{2}\sum_{n\in\mathbb{Z}}\frac{\Gamma(n+\Delta_{2}-k)\Gamma(n+\bar{\Delta}_{2}-k)}{\Gamma(n+\Delta_{1})\Gamma(n+\bar{\Delta}_{1})}. \tag{3.11}\]

The large \(n\) behavior of the summands in eq. (3.11) is \(n^{-2k}\). Therefore, as long as \(k\in\mathbb{Z}_{+}\), the sum is convergent and hence \(|k\rangle_{k}\) is normalizable. When \(k=0\), it is non-normalizable, which implies the absence of the trivial representation. A similar result holds for the highest-weight representations \(\mathcal{D}^{-}_{k}\). Altogether, the tensor product \({\cal F}_{\Delta_{1}}\otimes{\cal F}_{\Delta_{2}}\) contains all the discrete series representations and each has multiplicity one, since the coefficients \(\{a_{n}\}\) are unique (up to an overall phase) once we fix the normalization. The trivial representation does not appear in this tensor product.
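Before moving on, a small numerical sketch of ours (using mpmath; the principal series weights \(\Delta_{1},\Delta_{2}\) below are arbitrary illustrative values) confirms that the coefficients (3.10) solve the recurrence (3.9), and that partial sums of the norm (3.11) converge for \(k=1\) but keep growing logarithmically for \(k=0\), in line with the absence of the trivial representation.

```python
# Minimal mpmath sketch: check the recurrence (3.9) for a_n of eq. (3.10), and
# compare truncations of the norm (3.11) for k = 1 (convergent) vs k = 0 (divergent).
import mpmath as mp

D1 = mp.mpf('0.5') + 1.3j          # Delta_1 (principal series, arbitrary value)
D2 = mp.mpf('0.5') + 0.7j          # Delta_2
D1b, D2b = 1 - D1, 1 - D2          # shadow dimensions bar(Delta) = 1 - Delta

def a(n, k):                       # eq. (3.10) with c = 1
    return mp.gamma(n + D2 - k) / mp.gamma(n + D1b)

# recurrence (3.9): a_n (n - Delta_1) + a_{n-1} (k - n + bar(Delta_2)) = 0
k = 1
print(max(abs(a(n, k)*(n - D1) + a(n - 1, k)*(k - n + D2b)) for n in range(-20, 21)))

def partial_norm(k, N):            # truncation of the sum in eq. (3.11), with |c| = 1
    return sum(abs(a(n, k))**2 for n in range(-N, N + 1))

for N in (100, 1000, 5000):
    print(N, partial_norm(1, N), partial_norm(0, N))   # the k = 0 column keeps growing
```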
#### 3.2.2 The continuous part

In addition to the discrete series, \({\cal F}_{\Delta_{1}}\otimes{\cal F}_{\Delta_{2}}\) is also known to contain all principal series representations, and in some cases, one complementary series representation. We aim to identify a proper density of these representations, which can be thought as the analogue of multiplicities in the discrete case. Presumably, this density encodes conditions for the appearance of complementary series representations, given that complementary series can be thought as an analytical continuation of principal series, at least at the level of Harish-Chandra characters.

Character analysis

In the representation theory of compact groups, the Weyl character is a powerful tool to compute multiplicities because the tensor decomposition \(R_{1}\otimes R_{2}=\oplus_{a}\,(R_{a}^{\oplus n_{a}})\) of some compact Lie group \(G\) is equivalent to the character relation \(\chi_{R_{1}}^{G}\chi_{R_{2}}^{G}=\sum_{a}n_{a}\chi_{R_{a}}^{G}\), where \(n_{a}\) denotes the multiplicity of the UIR \(R_{a}\). In particular, it shows which representations are present in the tensor product, namely those with \(n_{a}\neq 0\). Consider a simple example with \(G=\mathrm{SO}(3)\), and \(R_{1},R_{2}\) being spin 1 and spin 2 representations respectively. The multiplicity \(n_{\ell}\) of each spin \(\ell\) representation in \(R_{1}\otimes R_{2}\) should satisfy

\[\chi_{1}(\theta)\chi_{2}(\theta)=\sum_{\ell\in\mathbb{N}}n_{\ell}\chi_{\ell}(\theta)\,\qquad\chi_{\ell}(\theta)=\frac{\sin(\ell+\frac{1}{2})\theta}{\sin\frac{\theta}{2}}\,, \tag{3.12}\]

where \(\chi_{\ell}(\theta)\) is the character of the spin \(\ell\) representation. Define an inner product \((\chi_{\ell},\chi_{\ell^{\prime}})\equiv\int_{0}^{2\pi}\,d\theta\sin^{2}\frac{\theta}{2}\chi_{\ell}(\theta)\chi_{\ell^{\prime}}(\theta)\), then it is straightforward to check that \(\chi_{\ell}\) are orthogonal to each other with respect to this inner product, i.e. \((\chi_{\ell},\chi_{\ell^{\prime}})=\pi\delta_{\ell\ell^{\prime}}\), which leads to an integral representation of the multiplicity \(n_{\ell}\)

\[n_{\ell} =\frac{1}{\pi}\int_{0}^{2\pi}\,d\theta\,\frac{\sin(\ell+\frac{1}{2})\theta}{\sin\frac{\theta}{2}}\sin\left(\frac{3}{2}\,\theta\right)\sin\left(\frac{5}{2}\,\theta\right)\]
\[=\frac{1}{2\pi}\int_{0}^{2\pi}\,d\theta\,(1+2\cos\theta+\cdots+2\cos(\ell\theta))\,(\cos\theta-\cos(4\theta)). \tag{3.13}\]

Based on this integral representation, it is clear that \(n_{\ell}\) equals 1 when \(\ell=1,2,3\), and vanishes otherwise. Therefore, we can conclude the following tensor product decomposition of \(\mathrm{SO}(3)\):

\[[3]\otimes[5]=[3]\oplus[5]\oplus[7] \tag{3.14}\]

where \([n]\) denotes the spin \(\frac{n-1}{2}\) representation of \(\mathrm{SO}(3)\). For the tensor product of any spin \(m\) and spin \(n\) representations, it amounts to replacing \(\cos\theta-\cos(4\theta)\) in eq. (3.13) by \(\cos(m-n)\theta-\cos(m+n+1)\theta\), which then leads to \(n_{\ell}=1\) when \(|m-n|\leq\ell\leq m+n\), and vanishes otherwise.

Now let us apply this idea to \(G=\mathrm{SO}(1,2)\), with \(R_{j}=\mathcal{P}_{\Delta_{j}},\Delta_{j}=\frac{1}{2}+i\mu_{j}\) belonging to principal series. Since \(\mathcal{P}_{\Delta_{1}}\otimes\mathcal{P}_{\Delta_{2}}\) only includes principal series and discrete series, we expect

\[\Theta_{\Delta_{1}}(q)\Theta_{\Delta_{2}}(q)=\int_{0}^{\infty}\,d\lambda\,\mathcal{K}(\lambda)\,\Theta_{\frac{1}{2}+i\lambda}(q)+\sum_{k\geq 1,\pm}\Theta_{k}^{\pm}(q)\, \tag{3.15}\]

along the lines of eq.
(3.12), where \(\mathcal{K}(\lambda)\) can be roughly thought as a density of \(\mathcal{P}_{\frac{1}{2}+i\lambda}\). Plugging in the explicit expression of these characters given in section 2 with \(q=e^{-|t|}\), we find that eq. (3.15) effectively fixes the Fourier transformation of \(\mathcal{K}(\lambda)\)

\[\int_{\mathbb{R}}\,d\lambda\,\mathcal{K}(\lambda)e^{i\lambda t}=\frac{\cos(\mu_{1}+\mu_{2})t+\cos(\mu_{1}-\mu_{2})t-1}{|\sinh\frac{t}{2}|} \tag{3.16}\]

where we have implicitly extended \(\mathcal{K}(\lambda)\) to an even function on the whole real line, which is compatible with the shadow symmetry of principal series. Unfortunately, the singularity of \(\frac{\cos(\mu_{1}+\mu_{2})t+\cos(\mu_{1}-\mu_{2})t-1}{|\sinh\frac{t}{2}|}\) at \(t=0\) implies that \(\mathcal{K}(\lambda)\) cannot be obtained by an inverse Fourier transformation. This is consistent with the fact, which we shall show later both analytically and numerically, that the tensor product \(\mathcal{P}_{\Delta_{1}}\otimes\mathcal{P}_{\Delta_{2}}\) has a continuous spectrum of principal series. In other words, given an arbitrary interval \([a,b]\subset\mathbb{R}_{\geq 0}\), there exist infinitely many principal series representations \(\mathcal{P}_{\frac{1}{2}+i\lambda}\) with \(\lambda\in[a,b]\), contained in \(\mathcal{P}_{\Delta_{1}}\otimes\mathcal{P}_{\Delta_{2}}\), and hence the density of principal series blows up [58]. In order to get a finite \(\mathcal{K}(\lambda)\), we may consider a regularization scheme for the inverse Fourier transformation. A simple regularization scheme is to introduce a hard cutoff \(t>\epsilon>0\):

\[\mathcal{K}_{\epsilon}(\lambda) =\int_{\epsilon}^{\infty}\frac{dt}{\pi}\cos(\lambda t)\frac{\cos\left(\mu_{1}+\mu_{2}\right)t+\cos\left(\mu_{1}-\mu_{2}\right)t-1}{|\sinh\frac{t}{2}|}\]
\[=-\frac{2}{\pi}\log\left(e^{\gamma_{E}}\,\epsilon\right)+\frac{1}{\pi}\sum_{\pm}\psi\left(\frac{1}{2}\pm i\lambda\right)-\frac{1}{2\pi}\sum_{\pm,\pm,\pm}\psi\left(\frac{1}{2}\pm i\lambda\pm i\mu_{1}\pm i\mu_{2}\right)+\mathcal{O}(\epsilon) \tag{3.17}\]

We define the hard-cutoff renormalized kernel as the finite part of this expression:

\[\mathcal{K}_{\text{hc}}(\lambda)=\frac{1}{\pi}\sum_{\pm}\psi\left(\frac{1}{2}\pm i\lambda\right)-\frac{1}{2\pi}\sum_{\pm,\pm,\pm}\psi\left(\frac{1}{2}\pm i\lambda\pm i\mu_{1}\pm i\mu_{2}\right) \tag{3.18}\]

where the subscript in \(\mathcal{K}_{\text{hc}}\) labels the hard-cutoff renormalization scheme. Note that the \(\mu_{i}\)-independent term is from the discrete series contribution and the \(\mu_{i}\)-dependent term is from the left hand side of equation (3.15). In fact, this renormalized density should be valid more generally. To see this, one should think of (3.16) as an equality between distributions.13 In other words, it becomes a true equality if we integrate both sides against a smooth test function,

Footnote 13: We thank Petr Kravchuk for enlightening discussions about this issue.

\[\int dt\,f(t)\int_{\mathbb{R}}\,d\lambda\,\mathcal{K}(\lambda)e^{i\lambda t}=\int dt\,f(t)\frac{\cos(\mu_{1}+\mu_{2})t+\cos(\mu_{1}-\mu_{2})t-1}{|\sinh\frac{t}{2}|}\,, \tag{3.19}\]

where \(f(t)\) vanishes at \(t=0\) (and is polynomially bounded when \(t\to\pm\infty\)). Notice that a constant shift \(\mathcal{K}(\lambda)\to\mathcal{K}(\lambda)+const\) drops out of this equation because \(f(0)=0\). This is the only ambiguity in the distribution \(\mathcal{K}(\lambda)\).
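The following mpmath sketch (an illustrative numerical check of ours, with arbitrary values of \(\mu_{1},\mu_{2},\lambda\) and of the cutoff \(\epsilon\)) evaluates the regularized integral in (3.17) directly and compares its finite part with the digamma representation (3.18).

```python
# Minimal mpmath sketch: compare the hard-cutoff integral of eq. (3.17) with the
# renormalized kernel K_hc of eq. (3.18) after subtracting the log divergence.
import mpmath as mp

mu1, mu2, lam, eps = mp.mpf('1.3'), mp.mpf('0.4'), mp.mpf('0.7'), mp.mpf('1e-3')

def integrand(t):
    num = mp.cos((mu1 + mu2)*t) + mp.cos((mu1 - mu2)*t) - 1
    return mp.cos(lam*t) * num / (mp.pi * mp.sinh(t/2))

K_eps = mp.quad(integrand, [eps, 1, 10, mp.inf])

def K_hc(lam):  # eq. (3.18), written as a sum of digamma functions
    first = sum(mp.digamma(mp.mpf('0.5') + s*1j*lam) for s in (1, -1)) / mp.pi
    second = sum(mp.digamma(mp.mpf('0.5') + a*1j*lam + b*1j*mu1 + c*1j*mu2)
                 for a in (1, -1) for b in (1, -1) for c in (1, -1)) / (2*mp.pi)
    return (first - second).real          # the combination is real by construction

# remove the -(2/pi) log(e^gamma_E eps) divergence; agreement should be O(eps)
print(K_eps + (2/mp.pi)*(mp.log(eps) + mp.euler), K_hc(lam))
```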
Therefore, we expect that different renormalization schemes lead to \(\mathcal{K}_{\text{hc}}(\lambda)\) up to a constant shift. Another possible regularization scheme we consider is a Pauli-Villars type regularization in the same spirit as [59], where it is argued that the density of single particle states \(\rho(\omega)\) of a scalar field \(\phi\), say in dS\({}_{2}\), of scaling dimension \(\Delta\), can be computed as the Fourier transformation of the Harish-Chandra character \(\Theta_{\Delta}(t)=\frac{e^{-\Delta t}+e^{-\bar{\Delta}t}}{1-e^{-|t|}}\). Due to the singularity at \(t=0\), the density of single particle states suffers from a divergence. Then [59] introduced a Pauli-Villars regularization, which effectively replaces \(\Theta_{\Delta}\) by

\[\Theta_{\Delta}^{\Lambda}(t)\equiv\frac{e^{-\Delta t}+e^{-\bar{\Delta}t}}{1-e^{-|t|}}-\frac{e^{-\Delta_{\Lambda}t}+e^{-\bar{\Delta}_{\Lambda}t}}{1-e^{-|t|}} \tag{3.20}\]

where \(\Delta_{\Lambda}=\frac{1}{2}+i\Lambda\) for some large scale \(\Lambda\). The second character in \(\Theta^{\Lambda}_{\Delta}\) corresponds to a very heavy particle \(\phi_{\Lambda}\) with a mass \((\frac{1}{4}+\Lambda^{2})^{\frac{1}{2}}\sim\Lambda\). Then a regularized density \(\rho_{\Lambda}(\omega)\), defined as the Fourier transformation of \(\Theta^{\Lambda}_{\Delta}\) for energy \(\omega\) much smaller than the cutoff scale \(\Lambda\), can also be thought as the relative density of single particle states between \(\phi\) and the very heavy field \(\phi_{\Lambda}\) at low energy. A renormalized density is identified as the finite part of \(\rho_{\Lambda}(\omega)\) in the large \(\Lambda\) limit.

In our setup, we implement the Pauli-Villars regularization by introducing two more principal series representations \(\mathcal{P}_{\Delta_{3}}\) and \(\mathcal{P}_{\Delta_{4}}\), where \(\Delta_{3}=\frac{1}{2}+i\mu_{3}\) and \(\Delta_{4}=\frac{1}{2}+i\mu_{4}\), with \(\mu_{3},\mu_{4}\) being large. Then a relative density \(\mathcal{K}_{\text{rel}}(\lambda)\) of principal series between \(\mathcal{P}_{\Delta_{1}}\otimes\mathcal{P}_{\Delta_{2}}\) and \(\mathcal{P}_{\Delta_{3}}\otimes\mathcal{P}_{\Delta_{4}}\) should satisfy

\[\Theta_{\Delta_{1}}(q)\Theta_{\Delta_{2}}(q)-\Theta_{\Delta_{3}}(q)\Theta_{\Delta_{4}}(q)=\int_{0}^{\infty}\,d\lambda\,\mathcal{K}_{\text{rel}}(\lambda)\,\Theta_{\frac{1}{2}+i\lambda}(q) \tag{3.21}\]

because the discrete parts of the two tensor products cancel out. In eq. (3.21), the relative density \(\mathcal{K}_{\text{rel}}(\lambda)\) is well-defined and can be easily evaluated as an inverse Fourier transformation14 by using eq. (A.3).

Footnote 14: The inverse Fourier transformation is equivalent to saying that principal series characters are \(\delta\) function normalizable with respect to the inner product \((\Theta_{\Delta_{1}},\Theta_{\Delta_{2}})\equiv\int_{0}^{\infty}dt\sinh^{2}\left(\frac{t}{2}\right)\Theta_{\Delta_{1}}(e^{-t})\Theta_{\Delta_{2}}(e^{-t})\), which is reminiscent of the inner product we introduced for \(\text{SO}(3)\) characters. However, there is an extra subtlety in the \(\text{SO}(1,2)\) case because principal series characters are not orthogonal to discrete series characters with respect to this inner product. So the character relation itself along the lines of (3.15) is not sufficient for deriving the multiplicities of discrete series representations.
\[\mathcal{K}_{\text{rel}}(\lambda) =\frac{1}{2\pi}\sum_{\pm,\pm,\pm}\int_{0}^{\infty}dt\,\left( \frac{e^{-(\frac{1}{2}\pm i\mu_{1}\pm i\mu_{2}\pm i\lambda)t}}{1-e^{-t}}- \frac{e^{-(\frac{1}{2}\pm i\mu_{3}\pm i\mu_{4}\pm i\lambda)t}}{1-e^{-t}}\right)\] \[=\frac{1}{2\pi}\sum_{\pm,\pm,\pm}\left(\psi\left(\frac{1}{2}\pm i \mu_{3}\pm i\mu_{4}\pm i\lambda\right)-\psi\left(\frac{1}{2}\pm i\mu_{1}\pm i \mu_{2}\pm i\lambda\right)\right). \tag{3.22}\] Taking \(\mu_{3}=\mu_{4}=\Lambda\) very large, we get for \(\lambda\ll\Lambda\): \[\mu_{3}=\mu_{4}=\Lambda:\ \mathcal{K}_{\text{PV},1}(\lambda)\underset{ \Lambda\to\infty}{\sim}\frac{2}{\pi}\log(2\Lambda)+\frac{1}{\pi}\sum_{\pm} \psi\left(\frac{1}{2}\pm i\lambda\right)-\frac{1}{2\pi}\sum_{\pm,\pm,\pm}\psi \left(\frac{1}{2}\pm i\mu_{1}\pm i\mu_{2}\pm i\lambda\right) \tag{3.23}\] which has the same finite part as (3.18). This is not surprising because, in the limit \(\mu_{3}=\mu_{4}=\Lambda\to\infty\), (3.21) turns into (3.15) up to rapidly oscillating terms that integrate to zero against any smooth test function \(f(t)\). On the other hand, taking \(\mu_{3}=2\Lambda\) and \(\mu_{4}=\Lambda\) very large, we get instead \[\mu_{3}=2\mu_{4}=2\Lambda:\ \ \mathcal{K}_{\text{PV},2}(\lambda)\underset{ \Lambda\to\infty}{\sim}\frac{2}{\pi}\log(3\Lambda^{2})-\frac{1}{2\pi}\sum_{\pm, \pm,\pm}\psi\left(\frac{1}{2}\pm i\mu_{1}\pm i\mu_{2}\pm i\lambda\right). \tag{3.24}\] Comparing eq. (3.17) and eq. (3.23) with eq. (3.24), we conclude that one needs to be careful with the choice of regularisation scheme. Nonetheless, the relative density \(\mathcal{K}_{\text{rel}}(\lambda)\) always makes sense as long as we keep \(\mu_{3}\) and \(\mu_{4}\) finite. Therefore, we will mostly use the notion of relative density henceforth. At this point, \(\mathcal{K}_{\text{rel}}(\lambda)\) has been derived by using characters and is called a relative density simply because of intuitions from representation theory of compact Lie groups. We will justify the name "relative density" shortly by reconstructing \(\mathcal{K}_{\text{rel}}(\lambda)\) numerically as the relative density of eigenvalues of two large but finite matrices, with each eigenvalue being identified as a UIR of \(\text{SO}(1,2)\). Before describing the numerical approach, we digress a little bit and discuss what we can learn about complementary series in \(\mathcal{C}_{\Delta_{1}}\otimes\mathcal{C}_{\Delta_{2}},\Delta_{j}=\frac{1}{2} +\mu_{j}\) from the relative density \(\mathcal{K}_{\text{rel}}(\lambda)\). Treating complementary series as a direct analytical continuation of principal series, we expect eq. (3.22) to hold for \(\mathcal{C}_{\Delta_{1}}\otimes\mathcal{C}_{\Delta_{2}}\) upon the replacement \(i\mu_{j}\to\mu_{j},j=1,2\). However, this naive guess breaks down manifestly when \(\mu_{1}+\mu_{2}>\frac{1}{2}\) because \(\frac{e^{-(\frac{1}{2}-\mu_{1}-\mu_{2}\pm i\lambda)t}}{1-e^{-t}}\) would blow up as \(t\to\infty\) or equivalently \(q\to 0\). This phenomenon signals the appearance of a complementary series representation \(\mathcal{C}_{\mu_{1}+\mu_{2}}\) in eq. (3.21), which potentially should cancel the exponentially blowing up behavior at large \(t\). To justify this argument from a different viewpoint, notice that although \(\mathcal{K}_{\text{rel}}(\lambda)\) does not make sense as an inverse Fourier transformation when \(\mu_{1}+\mu_{2}>\frac{1}{2}\), its \(\psi\) function realization clearly admits well-defined analytical continuation in this region, i.e. 
\[\mathcal{C}_{\Delta_{1}}\otimes\mathcal{C}_{\Delta_{2}}:\ \mathcal{K}_{\text{rel}}(\lambda)=\frac{1}{2\pi}\sum_{\pm,\pm,\pm}\left(\psi\left(\frac{1}{2}\pm i\mu_{3}\pm i\mu_{4}\pm i\lambda\right)-\psi\left(\frac{1}{2}\pm\mu_{1}\pm\mu_{2}\pm i\lambda\right)\right). \tag{3.25}\]

Then it is natural to assume that \(\mathcal{K}_{\text{rel}}(\lambda)\) given by eq. (3.25) correctly captures the relative density of principal series between \(\mathcal{C}_{\Delta_{1}}\otimes\mathcal{C}_{\Delta_{2}}\) and \(\mathcal{P}_{\Delta_{3}}\otimes\mathcal{P}_{\Delta_{4}}\), no matter whether \(\mu_{1}+\mu_{2}\) is larger than \(\frac{1}{2}\) or not. In both cases, evaluating the integral \(\int_{0}^{\infty}d\lambda\,\mathcal{K}_{\text{rel}}(\lambda)\Theta_{\frac{1}{2}+i\lambda}\) using the general formula (A.9), we find

\[\int_{0}^{\infty}d\lambda\,\mathcal{K}_{\text{rel}}(\lambda)\Theta_{\frac{1}{2}+i\lambda}(q)=\begin{cases}\Theta_{\Delta_{1}}(q)\Theta_{\Delta_{2}}(q)-\Theta_{\Delta_{3}}(q)\Theta_{\Delta_{4}}(q),&0\leq\mu_{1}+\mu_{2}<\frac{1}{2}\\ \Theta_{\Delta_{1}}(q)\Theta_{\Delta_{2}}(q)-\Theta_{\Delta_{3}}(q)\Theta_{\Delta_{4}}(q)-\Theta_{\mu_{1}+\mu_{2}}(q),&\frac{1}{2}<\mu_{1}+\mu_{2}<1\end{cases} \tag{3.26}\]

which implies that in the tensor product \(\mathcal{C}_{\Delta_{1}}\otimes\mathcal{C}_{\Delta_{2}}\), when \(\mu_{1}+\mu_{2}\) crosses the value \(\frac{1}{2}\) from below, a complementary series representation of \(\Delta=\mu_{1}+\mu_{2}\) appears. Altogether, based on the fact that \(\mathcal{P}_{\Delta_{3}}\otimes\mathcal{P}_{\Delta_{4}}\) does not contain any complementary series representations, we can conclude that \(\mathcal{C}_{\Delta_{1}}\otimes\mathcal{C}_{\Delta_{2}}\) contains one complementary series representation, i.e. \(\mathcal{C}_{\mu_{1}+\mu_{2}}\) when \(\frac{1}{2}<\mu_{1}+\mu_{2}<1\), and zero such representation otherwise. This result is consistent with [23, 24] and will be confirmed numerically. At the special point \(\mu_{1}+\mu_{2}=\frac{1}{2}\), the integral \(\int_{0}^{\infty}d\lambda\,\mathcal{K}_{\text{rel}}(\lambda)\Theta_{\frac{1}{2}+i\lambda}\) can be computed by using eq. (A.10)

\[\mu_{1}+\mu_{2}=\frac{1}{2}:\ \ \int_{0}^{\infty}d\lambda\,\mathcal{K}_{\text{rel}}(\lambda)\Theta_{\frac{1}{2}+i\lambda}(q)=\Theta_{\Delta_{1}}(q)\Theta_{\Delta_{2}}(q)-\Theta_{\Delta_{3}}(q)\Theta_{\Delta_{4}}(q)-\frac{1}{2}\Theta_{\frac{1}{2}}(q) \tag{3.27}\]

where \(\Theta_{\frac{1}{2}}(q)\) is the character of \(\mathcal{P}_{\frac{1}{2}}\). It suggests that in this case there is a \(\delta\) function at \(\lambda=0\) on top of the smooth distribution \(\mathcal{K}_{\text{rel}}(\lambda)\).

**Remark 5**.: _From a contour integral point of view, the extra term \(\Theta_{\mu_{1}+\mu_{2}}\) in eq. (3.26) appears because certain poles of \(\psi\left(\frac{1}{2}-\mu_{1}-\mu_{2}\pm i\lambda\right)\) cross the integration contour along the real line in the \(\lambda\) plane, when we vary the value of \(\mu_{1}+\mu_{2}\) from 0 to 1. We will see many times in this paper that, whenever such a pole crossing happens, some complementary series representations appear. So the pole crossing phenomenon can be thought as the smoking gun of complementary series._

Numerical check

Apart from characters, Casimir operators are also useful tools to study tensor product decomposition. More generally, identifying \(G\)-irreps in a reducible representation \(R\) of \(G\) is equivalent to diagonalizing Casimirs of \(G\).
For example, let \(G=\mathrm{SO}(3)\) and \(R\) be the tensor product of two spin \(\frac{1}{2}\) representations.15 The matrix form of the \(\mathrm{SO}(3)\) quadratic Casimir in the four dimensional Hilbert space of \(R\) is given by

Footnote 15: Strictly speaking, spin \(\frac{1}{2}\) are representations of \(\mathrm{SU}(2)\) and not of \(\mathrm{SO}(3)\). However, the tensor product of two spin \(\frac{1}{2}\) representations is a (reducible) representation of \(\mathrm{SO}(3)\).

\[C^{\mathrm{SO}(3)}=\left(\begin{array}{cccc}2&0&0&0\\ 0&1&1&0\\ 0&1&1&0\\ 0&0&0&2\end{array}\right) \tag{3.28}\]

Diagonalizing this matrix yields an eigenvalue \(0\) which corresponds to a trivial representation, and a triply degenerate eigenvalue \(2\) which corresponds to a spin \(1\) representation. Altogether, the diagonalization procedure implies the tensor product decomposition \([2]\otimes[2]=[1]\oplus[3]\). Actually, the computation can be simplified knowing that only representations of integer spin can appear in the tensor product \([2]\otimes[2]\). Since such representations always have a one dimensional \(L_{z}=0\) eigenspace, it suffices to diagonalize \(C^{\mathrm{SO}(3)}\) in the \(L_{z}=0\) subspace of \([2]\otimes[2]\), which is spanned by \(|\pm\frac{1}{2},\mp\frac{1}{2}\rangle\). The matrix form of \(C^{\mathrm{SO}(3)}\) restricted to this subspace is given by

\[C^{\mathrm{SO}(3)}\Big{|}_{L_{z}=0}=\left(\begin{array}{cc}1&1\\ 1&1\end{array}\right) \tag{3.29}\]

Its eigenvalues are \(0\) and \(2\), corresponding to the irreps [1] and [3] respectively. Similarly, in order to identify the continuous families of representations in the case of \(G=\mathrm{SO}(1,2)\) and \(R=\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\), we can simply diagonalize the Casimir \(C^{\mathrm{SO}(1,2)}\) in the \(L_{0}=0\) sector, because \(\mathcal{F}_{\Delta}\) always has a one dimensional eigenspace of \(L_{0}=0\) while the discrete series representations do not have any \(L_{0}=0\) state. Let \(\mathcal{H}_{0}\) be the \(L_{0}=0\) subspace of \(\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\), and a (non-normalized) basis of \(\mathcal{H}_{0}\) is given by \(|\psi_{n}\rangle\equiv|n,-n\rangle,n\in\mathbb{Z}\). The norm of \(|\psi_{n}\rangle\) follows from eq. (3.4)

\[(\psi_{n}|\psi_{n})=\begin{cases}1,&\mathcal{F}_{\Delta_{1}}=\mathcal{P}_{\Delta_{1}},\mathcal{F}_{\Delta_{2}}=\mathcal{P}_{\Delta_{2}}\\ \frac{\Gamma(\bar{\Delta}_{2}-n)}{\Gamma(\Delta_{2}-n)},&\mathcal{F}_{\Delta_{1}}=\mathcal{P}_{\Delta_{1}},\mathcal{F}_{\Delta_{2}}=\mathcal{C}_{\Delta_{2}}\\ \frac{\Gamma(\bar{\Delta}_{1}+n)}{\Gamma(\Delta_{1}+n)}\frac{\Gamma(\bar{\Delta}_{2}-n)}{\Gamma(\Delta_{2}-n)},&\mathcal{F}_{\Delta_{1}}=\mathcal{C}_{\Delta_{1}},\mathcal{F}_{\Delta_{2}}=\mathcal{C}_{\Delta_{2}}\end{cases} \tag{3.30}\]

The action of \(C^{\mathrm{SO}(1,2)}\) on \(|\psi_{n}\rangle\) takes the form

\[C^{\mathrm{SO}(1,2)}|\psi_{n}\rangle=(2n^{2}+\Delta_{1}\bar{\Delta}_{1}+\Delta_{2}\bar{\Delta}_{2})|\psi_{n}\rangle-(n-\Delta_{1})(n-\Delta_{2})|\psi_{n-1}\rangle-(n+\Delta_{1})(n+\Delta_{2})|\psi_{n+1}\rangle \tag{3.31}\]

which commutes with a \(\mathbb{Z}_{2}\) action \(\mathcal{T}:|\psi_{n}\rangle\rightarrow|\psi_{-n}\rangle\). So the Hilbert space \(\mathcal{H}_{0}\) splits into eigenspaces of \(\mathcal{T}\), denoted by \(\mathcal{H}_{\pm}\). The \(\mathcal{T}=+1\) subspace \(\mathcal{H}_{+}\) is spanned by \(|\psi_{n}^{+}\rangle\equiv\)
\(\frac{1}{2}\left(|\psi_{n}\rangle+|\psi_{-n}\rangle\right)\) with \(n\geq 0\), and the \(\mathcal{T}=-1\) subspace \(\mathcal{H}_{-}\) is spanned by \(|\psi_{m}^{-}\rangle\equiv\frac{1}{2}\left(|\psi_{m}\rangle-|\psi_{-m}\rangle\right)\) with \(m\geq 1\). Define matrix elements of the \(\mathrm{SO}(1,2)\) Casimir in \(\mathcal{H}_{\pm}\) as

\[\mathcal{Q}_{mn}^{(\pm)}\equiv\frac{(\psi_{m}^{\pm}|C^{\mathrm{SO}(1,2)}|\psi_{n}^{\pm})}{\sqrt{(\psi_{m}^{\pm}|\psi_{m}^{\pm})(\psi_{n}^{\pm}|\psi_{n}^{\pm})}}. \tag{3.32}\]

\(\mathcal{Q}^{(\pm)}\) are sparse matrices, whose nonzero entries are given by

\[\mathcal{Q}_{nn}^{(+)}=2n^{2}+\Delta_{1}\bar{\Delta}_{1}+\Delta_{2}\bar{\Delta}_{2},\qquad\mathcal{Q}_{n+1,n}^{(+)}=\left(\mathcal{Q}_{n,n+1}^{(+)}\right)^{*}=\begin{cases}\sqrt{2}\,\beta_{0},&n=0\\ \beta_{n},&n\geq 1\end{cases}\]
\[\mathcal{Q}_{nn}^{(-)}=2n^{2}+\Delta_{1}\bar{\Delta}_{1}+\Delta_{2}\bar{\Delta}_{2},\qquad\mathcal{Q}_{n+1,n}^{(-)}=\left(\mathcal{Q}_{n,n+1}^{(-)}\right)^{*}=\beta_{n},\ \ n\geq 1 \tag{3.33}\]

where

\[\beta_{n}=-\begin{cases}(n+\Delta_{1})(n+\Delta_{2}),&\mathcal{P}_{\Delta_{1}}\otimes\mathcal{P}_{\Delta_{2}}\\ (n+\Delta_{1})\sqrt{(n+\Delta_{2})(n+\bar{\Delta}_{2})},&\mathcal{P}_{\Delta_{1}}\otimes\mathcal{C}_{\Delta_{2}}\\ \sqrt{(n+\Delta_{1})(n+\bar{\Delta}_{1})(n+\Delta_{2})(n+\bar{\Delta}_{2})},&\mathcal{C}_{\Delta_{1}}\otimes\mathcal{C}_{\Delta_{2}}\end{cases} \tag{3.34}\]

Diagonalizing \(\mathcal{Q}^{(\pm)}\) is equivalent to solving their spectrum. First, we can show that both \(\mathcal{Q}^{(\pm)}\) have a continuous spectrum on \(\left[\frac{1}{4},\infty\right)\). Since our argument only relies on the asymptotic behavior of \(\mathcal{Q}^{(\pm)}\), we will use \(\mathcal{Q}^{(+)}\) to illustrate the idea. For any \(q_{\lambda}=\frac{1}{4}+\lambda^{2}\in\left[\frac{1}{4},\infty\right)\), let \(v_{n}(\lambda)\) be the corresponding eigenstate, satisfying

\[\mathcal{Q}_{n,n-1}^{(+)}v_{n-1}(\lambda)+\mathcal{Q}_{n,n}^{(+)}v_{n}(\lambda)+\mathcal{Q}_{n,n+1}^{(+)}v_{n+1}(\lambda)=q_{\lambda}v_{n}(\lambda). \tag{3.35}\]

Such a \(v_{n}(\lambda)\) is uniquely determined up to an overall normalization. For large \(n\), plug the ansatz \(v_{n}(\lambda)\sim\frac{1}{n^{\alpha}}\left(1+\frac{a_{1}}{n}+\frac{a_{2}}{n^{2}}+\cdots\right)\) into the recurrence relation (3.35), and we find \(\alpha\) to be

\[\alpha_{\pm}(\lambda)=\begin{cases}\bar{\Delta}_{1}+\bar{\Delta}_{2}-\frac{1}{2}\pm i\lambda,&\mathcal{P}_{\Delta_{1}}\otimes\mathcal{P}_{\Delta_{2}}\\ \bar{\Delta}_{1}\pm i\lambda,&\mathcal{P}_{\Delta_{1}}\otimes\mathcal{C}_{\Delta_{2}}\\ \frac{1}{2}\pm i\lambda,&\mathcal{C}_{\Delta_{1}}\otimes\mathcal{C}_{\Delta_{2}}\end{cases} \tag{3.36}\]

Altogether, the leading large \(n\) asymptotic behavior of \(v_{n}(\lambda)\) is

\[v_{n}(\lambda)=\frac{R_{+}}{n^{\alpha_{+}(\lambda)}}\left(1+\mathcal{O}(n^{-1})\right)+\frac{R_{-}}{n^{\alpha_{-}(\lambda)}}\left(1+\mathcal{O}(n^{-1})\right) \tag{3.37}\]

where \(R_{\pm}\) are two constants that cannot be fixed by the asymptotic analysis. The asymptotic behavior in eq. (3.37) implies that the eigenvectors \(v_{n}(\lambda)\) are \(\delta\)-function normalizable.
The reason is as follows \[(v(\lambda_{1}),v(\lambda_{2})) \sim\sum_{n}\frac{R_{+}^{*}R_{+}}{n^{1-i(\lambda_{1}-\lambda_{2}) }}+\frac{R_{-}^{*}R_{-}}{n^{1+i(\lambda_{1}-\lambda_{2})}}+\frac{R_{+}^{*}R_{- }}{n^{1-i(\lambda_{1}+\lambda_{2})}}+\frac{R_{-}^{*}R_{+}}{n^{1+i(\lambda_{1}+ \lambda_{2})}}\] \[\sim\int\,ds\left(R_{+}^{*}R_{+}e^{i(\lambda_{1}\!-\!\lambda_{2} )s}\!+\!R_{-}^{*}R_{-}e^{-i(\lambda_{1}\!-\!\lambda_{2})s}\!+\!R_{+}^{*}R_{-}e^{ i(\lambda_{1}\!+\!\lambda_{2})s}\!+\!R_{-}^{*}R_{+}e^{-i(\lambda_{1}\!+\!\lambda_{2})s}\right)\] \[\sim 2\pi(R_{+}^{*}R_{+}+R_{-}^{*}R_{-})\delta(\lambda_{1}-\lambda_{2}) \tag{3.38}\] where in the last line we have put \(\delta(\lambda_{1}+\lambda_{2})=0\) because \(\lambda\) is assumed to be nonnegative. Therefore, we conclude that \(\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\) contains \(\mathrm{SO}(1,2)\) principal series representation \(\mathcal{P}_{\frac{1}{2}+i\lambda}\) for all \(\lambda\geq 0\). Eigenvectors of \(\mathcal{Q}^{(\pm)}\) with eigenvalue \(\frac{1}{4}-\lambda^{2}\), which corresponds to a complementary series \(\mathcal{C}_{\frac{1}{2}+\lambda}\), take the same form as (3.37), with the rotation \(i\lambda\to\lambda\) in \(\alpha_{\pm}(\lambda)\). Then for these eigenvectors, normalizability is equivalent to the vanishing of \(R_{-}\). However, checking whether \(R_{-}=0\) requires more than asymptotic analysis. The (non)existence problem of eigenvalues below \(\frac{1}{4}\) has been solved by Pukanszky [23] by studying the resolvent \(R_{\pm}(z)=\frac{1}{\mathcal{Q}^{(\pm)}-z}\), which encodes the full spectrum of \(\mathcal{Q}^{(\pm)}\). Instead of reviewing his lengthy and technical analysis, we will propose an efficient numerical method to extract the full spectrum of \(\mathcal{Q}^{(\pm)}\), which as we shall see, admits a straightforward generalization to all spectrum problems in this paper. More importantly, the numerical method provides an intuitive and physical picture to understand why \(\mathcal{K}_{\mathrm{rel}}(\lambda)\) in eq. (3.22) and eq. (3.25) can be viewed as a relative density of principal series. In order to perform a numerical analysis of \(\mathcal{Q}^{(\pm)}\), we cut off the Hilbert space \(\mathcal{H}_{0}\) at \(-\mathcal{N}\leq n\leq\mathcal{N}\). From a \(\mathrm{dS}_{2}\) picture, it is equivalent to cutting off the angular momentum (along the circle in global coordinates) of the two particles corresponding to \(\mathcal{F}_{\Delta_{1}}\) and \(\mathcal{F}_{\Delta_{2}}\) respectively. In this truncated Hilbert space \(\mathcal{H}_{0}^{(\mathcal{N})}\), \(\mathcal{Q}^{(+)}\) becomes an \((\mathcal{N}+1)\times(\mathcal{N}+1)\) matrix and \(\mathcal{Q}^{(-)}\) becomes an \(\mathcal{N}\times\mathcal{N}\) matrix. Taking \(\mathcal{Q}^{(+)}\) as an example, it has \(\mathcal{N}+1\) eigenvalues, denoted by \(q_{0}\leq q_{1}\leq\cdots\leq q_{\mathcal{N}}\). Any \(q_{a}\) smaller than \(\frac{1}{4}\) at large \(\mathcal{N}\) corresponds to a complementary series representation and \(q_{a}\geq\frac{1}{4}\) belongs to principal series. As discussed above, in the case of \(\mathcal{C}_{\Delta_{1}}\otimes\mathcal{C}_{\Delta_{2}}\) with \(\mu_{1}+\mu_{2}>\frac{1}{2}\), there is a single complementary series representation with dimension \(\Delta=\mu_{1}+\mu_{2}\), which has \(\mathrm{SO}(1,2)\) Casimir \((\mu_{1}+\mu_{2})(1-(\mu_{1}+\mu_{2}))\). This can be seen in the left panel of fig. 
(3.1), where we plot the first four eigenvalues of \({\cal Q}^{(+)}\) and the first three eigenvalues of \({\cal Q}^{(-)}\) in the truncated tensor product \({\cal C}_{\frac{1}{2}+\mu}\otimes{\cal C}_{\frac{1}{2}+\mu}\) as a function of \(\mu\). The cutoff \({\cal N}\) is chosen to be \(5000\). All eigenvalues of \({\cal Q}^{(-)}\) are larger than \(\frac{1}{4}\), and those of \({\cal Q}^{(+)}\) exceed \(\frac{1}{4}\) for \(0<\mu\lesssim\frac{1}{4}\). When \(\mu>\frac{1}{4}\), the smallest eigenvalue \({\cal Q}^{(+)}_{\rm min}\) of \({\cal Q}^{(+)}\) becomes smaller than \(\frac{1}{4}\) and lies on the quadratic curve \({\cal Q}=2\mu(1-2\mu)\), which implies the existence of a complementary series representation \({\cal C}_{2\mu}\) in this regime. We also illustrate the convergence of \({\cal Q}^{(+)}_{\rm min}({\cal N})\) to the expected value \({\cal Q}^{(+)}_{\rm min}(\infty)=(\mu_{1}+\mu_{2})(1-(\mu_{1}+\mu_{2}))\) in the \({\cal N}\to\infty\) limit in the left and middle panels of fig. (3.2). Empirically, we find a power-law convergence of \({\cal Q}^{(+)}_{\rm min}({\cal N})\) to the exact value, given by

\[{\cal Q}^{(+)}_{\rm min}({\cal N})-{\cal Q}^{(+)}_{\rm min}(\infty)\ \underset{{\cal N}\to\infty}{\sim}\ \frac{1}{{\cal N}^{2(\mu_{1}+\mu_{2})-1}}. \tag{3.39}\]

In the case of \({\cal P}_{\Delta_{1}}\otimes{\cal P}_{\Delta_{2}}\), all eigenvalues of \({\cal Q}^{(\pm)}\) are larger than \(\frac{1}{4}\), e.g. the right panel of fig. (3.1). So each eigenvalue \(q_{n}\) in the spectrum of, say \({\cal H}_{+}\), can be parameterized by a positive number \(\lambda_{n}=\sqrt{q_{n}-\frac{1}{4}}\), which effectively corresponds to a principal series representation of scaling dimension \(\Delta_{n}=\frac{1}{2}+i\lambda_{n}\). A coarse-grained density of principal series in truncated \({\cal H}_{+}\) can then be defined as the inverse spacing of \(\lambda_{n}\)

\[\bar{\rho}^{+}_{\cal N}(\lambda_{n})\equiv\frac{2}{\lambda_{n+1}-\lambda_{n-1}}. \tag{3.40}\]

Similarly, we define a coarse-grained density \(\bar{\rho}^{-}_{\cal N}(\lambda_{n})\) of principal series in truncated \({\cal H}_{-}\). Adding up \(\bar{\rho}^{\pm}_{\cal N}\) as interpolated functions yields the full coarse-grained density \(\bar{\rho}_{\cal N}(\lambda)\) of principal series in the tensor product \({\cal P}_{\Delta_{1}}\otimes{\cal P}_{\Delta_{2}}\). This coarse-grained density suffers from divergence in the \({\cal N}\to\infty\) limit because we have shown above that \(\{\lambda_{n}\}\) converge to a continuous spectrum in this limit. In the same spirit as the character analysis, we remove this divergence by considering a relative coarse-grained density (dropping the label \({\cal N}\) whenever we have a relative density)

\[\rho_{\rm rel}=\bar{\rho}_{\mathcal{N}}^{\mathcal{P}_{\Delta_{1}}\otimes\mathcal{P}_{\Delta_{2}}}-\bar{\rho}_{\mathcal{N}}^{\mathcal{P}_{\Delta_{3}}\otimes\mathcal{P}_{\Delta_{4}}} \tag{3.41}\]

As a very nontrivial check, we find that the relative coarse-grained density \(\rho_{\rm rel}\) defined in eq. (3.41) matches perfectly with \({\cal K}_{\rm rel}\) given by eq. (3.22) when \(\lambda\) is much smaller than the cut-off \({\cal N}\). In fig. (3.3) we show this remarkable matching for the case of \(\Delta_{1}=\frac{1}{2}+4.3i,\Delta_{2}=\frac{1}{2}+1.3i\) and \(\Delta_{3}=\Delta_{4}=\frac{1}{2}+0.1i\). By construction of \(\rho_{\rm rel}\), the matching confirms the meaning of \({\cal K}_{\rm rel}\), which is derived purely from characters, as a (relative) density of principal series.
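To give a concrete sense of the numerical procedure, the sketch below (a minimal numpy implementation of ours; the values of \(\mu\) and \(\mathcal{N}\) are illustrative and smaller than those used for the figures) builds the truncated matrix \(\mathcal{Q}^{(+)}\) of eqs. (3.33)-(3.34) for \(\mathcal{C}_{\frac{1}{2}+\mu}\otimes\mathcal{C}_{\frac{1}{2}+\mu}\) and checks that its lowest eigenvalue approaches \(2\mu(1-2\mu)\) when \(\mu>\frac{1}{4}\). The same matrix with principal series weights, combined with the inverse-spacing definition (3.40), can be used to reproduce the relative density shown in fig. (3.3).

```python
# Numpy sketch of the truncated-Casimir check: build Q^(+) of eqs. (3.33)-(3.34)
# for C_{1/2+mu} x C_{1/2+mu} and compare its lowest eigenvalue with the Casimir
# 2*mu*(1-2*mu) of the complementary series C_{2mu}. Parameters are illustrative.
import numpy as np

def q_plus(delta1, delta2, N):
    d1b, d2b = 1 - delta1, 1 - delta2
    diag = np.array([2.0*n*n + delta1*d1b + delta2*d2b for n in range(N + 1)])
    # off-diagonal entries beta_n for the C x C case, eq. (3.34)
    beta = np.array([-np.sqrt((n + delta1)*(n + d1b)*(n + delta2)*(n + d2b))
                     for n in range(N)])
    beta[0] *= np.sqrt(2.0)                 # the n = 0 entry is sqrt(2) * beta_0
    return np.diag(diag) + np.diag(beta, 1) + np.diag(beta, -1)

mu, N = 0.45, 2000
Q = q_plus(0.5 + mu, 0.5 + mu, N)
print(np.linalg.eigvalsh(Q)[0], 2*mu*(1 - 2*mu))   # slow power-law convergence, cf. (3.39)
```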
It is worth mentioning the observation that the coarse-grained \(\bar{\rho}_{\cal N}\) (which is regularized by \({\cal N}\)) for a fixed large \({\cal N}\) has a good agreement with \({\cal K}_{\epsilon}\) (which is regularized by \(\epsilon\)) defined in equation (3.17) by choosing the regularization \(\epsilon\) such that \(\bar{\rho}_{\cal N}\) and \({\cal K}_{\epsilon}\) match at some scale \(\lambda_{*}\), in the spirit of renormalization. In fig. (3.4), we show the match between \(\bar{\rho}_{\cal N}\) with \({\cal N}=1000\) and \({\cal K}_{\epsilon}\) with \(\epsilon=1.4\times 10^{-4}\) for \({\cal P}_{\frac{1}{2}+4.3i}\otimes{\cal P}_{\frac{1}{2}+1.3i}\). As we will discuss in appendix B, this phenomenon disappears for \(\bar{\rho}_{\cal N}\) in \(L_{0}\neq 0\) sectors. ### \({\cal F}_{\Delta}\otimes{\cal D}_{p}\) As we have seen in subsection 3.2, highest-weight and lowest-weight discrete series representations always appear in pairs in \({\cal F}_{\Delta_{1}}\otimes{\cal F}_{\Delta_{2}}\), so we are mainly interested in the tensor product of \({\cal F}_{\Delta}\) and a reducible representation \({\cal D}_{p}={\cal D}_{p}^{+}\oplus{\cal D}_{p}^{-}\) in this subsection. #### 3.3.1 The discrete part Let \(|n,m)\) be a basis of \({\cal F}_{\Delta}\otimes{\cal D}_{p}^{+}\), where \(n\in\mathbb{Z}\) and \(m\geq p\). First, we want to show that this tensor product contains exactly one copy of each \({\cal D}_{k}^{+},k\geq 1\). As in the previous case, it amounts to constructing the lowest-weight state \(|k\rangle_{k}\) of each \({\cal D}_{k}^{+}\). Since such a state has eigenvalue \(k\) with respect to \(L_{0}\), it can be expressed as a linear combination of all \(|k-n,n)\) \[|k\rangle_{k}=\sum_{n\geq p}a_{n}\,|k-n,n). \tag{3.42}\] The condition \(L_{-}|k\rangle_{k}=0\) yields the following recurrence relation \[(n-p)a_{n}+(k-n+\bar{\Delta})a_{n-1}=0 \tag{3.43}\] whose solution is given by \(a_{n}=\frac{\Gamma(n+\Delta-k)}{\Gamma(n+1-p)}\) up to an overall normalization constant \(c\). Then the norm of \(|k\rangle_{k}\) becomes \[{}_{k}(k|k)_{k}=|c|^{2}\sum_{n\geq p}\frac{\Gamma(n+\Delta-k)\Gamma(n+\bar{\Delta}-k)}{\Gamma(n+1-p)\Gamma(n+p)}\sim|c|^{2}\sum_{n}\,n^{-2k}. \tag{3.44}\] The sum is convergent for \(k=1,2,\cdots\) and hence all \({\cal D}_{k}^{+}\) belong to \({\cal F}_{\Delta}\otimes{\cal D}_{p}^{+}\). The degeneracy of each \({\cal D}_{k}^{+}\) follows from the uniqueness of the solution to eq. (3.43). The same strategy can be used to exclude all highest-weight discrete series representations \({\cal D}_{\ell}^{-}\). For example, let \(|-\ell\rangle_{\ell}\) be the highest-weight state of \({\cal D}_{\ell}^{-}\); if it exists, it has to take the following form \[|-\ell\rangle_{\ell}=\sum_{n\geq p}b_{n}\,|-\ell-n,n). \tag{3.45}\] The condition \(L_{+}|-\ell\rangle_{\ell}=0\) leads to a recurrence relation \[(\Delta-n-\ell)b_{n}+(p+n-1)b_{n-1}=0,\ \ n\geq p+1 \tag{3.46}\] and an initial condition \(b_{p}=0\), which further imply that all \(b_{n}\) are actually vanishing. Therefore, \(\mathcal{D}_{\ell}^{-}\) does not belong to \(\mathcal{F}_{\Delta}\otimes\mathcal{D}_{p}^{+}\). Similarly, one can show that \(\mathcal{F}_{\Delta}\otimes\mathcal{D}_{p}^{-}\) contains exactly one copy of each highest-weight discrete series representation and does not contain any lowest-weight ones.
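As a quick sanity check of the claimed solution of the recurrence relation (3.43), one can verify numerically that \(a_{n}=\Gamma(n+\Delta-k)/\Gamma(n+1-p)\) makes \((n-p)a_{n}+(k-n+\bar{\Delta})a_{n-1}\) vanish. A minimal sketch (the parameter values below are illustrative and not taken from the text), using \(\bar{\Delta}=1-\Delta\):

```python
from scipy.special import gamma

def a(n, Delta, k, p):
    # claimed solution of the recurrence (3.43), up to an overall constant
    return gamma(n + Delta - k) / gamma(n + 1 - p)

# illustrative values: complementary-series Delta in (0, 1), integers k >= 1, p >= 1
Delta, k, p = 0.7, 2, 1
for n in range(p + 1, p + 6):
    residual = (n - p) * a(n, Delta, k, p) + (k - n + (1 - Delta)) * a(n - 1, Delta, k, p)
    assert abs(residual) < 1e-10 * abs(a(n, Delta, k, p))
```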
Altogether, the discrete part of \(\mathcal{F}_{\Delta}\otimes\mathcal{D}_{p}\) is the same as that of \(\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\), namely \[\mathcal{F}_{\Delta}\otimes\mathcal{D}_{p}\supseteq\bigoplus_{k\geq 1}\left(\mathcal{D}_{k}^{+}\oplus\mathcal{D}_{k}^{-}\right). \tag{3.47}\] #### 3.3.2 The continuous part Character analysis. Using the tensor product of two principal series representations \(\mathcal{P}_{\Delta_{3}}\otimes\mathcal{P}_{\Delta_{4}}\) to regularize \(\mathcal{F}_{\Delta}\otimes\mathcal{D}_{p}\), the discrete series parts cancel out, and the relative density of principal series should satisfy \[\Theta_{\Delta}(q)\Theta_{p}(q)-\Theta_{\Delta_{3}}(q)\Theta_{\Delta_{4}}(q)=\int_{0}^{\infty}\,d\lambda\,\mathcal{K}_{\text{rel}}(\lambda)\Theta_{\Delta_{\lambda}}(q) \tag{3.48}\] where we have used the fact that such a tensor product does not contain any complementary series representations [24]. This is consistent with the explicit expression of \(\mathcal{K}_{\text{rel}}(\lambda)\): \[\mathcal{K}_{\text{rel}}(\lambda)= -\frac{1}{\pi}\sum_{\pm}\left(\psi\left(\Delta+p-\frac{1}{2}\pm i\lambda\right)+\psi\left(\bar{\Delta}+p-\frac{1}{2}\pm i\lambda\right)\right)\] \[+\frac{1}{2\pi}\sum_{\pm,\pm,\pm}\psi\left(\frac{1}{2}\pm i\mu_{3}\pm i\mu_{4}\pm i\lambda\right) \tag{3.49}\] because it does not admit any pole crossing along the lines of remark 5. Numerical check. To identify the continuous families of states in \(\mathcal{F}_{\Delta}\otimes\mathcal{D}_{p}^{\pm}\), it is sufficient to diagonalize the \(\text{SO}(1,2)\) Casimir in the \(L_{0}=0\) subspace \(\mathcal{H}_{0}\) of \(\mathcal{F}_{\Delta}\otimes\mathcal{D}_{p}^{\pm}\). For example, a basis of \(\mathcal{H}_{0}\) in \(\mathcal{F}_{\Delta}\otimes\mathcal{D}_{p}^{+}\) is \(|\psi_{n})=|-n,n),\ n\geq p\). The non-vanishing matrix elements of \(C^{\text{SO}(1,2)}\) with respect to this basis are found to be \[\mathcal{Q}_{n,n}=2n^{2}+\Delta\bar{\Delta}+p(1-p),\ \ \ \mathcal{Q}_{n+1,n}=(\mathcal{Q}_{n,n+1})^{*}=-\sqrt{(n+p)(n+1-p)}\,\gamma_{n} \tag{3.50}\] where \[\gamma_{n}=\begin{cases}n+\Delta,&\mathcal{F}_{\Delta}=\mathcal{P}_{\Delta}\\ \sqrt{(n+\Delta)(n+\bar{\Delta})},&\mathcal{F}_{\Delta}=\mathcal{C}_{\Delta}\end{cases} \tag{3.51}\] In \(\mathcal{F}_{\Delta}\otimes\mathcal{D}_{p}^{-}\), we get exactly the same matrix representation for \(C^{\text{SO}(1,2)}\). Similar to the discussion for the case of \(\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\), we are going to diagonalize the \(\mathcal{Q}\) matrix to derive the spectrum. Let us mention that, by a large \(n\) argument similar to the one that led to the recursion relation (3.35), with the complementary series replaced by the discrete series with \(\Delta=p\), one can show that \(\mathcal{F}_{\Delta}\otimes\mathcal{D}_{p}\) contains all the \(\mathrm{SO}(1,2)\) principal series: \(\mathcal{P}_{\frac{1}{2}+i\lambda}\) with \(\lambda\geq 0\). In fig. (3.5), we present the numerical result of diagonalizing the \(\mathcal{Q}\) matrix. In the \(L_{0}=0\) sector, we do not expect to see the discrete series, while the \(L_{0}\neq 0\) sectors do include them, following (3.47); see appendix B for the discussion of the \(L_{0}\neq 0\) sectors. As discussed in (3.49), in the \(\mathcal{C}_{\Delta}\otimes\mathcal{D}_{p}\) tensor product decomposition, no complementary series appears, in agreement with fig. (3.5).
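This diagonalization is straightforward to reproduce. The following sketch (illustrative parameter values, not those of the figures) builds the truncated matrix from eqs. (3.50)-(3.51) for \(\mathcal{C}_{\Delta}\otimes\mathcal{D}_{p}^{+}\), where all entries are real, and confirms that no eigenvalue drops below \(\frac{1}{4}\), i.e. that no complementary series appears:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def spectrum_C_times_D(mu, p, N):
    """Truncated SO(1,2) Casimir in the L0 = 0 sector of C_{1/2+mu} (x) D_p^+.

    Basis |psi_n) = |-n, n) with p <= n <= N; matrix elements from
    eqs. (3.50)-(3.51), where gamma_n is real for the complementary series.
    """
    Delta, Deltab = 0.5 + mu, 0.5 - mu
    n = np.arange(p, N + 1, dtype=float)
    diag = 2 * n**2 + Delta * Deltab + p * (1 - p)
    m = n[:-1]                                   # off-diagonal indexed by the lower n
    gamma = np.sqrt((m + Delta) * (m + Deltab))
    offdiag = -np.sqrt((m + p) * (m + 1 - p)) * gamma
    return eigh_tridiagonal(diag, offdiag, eigvals_only=True)

eigs = spectrum_C_times_D(mu=0.4, p=1, N=4000)
print(eigs[:4])          # lowest eigenvalues, all above 1/4
assert eigs.min() > 0.25  # no complementary series in C_Delta (x) D_p
```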
Analogously to section 3.2.2, one can define a density of states for \(\mathcal{F}_{\Delta}\otimes\mathcal{D}_{p}\) by diagonalizing the matrix \(\mathcal{Q}\) and parametrizing the eigenvalues \(q_{n}>\frac{1}{4}\) corresponding to principal series with \(\lambda_{n}=\sqrt{q_{n}-\frac{1}{4}}\). Strictly speaking, the spectral density derived from \(\mathcal{Q}\) in eq. (3.50) corresponds to \(\mathcal{F}_{\Delta}\otimes\mathcal{D}_{p}^{+}\) or \(\mathcal{F}_{\Delta}\otimes\mathcal{D}_{p}^{-}\) separately. Therefore, we define \[\bar{\rho}_{\mathcal{N}}^{\mathcal{F}_{\Delta}\otimes\mathcal{D}_{p}}(\lambda_{n})\equiv 2\times\frac{2}{\lambda_{n+1}-\lambda_{n-1}} \tag{3.52}\] where the factor of two is to add up the equal contributions of \(\mathcal{F}_{\Delta}\otimes\mathcal{D}_{p}^{+}\) and \(\mathcal{F}_{\Delta}\otimes\mathcal{D}_{p}^{-}\). In fig. (3.6), we present the results of \(\rho_{\mathrm{rel}}\) defined similarly to (3.41) as \[\rho_{\mathrm{rel}}\equiv\bar{\rho}_{\mathcal{N}}^{\mathcal{F}_{\Delta}\otimes\mathcal{D}_{p}}-\bar{\rho}_{\mathcal{N}}^{\mathcal{P}_{\frac{1}{2}+i\mu_{\mathrm{rel}}}\otimes\mathcal{P}_{\frac{1}{2}+i\mu_{\mathrm{rel}}}}. \tag{3.53}\] There is a perfect agreement between \(\rho_{\mathrm{rel}}\) at large \(\mathcal{N}\) and \(\mathcal{K}_{\mathrm{rel}}\) found in (3.49). ### \(\mathcal{D}_{p_{1}}\otimes\mathcal{D}_{p_{2}}\) The tensor product \(\mathcal{D}_{p_{1}}\otimes\mathcal{D}_{p_{2}}\) consists of four parts \[\mathcal{D}_{p_{1}}\otimes\mathcal{D}_{p_{2}}=\left(\mathcal{D}_{p_{1}}^{+}\otimes\mathcal{D}_{p_{2}}^{+}\right)\oplus\left(\mathcal{D}_{p_{1}}^{-}\otimes\mathcal{D}_{p_{2}}^{-}\right)\oplus\left(\mathcal{D}_{p_{1}}^{+}\otimes\mathcal{D}_{p_{2}}^{-}\right)\oplus\left(\mathcal{D}_{p_{1}}^{-}\otimes\mathcal{D}_{p_{2}}^{+}\right) \tag{3.54}\] where \({\cal D}^{+}_{p_{1}}\otimes{\cal D}^{+}_{p_{2}}\) and \({\cal D}^{-}_{p_{1}}\otimes{\cal D}^{-}_{p_{2}}\) are trivial in the sense that they do not have a continuous part. Taking \({\cal D}^{+}_{p_{1}}\otimes{\cal D}^{+}_{p_{2}}\) as an example, the reason is that in the full Hilbert space, \(L_{0}\) has a positive minimal eigenvalue \(p_{1}+p_{2}\). In addition, it is an easy task to write down the full decomposition \[{\cal D}^{+}_{p_{1}}\otimes{\cal D}^{+}_{p_{2}}=\bigoplus_{k\geq p_{1}+p_{2}}{\cal D}^{+}_{k},\ \ \ {\cal D}^{-}_{p_{1}}\otimes{\cal D}^{-}_{p_{2}}=\bigoplus_{k\geq p_{1}+p_{2}}{\cal D}^{-}_{k} \tag{3.55}\] Therefore we will focus on the tensor product of a lowest-weight discrete series representation and a highest-weight discrete series representation, e.g. \({\cal D}^{+}_{p_{1}}\otimes{\cal D}^{-}_{p_{2}}\), which certainly contains the continuous families of states (because there exist \(L_{0}=0\) states). #### 3.4.1 The discrete part The existence of a discrete series representation \({\cal D}^{+}_{k}\) in the tensor product \({\cal D}^{+}_{p_{1}}\otimes{\cal D}^{-}_{p_{2}}\) corresponds to a normalizable \(L_{0}=k\) state annihilated by \(L_{-}\). Assuming \(p_{1}\geq p_{2}\), the most general form of such a state is \[|k)_{k}=\sum_{n\geq p_{2}}a_{n}|n+k,-n) \tag{3.56}\] where the coefficients \(a_{n}\) should be zero if \(n+k\leq p_{1}-1\). The lowest-weight condition \(L_{-}|k)_{k}=0\) yields \[0 =a_{p_{2}}(k+p_{2}-p_{1})|k+p_{2}-1,-p_{2})\] \[+\sum_{n\geq p_{2}}(a_{n+1}(k+n+1-p_{1})-a_{n}(n+p_{2}))|k+n,-n-1) \tag{3.57}\] which is equivalent to an initial condition and a recurrence relation \[a_{p_{2}}(k+p_{2}-p_{1})=0,\ \ \ a_{n+1}(k+n+1-p_{1})-a_{n}(n+p_{2})=0.
\tag{3.58}\] When \(k\geq p_{1}-p_{2}+1\), all \(a_{n}\) are identically vanishing. When \(k\leq p_{1}-p_{2}\), all \(a_{n}\) with \(p_{2}\leq n\leq p_{1}-k-1\) are zero and the rest \(a_{n}\) are given by \[a_{n}=c\frac{\Gamma(n+p_{2})}{\Gamma(k+n+1-p_{1})},\ \ \ n\geq p_{1}-k \tag{3.59}\] where \(c\) is an arbitrary normalization constant. The norm of \(|k)_{k}\) becomes \[{}_{k}(k|k)_{k}=|c|^{2}\sum_{n\geq p_{1}-k}\frac{\Gamma(n+p_{2})\Gamma(n+1-p_{2})}{\Gamma(n+k+1-p_{1})\Gamma(n+k+p_{1})}\sim|c|^{2}\sum_{n}n^{-2k} \tag{3.60}\] which is convergent for \(k\in\mathbb{Z}_{+}\). Therefore \(\{\mathcal{D}_{1}^{+},\cdots,\mathcal{D}_{p_{1}-p_{2}}^{+}\}\) belong to \(\mathcal{D}_{p_{1}}^{+}\otimes\mathcal{D}_{p_{2}}^{-}\) when \(p_{1}\geq p_{2}\). For \(\mathcal{D}_{k}^{-}\), the highest-weight state \(|-k)_{k}\) should take the following form \[|-k)_{k}=\sum_{n\geq p_{1}}b_{n}|n,-n-k). \tag{3.61}\] The highest-weight condition \(L_{+}|-k)_{k}=0\) yields \[b_{n+1}=\frac{n+p_{1}}{n+k+1-p_{2}}b_{n},\ \ \ b_{p_{1}}=0. \tag{3.62}\] Since \(p_{1}\geq p_{2}\), the only solution of eq. (3.62) is \(b_{n}\equiv 0\) for any \(n\geq p_{1}\), and hence there does not exist any \(\mathcal{D}_{k}^{-}\). Similarly, we can show that when \(p_{1}\leq p_{2}\), the discrete part of \(\mathcal{D}_{p_{1}}^{+}\otimes\mathcal{D}_{p_{2}}^{-}\) only consists of highest-weight discrete series representations \(\{\mathcal{D}_{1}^{-},\cdots,\mathcal{D}_{p_{2}-p_{1}}^{-}\}\). Altogether, the discrete part of \(\mathcal{D}_{p_{1}}^{+}\otimes\mathcal{D}_{p_{2}}^{-}\) for any \(p_{1}\) and \(p_{2}\) can be summarized as \[\mathcal{D}_{p_{1}}^{+}\otimes\mathcal{D}_{p_{2}}^{-}\supseteq\begin{cases}\bigoplus_{k=1}^{p_{1}-p_{2}}\mathcal{D}_{k}^{+},&p_{1}>p_{2}\\ \varnothing,&p_{1}=p_{2}\\ \bigoplus_{k=1}^{p_{2}-p_{1}}\mathcal{D}_{k}^{-},&p_{1}<p_{2}\end{cases} \tag{3.63}\] which further implies that \[\left(\mathcal{D}_{p_{1}}^{+}\otimes\mathcal{D}_{p_{2}}^{-}\right)\oplus\left(\mathcal{D}_{p_{1}}^{-}\otimes\mathcal{D}_{p_{2}}^{+}\right)\supseteq\bigoplus_{1\leq k\leq|p_{1}-p_{2}|}\left(\mathcal{D}_{k}^{+}\oplus\mathcal{D}_{k}^{-}\right) \tag{3.64}\] and \[\mathcal{D}_{p_{1}}\otimes\mathcal{D}_{p_{2}}\supseteq\bigoplus_{1\leq k\leq|p_{1}-p_{2}|\ \text{or}\ k\geq p_{1}+p_{2}}\mathcal{D}_{k}. \tag{3.65}\] #### 3.4.2 The continuous part Character analysis. We use \(\mathcal{P}_{\Delta_{3}}\otimes\mathcal{P}_{\Delta_{4}}\) to regularize the tensor product \(\mathcal{D}_{p_{1}}\otimes\mathcal{D}_{p_{2}}\). According to eq. (3.65), the discrete series parts of \(\mathcal{D}_{p_{1}}\otimes\mathcal{D}_{p_{2}}\) and \(\mathcal{P}_{\Delta_{3}}\otimes\mathcal{P}_{\Delta_{4}}\) have a finite mismatch. Taking into account this mismatch, the relative density of principal series should satisfy \[\Theta_{p_{1}}(q)\Theta_{p_{2}}(q)-\Theta_{\Delta_{3}}(q)\Theta_{\Delta_{4}}(q)=\int_{0}^{\infty}\,d\lambda\,\mathcal{K}_{\rm rel}(\lambda)\Theta_{\Delta_{\lambda}}(q)-\sum_{|p_{1}-p_{2}|<k<p_{1}+p_{2}}\Theta_{k}(q) \tag{3.66}\] where \(\Theta_{k}(q)=\frac{2\,q^{k}}{1-q}\) is the character of \(\mathcal{D}_{k}^{+}\oplus\mathcal{D}_{k}^{-}\). Solving this equation yields \[\mathcal{K}_{\rm rel}(\lambda) =\frac{1}{2\pi}\sum_{\pm,\pm,\pm}\psi\left(\frac{1}{2}\pm i\mu_{3}\pm i\mu_{4}\pm i\lambda\right)-\frac{2}{\pi}\sum_{\pm}\psi\left(p_{1}+p_{2}-\frac{1}{2}\pm i\lambda\right)\] \[+\sum_{|p_{1}-p_{2}|<k<p_{1}+p_{2}}\frac{2}{\pi}\frac{k-\frac{1}{2}}{\left(k-\frac{1}{2}\right)^{2}+\lambda^{2}}.
\tag{3.67}\] Since \(p_{1}+p_{2}\) is always larger than \(\frac{1}{2}\), there cannot be pole crossing in \(\mathcal{K}_{\rm rel}(\lambda)\) and hence there is no complementary series in \(\mathcal{D}_{p_{1}}\otimes\mathcal{D}_{p_{2}}\). Numerical check. For \(\mathcal{D}_{p_{1}}^{+}\otimes\mathcal{D}_{p_{2}}^{-}\) and \(\mathcal{D}_{p_{2}}^{+}\otimes\mathcal{D}_{p_{1}}^{-}\), the matrix elements of \(C^{\rm SO(1,2)}\) in the \(L_{0}=0\) subspace take the same form \[\mathcal{Q}_{nn}=2n^{2}+p_{1}(1-p_{1})+p_{2}(1-p_{2})\] \[\mathcal{Q}_{n+1,n}=\mathcal{Q}_{n,n+1}=-\sqrt{(n+p_{1})(n+p_{2})(n+1-p_{1})(n+1-p_{2})} \tag{3.68}\] where \(n\geq\max(p_{1},p_{2})\). The \(L_{0}=0\) sector does not have access to the discrete series, but, similarly to sections 3.2.2 and 3.3.2, one can use eq. (3.68) to see that all the principal series representations appear in the tensor product decomposition. By introducing a cut-off \(n\leq\mathcal{N}\), we truncate the infinite dimensional matrix \(\mathcal{Q}\), which leads to a coarse-grained density \(\bar{\rho}_{\mathcal{N}}^{\mathcal{D}_{p_{1}}\otimes\mathcal{D}_{p_{2}}}\) of principal series. In fig. (3.7), similar to previous sections, we find a perfect agreement between the spectral kernel from character analysis in (3.67) and the adapted version of (3.41): \[\rho_{\mathrm{rel}}\equiv\bar{\rho}_{\mathcal{N}}^{\mathcal{D}_{p_{1}}\otimes\mathcal{D}_{p_{2}}}-\bar{\rho}_{\mathcal{N}}^{\mathcal{P}_{\frac{1}{2}+i\mu_{\mathrm{rel}}}\otimes\mathcal{P}_{\frac{1}{2}+i\mu_{\mathrm{rel}}}}. \tag{3.69}\] ### Identical particles #### 3.5.1 Two-particle Hilbert space When \(\Delta_{1}=\Delta_{2}=\Delta\), the symmetrized tensor product \(\mathcal{F}_{\Delta}\odot\mathcal{F}_{\Delta}\subset\mathcal{F}_{\Delta}\otimes\mathcal{F}_{\Delta}\) describes the two-particle Hilbert space of a scalar field with mass \(m=\sqrt{\Delta(1-\Delta)}\). Since \(\mathcal{F}_{\Delta}\odot\mathcal{F}_{\Delta}\) is the permutation invariant subsector of \(\mathcal{F}_{\Delta}\otimes\mathcal{F}_{\Delta}\), the \(\mathrm{SO}(1,2)\) invariant subspaces in \(\mathcal{F}_{\Delta}\odot\mathcal{F}_{\Delta}\) can be obtained by imposing permutation symmetry on the decomposition of \(\mathcal{F}_{\Delta}\otimes\mathcal{F}_{\Delta}\). For the discrete part of \(\mathcal{F}_{\Delta}\otimes\mathcal{F}_{\Delta}\), as shown in subsection 3.2, each \(\mathcal{D}_{k}^{+}\) is generated by a lowest-weight state \(|k\rangle_{k}\) of the form \[|k\rangle_{k}=\sum_{n}a_{n}|n,k-n\rangle,\ \ \ a_{n}=c\frac{\Gamma(n+\Delta-k)}{\Gamma(n+\bar{\Delta})}. \tag{3.70}\] Noticing \(a_{k-n}=(-)^{k}a_{n}\), we find that \(|k\rangle_{k}\) and the whole representation \(\mathcal{D}_{k}^{+}\) have parity \((-)^{k}\) under permutation. Therefore only \(\mathcal{D}_{k}^{+}\) with even \(k\) can exist in \(\mathcal{F}_{\Delta}\odot\mathcal{F}_{\Delta}\). The same conclusion also holds for the highest-weight discrete series. Altogether, the discrete part of \(\mathcal{F}_{\Delta}\odot\mathcal{F}_{\Delta}\) consists of \(\mathcal{D}_{2}^{\pm},\mathcal{D}_{4}^{\pm},\mathcal{D}_{6}^{\pm},\ldots\) For the continuous part of \(\mathcal{F}_{\Delta}\otimes\mathcal{F}_{\Delta}\), the \(\mathbb{Z}_{2}\) action \(\mathcal{T}\) that maps \(|\psi_{n}\rangle\) to \(|\psi_{-n}\rangle\) coincides with the permutation operator since \(|\psi_{n}\rangle=|n,-n\rangle\) by definition.
Then the eigenspaces \(\mathcal{H}_{\pm}\subset\mathcal{H}_{0}\) of \(\mathcal{T}\) have eigenvalue \(\pm 1\) under permutation respectively. Therefore, the principal series and complementary series contained in \(\mathcal{F}_{\Delta}\odot\mathcal{F}_{\Delta}\) correspond to the spectrum of the \(\mathrm{SO}(1,2)\) Casimir restricted to the permutation invariant subspace \(\mathcal{H}_{+}\). To understand the continuous part from the character side, we need the following well-established fact in the representation theory of compact Lie groups. Let \(R\) be an irrep of \(G\); then the character of \(R\odot R\) is given by16 \[\chi_{R\odot R}(g)=\frac{1}{2}\left(\chi_{R}(g)^{2}+\chi_{R}(g^{2})\right). \tag{3.72}\] Footnote 16: It is actually very easy to prove this relation. Let \(|n\rangle\) be an orthonormal basis of \(R\). One can easily check that \(\Pi_{+}=\frac{1}{2}\sum_{n,m}\left(|n,m\rangle\langle n,m|+|n,m\rangle\langle m,n|\right)\) is a projection operator onto the permutation invariant subspace of \(R\otimes R\). Therefore evaluating the character \(\chi_{R\odot R}(g)\) is equivalent to computing the trace of \(g\,\Pi_{+}\) over \(R\otimes R\), which yields eq. (3.72). For \(G={\rm SO}(1,2)\) and \(R={\cal F}_{\Delta}\), eq. (3.72) means \[\Theta_{{\cal F}_{\Delta}\odot{\cal F}_{\Delta}}(q)=\frac{1}{2}\frac{q^{2\Delta}+q^{2\bar{\Delta}}+2q}{(1-q)^{2}}+\frac{1}{2}\frac{q^{2\Delta}+q^{2\bar{\Delta}}}{1-q^{2}}. \tag{3.73}\] For \(\Delta_{k}=\frac{1}{2}+i\mu_{k},k=1,2\), let \({\cal K}_{\rm rel}(\lambda)\) be the relative density of principal series between \({\cal P}_{\Delta_{1}}\odot{\cal P}_{\Delta_{1}}\) and \({\cal P}_{\Delta_{2}}\odot{\cal P}_{\Delta_{2}}\); then it should satisfy \[\int_{0}^{\infty}d\lambda\,{\cal K}_{\rm rel}(\lambda)\Theta_{\frac{1}{2}+i\lambda}(q)=\frac{1}{2}\left(\frac{q^{2\Delta_{1}}+q^{2\bar{\Delta}_{1}}-q^{2\Delta_{2}}-q^{2\bar{\Delta}_{2}}}{(1-q)^{2}}+\frac{q^{2\Delta_{1}}+q^{2\bar{\Delta}_{1}}-q^{2\Delta_{2}}-q^{2\bar{\Delta}_{2}}}{1-q^{2}}\right) \tag{3.74}\] which yields \[{\cal K}_{\rm rel}(\lambda) =\frac{1}{4\pi}\sum_{\pm\pm}\left(\psi\left(\frac{1}{2}\pm 2i\mu_{2}\pm i\lambda\right)-\psi\left(\frac{1}{2}\pm 2i\mu_{1}\pm i\lambda\right)\right)\] \[+\frac{1}{4}\sum_{\pm}\left(\frac{1}{\cosh(\pi(\lambda\pm 2\mu_{1}))}-\frac{1}{\cosh(\pi(\lambda\pm 2\mu_{2}))}\right). \tag{3.75}\] As explained in [43], the single-particle states of a free massive scalar field in dS\({}_{2}\) can be identified with the principal series representation \({\cal P}_{\Delta}\) if the mass squared \(m^{2}=\Delta(1-\Delta)>\frac{1}{4}\). The states \(|n\rangle\), introduced in section 3.1, form an eigenbasis of \(L_{0}\), which is the SO(2) generator of global dS\({}_{2}\). The discussion above implies that the two-particle Hilbert space of the same scalar field contains the discrete series representations \({\cal D}_{2}^{\pm},{\cal D}_{4}^{\pm},{\cal D}_{6}^{\pm},\cdots\). For example, \({\cal D}_{k}^{+}\) contains the normalizable lowest-weight state \(|k\rangle_{k}\), given in (3.70), that is annihilated by the generator \(L_{-}\). One can think of such UIRs as purely right moving in dS\({}_{2}\). Similar to section 3.2.2, for a numerical check of the analysis above, we need to construct the matrix representation of the Casimir operator restricted to the \(L_{0}=0\) sector of \({\cal F}_{\Delta}\odot{\cal F}_{\Delta}\). However, it does not require any extra work since the \(L_{0}=0\) sector of the symmetrized tensor \({\cal F}_{\Delta}\odot{\cal F}_{\Delta}\) is nothing but the subspace \({\cal H}_{+}\) of the usual tensor product \({\cal F}_{\Delta}\otimes{\cal F}_{\Delta}\).
So the Casimir is given by \({\cal Q}^{(+)}\) of \({\cal F}_{\Delta}\otimes{\cal F}_{\Delta}\), c.f. eq. (3.33). This simple observation has two immediate applications. First, \({\cal F}_{\Delta}\odot{\cal F}_{\Delta}\) and \({\cal F}_{\Delta}\otimes{\cal F}_{\Delta}\) contain the same complementary series representation, if it exists, because we have checked numerically that \({\cal Q}^{(-)}>\frac{1}{4}\). Second, the coarse-grained density of principal series in (truncated) \({\cal F}_{\Delta}\odot{\cal F}_{\Delta}\) is the same as \(\bar{\rho}_{\cal N}^{+}(\lambda)\) for (truncated) \({\cal F}_{\Delta}\otimes{\cal F}_{\Delta}\), c.f. eq. (3.40). With the coarse-grained density known, we further define a relative density \(\rho_{\rm rel}\) between \({\cal F}_{\Delta_{1}}\odot{\cal F}_{\Delta_{1}}\) and \({\cal F}_{\Delta_{2}}\odot{\cal F}_{\Delta_{2}}\). A comparison of \(\rho_{\rm rel}\) and \({\cal K}_{\rm rel}\) (c.f. eq. (3.75)) is shown in fig. (3.8). #### 3.5.2 Counting complementary series in multi-particle Hilbert space In section 3.2, we have shown that the tensor product \(\mathcal{C}_{\frac{1}{2}+\mu_{1}}\otimes\mathcal{C}_{\frac{1}{2}+\mu_{2}}\) contains exactly one complementary series representation of scaling dimension \(\mu_{1}+\mu_{2}\) when \(\mu_{1}+\mu_{2}>\frac{1}{2}\). This result also holds for the symmetrized tensor product in the sense that \(\mathcal{C}_{2\mu}\in\mathcal{C}_{\frac{1}{2}+\mu}\odot\mathcal{C}_{\frac{1}{2}+\mu}\) when \(2\mu>\frac{1}{2}\). In the QFT language, it means that given a light scalar17 \(\phi\) of scaling dimension \(\Delta=\frac{1}{2}+\mu\) or equivalently mass \(m^{2}=\frac{1}{4}-\mu^{2}\), its two-particle Hilbert space contains an invariant subspace corresponding to a light scalar of mass \(m^{2}=2\mu(1-2\mu)\) when \(\mu>\frac{1}{4}\). It is then natural to ask if there is any light scalar in any \(M\)-particle Hilbert space of \(\phi\), and if so, what the corresponding mass is. Group theoretically, the \(M\)-particle Hilbert space is described by the symmetrized tensor product \(\mathcal{C}_{\Delta}^{\odot M}\), whose character \(\Theta_{\mathcal{C}_{\Delta}^{\odot M}}(q)\) can be extracted from the following generating function Footnote 17: We use the word “light” for a field whose single-particle Hilbert space is described by a complementary series representation. \[\mathbb{P}(q,t)\equiv\exp\left(\sum_{k\geq 1}\Theta_{\mathcal{C}_{\Delta}}(q^{k})\,\frac{t^{k}}{k}\right)=\sum_{M\geq 0}\Theta_{\mathcal{C}_{\Delta}^{\odot M}}(q)\,t^{M}. \tag{3.76}\] In particular, it is easy to check that for \(M=2\) we recover eq. (3.73). As discussed in section 3.2.2, the first several terms in the small \(q\) expansion of \(\Theta_{\mathcal{C}_{\Delta}^{\odot M}}(q)\) contain all the information regarding invariant subspaces of \(\mathcal{C}_{\Delta}^{\odot M}\) that correspond to complementary series. The generating function \(\mathbb{P}(q,t)\) provides a simple way to derive such expansions as follows. In the exponent of \(\mathbb{P}(q,t)\), for each fixed \(k\), we expand \(\Theta_{\mathcal{C}_{\Delta}}(q^{k})\) as a series \((q^{k\Delta}+q^{k\bar{\Delta}})\sum_{n\geq 0}q^{nk}\).
Switching the order of the two infinite sums, the sum over \(k\) yields two logarithmic functions, which allows us to rewrite \(\mathbb{P}(q,t)\) as \[\mathbb{P}(q,t)=\prod_{n\geq 0}\frac{1}{1-t\,q^{\Delta+n}}\frac{1}{1-t\,q^{\bar{\Delta}+n}}=\sum_{\mathbf{k},\mathbf{\ell}}\,q^{\alpha(\mathbf{k},\mathbf{\ell})}t^{\sum_{n}(k_{n}+\ell_{n})} \tag{3.77}\] where \(\mu=\Delta-\frac{1}{2}\) and \[\alpha(\mathbf{k},\mathbf{\ell})=\sum_{n\geq 0}\left(n+\frac{1}{2}\right)(k_{n}+\ell_{n})+\mu\sum_{n\geq 0}(k_{n}-\ell_{n}) \tag{3.78}\] Fixing the particle number \(\sum_{n}(k_{n}+\ell_{n})=M\), we have \[\alpha(\mathbf{k},\mathbf{\ell})=\Delta M+\sum_{n\geq 0}n(k_{n}+\ell_{n})-2\mu\sum_{n\geq 0}\ell_{n} \tag{3.79}\] Noticing that \(\sum_{n\geq 0}n(k_{n}+\ell_{n})\geq 0\) and \(\sum_{n\geq 0}\ell_{n}\leq\sum_{n\geq 0}(k_{n}+\ell_{n})=M\), where the two "\(=\)" hold if and only if \(\mathbf{k}=0\) and \(\mathbf{\ell}=(M,0,0,\cdots)\), we find the minimal value of \(\alpha(\mathbf{k},\mathbf{\ell})\) to be \(\bar{\Delta}M\). To obtain the second minimal value of \(\alpha(\mathbf{k},\mathbf{\ell})\), which corresponds to \(\ell_{0}\leq M-1\), we rewrite it as \[\alpha(\mathbf{k},\mathbf{\ell})=\Delta M-2\mu\ell_{0}+\sum_{n\geq 1}(n(k_{n}+\ell_{n})-2\mu\ell_{n})\geq\Delta M-2\mu\ell_{0}+\sum_{n\geq 1}nk_{n} \tag{3.80}\] where we have used \(2\mu<1\leq n\) in the sum over \(n\). Then it is easy to see \(\alpha(\mathbf{k},\mathbf{\ell})\geq M\bar{\Delta}+2\mu\) when \(\ell_{0}\leq M-1\). The equality holds for \(\mathbf{k}=(1,0,0,\cdots)\) and \(\mathbf{\ell}=(M-1,0,0,\cdots)\). Altogether, we obtain the leading and subleading terms in the small \(q\) expansion of \(\Theta_{\mathcal{C}_{\Delta}^{\odot M}}(q)\) \[\Theta_{\mathcal{C}_{\Delta}^{\odot M}}(q)=q^{M\bar{\Delta}}+q^{M\bar{\Delta}+2\mu}+\text{higher order terms in }q \tag{3.81}\] When \(M\bar{\Delta}<\frac{1}{2}\), which corresponds to \(\mu>\frac{M-1}{2M}\), there has to be one copy of \(\mathcal{C}_{1-M\bar{\Delta}}\) in the symmetrized tensor product \(\mathcal{C}_{\Delta}^{\odot M}\) to cancel \(q^{M\bar{\Delta}}\) since the latter cannot be reproduced by an integral over principal series characters. On the other hand, because of \(M\bar{\Delta}+2\mu=(M-2)\bar{\Delta}+1\geq 1\) for \(M\geq 2\), there should not be any extra complementary series representations. It is worth mentioning that the same results also hold for the tensor product \(\mathcal{C}_{\Delta}^{\otimes M}\), which can be easily checked by using the rule of \(\mathcal{C}_{\Delta_{1}}\otimes\mathcal{C}_{\Delta_{2}}\). A simple reason for the coincidence from the character viewpoint is that the first two terms in the small \(q\) expansion of \(\Theta_{\mathcal{C}_{\Delta}^{\otimes M}}(q)\) are \(q^{M\bar{\Delta}}+Mq^{M\bar{\Delta}+2\mu}\). Altogether, in the multi-particle Hilbert space of \(\phi\), there exist \(n_{\text{max}}\equiv\left\lfloor\frac{1}{1-2\mu}\right\rfloor\) complementary series representations, with scaling dimensions \(1-\bar{\Delta},1-2\bar{\Delta},\cdots,1-n_{\text{max}}\bar{\Delta}\). ## 4 Tensor products in \(\text{SO}(1,d+1)\) In this section, we generalize the character and numerical methods developed in section 3 to study tensor product decompositions of the higher dimensional \(\text{SO}(1,d+1)\), focusing on principal and complementary series of spin \(0\).
Let us first consider the tensor product of two scalar principal series representations \(\mathcal{P}_{\Delta_{1}}\otimes\mathcal{P}_{\Delta_{2}}\) with \(\Delta_{k}=\frac{d}{2}+i\mu_{k}\). Each \(\mathcal{P}_{\Delta_{k}}\) consists of all the single-row representations of \(\mathrm{SO}(d+1)\). Using the following decomposition rule of orthogonal groups \[\mathbb{Y}_{n}\otimes\mathbb{Y}_{\ell}=\bigoplus_{a=0}^{\min(n,\ell)}\bigoplus_{b=0}^{\min(n,\ell)-a}\mathbb{Y}_{n+\ell-2a-b,b}\, \tag{4.1}\] we can immediately conclude that the \(\mathrm{SO}(d+1)\) content of \(\mathcal{P}_{\Delta_{1}}\otimes\mathcal{P}_{\Delta_{2}}\) has at most two rows in the Young diagram language. This condition excludes a large class of \(\mathrm{SO}(1,d+1)\) UIRs, including discrete series when \(d\geq 5\), because discrete series representations should contain \(\mathrm{SO}(d+1)\) content that has \(\frac{d+1}{2}\geq 3\) rows. Thus, \(\mathcal{P}_{\Delta_{1}}\otimes\mathcal{P}_{\Delta_{2}}\) only contains the four types of UIRs which are reviewed in section 2. Among these UIRs, the exceptional series should be thought of as the analogue of discrete series in the \(d=1\) case, since they can be characterized by certain linear relations (in terms of \(\mathfrak{so}(1,d+1)\) generators)18 while the two continuous series can only be identified by the quadratic operator \(C^{\mathrm{SO}(1,d+1)}\). In appendix E, we show that the exceptional series cannot be present in this tensor product by directly checking the linear relations. Further assuming the absence of complementary series, which is actually proved by [28] and will also be confirmed numerically later, the decomposition of \(\mathcal{P}_{\Delta_{1}}\otimes\mathcal{P}_{\Delta_{2}}\) can be written _schematically_ as Footnote 18: When \(d=1\), these linear relations simply refer to the highest-weight or lowest-weight conditions, and for higher \(d\), they are discussed in appendix E. \[\mathcal{P}_{\Delta_{1}}\otimes\mathcal{P}_{\Delta_{2}}=\sum_{s\geq 0}\int_{0}^{\infty}d\lambda\,\mathcal{K}^{(s)}(\lambda)\,\mathcal{P}_{\Delta_{\lambda},s},\ \ \Delta_{\lambda}=\frac{d}{2}+i\lambda \tag{4.2}\] where \(\mathcal{K}^{(s)}(\lambda)\) is _formally_ the density of spin \(s\) principal series. Proceeding with eq. (4.2) and replacing each UIR by the corresponding Harish-Chandra character, we would end up with a divergent \(\mathcal{K}^{(s)}(\lambda)\). Following the strategy used in section 3, we regularize \(\mathcal{K}^{(s)}(\lambda)\) by introducing two more principal series representations \(\mathcal{P}_{\Delta_{3}}\) and \(\mathcal{P}_{\Delta_{4}}\) as a reference, which then yields \[\Theta_{\Delta_{1}}(q,\mathbf{x})\Theta_{\Delta_{2}}(q,\mathbf{x})-\Theta_{\Delta_{3}}(q,\mathbf{x})\Theta_{\Delta_{4}}(q,\mathbf{x})=\sum_{s\geq 0}\int_{0}^{\infty}d\lambda\,\mathcal{K}^{(s)}_{\mathrm{rel}}(\lambda)\,\Theta_{\lambda,s}(q,\mathbf{x}) \tag{4.3}\] where \(\Theta_{\lambda,s}\) means the character of \(\mathcal{P}_{\Delta_{\lambda},s}\). The character equation (4.3) implies that \(\mathcal{K}^{(s)}_{\mathrm{rel}}(\lambda)\) should be understood as the relative density of spin \(s\) principal series between the two tensor products \(\mathcal{P}_{\Delta_{1}}\otimes\mathcal{P}_{\Delta_{2}}\) and \(\mathcal{P}_{\Delta_{3}}\otimes\mathcal{P}_{\Delta_{4}}\). ### The relative density By extending \(\mathcal{K}^{(s)}_{\mathrm{rel}}(\lambda)\) to an even function of \(\lambda\) on the real line and using the explicit expression of Harish-Chandra characters, c.f. eq.
(2.14), we rewrite (4.3) as \[\sum_{s\geq 0}\chi^{\mathrm{SO}(d)}_{\mathbb{Y}_{s}}(\mathbf{x})\int_{\mathbb{R}}d\lambda\,\mathcal{K}^{(s)}_{\mathrm{rel}}(\lambda)e^{i\lambda t}=\frac{4\,e^{-\frac{d}{2}|t|}}{P_{d}(e^{-|t|},\mathbf{x})}\,(\cos(\mu_{1}t)\cos(\mu_{2}t)-\cos(\mu_{3}t)\cos(\mu_{4}t)) \tag{4.4}\] where we have made the substitution \(q=e^{-|t|}\). To obtain an integral equation for each \(\mathcal{K}^{(s)}_{\text{rel}}(\lambda)\), we need to expand the R.H.S of (4.4) into an infinite sum of \(\chi^{\text{SO}(d)}_{\mathbb{Y}_{s}}(\mathbf{x})\). Such an expansion exists thanks to the following relation \[\sum_{s=0}^{\infty}\,\chi^{\text{SO}(d)}_{\mathbb{Y}_{s}}(\mathbf{x})\,q^{s}=\frac{1-q^{2}}{P_{d}(q,\mathbf{x})} \tag{4.5}\] which is proved in appendix C for any dimension \(d\geq 3\). Plugging eq. (4.5) into eq. (4.4) yields \[\int_{\mathbb{R}}d\lambda\,\mathcal{K}^{(s)}_{\text{rel}}(\lambda)e^{i\lambda t}=2\,e^{-(\frac{d}{2}+s-1)|t|}\,\frac{\cos(\mu_{1}t)\cos(\mu_{2}t)-\cos(\mu_{3}t)\cos(\mu_{4}t)}{|\sinh(t)|}. \tag{4.6}\] Then \(\mathcal{K}^{(s)}_{\text{rel}}(\lambda)\) can be computed as an inverse Fourier transform using eq. (A.3) \[\mathcal{K}^{(s)}_{\text{rel}}(\lambda)=\frac{1}{4\pi}\sum_{\pm,\pm,\pm}\left[\psi\left(\frac{\frac{d}{2}+s\pm i\lambda\pm i\mu_{3}\pm i\mu_{4}}{2}\right)-\psi\left(\frac{\frac{d}{2}+s\pm i\lambda\pm i\mu_{1}\pm i\mu_{2}}{2}\right)\right] \tag{4.7}\] where \(\sum_{\pm,\pm,\pm}\) means summing over the 8 sign combinations. Altogether, there are 16 \(\psi\)-functions in \(\mathcal{K}^{(s)}_{\text{rel}}(\lambda)\). ### Truncated model of the relative density To ascertain that it makes sense to identify \(\mathcal{K}^{(s)}_{\text{rel}}(\lambda)\) as a relative density, we would like to compare it to a model with a discretized spectrum of eigenvalues. In particular, we will focus on the \(s=0\) case, which corresponds to scalar principal series. As reviewed in section 2, an exclusive feature of any \(\mathcal{F}_{\Delta,\ell}\) with \(\ell=0\) is that it contains the trivial representation of \(\text{SO}(d+1)\). Therefore we should be able to extract all information about \(\mathcal{P}_{\Delta_{\lambda}}\) in \(\mathcal{P}_{\Delta_{1}}\otimes\mathcal{P}_{\Delta_{2}}\) by only studying the \(\text{SO}(d+1)\) singlet subspace of the latter, denoted by \(\mathcal{H}_{0}\). \(\text{SO}(d+1)\) singlets can only appear in the tensor product of two \(\mathbb{Y}_{n}\) subspaces, one in \(\mathcal{P}_{\Delta_{1}}\) and the other one in \(\mathcal{P}_{\Delta_{2}}\). This structure yields a natural way to define a truncated model \(\mathcal{H}^{(\mathcal{N})}_{0}\) of \(\mathcal{H}_{0}\) by simply cutting off the \(\text{SO}(d+1)\) spins in each \(\mathcal{P}_{\Delta_{i}}\), e.g. \(0\leq n\leq\mathcal{N}\). More explicitly, let \(\{|n,\Delta_{i}\rangle_{a_{1}\cdots a_{n}},a_{j}=1,\cdots,d+1\}\) be a basis of the \(\mathbb{Y}_{n}\) subspace in \(\mathcal{P}_{\Delta_{i}}\), where \((a_{1}\cdots a_{n})\) are symmetric and traceless. The normalization of \(|n,\Delta_{i}\rangle_{a_{1}\cdots a_{n}}\) is specified by eq. (D.4). We can then construct an orthonormal basis of the \((\mathcal{N}+1)\) dimensional Hilbert space \(\mathcal{H}^{(\mathcal{N})}_{0}\) as \[|\psi_{n}\rangle=\frac{1}{\sqrt{D_{n}^{d+1}}}|n,\Delta_{1}\rangle_{a_{1}\cdots a_{n}}|n,\Delta_{2}\rangle_{a_{1}\cdots a_{n}},\quad 0\leq n\leq\mathcal{N} \tag{4.8}\] where \(D_{n}^{d+1}=\dim_{\text{SO}(d+1)}(\mathbb{Y}_{n})\).
Defining \(\mathcal{Q}_{nm}\equiv\langle\psi_{n}|\mathcal{C}^{\text{SO}(1,d+1)}_{2}|\psi_{m}\rangle\), the nonvanishing entries of \(\mathcal{Q}\) are given by eq. (D.23) \[\mathcal{Q}_{nn}=\Delta_{1}\bar{\Delta}_{1}+\Delta_{2}\bar{\Delta}_{2}+2n(n+d-1)\] \[\mathcal{Q}_{n+1,n}=\mathcal{Q}^{*}_{n,n+1}=-\sqrt{\frac{(n+1)(n+d-1)}{(n+\frac{d-1}{2})(n+\frac{d+1}{2})}}(\Delta_{1}+n)(\Delta_{2}+n). \tag{4.9}\] For large \(\mathcal{N}\), each eigenvalue \(q_{n}\) of \(\mathcal{Q}\) that is larger than \(\frac{d^{2}}{4}\) corresponds to a scalar principal series representation with \(\Delta=\frac{d}{2}+i\lambda_{n}\), where \(\lambda_{n}=\sqrt{q_{n}-\frac{d^{2}}{4}}\). A coarse-grained density of scalar principal series representations in the truncated model defined by \(\mathcal{Q}\) is given by the inverse spacing of \(\{\lambda_{n}\}\) \[\bar{\rho}_{\mathcal{N}}(\lambda_{n})\equiv\frac{2}{\lambda_{n+1}-\lambda_{n-1}}. \tag{4.10}\] This coarse-grained density blows up when \(\mathcal{N}\to\infty\) because the matrix \(\mathcal{Q}\) has a continuous spectrum in \(\left[\frac{d^{2}}{4},\infty\right)\) in the large \(\mathcal{N}\) limit. Let \(a_{n}(\lambda)\) be an eigenvector of \(\mathcal{Q}\) with eigenvalue \(\frac{d^{2}}{4}+\lambda^{2},\lambda\geq 0\), i.e. \[\mathcal{Q}_{n,n-1}a_{n-1}(\lambda)+\mathcal{Q}_{n,n}a_{n}(\lambda)+\mathcal{Q}_{n,n+1}a_{n+1}(\lambda)=\left(\frac{d^{2}}{4}+\lambda^{2}\right)a_{n}(\lambda). \tag{4.11}\] This recursion relation leads to the following asymptotic behavior of \(a_{n}(\lambda)\) \[a_{n}(\lambda)=R_{+}\frac{n^{i(\mu_{1}+\mu_{2}+\lambda)}}{\sqrt{n}}\left(1+\mathcal{O}(1/n)\right)+R_{-}\frac{n^{i(\mu_{1}+\mu_{2}-\lambda)}}{\sqrt{n}}\left(1+\mathcal{O}(1/n)\right) \tag{4.12}\] where the coefficients \(R_{\pm}\) cannot be determined from an asymptotic analysis of the recursion relation. The asymptotic behavior (4.12) implies that the eigenvectors of \(\mathcal{Q}\) are \(\delta\)-function normalizable in the \(\mathcal{N}\to\infty\) limit.

Figure 4.1: Tensor product of two scalar principal series representations of \(\mathrm{SO}(1,4)\). Left: Low-lying eigenstates of the matrix \(\mathcal{Q}\) of the tensor product \(\mathcal{P}_{\frac{3}{2}+i\mu}\otimes\mathcal{P}_{\frac{3}{2}+i\mu}\) for various \(\mu\). The cutoff is \(\mathcal{N}=10^{4}\). Right: Comparison of the relative coarse-grained density \(\rho_{\mathrm{rel}}(\lambda)\) (pink solid line) given by eq. (4.13) to \(\mathcal{K}_{\mathrm{rel}}^{(0)}(\lambda)\) (black dashed line) given by eq. (4.7) for \(\Delta_{1}=\frac{3}{2}+4.3i,\Delta_{2}=\frac{3}{2}+1.3i,\Delta_{3}=\Delta_{4}=\frac{3}{2}+i\mu_{\mathrm{rel}}=\frac{3}{2}+0.1i\), and \(\mathcal{N}=2000\). The inset plot, zooming into the range \(5.4<\lambda<6\), illustrates how \(\rho_{\mathrm{rel}}\) approaches \(\mathcal{K}_{\mathrm{rel}}^{(0)}(\lambda)\) as we increase \(\mathcal{N}\).

The difference of two coarse-grained densities, on the other hand, has a finite large \(\mathcal{N}\) limit. For example, given principal series \(\mathcal{P}_{\Delta_{3}}\) and \(\mathcal{P}_{\Delta_{4}}\), we can construct another \((\mathcal{N}+1)\times(\mathcal{N}+1)\) matrix like eq. (4.9) and extract a coarse-grained density \(\bar{\rho}_{\mathcal{N}}^{\mathcal{P}_{\Delta_{3}}\otimes\mathcal{P}_{\Delta_{4}}}\) from it. Define a relative coarse-grained density as \[\rho_{\rm rel}\equiv\bar{\rho}_{\mathcal{N}}^{\mathcal{P}_{\Delta_{1}}\otimes\mathcal{P}_{\Delta_{2}}}-\bar{\rho}_{\mathcal{N}}^{\mathcal{P}_{\Delta_{3}}\otimes\mathcal{P}_{\Delta_{4}}}.
\tag{4.13}\] In fig. (4.1), we numerically show that \(\rho_{\rm rel}(\lambda)\) approaches \(\mathcal{K}_{\rm rel}^{(0)}(\lambda)\), c.f. eq. (4.7), as \(\mathcal{N}\to\infty\). Similar to the observation we had in section 3.2.2, we can define a Pauli-Villars regularization of \(\mathcal{K}^{(0)}(\lambda)\) by taking \(\mu_{3}=\mu_{4}=\Lambda\) large in \(\mathcal{K}_{\rm rel}^{(0)}(\lambda)\) \[\mathcal{K}_{\rm PV,1}^{(0)}(\lambda)=\frac{1}{\pi}\log\Lambda+\frac{1}{2\pi} \sum_{\pm}\psi\left(\frac{\frac{d}{2}\pm i\lambda}{2}\right)-\frac{1}{4\pi} \sum_{\pm,\pm,\pm}\psi\left(\frac{\frac{d}{2}\pm i\lambda\pm i\mu_{1}\pm i\mu _{2}}{2}\right)+\mathcal{O}(\Lambda^{-1}). \tag{4.14}\] We notice that for each fixed \(\mathcal{N}\), \(\mathcal{K}_{\rm PV,1}^{(0)}(\lambda)\) matches the coarse-grained density \(\bar{\rho}_{\mathcal{N}}(\lambda)\) with a properly chosen \(\Lambda\). In fig. (4.2), we show the remarkable match for the case of \(\mathcal{P}_{\frac{3}{2}+4.3i}\otimes\mathcal{P}_{\frac{3}{2}+1.3i}\). In the coarse-grained density the cutoff is \(\mathcal{N}=2000\), and in the Pauli-Villars regularization the cutoff is \(\Lambda\approx 4000\). ### Analytical continuation to complementary series While studying the tensor product of a principal series and a complementary series, e.g. \(\mathcal{P}_{\Delta_{1}}\otimes\mathcal{C}_{\Delta_{2}}\), the whole procedure of solving \(\mathcal{K}_{\rm rel}^{(s)}(\lambda)\) above works exactly the same way up to the analytical continuation \(i\mu_{2}\to\mu_{2}\). However, this simple analytical continuation prescription is not always valid in the case of \(\mathcal{C}_{\Delta_{1}}\otimes\mathcal{C}_{\Delta_{2}}\) with \(\Delta_{1}=\frac{d}{2}+\mu_{1}\) and \(\frac{d}{2}+\mu_{2}\), because as we have shown in appendix A, the integral (4.6) does not hold after the replacement \(i\mu_{1}\to\mu_{1},i\mu_{2}\to\mu_{2}\) if \(\frac{d}{2}+s-\mu_{1}-\mu_{2}<0\). For example, assume \(\frac{d}{2}+s+2N_{s}<\mu_{1}+\mu_{2}<\frac{d}{2}+s+2(N_{s}+1)\) for some nonnegative integer \(N_{s}\) and then eq. (A.9) implies the following modification of eq. (4.6) \[\int_{\mathbb{R}}d\lambda\,\mathcal{K}^{(s)}_{\rm rel}(\lambda)e^ {i\lambda t}= 2\,e^{-(\frac{d}{2}+s-1)|t|}\,\frac{\cosh(\mu_{1}t)\cosh(\mu_{ 2}t)-\cos(\mu_{3}t)\cos(\mu_{4}t)}{|\sinh(t)|}\] \[-2\sum_{n=0}^{N_{s}}\cosh\left[\left(\frac{d}{2}+s-\mu_{1}-\mu_{ 2}+2n\right)t\right] \tag{4.15}\] where \[\mathcal{K}^{(s)}_{\rm rel}(\lambda)=\frac{1}{4\pi}\sum_{\pm,\pm,\pm}\left[ \psi\left(\frac{\frac{d}{2}+s\pm i\lambda\pm i\mu_{3}\pm i\mu_{4}}{2}\right)- \psi\left(\frac{\frac{d}{2}+s\pm i\lambda\pm\mu_{1}\pm\mu_{2}}{2}\right)\right] \tag{4.16}\] is a direct analytical continuation of eq. (4.7). Because of the extra terms in (4.15), the character equation (4.3) should change accordingly as (suppressing the arguments of characters) \[\Theta_{\Delta_{1}}\Theta_{\Delta_{2}}-\Theta_{\Delta_{3}}\Theta_{\Delta_{4}}= \sum_{s\geq 0}\int_{0}^{\infty}d\lambda\,\mathcal{K}^{(s)}_{\rm rel}(\lambda) \,\Theta_{\lambda,s}+\sum_{s}\sum_{n=0}^{N_{s}}\Theta_{\mu_{1}+\mu_{2}-s-2n} \tag{4.17}\] where the sum over \(s\) is finite because \(s<\mu_{1}+\mu_{2}-\frac{d}{2}<\frac{d}{2}\). 
Altogether, the character equation (4.17) implies that in the tensor product of complementary series representations \({\cal C}_{\Delta_{1}}\otimes{\cal C}_{\Delta_{2}}\), we get only principal series when \(\mu_{1}+\mu_{2}\leq\frac{d}{2}\)[28], and we additionally get a finite number of complementary series representations when \(\mu_{1}+\mu_{2}>\frac{d}{2}\). The scaling dimensions of these complementary series representations for each fixed spin \(s\) are \[\mu_{1}+\mu_{2}-s,\ \ \mu_{1}+\mu_{2}-s-2,\ \ \mu_{1}+\mu_{2}-s-4,\cdots,\mu_{1}+\mu_{2}-s-2N_{s} \tag{4.18}\] where \(N_{s}\) satisfies \[\frac{d}{2}+s+2N_{s}<\mu_{1}+\mu_{2}<\frac{d}{2}+s+2\left(N_{s}+1\right). \tag{4.19}\] When \(d=2\), \(\mu_{1}+\mu_{2}\) is bounded by \(2\) and hence \(s=N_{s}=0\) in eq. (4.19). In other words, there can be at most one complementary series representation, which has spin \(0\) and scaling dimension \(\mu_{1}+\mu_{2}\), if it exists. This result was first derived in [27] by Naimark. In higher \(d\), the \(s=0\) version of eq. (4.18) was proved in [32] by directly constructing the intertwining map between \({\cal C}_{\Delta_{1}}\otimes{\cal C}_{\Delta_{2}}\) and \({\cal C}_{\mu_{1}+\mu_{2}-2n}\). It would be interesting to generalize the intertwining map to incorporate complementary series with nonzero \(s\). The truncated model \({\cal Q}\) described in the previous subsection can also be used to test these complementary series representations numerically. The nonvanishing matrix elements of \({\cal Q}\) in the \({\cal C}_{\Delta_{1}}\otimes{\cal C}_{\Delta_{2}}\) case are given by \[{\cal Q}_{nn}=\Delta_{1}\bar{\Delta}_{1}+\Delta_{2}\bar{\Delta}_{2}+2n(n+d-1)\] \[{\cal Q}_{n+1,n}={\cal Q}_{n,n+1}=-\sqrt{\frac{(n+1)(n+d-1)(\Delta_{1}+n)(\bar{\Delta}_{1}+n)(\Delta_{2}+n)(\bar{\Delta}_{2}+n)}{(n+\frac{d-1}{2})(n+\frac{d+1}{2})}} \tag{4.20}\] where \(0\leq n\leq{\cal N}\).

Figure 4.4: Comparison of the relative coarse-grained density \(\rho_{\rm rel}(\lambda)\) (pink solid line) given by eq. (4.22) to \({\cal K}_{\rm rel}^{(0)}(\lambda)\) (black dashed line) given by eq. (4.16) for \(\mu_{1}=0.3,\mu_{2}=0.5,\mu_{3}=\mu_{4}=\mu_{\rm rel}=0.1\), and \({\cal N}=2000\). The inset plot, zooming into the range \(5.500<\lambda<5.520\), illustrates how \(\rho_{\rm rel}\) approaches \({\cal K}_{\rm rel}^{(0)}(\lambda)\) (black dashed line) as we increase \({\cal N}\).

We plot the first four eigenvalues of \({\cal Q}\) in fig. (4.3), with \(d=9,\Delta_{1}=\Delta_{2}=\frac{9}{2}+\mu\) and \({\cal N}=10^{4}\). The smallest eigenvalue enters the complementary series regime around \(\mu=\frac{d}{4}=2.25\) and lies on the line \(2\mu(9-2\mu)\), corresponding to \({\cal C}_{2\mu}\). The second eigenvalue starts to be smaller than \(\frac{d^{2}}{4}=20.25\) around \(\mu=\frac{d}{4}+1=3.25\) and then lies on the line \((2\mu-2)(11-2\mu)\), corresponding to \({\cal C}_{2\mu-2}\). The third eigenvalue becomes the Casimir of \({\cal C}_{2\mu-4}\) roughly in the region \(4.25<\mu<4.5\), since it lies on the line \((2\mu-4)(13-2\mu)\). The fourth eigenvalue is always larger than \(\frac{d^{2}}{4}=20.25\) and hence cannot belong to the complementary series. These numerical results agree perfectly with eq. (4.18) and eq. (4.19) for \(s=0\), which are derived from characters. Apart from these discrete eigenvalues corresponding to complementary series, the matrix \({\cal Q}\) defined by (4.20) also has a continuous spectrum in \([\frac{d^{2}}{4},\infty)\) when \({\cal N}\to\infty\).
The reason is that eigenvectors of \({\cal Q}\) satisfying \({\cal Q}_{nm}a_{m}(\lambda)=(\frac{d^{2}}{4}+\lambda^{2})a_{n}(\lambda)\) have the asymptotic behavior \[a_{n}(\lambda)=R_{+}\frac{n^{i\lambda}}{\sqrt{n}}\left(1+{\cal O}(1/n)\right)+R_{-}\frac{n^{-i\lambda}}{\sqrt{n}}\left(1+{\cal O}(1/n)\right) \tag{4.21}\] which implies that eigenvectors of different \(\lambda\) are \(\delta\)-function normalizable. The continuous spectrum corresponds to scalar principal series. We can define a coarse-grained density \(\bar{\rho}_{\cal N}(\lambda)\) of these representations, along the lines of eq. (4.10), by numerically diagonalizing the truncated matrix \({\cal Q}\). We also define a relative coarse-grained density \[\rho_{\rm rel}(\lambda)=\bar{\rho}_{\cal N}^{{\cal C}_{\Delta_{1}}\otimes{\cal C}_{\Delta_{2}}}-\bar{\rho}_{\cal N}^{{\cal P}_{\Delta_{3}}\otimes{\cal P}_{\Delta_{4}}} \tag{4.22}\] that has a finite large \({\cal N}\) limit. In fig. (4.4), we illustrate the agreement between \(\rho_{\rm rel}(\lambda)\) and \({\cal K}_{\rm rel}^{(0)}(\lambda)\) in eq. (4.16), using the tensor products \({\cal C}_{\frac{3}{2}+0.3}\otimes{\cal C}_{\frac{3}{2}+0.5}\) and \({\cal P}_{\frac{3}{2}+0.1i}\otimes{\cal P}_{\frac{3}{2}+0.1i}\). ### Photon from massless scalars We have shown that the tensor product \({\cal F}_{\Delta_{1}}\otimes{\cal F}_{\Delta_{2}}\) only contains UIRs in the continuous families, which describe massive fields in dS\({}_{d+1}\). Given the similarity between scalar complementary series and type I exceptional series, which is mentioned in section 2 and described in more detail in appendix D, one would naively expect the same result for \({\cal V}_{p_{1},0}\otimes{\cal V}_{p_{2},0}\). However, we will show that this naive expectation is not true by using \({\cal V}_{1,0}\otimes{\cal V}_{1,0}\) as an example. In particular, the tensor product \({\cal V}_{1,0}\otimes{\cal V}_{1,0}\) contains a type II exceptional series representation \({\cal U}_{1,0}\). In field theory language, it means that the two-particle Hilbert space of two _different_ massless particles admits an invariant subspace corresponding to a photon. To make the whole discussion very explicit, we will focus on the case of dS\({}_{4}\). The same holds for any higher dimensional dS space. Comparing \({\cal V}_{1,0}\) and representations in the scalar complementary series, the main difference is the absence of SO(4) singlets in \({\cal V}_{1,0}\). Fortunately, this property leaves almost all of the conclusions in appendix E unaffected, except for \({\cal U}_{1,0}={\cal U}_{1,0}^{+}\oplus{\cal U}_{1,0}^{-}\). To see what happens to \({\cal U}_{1,0}\), let us first recall that it is characterized by a normalizable state \(|\chi\rangle_{a,b}=-|\chi\rangle_{b,a}\)19 that satisfies \(L_{0b}|\chi\rangle_{a,b}=0\), because \({\cal U}_{1,0}\) does not contain the spin 1 representation of SO(4). In \({\cal V}_{1,0}\otimes{\cal V}_{1,0}\), such a state has to be a linear combination of \[|\chi_{n}\rangle_{a,b}=|n\rangle_{aa_{2}\cdots a_{n}}|n\rangle_{ba_{2}\cdots a_{n}}-(a\leftrightarrow b),\ \ n\geq 1 \tag{4.23}\] where \(|n\rangle_{a_{1}\cdots a_{n}}\) is a basis of the \(\mathbb{Y}_{n}\) component in \({\cal V}_{1,0}\). Let \(|\chi\rangle_{a,b}=\sum_{n\geq 1}c_{n}|\chi_{n}\rangle_{a,b}\). In appendix E, we show that the condition \(L_{0b}|\chi\rangle_{a,b}=0\) leads to the following recurrence relation for the \(c_{n}\)
\[c_{n}\alpha_{n}+c_{n+1}\beta_{n+1}\frac{n+3}{n+1}=0,\ \ n\geq 1 \tag{4.24}\] where \(\alpha_{n}\) and \(\beta_{n}\) are given in appendix E and, for \(\Delta=d=3,\bar{\Delta}=0\), reduce to \[\alpha_{n}=\sqrt{\frac{n(n+1)(n+3)}{2(n+2)}},\ \ \beta_{n}=-\sqrt{\frac{(n-1)n(n+2)}{2(n+1)}}. \tag{4.25}\] Besides the recurrence relation (4.24), the requirement \(L_{0b}|\chi\rangle_{a,b}=0\) actually also implies an initial relation \(c_{1}\beta_{1}=0\). For principal series or complementary series, \(c_{1}\beta_{1}=0\) is equivalent to \(c_{1}=0\) because \(\beta_{1}\neq 0\), which then leads to the vanishing of all \(c_{n}\) due to eq. (4.24). In the \({\cal V}_{1,0}\) case, the initial relation does not impose any constraint on \(c_{1}\) because \(\beta_{1}=0\). A general solution of all \(c_{n}\) is \[c_{n}=\frac{A}{(n+1)(n+2)} \tag{4.26}\] where \(A\) is an arbitrary constant. Using the inner product of \(|\chi_{n}\rangle_{a,b}\) computed in appendix E, we find that \(|\chi\rangle_{a,b}\) is normalizable \[{}_{a,b}\langle\chi|\chi\rangle_{a,b}=2|A|^{2}\sum_{n\geq 1}\frac{1}{n(n+2)}<\infty \tag{4.27}\] and hence \({\cal U}_{1,0}\) belongs to \({\cal V}_{1,0}\otimes{\cal V}_{1,0}\). Altogether, we can conclude that \({\cal V}_{1,0}\otimes{\cal V}_{1,0}\) contains exactly one exceptional series representation \({\cal U}_{1,0}\). We also want to mention that the same conclusion does not hold for the symmetrized tensor product \({\cal V}_{1,0}\odot{\cal V}_{1,0}\) since the state \(|\chi_{n}\rangle_{a,b}\) in eq. (4.23) is manifestly anti-symmetric under permutation. So it is impossible to identify a photon-like state in the two-particle Hilbert space of a single massless scalar field. Next, we move to study the continuous part of \({\cal V}_{1,0}\otimes{\cal V}_{1,0}\), assuming that it does not support complementary series, which will be checked numerically later. Using \({\cal P}_{\Delta_{3}}\otimes{\cal P}_{\Delta_{4}}\) as a regulator, the relative density of principal series \({\cal K}_{\rm rel}^{(s)}(\lambda)\) between \({\cal V}_{1,0}\otimes{\cal V}_{1,0}\) and \({\cal P}_{\Delta_{3}}\otimes{\cal P}_{\Delta_{4}}\) is defined as \[\Theta_{{\cal V}_{1,0}}(q,x)^{2}-\Theta_{\Delta_{3}}(q,x)\Theta_{\Delta_{4}}(q,x)=\sum_{s\geq 0}\int_{0}^{\infty}\,d\lambda\,{\cal K}_{\rm rel}^{(s)}(\lambda)\Theta_{\Delta_{\lambda}}(q,x)+\Theta_{{\cal U}_{1,0}}(q,x) \tag{4.28}\] where \(\Theta_{{\cal V}_{1,0}}(q,x)\) and \(\Theta_{{\cal U}_{1,0}}(q,x)\) are the corresponding Harish-Chandra characters, the latter obtained from the general exceptional series character with \(s=1,t=0\). As we have seen in previous sections, in order to compute \({\cal K}_{\rm rel}^{(s)}(\lambda)\), we need to expand \(\Theta_{{\cal V}_{1,0}}(q,x)^{2}P_{3}(q,x)\) in terms of \(\mathrm{SO}(3)\) characters. In this discussion, we will focus on the \(s=0\) case (the rest of the densities can be derived similarly), and hence it suffices to know the constant term in this expansion. We find the constant term to be \(\frac{4\,q^{6}}{1-q^{2}}+\frac{q^{2}(1+3q^{3})}{1+q}\). Plugging it into eq. (4.28) yields \[\int_{\mathbb{R}}d\lambda\,{\cal K}^{(0)}_{\rm rel}(\lambda)e^{i\lambda t}=\left(\frac{4\,e^{-\frac{9}{2}|t|}}{1-e^{-2|t|}}-\sum_{\pm,\pm}\frac{e^{-(\frac{9}{2}\pm i\mu_{3}\pm i\mu_{4})|t|}}{1-e^{-2|t|}}\right)+\frac{e^{-\frac{1}{2}|t|}+3\,e^{-\frac{7}{2}|t|}}{1+e^{-|t|}}+2\,e^{-\frac{3}{2}|t|} \tag{4.29}\] where the last term comes from \(\Theta_{{\cal U}_{1,0}}(q,x)\).
Then \({\cal K}^{(0)}_{\rm rel}(\lambda)\) can be computed as an inverse Fourier transformation of the R.H.S of eq. (4.29), term by term. For the first two terms, the inverse Fourier transformation follows from eq. (A.3). For the third term, one can use the integral formula \(\int_{0}^{\infty}dt\,\frac{e^{-zt}}{1+e^{-t}}=\frac{1}{2}\psi(\frac{1+z}{2})-\frac{1}{2}\psi(\frac{z}{2})\), where \({\rm Re}\,z>0\). For the last term, it is an elementary integral. Altogether, we obtain \[{\cal K}^{(0)}_{\rm rel}(\lambda) =\frac{1}{4\pi}\sum_{\pm,\pm,\pm}\psi\left(\frac{\frac{3}{2}\pm i\lambda\pm i\mu_{3}\pm i\mu_{4}}{2}\right)-\frac{1}{\pi}\sum_{\pm}\psi\left(\frac{\frac{9}{2}\pm i\lambda}{2}\right)\] \[+\frac{3}{2\pi}\frac{1}{\lambda^{2}+\frac{1}{4}}-\frac{3}{2\pi}\frac{1}{\lambda^{2}+\frac{9}{4}}+\frac{15}{2\pi}\frac{1}{\lambda^{2}+\frac{25}{4}}-\frac{2}{e^{\pi\lambda}+e^{-\pi\lambda}}, \tag{4.30}\] which can also be recovered by taking \(d=3,s=0\) and \(\mu_{1}=\mu_{2}=\frac{3}{2}\) in eq. (4.16). This is not a coincidence but a result of the fact that \({\cal V}_{1,0}\) of any \(\mathrm{SO}(1,d+1)\) is the boundary point of the complementary series up to a trivial representation of \(\mathrm{SO}(d+1)\). From the numerical side, this is also obvious because the \(\mathrm{SO}(1,4)\) Casimir in the \(\mathrm{SO}(4)\) singlet subspace of \({\cal V}_{1,0}\otimes{\cal V}_{1,0}\) can be derived from that of the \(\mathcal{C}_{\Delta_{1}}\otimes\mathcal{C}_{\Delta_{2}}\) case, c.f. eq. (4.20), by taking \(\Delta_{1}=\Delta_{2}=3\) and restricting to \(n\geq 1\), i.e. \[\mathcal{Q}_{n,n}=2n(n+2),\;\;\;\mathcal{Q}_{n+1,n}=\mathcal{Q}_{n,n+1}=-n(n+3),\;\;\;n\geq 1. \tag{4.31}\] For numerical diagonalization, we impose a hard cut-off \(n\leq\mathcal{N}\), and then \(\mathcal{Q}\) becomes an \(\mathcal{N}\times\mathcal{N}\) matrix. In the left panel of fig. (4.5), we plot the first four eigenvalues of \(\mathcal{Q}\) with the cut-off \(\mathcal{N}\) being \(10^{6}\). They are all larger than \(\frac{9}{4}\), implying the absence of complementary series representations in \(\mathcal{V}_{1,0}\otimes\mathcal{V}_{1,0}\). Since all eigenvalues of the truncated \(\mathcal{Q}\) are larger than \(\frac{9}{4}\), they induce a coarse-grained density of principal series representations \(\bar{\rho}_{\mathcal{N}}(\lambda)\), which diverges in the \(\mathcal{N}\to\infty\) limit. We remove the divergence by introducing \(\mathcal{P}_{\Delta_{3}}\otimes\mathcal{P}_{\Delta_{4}}\) and considering a relative density \[\rho_{\rm rel}(\lambda)=\bar{\rho}_{\mathcal{N}}^{\mathcal{V}_{1,0}\otimes\mathcal{V}_{1,0}}(\lambda)-\bar{\rho}_{\mathcal{N}}^{\mathcal{P}_{\Delta_{3}}\otimes\mathcal{P}_{\Delta_{4}}}(\lambda). \tag{4.32}\] In the right panel of fig. (4.5), we show a match between eq. (4.32) and eq. (4.30) with \(\Delta_{3}=\Delta_{4}=\frac{3}{2}+0.1i\). ### Some remarks on nonzero spin For the tensor product of two spinning principal series representations, e.g. \(\mathcal{P}_{\Delta_{1},s_{1}}\otimes\mathcal{P}_{\Delta_{2},s_{2}}\), it has been proved by [29] that there can only be principal series and discrete series. Here principal series means a larger class of representations \(\mathcal{P}_{\Delta,\mathbb{Y}}\) that are labelled by a scaling dimension \(\Delta\in\frac{d}{2}+i\mathbb{R}_{\geq 0}\) and a Young diagram \(\mathbb{Y}\). The \(\mathcal{P}_{\Delta,s}\) representation is a special case of \(\mathcal{P}_{\Delta,\mathbb{Y}}\) when \(\mathbb{Y}\) has only one row, i.e. \(\mathbb{Y}=\mathbb{Y}_{s}\).
The character of \(\mathcal{P}_{\Delta,\mathbb{Y}}\) takes the same form as the Harish-Chandra character in eq. (2.14), with \(\chi^{\mathrm{SO}(d)}_{\mathbb{Y}_{s}}\) replaced by \(\chi^{\mathrm{SO}(d)}_{\mathbb{Y}}\). When \(d\) is even, \(\mathrm{SO}(1,d+1)\) does not admit discrete series, and hence \(\mathcal{P}_{\Delta_{1},s_{1}}\otimes\mathcal{P}_{\Delta_{2},s_{2}}\) is decomposed into principal series only. Based on this fact, we are going to discuss what kinds of principal series representations can appear in this tensor product by using character analysis. Since regularization is not relevant in this analysis, we just use the character relation _formally_ \[\Theta_{\Delta_{1},s_{1}}(q,\mathbf{x})\Theta_{\Delta_{2},s_{2}}(q,\mathbf{x})=\sum_{\mathbb{Y}}\int_{0}^{\infty}\,d\lambda\,\mathcal{K}_{\mathbb{Y}}(\lambda)\Theta_{\Delta_{\lambda},\mathbb{Y}}(q,\mathbf{x}). \tag{4.33}\] For \(\Delta_{1}=\frac{d}{2}+i\mu_{1},\Delta_{2}=\frac{d}{2}+i\mu_{2}\) and \(q=e^{-|t|}\), eq. (4.33) is equivalent to \[\sum_{\mathbb{Y}}\chi_{\mathbb{Y}}^{\mathrm{SO}(d)}(\mathbf{x})\int_{0}^{\infty}\,d\lambda\,\mathcal{K}_{\mathbb{Y}}(\lambda)e^{i\lambda t}=4\frac{\chi_{\mathbb{Y}_{s_{1}}}^{\mathrm{SO}(d)}(\mathbf{x})\chi_{\mathbb{Y}_{s_{2}}}^{\mathrm{SO}(d)}(\mathbf{x})}{P_{d}(e^{-|t|},\mathbf{x})}e^{-\frac{d}{2}|t|}\cos(\mu_{1}t)\cos(\mu_{2}t). \tag{4.34}\] Then we need to expand the \(\mathbf{x}\)-dependent part on the R.H.S of eq. (4.34) as a series summing over all \(\mathrm{SO}(d)\) characters. It involves two steps. First, using eq. (4.5), we obtain \[\frac{\chi_{\mathbb{Y}_{s_{1}}}^{\mathrm{SO}(d)}(\mathbf{x})\chi_{\mathbb{Y}_{s_{2}}}^{\mathrm{SO}(d)}(\mathbf{x})}{P_{d}(q,\mathbf{x})}=\sum_{s\geq 0}\chi_{\mathbb{Y}_{s_{1}}}^{\mathrm{SO}(d)}(\mathbf{x})\chi_{\mathbb{Y}_{s_{2}}}^{\mathrm{SO}(d)}(\mathbf{x})\chi_{\mathbb{Y}_{s}}^{\mathrm{SO}(d)}(\mathbf{x})\frac{q^{s}}{1-q^{2}}. \tag{4.35}\] Next, we express the product of three \(\mathrm{SO}(d)\) characters \(\chi_{\mathbb{Y}_{s_{1}}}^{\mathrm{SO}(d)}(\mathbf{x})\chi_{\mathbb{Y}_{s_{2}}}^{\mathrm{SO}(d)}(\mathbf{x})\chi_{\mathbb{Y}_{s}}^{\mathrm{SO}(d)}(\mathbf{x})\) as a finite sum of \(\mathrm{SO}(d)\) characters, which is equivalent to the tensor product decomposition of \(\mathbb{Y}_{s_{1}}\otimes\mathbb{Y}_{s_{2}}\otimes\mathbb{Y}_{s}\) as \(\mathrm{SO}(d)\) representations. Thus, we can conclude that a principal series representation \(\mathcal{P}_{\Delta,\mathbb{Y}}\) appears in \(\mathcal{P}_{\Delta_{1},s_{1}}\otimes\mathcal{P}_{\Delta_{2},s_{2}}\) if and only if \(\mathbb{Y}\) belongs to \(\mathbb{Y}_{s_{1}}\otimes\mathbb{Y}_{s_{2}}\otimes\mathbb{Y}_{s}\) for some \(s\geq 0\). For example, when \(s_{1}=1\) and \(s_{2}=0\), the tensor products \(\mathbb{Y}_{1}\otimes\mathbb{Y}_{0}\otimes\mathbb{Y}_{s}=\mathbb{Y}_{1}\otimes\mathbb{Y}_{s}\) yield all single-row representations and all \(\mathbb{Y}_{n,1},n\geq 1\). So the tensor product of a spin \(1\) and a spin \(0\) principal series representation contains all \(\mathcal{P}_{\Delta,s}\left(s\geq 0\right)\) and all \(\mathcal{P}_{\Delta,\mathbb{Y}_{s,1}}\left(s\geq 1\right)\).
For \(s_{1}=s_{2}=1\), using the decomposition rule \[\mathbb{Y}_{1}\otimes\mathbb{Y}_{1}\otimes\mathbb{Y}_{s} =\mathbb{Y}_{1}\otimes\left(\mathbb{Y}_{s-1}\oplus\mathbb{Y}_{s+1}\oplus\mathbb{Y}_{s,1}\right)\] \[=\mathbb{Y}_{s-2}\oplus 3\mathbb{Y}_{s}\oplus\mathbb{Y}_{s+2}\oplus 2\left(\mathbb{Y}_{s+1,1}\oplus\mathbb{Y}_{s-1,1}\right)\oplus\mathbb{Y}_{s,2}\oplus\mathbb{Y}_{s,1,1}\, \tag{103}\] one can easily find that the tensor product of two spin \(1\) principal series representations contains all \(\mathcal{P}_{\Delta,s}\left(s\geq 0\right)\), \(\mathcal{P}_{\Delta,\mathbb{Y}_{s,1}}\left(s\geq 1\right)\), \(\mathcal{P}_{\Delta,\mathbb{Y}_{s,2}}\left(s\geq 2\right)\) and \(\mathcal{P}_{\Delta,\mathbb{Y}_{s,1,1}}\left(s\geq 1\right)\). Although we start with even \(d\), we believe this result is valid for both even and odd \(d\). In [29], the possible \(\mathbb{Y}\) were found by a much more sophisticated method, which is independent of the parity of \(d\). The result can be described as follows. Let \(\rho\) be a UIR of \(\mathrm{SO}(d)\) contained in \(\mathbb{Y}_{s_{1}}\otimes\mathbb{Y}_{s_{2}}\) and \(\sigma\) be a UIR of \(\mathrm{SO}(d-1)\) contained in the restriction of \(\rho\) to \(\mathrm{SO}(d-1)\). A principal series representation \(\mathcal{P}_{\Delta,\mathbb{Y}}\) belongs to \(\mathcal{P}_{\Delta_{1},s_{1}}\otimes\mathcal{P}_{\Delta_{2},s_{2}}\) if and only if some \(\sigma\) obtained in this way belongs to the restriction of \(\mathbb{Y}\) to \(\mathrm{SO}(d-1)\). As shown in [60], this constraint on \(\mathbb{Y}\) is equivalent to saying that \(\mathbb{Y}\) belongs to \(\mathbb{Y}_{s_{1}}\otimes\mathbb{Y}_{s_{2}}\otimes\mathbb{Y}_{s}\) for some \(s\geq 0\). To see this, we only need the simple fact that given two irreducible representations \(\mathbb{Y}\) and \(\mathbb{Y}^{\prime}\) of \(\mathrm{SO}(d)\), the tensor product \(\mathbb{Y}\otimes\mathbb{Y}^{\prime}\) contains the trivial representation if and only if \(\mathbb{Y}=\mathbb{Y}^{\prime}\). Then identifying an irreducible component \(\mathbb{Y}\) in some reducible representation \(R\) amounts to checking if \(\bullet\in\mathbb{Y}\otimes R\), where \(\bullet\) denotes the trivial representation. Based on this fact, we can immediately see the following equivalence \[\mathbb{Y}\subset\mathbb{Y}_{s_{1}}\otimes\mathbb{Y}_{s_{2}}\otimes\sum_{s\geq 0}\mathbb{Y}_{s}\quad\Leftrightarrow\quad\exists\,s\geq 0:\mathbb{Y}_{s}\subset\mathbb{Y}\otimes\mathbb{Y}_{s_{1}}\otimes\mathbb{Y}_{s_{2}}\, \tag{104}\] as both sides mean \(\bullet\in\mathbb{Y}_{s_{1}}\otimes\mathbb{Y}_{s_{2}}\otimes\mathbb{Y}\otimes\mathbb{Y}_{s}\) for some \(s\). Then, reduce the last formula from \(\mathrm{SO}(d)\) to \(\mathrm{SO}(d-1)\). Since \(\mathbb{Y}_{s}\) are the only irreducible representations of \(\mathrm{SO}(d)\) that give rise to \(\mathrm{SO}(d-1)\) singlets, we conclude that (104) is equivalent to the statement that there is at least one \(\mathrm{SO}(d-1)\) singlet inside the \(\mathrm{SO}(d)\) tensor product \(\mathbb{Y}\otimes\mathbb{Y}_{s_{1}}\otimes\mathbb{Y}_{s_{2}}\). But this is equivalent to the statement that there is at least one irreducible representation \(\sigma\) of \(\mathrm{SO}(d-1)\) that appears both inside \(\mathbb{Y}\) and \(\mathbb{Y}_{s_{1}}\otimes\mathbb{Y}_{s_{2}}\).
With the possible \(\mathrm{SO}(d)\) structures \(\mathbb{Y}\) known, we can in principle compute the relative density of each \(\mathcal{P}_{\Delta_{\lambda},\mathbb{Y}}\) between \(\mathcal{P}_{\Delta_{1},s_{1}}\otimes\mathcal{P}_{\Delta_{2},s_{2}}\) and \(\mathcal{P}_{\Delta_{3},s_{1}}\otimes\mathcal{P}_{\Delta_{4},s_{2}}\). For example, in the case of \(s_{1}=1,s_{2}=0\), the relative densities of \(\mathcal{P}_{\Delta_{\lambda},\mathbb{Y}_{s}}\) \((s\geq 0)\) and \(\mathcal{P}_{\Delta_{\lambda},\mathbb{Y}_{s,1}}\) \((s\geq 1)\) should satisfy (suppressing the arguments of characters) \[\Theta_{\Delta_{1},1}\Theta_{\Delta_{2}}-\Theta_{\Delta_{3},1}\Theta_{\Delta_{4}}=\sum_{s\geq 0}\int_{0}^{\infty}\,d\lambda\,\mathcal{K}_{\mathrm{rel}}^{(s)}(\lambda)\Theta_{\Delta_{\lambda},s}+\sum_{s\geq 1}\int_{0}^{\infty}\,d\lambda\,\mathcal{K}_{\mathrm{rel}}^{(s,1)}(\lambda)\Theta_{\Delta_{\lambda},\mathbb{Y}_{s,1}}. \tag{4.38}\] As we have seen in eq. (4.35), the difference compared to the \(s_{1}=s_{2}=0\) case comes mainly from the following expansion \[\frac{\chi_{\mathbb{Y}_{1}}^{\text{SO}(d)}(\mathbf{x})}{P_{d}(q,\mathbf{x})} =\sum_{s\geq 0}\chi_{\mathbb{Y}_{1}}^{\text{SO}(d)}(\mathbf{x})\chi_{\mathbb{Y}_{s}}^{\text{SO}(d)}(\mathbf{x})\frac{q^{s}}{1-q^{2}}\] \[=\frac{\chi_{\mathbb{Y}_{1}}^{\text{SO}(d)}(\mathbf{x})}{1-q^{2}}+\sum_{s\geq 1}\left(\chi_{\mathbb{Y}_{s-1}}^{\text{SO}(d)}(\mathbf{x})+\chi_{\mathbb{Y}_{s+1}}^{\text{SO}(d)}(\mathbf{x})+\chi_{\mathbb{Y}_{s,1}}^{\text{SO}(d)}(\mathbf{x})\right)\frac{q^{s}}{1-q^{2}}\] \[=\frac{q}{1-q^{2}}+\sum_{s\geq 1}\chi_{\mathbb{Y}_{s}}^{\text{SO}(d)}(\mathbf{x})\frac{q^{s-1}+q^{s+1}}{1-q^{2}}+\sum_{s\geq 1}\chi_{\mathbb{Y}_{s,1}}^{\text{SO}(d)}(\mathbf{x})\frac{q^{s}}{1-q^{2}}. \tag{4.39}\] Plugging (4.39) into the character relation (4.38), we obtain \[\mathcal{K}_{\text{rel}}^{(s)}(\lambda)=\frac{1}{4\pi}\begin{cases}\sum_{\pm,\pm,\pm}\left[\psi\left(\frac{\frac{d}{2}+1\pm i\lambda\pm i\mu_{3}\pm i\mu_{4}}{2}\right)-\psi\left(\frac{\frac{d}{2}+1\pm i\lambda\pm i\mu_{1}\pm i\mu_{2}}{2}\right)\right],&s=0\\ \sum_{\pm,\pm,\pm,\pm}\left[\psi\left(\frac{\frac{d}{2}+s\pm 1\pm i\lambda\pm i\mu_{3}\pm i\mu_{4}}{2}\right)-\psi\left(\frac{\frac{d}{2}+s\pm 1\pm i\lambda\pm i\mu_{1}\pm i\mu_{2}}{2}\right)\right],&s\geq 1\end{cases} \tag{4.40}\] and \[\mathcal{K}_{\text{rel}}^{(s,1)}(\lambda)=\frac{1}{4\pi}\sum_{\pm,\pm,\pm}\left[\psi\left(\frac{\frac{d}{2}+s\pm i\lambda\pm i\mu_{3}\pm i\mu_{4}}{2}\right)-\psi\left(\frac{\frac{d}{2}+s\pm i\lambda\pm i\mu_{1}\pm i\mu_{2}}{2}\right)\right],\ \ s\geq 1. \tag{4.41}\] The analytical continuation \(i\mu_{1}\to\mu_{1},i\mu_{2}\to\mu_{2}\) in eq. (4.40) and eq. (4.41) leads to the relative density of principal series between \(\mathcal{C}_{\Delta_{1},1}\otimes\mathcal{C}_{\Delta_{2}}\) and \(\mathcal{P}_{\Delta_{3},1}\otimes\mathcal{P}_{\Delta_{4}}\), where \(\Delta_{1}=\frac{d}{2}+\mu_{1}\) and \(\Delta_{2}=\frac{d}{2}+\mu_{2}\). After the analytical continuation, \(\mu_{1}\in(0,\frac{d}{2}-1)\) and \(\mu_{2}\in(0,\frac{d}{2})\). There can be pole crossing when we vary \(\mu_{1}+\mu_{2}\) if \(d\) is large enough. For example, when \(d=4\), pole crossing happens in \(\psi(\frac{2-\mu_{1}-\mu_{2}\pm i\lambda}{2})\), which is contained in \(\mathcal{K}_{\text{rel}}^{(1)}\). It implies that when \(\mu_{1}+\mu_{2}>2\), besides principal series representations, there also exists a spin 1 complementary series representation \(\mathcal{C}_{\mu_{1}+\mu_{2},1}\).
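As a quick numerical sanity check of these formulas (not part of the original derivation), the short Python sketch below evaluates the relative density \(\mathcal{K}^{(s)}_{\rm rel}(\lambda)\) of eq. (4.40) directly from digamma functions. It assumes the mpmath library, and the values of \(d\), \(\mu_{1,2,3,4}\), \(s\) and \(\lambda\) are arbitrary illustrative choices; the result should come out real up to rounding errors, as expected for a density.

```python
# Minimal numerical evaluation of eq. (4.40); illustrative parameters only.
from itertools import product
from mpmath import mp, psi, pi, mpc

mp.dps = 30  # working precision

def K_rel_s(lam, s, d, mu1, mu2, mu3, mu4):
    """Relative density of spin-s principal series, eq. (4.40)."""
    def psi_sum(mu_a, mu_b, shift):
        # sum of digammas over the independent sign choices of lambda, mu_a, mu_b
        total = mpc(0)
        for e1, e2, e3 in product((1, -1), repeat=3):
            z = (d/2 + shift + 1j*(e1*lam + e2*mu_a + e3*mu_b))/2
            total += psi(0, z)
        return total
    if s == 0:
        val = psi_sum(mu3, mu4, 1) - psi_sum(mu1, mu2, 1)
    else:
        # for s >= 1 the extra +-1 shift in eq. (4.40) doubles the sign sum
        val = sum(psi_sum(mu3, mu4, s + e) - psi_sum(mu1, mu2, s + e) for e in (1, -1))
    return val/(4*pi)

print(K_rel_s(lam=0.7, s=1, d=4, mu1=0.3, mu2=1.1, mu3=0.5, mu4=0.9))
```

The density \(\mathcal{K}^{(s,1)}_{\rm rel}\) of eq. (4.41) can be evaluated in the same way, with the \(\pm 1\) shift on \(s\) removed.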
A complete analysis of complementary series for higher dimension is very similar to subsection 4.3. For odd \(d\), discrete series representations can appear in \(\mathcal{F}_{\Delta_{1},s_{1}}\otimes\mathcal{F}_{\Delta_{2},s_{2}}\), given that one of the spins is nonzero. The full spectrum of discrete series representations in such a tensor product is not known for a generic odd \(d\). When \(d\geq 9\), we can exclude discrete series representations in \(\mathcal{F}_{\Delta_{1},s_{1}}\otimes\mathcal{F}_{\Delta_{2},s_{2}}\) by simply analyzing its \(\text{SO}(d+1)\) components. On the one hand, a discrete series of \(\text{SO}(1,d+1)\) should contain \(\text{SO}(d+1)\) UIRs corresponding to Young diagrams with \(\frac{d+1}{2}\geq 5\) rows. On the other hand, the \(\text{SO}(d+1)\) content of both \(\mathcal{F}_{\Delta_{1},s_{1}}\) and \(\mathcal{F}_{\Delta_{2},s_{2}}\) has at most two rows and hence their tensor product yields \(\text{SO}(d+1)\) UIRs that have at most four rows. When \(d=3\), the problem is solved by Martin [22, 61] if at least one of the \(\mathcal{F}_{\Delta_{i},s_{i}}\) belongs to principal series. For example, according to Martin, \(\mathcal{P}_{\Delta_{1},1}\otimes\mathcal{P}_{\Delta_{2}}\) contains all \(\mathcal{U}_{s,0}^{\pm}\), \(s\geq 1\), with multiplicity one for each20. In particular, \(\mathcal{U}_{1,0}^{+}\oplus\mathcal{U}_{1,0}^{-}\) is isomorphic to the single-particle Hilbert space of a photon in \(\text{dS}_{d+1}\). Footnote 20: Martin used Dixmier’s notation \(\pi_{p,q}^{\pm}\) [62] to denote discrete series representations, where \(p\geq 1\) and \(q=1,2,\cdots,p\). In our notation, \(\pi_{p,q}^{\pm}\) should be identified as \(\mathcal{U}_{p,q-1}^{\pm}\). #### 4.5.1 The case of \(\text{SO}(1,3)\) As an explicit example, let's decompose the tensor product \(\mathcal{P}_{\Delta_{1},s_{1}}\otimes\mathcal{P}_{\Delta_{2},s_{2}}\) of \(\text{SO}(1,3)\). Due to the isomorphism \(\mathcal{P}_{\Delta,s}\cong\mathcal{P}_{\bar{\Delta},-s}\), we will always consider nonnegative spins, and let \(\Delta\) run along the whole line \(1+i\mathbb{R}\) when \(s\geq 1\) and the half line \(1+i\mathbb{R}_{\geq 0}\) when \(s=0\). Without loss of generality, we also assume \(s_{1}\geq s_{2}\). Formally, the following character equation is expected to hold as a manifestation of the tensor product decomposition \[\Theta_{\Delta_{1},s_{1}}(q,x)\Theta_{\Delta_{2},s_{2}}(q,x)=\int_{0}^{\infty}d\lambda\,\mathcal{K}^{(0)}(\lambda)\Theta_{1+i\lambda}(q,x)+\sum_{s\geq 1}\int_{\mathbb{R}}d\lambda\,\mathcal{K}^{(s)}(\lambda)\Theta_{1+i\lambda,s}(q,x) \tag{111}\] where \(\Delta_{1}=1+i\mu_{1},\Delta_{2}=1+i\mu_{2}\), and \(\mathcal{K}^{(s)}(\lambda)\) is understood as the density of spin \(s\) principal series. Using eq. (21) for \(\Theta_{\Delta,s}(q,x)\), we find that (111) is equivalent to \[\frac{q(x^{s_{1}}q^{i\mu_{1}}+x^{-s_{1}}q^{-i\mu_{1}})(x^{s_{2}}q^{i\mu_{2}}+x^{-s_{2}}q^{-i\mu_{2}})}{(1-xq)(1-x^{-1}q)} =\sum_{s\geq 1}\int_{\mathbb{R}}d\lambda\,\mathcal{K}^{(s)}(\lambda)\left(x^{s}\,q^{i\lambda}+x^{-s}\,q^{-i\lambda}\right)\] \[+\int_{0}^{\infty}d\lambda\,\mathcal{K}^{(0)}(\lambda)\left(q^{i\lambda}+q^{-i\lambda}\right) \tag{112}\] Denoting the L.H.S of eq. (112) by \(\Phi\), the R.H.S imposes a very nontrivial constraint, namely \(\Phi(q,x)=\Phi(q^{-1},x^{-1})\), or equivalently \(\Phi(t,\theta)=\Phi(-t,-\theta)\) if we make the substitution \(q=e^{-t}\) and \(x=e^{i\theta}\).
It is easy to check that \(\Phi(t,\theta)=\frac{2\cos(\mu_{1}t-s_{1}\theta)\cos(\mu_{2}t-s_{2}\theta)}{\cosh t-\cos\theta}\) satisfies this condition. To proceed, we need to match the coefficients of any \(x^{\pm s}\) on both sides of eq. (112). It requires the Fourier expansion of \((1-xq)^{-1}(1-x^{-1}q)^{-1}\), which depends on the sign of \(1-q\): \[\frac{1}{(1-xq)(1-x^{-1}q)}=\sum_{n\in\mathbb{Z}}\,x^{n}\times\begin{cases}\frac{q^{|n|}}{1-q^{2}}&0<q<1\\ \frac{q^{-|n|}}{q^{2}-1}&q>1\end{cases} \tag{113}\] Because of the pole at \(q=1\) for each Fourier coefficient, \(\mathcal{K}^{(s)}(\lambda)\) defined in (111) is divergent. In order to regularize it, we introduce another tensor product \(\mathcal{P}_{\Delta_{3}}\otimes\mathcal{P}_{\Delta_{4}}\), and compute a relative density \(\mathcal{K}^{(s)}_{\text{rel}}(\lambda)\) of principal series between \(\mathcal{P}_{\Delta_{1},s_{1}}\otimes\mathcal{P}_{\Delta_{2},s_{2}}\) and \(\mathcal{P}_{\Delta_{3}}\otimes\mathcal{P}_{\Delta_{4}}\). By matching the coefficient of \(x^{0}\), we find \[\mathcal{K}^{(0)}_{\text{rel}}(\lambda) =-\frac{1}{4\pi}\sum_{\pm,\pm}\left(\psi\left(\frac{s_{1}+s_{2}+1\pm i(\mu_{1}+\mu_{2})\pm i\lambda}{2}\right)+\psi\left(\frac{s_{1}-s_{2}+1\pm i(\mu_{1}-\mu_{2})\pm i\lambda}{2}\right)\right)\] \[+\frac{1}{4\pi}\sum_{\pm,\pm,\pm}\psi\left(\frac{1\pm i\mu_{3}\pm i\mu_{4}\pm i\lambda}{2}\right) \tag{114}\] When \(s_{1}=s_{2}=0\), (112) reduces to eq. (104) with \(d=2,s=0\). Pole crossing does not happen if we move \(\mu_{1},\mu_{2}\) along the real axis, or analytically continue one of the two \(\mathcal{P}_{\Delta_{i},s_{i}}\) to a complementary series representation. It suggests the absence of complementary series in any \(\mathcal{P}\otimes\mathcal{P}\) or \(\mathcal{P}\otimes\mathcal{C}\) of \(\text{SO}(1,3)\). The \(\mathcal{C}\otimes\mathcal{C}\) case has been studied in section 4.3. Similarly, by matching \(x^{s}\), we obtain \[\mathcal{K}_{\rm rel}^{(s)}(\lambda) =-\frac{1}{4\pi}\sum_{\pm}\left(\psi\left(\frac{|s\pm(s_{1}+s_{2})|+1\mp i(\mu_{1}+\mu_{2})-i\lambda}{2}\right)-\psi\left(\frac{|s\pm(s_{1}-s_{2})|+1\mp i(\mu_{1}-\mu_{2})-i\lambda}{2}\right)\right)\] \[-\frac{1}{4\pi}\sum_{\pm}\left(\psi\left(\frac{|s\pm(s_{1}+s_{2})|+1\pm i(\mu_{1}+\mu_{2})+i\lambda}{2}\right)-\psi\left(\frac{|s\pm(s_{1}-s_{2})|+1\pm i(\mu_{1}-\mu_{2})+i\lambda}{2}\right)\right)\] \[+\frac{1}{4\pi}\sum_{\pm,\pm,\pm}\psi\left(\frac{s+1\pm i\mu_{3}\pm i\mu_{4}\pm i\lambda}{2}\right). \tag{100}\] ## 5 Conformal field theory in de Sitter Since dS spacetime is conformally equivalent to flat spacetime, the full Hilbert space \(\mathcal{H}_{\rm CFT}\) of a CFT in dS\({}_{d+1}\) is a direct sum of unitary and irreducible primary representations of the Lorentzian conformal group \(\mathrm{SO}(2,d+1)\). (A short review of \(\mathrm{SO}(2,d+1)\) and its representations can be found in appendix F). Then understanding the representation structure of \(\mathcal{H}_{\rm CFT}\) under the dS isometry group \(\mathrm{SO}(1,d+1)\) amounts to decomposing primary representations of \(\mathrm{SO}(2,d+1)\) into UIRs of \(\mathrm{SO}(1,d+1)\). In representation theory of compact groups, the Weyl character is a powerful tool to decompose a UIR \(R\) of \(G\) into UIRs of its subgroup \(H\). For example, let \(G={\rm SO}(4)\), \(H={\rm SO}(3)\) and \(R=\square\). The \(\mathrm{SO}(4)\) Weyl character associated to \(\square\) is \[\chi_{\square}^{\rm SO(4)}(x,y)=\frac{(x+y)(xy+1)\left(x^{2}y^{2}+1\right)}{x^{2}y^{2}}.
\tag{101}\] Since \(G\) and \(H\) have only one common Cartan generator (denoted by \(J_{3}\)), we take \(y=1\) in eq. (101), which effectively computes \(\,\mathrm{tr}\,\,x^{J_{3}}\) in the representation \(\square\), and then the character becomes \[\chi_{\square}^{\rm SO(4)}(x)=\left(x^{2}+x+1+x^{-1}+x^{-2}\right)+\left(x+1+x^{-1}\right) \tag{102}\] which is manifestly the sum of \(\mathrm{SO}(3)\) characters corresponding to spin 1 and spin 2 representations respectively. Altogether, we obtain the following simple branching rule \[\square\,\big|^{\mathrm{SO}(4)}_{\mathrm{SO}(3)}=\mathbb{Y}_{2}\oplus\mathbb{Y}_{1}\,.\] ### \(\text{SO}(2,d+1)\) group characters and noncompact generators Given a primary representation of \(\text{SO}(2,d+1)\), the usual character considered in CFT is defined as the trace of \(e^{-\beta H}\) decorated by \(\text{SO}(d+1)\) Cartan generators, where \(\beta>0\) for the convergence of the trace. Rigorously speaking, such a character (which will be referred to as a CFT character henceforth) is _not_ a group character because \(e^{-\beta H}\) does not belong to the group \(\text{SO}(2,d+1)\). For a scalar primary representation \(\mathcal{R}_{\Delta}\), the CFT character is given by [36] \[\Theta^{\text{CFT}_{d+1}}_{\mathcal{R}_{\Delta}}(\beta,\mathbf{y})=\frac{e^{-\Delta\beta}}{P_{d+1}(e^{-\beta},\mathbf{y})} \tag{100}\] where \(\mathbf{y}=(y_{1},y_{2},\cdots,y_{r^{\prime}}),r^{\prime}=\left\lfloor\frac{d+1}{2}\right\rfloor\) are fugacities of \(\text{SO}(d+1)\) rotations. For example, when \(d=0\) the CFT character is simply \(\frac{e^{-\Delta\beta}}{1-e^{-\beta}}\), and when \(d=1\) it is \(\frac{e^{-\Delta\beta}}{(1-ye^{-\beta})(1-y^{-1}e^{-\beta})}\). Unfortunately, this well-known CFT character cannot be used to study the reduction from \(\text{SO}(2,d+1)\) to \(\text{SO}(1,d+1)\) in that its definition involves the \(\text{SO}(2)\) generator \(H\), which is not accessible to the \(\text{SO}(1,d+1)\) subgroup. Instead, we need the \(\text{SO}(2,d+1)\) character defined with respect to the \(\text{SO}(1,d+1)\) Cartan generators, namely21 Footnote 21: We assume \(0<q<1\) without loss of generality. \[\Theta^{\text{SO}(2,d+1)}_{\mathcal{R}_{\Delta}}(q,\mathbf{x})\equiv\,\text{tr}\,_{\mathcal{R}_{\Delta}}\left(q^{D}x_{1}^{J_{1}}\cdots x_{r}^{J_{r}}\right) \tag{101}\] where \(J_{1},J_{2},\cdots,J_{r}\) are the Cartan generators of \(\text{SO}(d)\), with \(r=\left\lfloor\frac{d}{2}\right\rfloor\). Since \(q^{D}x_{1}^{J_{1}}\cdots x_{r}^{J_{r}}\) is a group element of \(\text{SO}(1,d+1)\) when \(x_{j}\in\text{U}(1)\), we will call \(\Theta^{\text{SO}(2,d+1)}_{\mathcal{R}_{\Delta}}(q,\mathbf{x})\) an \(\text{SO}(2,d+1)\) group character. It exists in the distributional sense because the trace in eq. (101) is, roughly speaking, an oscillating sum. For \(\text{SO}(2,1)\), the trace is computed exactly in [63]. In particular, the \(\text{SO}(2,1)\) group character of \(\mathcal{R}_{\Delta}\) is found to be \(\frac{q^{\Delta}}{1-q}\), very similar to its CFT counterpart. This result will be the main building block for us to systematically compute the \(\text{SO}(2,d+1)\) group character of \(\mathcal{R}_{\Delta}\) for any \(d\). The strategy is to decompose a primary representation \(\mathcal{R}_{\Delta}\) of \(\text{SO}(2,d+1)\) into primary representations of \(\text{SO}(2,1)\) while keeping track of the \(\text{SO}(d)\) spin of each \(\text{SO}(2,1)\) piece in the decomposition.
In other words, we need the reduction of \(\mathcal{R}_{\Delta}\) from \(\text{SO}(2,d+1)\) to \(\text{SO}(2,1)\times\text{SO}(d)\). The CFT character allows us to solve such a reduction very easily. When \(d=2r\) is even, we have \(P_{d+1}(e^{-\beta},\mathbf{y})=(1-e^{-\beta})P_{d}(e^{-\beta},\mathbf{y})\), and using eq. (100) for \(P_{d}(e^{-\beta},\mathbf{y})^{-1}\) we obtain \[\Theta^{\text{CFT}_{d+1}}_{\mathcal{R}_{\Delta}}(\beta,\mathbf{y})= \frac{e^{-\Delta\beta}}{1-e^{-\beta}}\,\sum_{s\geq 0}\chi^{\text{SO}(d)}_{ \mathbb{Y}_{s}}(\mathbf{y})\frac{e^{-s\beta}}{1-e^{-2\beta}}=\sum_{n,s\geq 0} \Theta^{\text{CFT}_{1}}_{\mathcal{R}_{\Delta+2n+s}}(\beta)\chi^{\text{SO}(d)}_ {\mathbb{Y}_{s}}(\mathbf{y}) \tag{102}\] where \((1-e^{-2\beta})^{-1}\) is expanded into a Taylor series of \(e^{-2\beta}\), and each summand for fixed \(n\) and \(s\) is expressed as the product of a \(\text{CFT}_{1}\) character and an \(\text{SO}(d)\) character. It implies the following decomposition \[\mathcal{R}_{\Delta}|^{\text{SO}(2,d+1)}_{\text{SO}(2,1)\times \text{SO}(d)}=\bigoplus_{n,s\geq 0}\left[\mathcal{R}_{\Delta+2n+s}\right]_{\text{SO}(2,1)} \otimes\left[\mathbb{Y}_{s}\right]_{\text{SO}(d)} \tag{103}\] Applying the \(\mathrm{SO}(2,d+1)\) character to this decomposition yields \[\Theta^{\mathrm{SO}(2,d+1)}_{\mathcal{R}_{\Delta}}(q,\mathbf{x})=\sum_{n,s\geq 0} \Theta^{\mathrm{SO}(2,1)}_{\mathcal{R}_{\Delta+2n+s}}(q)\chi^{\mathrm{SO}(d)}_{ \mathbb{Y}_{s}}(\mathbf{x}) \tag{5.9}\] Since \(\mathrm{SO}(2,1)\) group characters and \(\mathrm{CFT}_{1}\) characters take the same form upon the replacement \(q\to e^{-\beta}\), eq. (5.7) and eq. (5.9) guarantee that this identification also holds for any even \(d\), namely \[\text{Even }d:\ \ \Theta^{\mathrm{SO}(2,d+1)}_{\mathcal{R}_{\Delta}}(q,\mathbf{x} )=\frac{q^{\Delta}}{P_{d+1}(q,\mathbf{x})} \tag{5.10}\] When \(d=2r+1\) is odd, \(\mathrm{SO}(d+1)\) has one more Cartan generator than \(\mathrm{SO}(d)\). Therefore, in order to derive the reduction from \(\mathrm{SO}(2,d+1)\) to \(\mathrm{SO}(2,1)\times\mathrm{SO}(d)\), we should set \(y_{r+1}=1\) in eq. (5.5). Let \(\hat{\mathbf{y}}\) be the first \(r\) components of \(\mathbf{y}\). Since \(P_{d+1}(e^{-\beta},\mathbf{y})=\prod_{i=1}^{r+1}(1-y_{i}e^{-\beta})(1-y_{i}^{-1}e^ {-\beta})\), taking \(y_{r+1}\) yields \[P_{d+1}(e^{-\beta},\hat{\mathbf{y}},1)=\left(1-e^{-\beta}\right)^{2}\prod_{i=1}^{ r}(1-y_{i}e^{-\beta})(1-y_{i}^{-1}e^{-\beta})=(1-e^{-\beta})P_{d}(e^{-\beta}, \hat{\mathbf{y}}) \tag{5.11}\] Applying eq. (C.1) for \(P_{d}(e^{-\beta},\hat{\mathbf{y}})^{-1}\) we obtain \[\Theta^{\mathrm{CFT}_{d+1}}_{\mathcal{R}_{\Delta}}(\beta,\hat{\mathbf{y}},1)=\sum _{n,s\geq 0}\Theta^{\mathrm{CFT}_{1}}_{\mathcal{R}_{\Delta+2n+s}}(\beta)\chi^{ \mathrm{SO}(d)}_{\mathbb{Y}_{s}}(\hat{\mathbf{y}}) \tag{5.12}\] which leads to the decomposition (5.8), and \[\text{Odd }d:\ \ \Theta^{\mathrm{SO}(2,d+1)}_{\mathcal{R}_{\Delta}}(q,\mathbf{x} )=\frac{q^{\Delta}}{P_{d+1}(q,\mathbf{x},1)} \tag{5.13}\] Altogether, eq. (5.10) and eq. 
(5.13) can be uniformly expressed as \[\Theta^{\mathrm{SO}(2,d+1)}_{\mathcal{R}_{\Delta}}(q,\mathbf{x})=\frac{q^{\Delta} }{(1-q)P_{d}(q,\mathbf{x})} \tag{5.14}\] Similarly, for a spinning primary representation \(\mathcal{R}_{\Delta,\ell}\), one can show that the corresponding \(\mathrm{SO}(2,d+1)\) character is given by \[\Theta^{\mathrm{SO}(2,d+1)}_{\mathcal{R}_{\Delta,\ell}}(q,\mathbf{x})=\frac{q^{ \Delta}}{(1-q)P_{d}(q,\mathbf{x})}\times\begin{cases}\chi^{\mathrm{SO}(d+1)}_{ \mathbb{Y}_{\ell}}(\mathbf{x}),&d=2r\\ \chi^{\mathrm{SO}(d+1)}_{\mathbb{Y}_{\ell}}(\mathbf{x},1),&d=2r+1\end{cases} \tag{5.15}\] ### From \(\mathrm{SO}(2,2)\) to \(\mathrm{SO}(1,2)\) For \(d=1\), the restriction of any primary representation of \(\mathrm{SO}(2,2)\) to \(\mathrm{SO}(2,1)\) is partially solved in [14]. It is found that an \(\mathrm{SO}(2,2)\) primary representation \(\mathcal{R}_{\Delta,\ell}\) of energy \(\Delta\) and spin \(\ell\), satisfying the unitarity bound \(\Delta\geq|\ell|\), contains \(\mathrm{SO}(1,2)\) principal series \(\mathcal{P}_{\frac{1}{2}+i\lambda}\) for all \(\lambda\). In addition, there is a complementary series representation \(\mathcal{C}_{1-\Delta}\) when \(\Delta<\frac{1}{2}\) and \(\ell=0\), and \(|\ell|\) discrete series representations \(\mathcal{D}_{1}^{\mathrm{sign}(\ell)},\mathcal{D}_{2}^{\mathrm{sign}(\ell)}, \cdots,\mathcal{D}_{|\ell|}^{\mathrm{sign}(\ell)}\) when \(\ell\neq 0\). However, a properly defined density of the principal series is still missing. We will solve this problem by using Harish-Chandra characters. #### 5.2.1 Character analysis Using a straightforward generalization of the derivation in subsection 5.1, we find that the \(\mathrm{SO}(2,2)\) group character of the primary representation \(\mathcal{R}_{\Delta,\ell}\) is given by \[\Theta^{\mathrm{SO}(2,2)}_{\mathcal{R}_{\Delta,\ell}}(q)=\frac{q^{\Delta}}{(1-q )^{2}},\;\;\;0<q<1. \tag{111}\] This character is insensitive to the spin \(\ell\) because we have only turned on one Cartan generator while \(\mathrm{SO}(2,2)\) has two commuting generators, one counting the energy and the other counting the spin. In the spinning case, i.e. \(\Delta>|\ell|>0\), we expect the following formal expansion \[\Theta^{\mathrm{SO}(2,2)}_{\mathcal{R}_{\Delta,\ell}}(q)=\int_{0}^{\infty}\,d \lambda\,\mathcal{K}(\lambda;\ell)\frac{q^{\Delta_{\lambda}}+q^{\bar{\Delta}_ {\lambda}}}{1-q}+\sum_{k=1}^{|\ell|}\frac{q^{k}}{1-q} \tag{112}\] where \(\mathcal{K}(\lambda;\ell)\) is a formal "density" of \(\mathrm{SO}(1,2)\) principal series. As we have seen many times in previous sections, \(\mathcal{K}(\lambda;\ell)\) defined this way is actually divergent because \(\Theta^{\mathrm{SO}(2,2)}_{\mathcal{R}_{\Delta,\ell}}(q)\) has a double pole at \(q=1\) while \(\mathrm{SO}(1,2)\) characters only have a single pole at \(q=1\). To regularize the divergence in \(\mathcal{K}(\lambda;\ell)\), we introduce another spinless primary representation \(\mathcal{R}_{\Delta^{\prime}}\) with \(\Delta^{\prime}>\frac{1}{2}\) such that it does not contain any complementary series representation of \(\mathrm{SO}(1,2)\) and define a relative density \(\mathcal{K}_{\mathrm{rel}}(\lambda;\ell)\) as \[\Theta^{\mathrm{SO}(2,2)}_{\mathcal{R}_{\Delta,\ell}}(q)-\Theta^{\mathrm{SO}( 2,2)}_{\mathcal{R}_{\Delta^{\prime}}}(q)=\int_{0}^{\infty}\,d\lambda\, \mathcal{K}_{\mathrm{rel}}(\lambda;\ell)\Theta_{\Delta_{\lambda}}(q)+\sum_{k= 1}^{|\ell|}\frac{q^{k}}{1-q}\,. \tag{113}\] From eq. 
(113), we can easily solve the relative density using (100) \[\mathcal{K}_{\mathrm{rel}}(\lambda;\ell) =\frac{1}{\pi}\int_{0}^{\infty}\,dt\,\left(\frac{e^{-(\Delta-\frac{1}{2})t}-e^{-(\Delta^{\prime}-\frac{1}{2})t}}{1-e^{-t}}-\sum_{k=1}^{|\ell|}e^{-(k-\frac{1}{2})t}\right)\cos(\lambda t)\] \[=\frac{1}{2\pi}\sum_{\pm}\left[\psi\left(\Delta^{\prime}-\frac{1}{2}\pm i\lambda\right)-\psi\left(\Delta-\frac{1}{2}\pm i\lambda\right)\right]-\frac{1}{\pi}\sum_{k=1}^{|\ell|}\frac{k-\frac{1}{2}}{(k-\frac{1}{2})^{2}+\lambda^{2}} \tag{114}\] where the first term is independent of \(\ell\) and the second term is independent of \(\Delta\). Equation (113) also holds for \(\ell=0\) as long as \(\Delta\geq\frac{1}{2}\), since \(\mathcal{R}_{\Delta}\) does not contain complementary series of \(\mathrm{SO}(1,2)\) in this regime, and the corresponding \(\mathcal{K}_{\mathrm{rel}}(\lambda;0)\) gives the relative density of principal series between \(\mathcal{R}_{\Delta}\) and \(\mathcal{R}_{\Delta^{\prime}}\). Taking \(\Delta<\frac{1}{2}\) for \(\ell=0\), eq. (113) should be modified as \[0<\Delta<\frac{1}{2}:\;\;\;\Theta^{\mathrm{SO}(2,2)}_{\mathcal{R}_{\Delta}}(q)-\Theta^{\mathrm{SO}(2,2)}_{\mathcal{R}_{\Delta^{\prime}}}(q)=\int_{0}^{\infty}\,d\lambda\,\mathcal{K}_{\mathrm{rel}}(\lambda;0)\Theta_{\Delta_{\lambda}}(q)+\frac{q^{\Delta}+q^{1-\Delta}}{1-q}\,. \tag{115}\] where we have used eq. (101) with \(n=0\) for the integral \(\int_{0}^{\infty}\,d\lambda\,\mathcal{K}_{\mathrm{rel}}(\lambda;0)\Theta_{\Delta_{\lambda}}(q)\). The extra term \(\frac{q^{\Delta}+q^{1-\Delta}}{1-q}\) is consistent with the existence of an \(\mathrm{SO}(1,2)\) complementary series representation \(\mathcal{C}_{1-\Delta}\) in the primary representation \(\mathcal{R}_{\Delta}\) when \(0<\Delta<\frac{1}{2}\). When \(\Delta=|\ell|\), which corresponds to a chiral field in CFT\({}_{2}\), the primary representation \(\mathcal{R}_{|\ell|,\ell}\) contains an infinite number of null states. More explicitly, we have a basis \(\{|n,\bar{n}\rangle,n,\bar{n}\geq 0\}\) for any \({\cal R}_{\Delta,\ell}\) defined by eq. (116). According to eq. (117), states with \(\bar{n}\neq 0\) are null when \(\Delta=\ell\), and states with nonzero \(n\) are null when \(\Delta=-\ell\). In addition, it is straightforward to check that the physical states form the \(\mathrm{SO}(1,2)\) discrete series representation \({\cal D}_{\ell}^{\pm}\) when \(\Delta=\pm\ell\). Altogether, the \(\mathrm{SO}(2,2)\) UIR corresponding to a chiral field only gives a discrete series representation when restricted to \(\mathrm{SO}(1,2)\). #### 5.2.2 Numerical check The \(\mathrm{SO}(1,2)\) Casimir restricted to the subspace spanned by scalar descendants, i.e. \(\mathrm{SO}(2,2)\) descendants that are also \(\mathrm{SO}(2)\) singlets, contains all the information about principal and complementary series representations in \({\cal R}_{\Delta,\ell}\). Denote the normalized scalar descendant states by \(|n\rangle,n\geq|\mathrm{min}(0,\ell)|\), where each \(|n\rangle\) is at level \(2n+\ell\). The matrix elements of \(C^{\mathrm{SO}(1,2)}\) with respect to the \(|n\rangle\) basis have been derived in appendix D of [14]. Define \({\cal Q}_{n,m}=\langle n|C^{\mathrm{SO}(1,2)}|m\rangle\), and the nonvanishing entries of \({\cal Q}\) are \[{\cal Q}_{n,n}=2n(n+\ell)+\Delta(\ell+2n+1)\] \[{\cal Q}_{n+1,n}={\cal Q}_{n,n+1}=-\sqrt{(n+1)(n+\Delta)(n+\ell+1)(n+\Delta+\ell)}.
\tag{118}\] Similar to previous sections, we can show that this implies \(C^{\mathrm{SO}(1,2)}\) has a continuous spectrum on principal series. As before, we may introduce a truncation cutoff \({\cal N}\) on \(n\) and diagonalize the \({\cal Q}\) matrix to find a finite set of eigenvalues \(q_{i}\). Any eigenvalue smaller than \(\frac{1}{4}\) for large \({\cal N}\) corresponds to a complementary series representation with \(\mathrm{SO}(1,2)\) Casimir given by this eigenvalue. The first three eigenvalues of \({\cal Q}\) for scalar primary representations \({\cal R}_{\Delta}\) are plotted as a function of \(\Delta\) in the left panel of fig. (5.1), where the cutoff is chosen to be \({\cal N}=10000\). For \(\Delta\) smaller than roughly \(\frac{1}{2}\), the smallest eigenvalue of \({\cal Q}\) is below the critical line \({\cal Q}=\frac{1}{4}\) and lies on the curve \(\Delta(1-\Delta)\), suggesting the appearance of a single complementary series representation \(\mathcal{C}_{1-\Delta}\) in \(\mathcal{R}_{\Delta}\) in the large \(\mathcal{N}\) limit. The convergence to the expected value \(\Delta(1-\Delta)\) for the \(\Delta=0.35\) case is illustrated in the right panel of fig. (5.1). Figure 5.1: Low-lying eigenvalues of \({\cal Q}\). Left: The three smallest eigenvalues for various positive \(\Delta\). The dashed line is \({\cal Q}=\frac{1}{4}\) and the gray line is \({\cal Q}=\Delta(1-\Delta)\). Right: The convergence of the smallest eigenvalue of \({\cal Q}\) to its large \({\cal N}\) asymptote \({\cal Q}_{\mathrm{min}}(\infty)=\Delta(1-\Delta)\). Inset plot is the log-log version and dashed blue line is a power law fit. We parametrize the eigenstates with \(q_{i}>\frac{1}{4}\) corresponding to the principal series of dimensions \(\Delta_{i}=\frac{1}{2}+i\lambda_{i}\) with \(\lambda_{i}=\sqrt{q_{i}-\frac{1}{4}}\). A coarse-grained density of states is given by the inverse of the spacing of \(\{\lambda_{i}\}\): \[\bar{\rho}_{\mathcal{N}}(\lambda_{i})=\frac{2}{\lambda_{i+1}-\lambda_{i-1}}. \tag{5.22}\] Like before, we consider a relative coarse-grained density, which has a finite \(\mathcal{N}\to\infty\) limit \[\rho_{\rm rel}=\bar{\rho}_{\mathcal{N}}^{\mathcal{R}_{\Delta,\ell}}-\bar{\rho}_{\mathcal{N}}^{\mathcal{R}_{\Delta^{\prime}}}. \tag{5.23}\] Its remarkable match with the character based density \(\mathcal{K}_{\rm rel}(\lambda;\ell)\) is shown in fig. (5.2). Similar to the previous sections, we observe a match between the coarse-grained density \(\bar{\rho}_{\mathcal{N}}\) and the Pauli-Villars type renormalized \(\mathcal{K}(\lambda;\ell)\). For more details see appendix B and fig. (B.5). ### From \(\mathbf{SO}(2,3)\) to \(\mathbf{SO}(1,3)\) Because \(\mathrm{SO}(1,3)\) only has principal series and complementary series, the reduction of any unitary primary representation \(\mathcal{R}_{\Delta,\ell}\) of \(\mathrm{SO}(2,3)\) to \(\mathrm{SO}(1,3)\) is actually simpler than its lower dimensional and higher dimensional counterparts. In this subsection, we perform a character analysis of such a reduction. First, the \(\mathrm{SO}(2,3)\) character of \(\mathcal{R}_{\Delta,\ell}\) is given by eq. (5.15): \[\Theta^{\mathrm{SO}(2,3)}_{\mathcal{R}_{\Delta,\ell}}(q,x)=\frac{\chi^{\mathrm{SO}(3)}_{\mathcal{V}_{\ell}}(x)}{(1-x\,q)(1-x^{-1}q)}\frac{q^{\Delta}}{1-q} \tag{5.24}\] where \(\chi^{\mathrm{SO}(3)}_{\mathcal{V}_{\ell}}(x)=\sum_{m=-\ell}^{\ell}x^{m}\).
Due to the singularity at \(q=1\), the \(\mathrm{SO}(2,3)\) character \(\Theta^{\mathrm{SO}(2,3)}_{\mathcal{R}_{\Delta,\ell}}(q,x)\) itself does not admit a well-defined decomposition into \(\mathrm{SO}(1,3)\) characters. So we introduce another primary representation \(\mathcal{R}_{\Delta^{\prime},\ell}\) with \(\Delta^{\prime}>1\). Then the difference \(\Theta^{\mathrm{SO}(2,3)}_{\mathcal{R}_{\Delta,\ell}}(q,x)-\Theta^{\mathrm{ SO}(2,3)}_{\mathcal{R}_{\Delta^{\prime},\ell}}(q,x)\) can be expanded in terms of \(\mathrm{SO}(1,3)\) characters. For \(\Delta>1\), using the Fourier transformation \[\frac{q^{\Delta-1}-q^{\Delta^{\prime}-1}}{1-q} =\int_{\mathbb{R}}\,d\lambda\,e^{i\lambda t}\mathcal{K}_{\mathrm{ rel}}(\lambda),\ \ \ q=e^{-|t|}\] \[\mathcal{K}_{\mathrm{rel}}(\lambda) =\frac{1}{2\pi}\sum_{\pm}\left[\psi(\Delta^{\prime}-1\pm i \lambda)-\psi(\Delta-1\pm i\lambda)\right] \tag{5.25}\] we obtain \[\Theta^{\mathrm{SO}(2,3)}_{\mathcal{R}_{\Delta,\ell}}(q,x)- \Theta^{\mathrm{SO}(2,3)}_{\mathcal{R}_{\Delta^{\prime},\ell}}(q,x) =\int_{0}^{\infty}d\lambda\,\mathcal{K}_{\mathrm{rel}}(\lambda) \Theta^{\mathrm{SO}(1,3)}_{1+i\lambda}(q,x)\] \[+\sum_{m=1}^{\ell}\int_{0}^{\infty}d\lambda\,\mathcal{K}_{ \mathrm{rel}}(\lambda)\left(\Theta^{\mathrm{SO}(1,3)}_{1+i\lambda,m}(q,x)+ \Theta^{\mathrm{SO}(1,3)}_{1+i\lambda,-m}(q,x)\right) \tag{5.26}\] where \(\Theta^{\mathrm{SO}(1,3)}_{1+i\lambda,m}(q,x)+\Theta^{\mathrm{SO}(1,3)}_{1+i \lambda,-m}(q,x)=\frac{(x^{m}+x^{-m})(q^{1+i\lambda}+q^{1-i\lambda})}{(1-xq)( 1-x^{-1}q)}\) is the \(\mathrm{SO}(1,3)\) character of the reducible representation \(\mathcal{P}_{1+i\lambda,m}\oplus\mathcal{P}_{1+i\lambda,-m}\), which is isomorphic to \(\mathcal{P}_{1-i\lambda,m}\oplus\mathcal{P}_{1-i\lambda,-m}\) because of \(\mathcal{P}_{1+i\lambda,m}\cong\mathcal{P}_{1-i\lambda,-m}\). This reducible representation describes a massive field of spin \(|m|\) in \(\mathrm{dS}_{3}\). When \(\ell=0\), eq. (5.26) implies that there are only scalar principal series representations in \(\mathcal{R}_{\Delta}\) with \(\Delta>1\). Decreasing \(\Delta\) from \(1^{+}\) to the unitarity bound \(\frac{1}{2}\), pole crossing happens in the integral \(\int_{0}^{\infty}d\lambda\,\mathcal{K}_{\mathrm{rel}}(\lambda)\Theta^{ \mathrm{SO}(1,3)}_{1+i\lambda}(q,x)\), which signals the appearance of a complementary series representation. Indeed, for \(\frac{1}{2}<\Delta<1\) and \(\Delta^{\prime}>1\), we find \[\Theta^{\mathrm{SO}(2,3)}_{\mathcal{R}_{\Delta}}(q,x)-\Theta^{ \mathrm{SO}(2,3)}_{\mathcal{R}_{\Delta^{\prime}}}(q,x)=\int_{0}^{\infty}d \lambda\,\mathcal{K}_{\mathrm{rel}}(\lambda)\Theta^{\mathrm{SO}(1,3)}_{1+i \lambda}(q,x)+\Theta^{\mathrm{SO}(1,3)}_{2-\Delta}(q,x) \tag{5.27}\] Therefore, in addition to the principal series \(\mathcal{P}_{1+i\lambda}\), there is also a complementary series representation \(\mathcal{C}_{2-\Delta}\) in \(\mathcal{R}_{\Delta}\) if \(\Delta\) is smaller than \(1\). For \(\ell\geq 1\), unitarity requires \(\Delta\geq 1+\ell\geq 2\), so there cannot be any \(\mathrm{SO}(1,3)\) complementary series representation in \(\mathcal{R}_{\Delta,\ell}\). Instead, eq. (5.26) implies that besides \(\mathcal{P}_{1+i\lambda}\), there are also spinning principal series \(\mathcal{P}_{1+i\lambda,\pm m}\) with \(m=1,2,\cdots,\ell\) in \(\mathcal{R}_{\Delta,\ell}\). Principal series of different spins share the same (relative) density \(\mathcal{K}_{\mathrm{rel}}(\lambda)\). 
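Before moving on, eq. (5.25) is easy to verify numerically. The following sketch (not from the paper; it assumes the mpmath library, and \(\Delta,\Delta^{\prime},\lambda\) are arbitrary test values) computes \(\mathcal{K}_{\rm rel}(\lambda)\) both from the digamma expression and from the direct cosine transform of \((q^{\Delta-1}-q^{\Delta^{\prime}-1})/(1-q)\) with \(q=e^{-t}\); the two numbers should agree.

```python
# Numerical cross-check of eq. (5.25); Delta, Delta', lambda are test values.
from mpmath import mp, mpf, psi, quad, cos, exp, pi, inf

mp.dps = 25

Delta, Delta_p, lam = mpf('1.7'), mpf('2.4'), mpf('0.6')

# digamma form of the relative density, eq. (5.25)
K_char = sum(psi(0, Delta_p - 1 + s*1j*lam) - psi(0, Delta - 1 + s*1j*lam)
             for s in (1, -1))/(2*pi)

# direct cosine transform: K(lam) = (1/pi) * int_0^inf dt cos(lam t) *
#   (e^{-(Delta-1)t} - e^{-(Delta'-1)t}) / (1 - e^{-t})
f = lambda t: cos(lam*t)*(exp(-(Delta - 1)*t) - exp(-(Delta_p - 1)*t))/(1 - exp(-t))
K_num = quad(f, [0, inf])/pi

print(K_char.real, K_num)   # the two values should agree
```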
### From \(\mathrm{SO}(2,d+1)\) to \(\mathrm{SO}(1,d+1)\): scalar primary representations Let \(|\Delta\rangle\) be the primary state of \(\mathcal{R}_{\Delta}\). A generic descendant of \(|\Delta\rangle\) should be a linear combination of \(L^{+}_{a_{1}}\cdots L^{+}_{a_{k}}|\Delta\rangle\), where \(L^{+}_{a}\) are raising operators given by eq. (F.2). Treating \(L^{+}_{a_{1}}\cdots L^{+}_{a_{k}}\) as a symmetrized tensor of \(k\) spin 1 representations of \(\mathrm{SO}(d+1)\), which can be decomposed as a direct sum of \(\mathbb{Y}_{k},\mathbb{Y}_{k-2},\mathbb{Y}_{k-4},\cdots\), then the possible irreducible \(\mathrm{SO}(d+1)\) structures in \(\mathcal{R}_{\Delta}\) are all its single-row representations. According to the review in section 2, the absence of two-row representations of \(\mathrm{SO}(d+1)\) implies that the only possible \(\mathrm{SO}(1,d+1)\) structure in \(\mathcal{R}_{\Delta}\) are the spinless principal series, complementary series, and the type I exceptional series \(\mathcal{V}_{s,0}\). We first prove that \(\mathcal{V}_{s,0}\) is not present in \(\mathcal{R}_{\Delta}\). A \(\mathcal{V}_{s,0}\) representation in \(\mathcal{R}_{\Delta}\) is characterized by a spin-\(s\) state \(|\phi\rangle_{a_{1}\cdots a_{s}}\) satisfying \(L_{a_{1}0}|\phi\rangle_{a_{1}\cdots a_{s}}=0\) because \(\mathcal{V}_{s,0}\) does not contain any \(\mathbb{Y}_{k}\) components with \(k<s\) while \(L_{a_{1}0}|\phi\rangle_{a_{1}\cdots a_{s}}\) has the symmetry of \(\mathbb{Y}_{s-1}\)22. In the primary representation \(\mathcal{R}_{\Delta}\), any spin-\(s\) state is a linear combination of \((L^{+}\cdot L^{+})^{n}(L^{+}_{a_{1}}\cdots L^{+}_{a_{s}}-\mathrm{trace})| \Delta\rangle,n\geq 0\), which in the index-free formalism can be written compactly as Footnote 22: More rigorously, \(\mathcal{V}_{s,0}\) is characterized by a spin-\(s\) state \(|\phi\rangle_{a_{1}\cdots a_{s}}\) such that the \(\mathbb{Y}_{s-1}\) and \(\mathbb{Y}_{s,1}\) components of \(L_{a}|\phi\rangle_{a_{1}\cdots a_{s}}\) are vanishing. It is straightforward to check that a state satisfying these conditions is an eigenstate of \(C^{\mathrm{SO}(1,d+1)}\) with the desired eigenvalue. When \(|\phi\rangle_{a_{1}\cdots a_{s}}\in\mathcal{R}_{\Delta}\), the \(\mathbb{Y}_{s,1}\) part vanishes automatically since \(\mathcal{R}_{\Delta}\) does not contain any two-row representation of \(\mathrm{SO}(d+1)\). \[|\phi^{s}_{n}\rangle\equiv\left(L^{+}\cdot L^{+}\right)^{n}\left(z\cdot L^{+} \right)^{s}|\Delta\rangle \tag{112}\] where \(z^{a}\) is an auxiliary null vector in \(\mathbb{C}^{d+1}\). Define \(|\phi,z\rangle=\sum_{n\geq 0}a_{n}|\phi^{s}_{n}\rangle\) and then the analogue of \(L_{a_{1}0}|\phi\rangle_{a_{1}\cdots a_{s}}=0\) in the index-free formalism is \[L_{a0}\mathcal{D}_{z^{a}}|\phi,z\rangle=0,\;\;\;\mathcal{D}_{z_{a}}=\partial_{ z^{a}}-\frac{1}{d+2z\cdot\partial_{z}-1}z_{a}\partial_{z}^{2}. \tag{113}\] Using eq. (108) and (109), we find \[L^{+}_{a}\mathcal{D}_{z^{a}}|\phi^{s}_{n}\rangle =\frac{s(d+s-2)}{d+2s-3}|\phi^{s-1}_{n+1}\rangle\] \[L^{-}_{a}\mathcal{D}_{z^{a}}|\phi^{s}_{n}\rangle =\frac{4s(d+s-2)(n+s+\Delta-1)(n+s+\frac{d-1}{2})}{d+2s-3}|\phi^{ s-1}_{n}\rangle. \tag{114}\] Since \(L_{a0}=\frac{i}{2}\left(L^{+}_{a}+L^{-}_{a}\right)\), the requirement \(L_{a0}\mathcal{D}_{z^{a}}|\phi,z\rangle=0\) leads to a recurrence relation of the coefficients \(a_{n}\) \[4(n+s+\Delta)\left(n+s+\frac{d+1}{2}\right)a_{n+1}+a_{n}=0 \tag{115}\] where \(n\geq-1\) and \(a_{-1}\equiv 0\). 
The equation corresponding to \(n=-1\) holds if and only if \(a_{0}=0\), which further requires all \(a_{n}\) to vanish because of the recurrence relation eq. (115). Altogether, \(\mathcal{R}_{\Delta}\) does not contain any \(\mathrm{SO}(1,d+1)\) representation belonging to the type I exceptional series, and hence the only possible \(\mathrm{SO}(1,d+1)\) species is spinless principal or complementary series representation. #### 5.4.1 Character analysis Assuming the absence of complementary series in \(\mathcal{R}_{\Delta}\) when \(\Delta\) is larger than some critical value \(\Delta_{c}\), which will be confirmed by numerics shortly, we expect the following character relation to hold \[\Theta^{\mathrm{SO}(2,d+1)}_{\mathcal{R}_{\Delta}}(q,\mathbf{x})-\Theta^{\mathrm{ SO}(2,d+1)}_{\mathcal{R}_{\Delta^{\prime}}}(q,\mathbf{x})=\int_{0}^{\infty}d\lambda\, \mathcal{K}_{\mathrm{rel}}(\lambda)\Theta^{\mathrm{SO}(1,d+1)}_{\Delta_{ \lambda}}(q,\mathbf{x}),\;\;\;\Delta,\Delta^{\prime}>\Delta_{c} \tag{116}\] where \(\Theta^{\text{SO}(1,d+1)}_{\Delta_{\lambda}}(q,\mathbf{x})\) is the \(\text{SO}(1,d+1)\) character of principal series representation \(\mathcal{P}_{\frac{d}{2}+i\lambda}\), and \(\mathcal{K}_{\text{rel}}(\lambda)\) is supposed to be the relative density of \(\mathcal{P}_{\frac{d}{2}+i\lambda}\) between the two primary representations \(\mathcal{R}_{\Delta}\) and \(\mathcal{R}_{\Delta^{\prime}}\). Using eq. (5.14) and making the change of variable \(q\to e^{-|t|}\), then the character relation (5.32) is equivalent to the Fourier transform \[\frac{e^{-(\Delta-\frac{d}{2})|t|}-e^{-(\Delta^{\prime}-\frac{d}{2})|t|}}{1-e^{ -|t|}}=\int_{\mathbb{R}}d\lambda\,\mathcal{K}_{\text{rel}}(\lambda)e^{i\lambda t} \tag{5.33}\] where we have extended \(\mathcal{K}_{\text{rel}}(\lambda)\) to an even function on the whole real line. For \(\Delta,\Delta^{\prime}>\frac{d}{2}\), the solution of eq. (5.33) is \[\mathcal{K}_{\text{rel}}(\lambda)=\frac{1}{2\pi}\sum_{\pm}\left[\psi\left( \Delta^{\prime}-\frac{d}{2}\pm i\lambda\right)-\psi\left(\Delta-\frac{d}{2} \pm i\lambda\right)\right]. \tag{5.34}\] Since the unitarity bound on \(\mathcal{R}_{\Delta}\) only requires \(\Delta>\frac{d-1}{2}\), \(\Delta-\frac{d}{2}\) can be negative. In this case, eq. (5.33) needs a modification, which follows from the integral eq. (A.9) with \(n=0\) \[\frac{d-1}{2}<\Delta<\frac{d}{2}:\ \ \frac{e^{-(\Delta-\frac{d}{2})|t|}-e^{-( \Delta^{\prime}-\frac{d}{2})|t|}}{1-e^{-|t|}}=\int_{\mathbb{R}}d\lambda\, \mathcal{K}_{\text{rel}}(\lambda)e^{i\lambda t}+\frac{e^{-(\Delta-\frac{d}{2}) t}+e^{(\Delta-\frac{d}{2})t}}{1-e^{-|t|}} \tag{5.35}\] and then the character relation (5.32) should change accordingly as (suppressing the arguments of characters) \[\Theta^{\text{SO}(2,d+1)}_{\mathcal{R}_{\Delta}}-\Theta^{\text{SO}(2,d+1)}_{ \mathcal{R}_{\Delta^{\prime}}}=\int_{0}^{\infty}d\lambda\,\mathcal{K}_{\text{ rel}}(\lambda)\Theta^{\text{SO}(1,d+1)}_{\Delta_{\lambda}}+\Theta^{\text{SO}(1,d+1)}_{d- \Delta} \tag{5.36}\] which suggests \(\Delta_{c}=\frac{d}{2}\) and the existence of a single complementary series representation \(\mathcal{C}_{d-\Delta}\) in \(\mathcal{R}_{\Delta}\) when \(\frac{d-1}{2}<\Delta<\frac{d}{2}\). The next step is to develop a numerical scheme to test these claims. 
#### 5.4.2 Numerical check To identify those continuous families of states, we need to study the spectrum of the \(\text{SO}(1,d+1)\) Casimir in the \(\text{SO}(d+1)\) singlet subspace of \(\mathcal{R}_{\Delta}\), which is spanned by \(|\phi_{n})=(L^{+}\cdot L^{+})^{n}\,|\Delta\rangle\). The \(\text{SO}(1,d+1)\) Casimir acting on \(|\phi_{n})\) is given by eq. (F.5) \[C^{\text{SO}(1,d+1)}|\phi_{n})=\frac{1}{4}|\phi_{n+1})+\frac{1}{4}L^{-}\cdot L^{-}|\phi_{n})+\left(\frac{1}{2}L^{+}\cdot L^{-}+\frac{d+1}{2}\Delta+(d+1)n\right)|\phi_{n}) \tag{5.37}\] where both \(L^{-}\cdot L^{-}|\phi_{n})\) and \(L^{+}\cdot L^{-}|\phi_{n})\) are computed in appendix G, c.f. eq. (G.5) and eq. (G.3) respectively. Combining all the ingredients yields \[C^{\text{SO}(1,d+1)}|\phi_{n}) =\frac{1}{2}\left(4n(n+\Delta)+(d+1)\Delta\right)|\phi_{n})+\frac{1}{4}|\phi_{n+1})\] \[+4n\left(n+\Delta-1\right)\left(n+\frac{d-1}{2}\right)\left(n+\Delta-\frac{d+1}{2}\right)|\phi_{n-1}) \tag{5.38}\] Define normalized states \(|\phi_{n}\rangle\equiv\frac{1}{\sqrt{(\phi_{n}|\phi_{n})}}|\phi_{n})\), where \((\phi_{n}|\phi_{n})\) is computed in eq. (G.6), then the nonvanishing matrix elements of \(C^{\text{SO}(1,d+1)}\) with respect to the normalized basis \(|\phi_{n}\rangle\) are \[\mathcal{Q}_{nn}=2n(n+\Delta)+\frac{1}{2}(d+1)\Delta\] \[\mathcal{Q}_{n+1,n}=\mathcal{Q}_{n,n+1}=\sqrt{(n+1)(n+\Delta)\left(n+\frac{d+1}{2}\right)\left(n+\Delta-\frac{d-1}{2}\right)} \tag{111}\] where \(\mathcal{Q}_{mn}\equiv\langle\phi_{m}|C^{\mathrm{SO}(1,d+1)}|\phi_{n}\rangle\). For each \(q_{\lambda}=\frac{d^{2}}{4}+\lambda^{2},\lambda\geq 0\) which corresponds to a principal series representation \(\mathcal{P}_{\frac{d}{2}+i\lambda}\), the matrix \(\mathcal{Q}\) admits an eigenvector \(v_{n}(\lambda)\) satisfying \(\mathcal{Q}_{nm}v_{m}(\lambda)=q_{\lambda}v_{n}(\lambda)\), and its asymptotic behavior at large \(n\) is given by \(v_{n}(\lambda)\sim\frac{R}{n^{\frac{1}{2}+i\lambda}}+c.c.\), which implies that all \(v_{n}(\lambda)\) are \(\delta\)-function normalizable. Therefore, \(C^{\mathrm{SO}(1,d+1)}\) has a continuous spectrum in the region \(\left[\frac{d^{2}}{4},\infty\right)\). It is also possible for the infinite dimensional matrix \(\mathcal{Q}\) to have eigenvalues smaller than \(\frac{d^{2}}{4}\). To see these eigenvalues intuitively, we consider a truncated version of \(\mathcal{Q}_{nm}\) by imposing a hard cut-off \(\mathcal{N}\) on \(n\) and \(m\), which effectively cuts off the energy at \(\Delta+2\mathcal{N}\) in the primary representation \(\mathcal{R}_{\Delta}\). Then we can numerically diagonalize the truncation of \(\mathcal{Q}\). In fig. (5.3), with \(\mathcal{N}\) chosen to be \(10^{4}\), we plot the first three eigenvalues of \(\mathcal{Q}\) for primary representations \(\mathcal{R}_{\Delta}\) of \(\mathrm{SO}(2,4)\). It shows that when \(1<\Delta\lesssim\frac{3}{2}\), the smallest eigenvalue of \(\mathcal{Q}\) becomes smaller than \(\frac{9}{4}\), lying on the line \(\mathcal{Q}=\Delta(3-\Delta)\) which is equal to the \(\mathrm{SO}(1,4)\) Casimir of the complementary series representation \(\mathcal{C}_{3-\Delta}\), and when \(\Delta>\frac{3}{2}\) all the eigenvalues are larger than \(\frac{9}{4}\). This numerical result confirms the prediction of a complementary series representation \(\mathcal{C}_{d-\Delta}\) in \(\mathcal{R}_{\Delta}\) when \(\frac{d-1}{2}<\Delta<\frac{d}{2}\). The numerical diagonalization also allows us to extract information about principal series contained in \(\mathcal{R}_{\Delta}\).
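This diagonalization is simple to reproduce. The Python sketch below (not the paper's code; it assumes NumPy and SciPy, and the values of \(d\), \(\Delta\) and \(\mathcal{N}\) are chosen purely for illustration) builds the truncated tridiagonal matrix of eq. (111) and looks for eigenvalues below \(\frac{d^{2}}{4}\):

```python
# Truncated SO(1,d+1) Casimir matrix from eq. (111), diagonalized numerically.
import numpy as np
from scipy.linalg import eigh_tridiagonal

def casimir_spectrum(Delta, d, N):
    n = np.arange(N)
    diag = 2*n*(n + Delta) + 0.5*(d + 1)*Delta
    off = np.sqrt((n[:-1] + 1)*(n[:-1] + Delta)
                  *(n[:-1] + (d + 1)/2)*(n[:-1] + Delta - (d - 1)/2))
    return eigh_tridiagonal(diag, off, eigvals_only=True)

d, Delta, N = 3, 1.2, 10**4            # illustrative values (the SO(2,4) example above)
q = casimir_spectrum(Delta, d, N)
print("eigenvalues below d^2/4:", q[q < d**2/4])   # expect a single one
print("Casimir of C_{d-Delta}:", Delta*(d - Delta))
```

For \(d=3\) and \(\Delta=1.2\) the lowest eigenvalue approaches \(\Delta(3-\Delta)=2.16\) as \(\mathcal{N}\) grows, consistent with the right panel of fig. (5.3).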
For each fixed \(\mathcal{N}\), we can define a coarse-grained density of principal series as the inverse spacing of \(\lambda_{n}=\sqrt{q_{n}-\frac{d^{2}}{4}}\), where \(q_{n}\) denote the eigenvalues of \({\cal Q}\) that are larger than \(\frac{d^{2}}{4}\). Given two primary representations \({\cal R}_{\Delta}\) and \({\cal R}_{\Delta^{\prime}}\), we obtain similarly a relative coarse-grained density \(\rho_{\rm rel}\) that is finite in the large \({\cal N}\) limit. Fig. (5.4) confirms that \(\rho_{\rm rel}\) matches \({\cal K}_{\rm rel}\) (c.f. eq. (5.34)), derived from characters. Figure 5.3: Low-lying eigenvalues of the \(\mathrm{SO}(1,4)\) Casimir matrix for the primary representation \(\mathcal{R}_{\Delta}\) of a CFT in \(\mathrm{dS}_{4}\). Left: The first three eigenvalues of \(\mathcal{Q}\), given by eq. (111), with the cut-off being \(\mathcal{N}=10^{4}\). The gray line is \(\mathcal{Q}=\Delta(3-\Delta)\) and the dashed line is \(\mathcal{Q}=\frac{9}{4}\). Right: The convergence of \(\mathcal{Q}_{\mathrm{min}}(\mathcal{N})\) to its expected value \(1.2\times(3-1.2)=2.16\) in the \(\mathcal{N}\to\infty\) limit. ### From \({\rm SO}(2,d+1)\) to \({\rm SO}(1,d+1)\): Some remarks on spinning primary representations Let \({\cal R}_{\Delta,\ell}\) be a primary representation of \({\rm SO}(2,d+1)\), built from a spin-\(\ell\) primary state \(|\Delta\rangle_{a_{1}\cdots a_{\ell}}\). As the first step to constrain the possible \({\rm SO}(1,d+1)\) species in \({\cal R}_{\Delta,\ell}\), we list all \({\rm SO}(d+1)\) irreducible representations in it. Descendants of \(|\Delta\rangle_{a_{1}\cdots a_{\ell}}\) are linear combinations of \(L^{+}_{b_{1}}\cdots L^{+}_{b_{k}}|\Delta\rangle_{a_{1}\cdots a_{\ell}}\). Treating \(L^{+}_{b_{1}}\cdots L^{+}_{b_{k}}\) as a symmetrized tensor product of \(k\) spin 1 representations of \({\rm SO}(d+1)\), which can be decomposed as a direct sum \(\oplus_{m}{\mathbb{Y}}_{k-2m}\), then the possible \({\rm SO}(d+1)\) structures of descendants are encoded in the tensor products \({\mathbb{Y}}_{n}\otimes{\mathbb{Y}}_{\ell},n\geq 0\), where \({\mathbb{Y}}_{\ell}\) corresponds to the primary states and \({\mathbb{Y}}_{n}\) corresponds to \((L^{+}_{b_{1}}\cdots L^{+}_{b_{n}}-{\rm trace})\). The tensor product decomposition of \({\mathbb{Y}}_{n}\otimes{\mathbb{Y}}_{\ell}\) is given by eq. (4.1): \[{\mathbb{Y}}_{n}\otimes{\mathbb{Y}}_{\ell}=\bigoplus_{a=0}^{\min(n,\ell)}\bigoplus_{b=0}^{\min(n,\ell)-a}{\mathbb{Y}}_{n+\ell-2a-b,b}. \tag{5.40}\] Some direct results of this decomposition are: * Since \(b\leq\min(n,\ell)\leq\ell\), \({\cal R}_{\Delta,\ell}\) does not contain any \({\mathbb{Y}}_{ss}\) with \(s\geq\ell+1\). It further implies that spin \(s\) UIRs of \({\rm SO}(1,d+1)\) cannot appear in \({\cal R}_{\Delta,\ell}\) if \(s\geq\ell+1\). Therefore, the possible \(\mathrm{SO}(1,d+1)\) UIRs in \(\mathcal{R}_{\Delta,\ell}\) are \[\mathcal{P}_{\frac{d}{2}+i\lambda,s},\;\;\;\mathcal{C}_{\frac{d}{2}+\mu,s},\;\;\;\mathcal{U}_{s,t}\,\] (118) for \(s=0,1,\cdots,\ell\), and the type I exceptional series. * The existence of some \(\mathbb{Y}_{ss}\) requires \(n+\ell-2a-b=b\), which is equivalent to \(a+b=\frac{n+\ell}{2}\). On the other hand, we have \(a+b\leq\min(n,\ell)\leq\frac{n+\ell}{2}\). Therefore, \(n=\ell\) and \(a+b=\ell\). In other words, \(\mathbb{Y}_{ss}\) with \(0\leq s\leq\ell\) only appears in the tensor product \(\mathbb{Y}_{\ell}\otimes\mathbb{Y}_{\ell}\) and appears exactly once.
Altogether, the \(\mathbb{Y}_{ss}\)-type descendants of \(|\Delta\rangle_{a_{1}\cdots a_{\ell}}\) are of the form \[|s,n\rangle_{a_{1}\cdots a_{s},b_{1}\cdots b_{s}}\equiv(L^{+}\cdot L^{+})^{n} \,\hat{\Pi}_{ss}\left(L^{+}_{a_{1}}\cdots L^{+}_{a_{s}}L^{+}_{b_{s+1}}\cdots L ^{+}_{b_{\ell}}|\Delta\rangle_{b_{1}\cdots b_{\ell}}\right)\] (119) where \(n\geq 0,0\leq s\leq\ell\) and \(\hat{\Pi}_{ss}\) is a projector operator onto the \(\mathbb{Y}_{ss}\) part. For example, when \(s=1\), \(\hat{\Pi}_{11}\) simply antisymmetrizes \(a_{1}\) and \(b_{1}\). For higher \(s\), it antisymmtrizes the \(s\) pairs of indices \([a_{1},b_{1}],\cdots,[a_{s},b_{s}]\), and then projects out all types of trace. The goal for the remaining part of this section is to gain more intuitions of the possible \(\mathrm{SO}(1,d+1)\) UIRs contained in \(\mathcal{R}_{\Delta,\ell}\), at least numerically. For this purpose, we first study the spectrum of \(C^{\mathrm{SO}(1,d+1)}\) restricted to the subspace spanned by all \(|s,n\rangle_{a_{1}\cdots a_{s},b_{1}\cdots b_{s}}\) for any fixed \(s\in\{0,1,\cdots,\ell\}\), since it encodes information of spin \(s\) principal and complementary series and \(\mathcal{U}_{s,t}\) in \(\mathcal{R}_{\Delta,\ell}\). In appendix H, we have derived the matrix representation of \(C^{\mathrm{SO}(1,d+1)}\) in this subspace for \(s=0\) and \(1\). Denote the matrix by \(\mathcal{Q}^{(s)}\) and the nonvanishing matrix elements are given by eq. (104): \[\mathcal{Q}^{(s)}_{nn}=2n(n+\Delta+\ell)+\frac{(d+2\ell+1)\Delta- (d-1)\ell}{2}+C^{\mathrm{SO}(d)}(\mathbb{Y}_{s})\] \[\mathcal{Q}^{(s)}_{n+1,n}=\mathcal{Q}^{(s)}_{n,n+1}=\sqrt{(n+1)(n +\Delta+\ell)\left(n+\Delta-\frac{d-1}{2}\right)\left(n+\ell+\frac{d+1}{2} \right)} \tag{120}\] where the diagonal entries of \(\mathcal{Q}^{(s)}\) hold for _any_\(s\) and the off-diagonal ones are _only_ computed for \(s=0\) and \(1\). Surprisingly, we get the same off-diagonal entries for \(s=0\) and \(1\). Assuming that \(\mathcal{Q}^{(s)}_{n,n+1}=\mathcal{Q}^{(s)}_{n+1,n}\) are given by eq. (120) for any \(s\), then \(\mathcal{Q}^{(s)}=\mathcal{Q}^{(0)}+C^{\mathrm{SO}(d)}(\mathbb{Y}_{s})\). This relation has many interesting implications. First, using the asymptotic behavior of \(\mathcal{Q}^{(0)}\) for large \(n\), one can show that it has a continuous spectrum on \([\frac{d^{2}}{4},\infty)\) and hence \(\mathcal{R}_{\Delta,\ell}\) contains all the scalar principal series. Second, noticing that \(C^{\mathrm{SO}(1,d+1)}(\mathcal{F}_{\frac{d}{2}+i\lambda,s})=\frac{d^{2}}{4}+ \lambda^{2}+C^{\mathrm{SO}(d)}(\mathbb{Y}_{s})\), we can also immediately conclude the existence of all spin \(s\) principal series with \(s\in\{0,1,\cdots,\ell\}\). Third, the existence of \(\mathcal{U}_{s,t}\) is equivalent to the existence of the eigenvalue \((1-t)(d+t-1)+C^{\mathrm{SO}(d)}(\mathbb{Y}_{s})\) of \(\mathcal{Q}^{(s)}\), which is also equivalent to the existence of the eigenvalue \((1-t)(d+t-1)\) of \(\mathcal{Q}^{(0)}\). This eigenvalue corresponds to \(\mathcal{F}_{d+t-1}\) of \(\mathrm{SO}(1,d+1)\) which is nonunitary unless \(t=0\)23. Since we start with a unitary CFT in \(\mathrm{dS}_{d+1}\), we do not expect any nonunitary representations of \(\mathrm{SO}(1,d+1)\). To test the \(t=0\) one and other complementary series numerically, we can truncate \({\cal Q}^{(0)}\) by a hard cut-off \(n\leq{\cal N}\) and perform a diagonalization. In fig. 
(5.5), we show low-lying eigenvalues of the truncated \({\cal Q}^{(0)}\) for various primary representations \({\cal R}_{\Delta,\ell}\) in various dimensions. The absence of any eigenvalue below \(\frac{d^{2}}{4}\) seems to exclude complementary series and type II exceptional series in \({\cal R}_{\Delta,\ell}\) with \(\ell\geq 1\). Figure 5.5: Plot of first three low-lying eigenvalues of \({\cal Q}^{(0)}\) for various spacetime dimensions and spins. The dashed lines are \({\cal Q}^{(0)}=\frac{d^{2}}{4}\), with \(d=3,4,6\). The matrix \({\cal Q}^{(s)}\) cannot tell us anything about the type I exceptional series representations since the latter do not contain any \(\mathbb{Y}_{ss}\) representation of \({\rm SO}(d+1)\). Instead, as mentioned in footnote 22, we should study states of symmetry \(\mathbb{Y}_{s}\). Although we have not solved this problem completely, we propose a potentially useful way to determine whether a type I exceptional series appears in \({\cal R}_{\Delta,\ell}\), using the \(s=1\) case as an explicit example. When \(s=1\), \({\cal V}_{1,0}\) is characterized by a spin 1 state \(|\psi\rangle_{a}\) satisfying \[L_{a0}|\psi\rangle_{a}=0,\ \ L_{a0}|\psi\rangle_{b}-L_{b0}|\psi\rangle_{a}=0 \tag{5.44}\] A generic \({\rm SO}(d+1)\) vector in \({\cal R}_{\Delta,\ell}\) is a linear combination of \[|\xi_{n}^{-}\rangle_{a}=(L^{+}\cdot L^{+})^{n}|\Delta\rangle_{a},\ \ \ |\xi_{n}^{+}\rangle_{a}=(L^{+}\cdot L^{+})^{n}L_{a}^{+}|\Delta\rangle \tag{5.45}\] where \(|\Delta\rangle_{a_{1}}=L_{a_{2}}^{+}\cdots L_{a_{\ell}}^{+}|\Delta\rangle_{a_{1}\cdots a_{\ell}}\) and \(|\Delta\rangle=L_{a_{1}}^{+}\cdots L_{a_{\ell}}^{+}|\Delta\rangle_{a_{1}\cdots a_{\ell}}\). By definition, \(|\xi_{n}^{\pm}\rangle_{a}\) has level \(\ell+2n\pm 1\). Let \(|\psi\rangle_{c}=\sum_{n\geq 0}\left(a_{n}|\xi_{n}^{-}\rangle_{c}+b_{n}|\xi_{n}^{+}\rangle_{c}\right)\) be a spin 1 state in \({\cal V}_{1,0}\). After some lengthy computations, one can show that the requirement \(L_{a0}|\psi\rangle_{b}-L_{b0}|\psi\rangle_{a}=0\) is equivalent to \[\frac{1}{2}a_{n}+2(n+1)\left(n+\Delta+\ell-\frac{d-1}{2}\right)a_{n+1}=\ell(\Delta-d)b_{n} \tag{5.46}\] and \(L_{a0}|\psi\rangle_{a}=0\) is equivalent to \[\frac{a_{n}+b_{n-1}}{2}+2(n+1)\left(n+\Delta+\ell+\frac{d-1}{2}\right)a_{n+1}+\left(2n\left(n+\Delta+\ell+\frac{d+1}{2}\right)+(d+1)\Delta+\ell(\Delta+1)\right)b_{n}=0 \tag{5.47}\] When \(d=1\), the UIR \({\cal V}_{1,0}\) reduces to a discrete series representation of \({\rm SO}(1,2)\), and in this case solving (5.46) and (5.47) reduces to \(b_{n-1}+4(n+\Delta)(n+\ell+1)b_{n}=0\), which is consistent with [14]. In principle, one can derive \(\{a_{n},b_{n}\}\) from (5.46) and (5.47), and then use them to check whether \(|\psi\rangle_{c}\) is normalizable. A normalizable \(|\psi\rangle_{c}\) corresponds to the existence of \(\mathcal{V}_{1,0}\). Unfortunately, we could not solve the two coupled recurrence relations for \(d\geq 2\). On the other hand, we want to mention that a significant simplification happens when \(\Delta\) approaches the unitarity bound, i.e. \(\Delta=d+\ell-1\). In this case, the primary state, which will be denoted by \(|J_{\ell}\rangle_{a_{1}\cdots a_{\ell}}\) henceforth, is a spin \(\ell\) conserved current in the sense of \(L_{a_{1}}^{+}|J_{\ell}\rangle_{a_{1}\cdots a_{\ell}}=0\).
Because of this property, all \(|\xi_{n}^{+}\rangle_{a}\) vanish for \(\ell\geq 1\), and all \(|\xi_{n}^{-}\rangle_{a}\) vanish for \(\ell\geq 2\), which means that the primary representation generated by a conserved current of spin larger than \(1\) cannot have \(\mathcal{V}_{1,0}\). For \(\ell=1\), \(|\psi\rangle_{a}\) should be a linear combination of \(|\xi_{n}^{-}\rangle_{a}\) only. We can simply set all \(b_{n}\equiv 0\) and take \(\ell=1,\Delta=d\) in eq. (5.46) or eq. (5.47). It gives \[4n\left(n+\frac{d+1}{2}\right)a_{n}+a_{n-1}=0\rightsquigarrow a_{n}=\frac{(-)^{n}}{4^{n}n!(\frac{d+3}{2})_{n}} \tag{112}\] To obtain the norm of \(|\xi_{n}^{-}\rangle_{a}\), we need the following relation \[(L^{-}\cdot L^{-})\,|\xi_{n}^{-}\rangle_{a} =4n\left(n+\frac{d-1}{2}\right)\left(2(d+1)H+L^{+}\cdot L^{-}\right)|\xi_{n-1}^{-}\rangle_{a}-4n\left[L_{a}^{-},L_{b}^{+}\right]|\xi_{n-1}^{-}\rangle_{b}\] \[=16n\left(n+\frac{d-3}{2}\right)\left(n+\frac{d+1}{2}\right)(n+d-1)|\xi_{n-1}^{-}\rangle_{a} \tag{113}\] It fixes \({}_{a}\langle\xi_{n}^{-}|\xi_{n}^{-}\rangle_{b}\) up to an overall constant (which is suppressed here) \[{}_{a}\langle\xi_{n}^{-}|\xi_{n}^{-}\rangle_{b}=4^{2n}n!\left(\frac{d-1}{2}\right)_{n}\left(\frac{d+3}{2}\right)_{n}(d)_{n}\delta_{ab} \tag{114}\] Combining (112) and (114), we find that \(|\psi\rangle_{a}\) is nonnormalizable for \(d\geq 2\): \[{}_{a}\langle\psi|\psi\rangle_{b}=\delta_{ab}\sum_{n\geq 0}\frac{\left(\frac{d-1}{2}\right)_{n}(d)_{n}}{n!(\frac{d+3}{2})_{n}}=\infty \tag{115}\] It excludes the existence of \(\mathcal{V}_{1,0}\). Moreover, in this case, we can also prove that \(\mathcal{U}_{1,0}\) cannot be present. Recall that \(\mathcal{U}_{1,0}\) is characterized by a state \(|\psi\rangle_{a,b}=-|\psi\rangle_{b,a}\) satisfying \(L_{a0}|\psi\rangle_{a,b}=0\), because \(\mathcal{U}_{1,0}\) does not contain spin \(1\) representations of \(\mathrm{SO}(d+1)\). The state \(|\psi\rangle_{a,b}\) should be a linear combination \(|\psi\rangle_{a,b}=\sum_{n\geq 0}c_{n}|1,n\rangle_{a,b}\), where \[|1,n\rangle_{a,b}=(L^{+}\cdot L^{+})^{n}\left(L_{a}^{+}|J_{1}\rangle_{b}-L_{b}^{+}|J_{1}\rangle_{a}\right). \tag{116}\] After a small calculation, one can show \[L_{a0}|1,n\rangle_{a,b} =\frac{i}{2}\left(L_{a}^{+}+L_{a}^{-}\right)|1,n\rangle_{a,b}\] \[=\frac{i}{2}\left(L^{+}\cdot L^{+}\right)^{n+1}|J_{1}\rangle_{b}+i(n+d)(2n+d-1)\left(L^{+}\cdot L^{+}\right)^{n}|J_{1}\rangle_{b}. \tag{117}\] Then it is clear that \(L_{a0}|\psi\rangle_{a,b}\) vanishes if and only if \(c_{0}=0\) and \[c_{n-1}+4(n+1)\left(n+\frac{d-1}{2}\right)c_{n}=0. \tag{118}\] For \(d\geq 2\), the recurrence relation eq. (118) together with the initial condition \(c_{0}=0\) only has the trivial solution, i.e. \(c_{n}\equiv 0\). Therefore, the primary representation generated by a spin 1 conserved current does not contain any type \(\mathcal{U}\) exceptional series. The reader may wonder where the exceptional series are hiding in the conformal multiplets. For example, free Maxwell theory, which is a CFT in dS\({}_{4}\), must contain \(\mathcal{U}_{1,0}\), which describes single-photon states. We expect that \(\mathcal{U}_{1,0}\) is contained in the conformal multiplet of the gauge invariant local operator that creates single photons, i.e. the field strength, which has two _anti-symmetric_ indices. We speculate that conformal multiplets with two-row Young tableaux contain \(\mathcal{U}_{s,t}\).
## Acknowledgement We thank Dio Anninos, Tarek Anous, Frederik Denef, Petr Kravchuk, Manuel Loparco, Dalimil Mazac and Beatrix Muhlmann for useful discussions. JP is supported by the Simons Foundation grant 488649 (Simons Collaboration on the Nonperturbative Bootstrap) and the Swiss National Science Foundation through the project 200020_197160 and through the National Centre of Competence in Research SwissMAP. ZS is supported by the US National Science Foundation under Grant No. PHY-2209997 and the Gravity Initiative at Princeton University. ## Appendix A Fourier transform of \(\psi\) functions Let \(z,w\in\mathbb{C}\) be two complex numbers with _positive_ real parts. Consider the following integral \[\mathcal{I}(z,w)\equiv\int_{0}^{\infty}\,dt\,\frac{e^{-zt}-e^{-wt}}{1-e^{-t}} \tag{110}\] Using the integral representation of \(\psi\) function \[\psi(z)=\int_{0}^{\infty}dt\left(\frac{e^{-t}}{t}-\frac{e^{-zt}}{1-e^{-t}}\right) \tag{111}\] one can immediately find that \[\mathcal{I}(z,w)=\psi(w)-\psi(z),\quad\text{when }\text{Re}\left(z\right)>0, \text{Re}\left(w\right)>0 \tag{112}\] Although the original integral definition of \(\mathcal{I}(z,w)\) only makes sense when \(\text{Re}\left(z\right),\text{Re}\left(w\right)>0\), the \(\psi\) function representation provides a natural analytical continuation to \(\mathbb{C}\times\mathbb{C}\) minus a discrete set of points. It allows us to consider the following Fourier transformation \[\mathcal{J}(a,b) \equiv\frac{1}{2\pi}\int_{\mathbb{R}}d\lambda\,\left(\mathcal{I}( a+i\lambda,b+i\lambda)+\mathcal{I}(a-i\lambda,b-i\lambda)\right)e^{i\lambda t}\] \[=\frac{1}{2\pi}\int_{\mathbb{R}}d\lambda\,\left(\psi(b+i\lambda)+ \psi(b-i\lambda)-\psi(a+i\lambda)-\psi(a-i\lambda)\right)e^{i\lambda t} \tag{113}\] for any \(a,b\in\mathbb{C}\). First consider \(\operatorname{Re}\left(a\right)\), \(\operatorname{Re}\left(b\right)>0\). In this case, we are allowed to use the integral representation (A.1) of \(\mathcal{I}\), which yields \[\mathcal{I}(a+i\lambda,b+i\lambda)+\mathcal{I}(a-i\lambda,b-i \lambda)=\int_{\mathbb{R}}dt\,\frac{e^{-a|t|}-e^{-b|t|}}{1-e^{-|t|}}e^{-i \lambda t}\] (A.5) Combining eq. (A.4) and eq. (A.5), we find that \[\mathcal{J}(a,b)=\frac{e^{-a|t|}-e^{-b|t|}}{1-e^{-|t|}},\quad\text {when }\operatorname{Re}\left(a\right)\text{,Re}\left(b\right)>0.\] (A.6) Next, consider \(\operatorname{Re}\left(a\right)<0\) and \(\operatorname{Re}\left(b\right)>0\). Without loss of generality, assume that \(\operatorname{Re}\left(a\right)\) is between \(-(n+1)\) and \(-n\) for some positive integer \(n\). The strategy is to find a relation between \(\mathcal{J}(a,b)\) and \(\mathcal{J}(a+n,b)\), since the latter can be computed with eq. (A.6). Applying the \(\psi\) function recurrence relation \(\psi(z+1)=\psi(z)+\frac{1}{z}\) to \(\mathcal{I}(a\pm i\lambda,b\pm i\lambda)\) yields \[\mathcal{I}(a\pm i\lambda,b\pm i\lambda) =\psi(b\pm i\lambda)-\psi(a\pm i\lambda)\] \[=\psi(b\pm i\lambda)-\psi(a+n+1\pm i\lambda)+\sum_{k=0}^{n}\frac{1 }{a+k\pm i\lambda}\] \[=\mathcal{I}(a+n+1\pm i\lambda,b\pm i\lambda)+\sum_{k=0}^{n}\frac {1}{a+k\pm i\lambda}\] (A.7) Plugging eq. (A.7) into the Fourier transform eq. (A.4), we obtain \[\mathcal{J}(a,b) =\mathcal{J}(a+n+1,b)+\sum_{k=0}^{n}(a+k)\int_{\mathbb{R}}\frac{ d\lambda}{\pi}\,\frac{e^{i\lambda t}}{(a+k)^{2}+\lambda^{2}}\] \[=\frac{e^{-(a+n+1)|t|}-e^{-b|t|}}{1-e^{-|t|}}-\sum_{k=0}^{n}e^{(a +k)|t|}\] (A.8) where we have used eq. (A.6) and \(\int_{\mathbb{R}}dt\frac{1}{x^{2}+t^{2}}e^{it\lambda}=\frac{\pi}{|x|}e^{-|x \lambda|}\). 
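As a quick sanity check of eq. (A.3), the defining integral (A.1) can be evaluated numerically and compared against \(\psi(w)-\psi(z)\). A minimal sketch for real positive arguments (assuming numpy/scipy; the sample points are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import digamma

def I_integral(z, w, tmax=60.0):
    """Direct numerical evaluation of eq. (A.1) for real z, w > 0."""
    integrand = lambda t: (np.exp(-z * t) - np.exp(-w * t)) / (1.0 - np.exp(-t))
    # the integrand tends to w - z as t -> 0, so a tiny lower cutoff is harmless
    val, _ = quad(integrand, 1e-10, tmax)
    return val

for z, w in [(0.7, 2.3), (1.5, 4.0)]:
    print(I_integral(z, w), digamma(w) - digamma(z))  # the two numbers should agree, cf. eq. (A.3)
```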
Some further rewriting of eq. (A.8) leads to \[\mathcal{J}(a,b)=\frac{e^{-a|t|}-e^{-b|t|}}{1-e^{-|t|}}-2\sum_{k=0}^{n}\cosh\left((a+k)t\right)\] (A.9) when \(-(n+1)<\operatorname{Re}\left(a\right)<-n\), \(\operatorname{Re}\left(b\right)>0\). As both \(\operatorname{Re}\left(a\right)>0\) and \(\operatorname{Re}\left(a\right)<0\) have been discussed, the remaining case is \(\operatorname{Re}\left(a\right)=0\). However, for a generic imaginary \(a\), the integral (A.4) is not well-defined because \(\psi(a\pm i\lambda)\) have poles at \(\lambda=\pm ia\), lying on the integration contour. The only exception is \(a=0\), in that as \(a\) approaches \(0\), the two poles \(\pm ia\) collide and annihilate each other. More explicitly, using the recurrence relation of \(\psi\) functions, we get \(\psi(i\lambda)+\psi(-i\lambda)=\psi(1+i\lambda)+\psi(1-i\lambda)\), which yields \[\mathcal{J}(0,b)=\mathcal{J}(1,b)=\frac{1-e^{-b|t|}}{1-e^{-|t|}}-1,\quad\text{when }\operatorname{Re}\left(b\right)>0.\] (A.10) where we have used eq. (A.6) for \(\mathcal{J}(1,b)\).

## Appendix B Numerical analysis of \(L_{0}\neq 0\) sector of \(\text{SO}(1,2)\)

In section 3.2.2, we numerically analyzed the decomposition of the tensor product \(\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\) by focusing on the \(L_{0}=0\) sector. The advantage of focusing on the \(L_{0}=0\) sector is that the discrete series \(\mathcal{D}_{p}^{\pm}\) are absent. On the other hand, the formula (3.16) is not written in any particular \(L_{0}\) eigensector. Strictly speaking, the numerical analysis through which we are finding the density of states \(\bar{\rho}_{\mathcal{N}}\) depends on the choice of \(L_{0}\) sector, so a priori it is not a good candidate to match with the results in 3.2.2, since we do not project to any particular \(L_{0}\) eigenspace in the character analysis. In this section, we show that the \(L_{0}=0\) sector indeed captures all the features of the _relative density_ of states \(\rho_{\text{rel}}\), and considering it is enough to check the character analysis result. Let \(\mathcal{H}_{j}\) be the \(L_{0}=j\) subspace of \(\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\). Consider the basis \(|\psi_{n,j}\rangle\equiv|n,j-n),\ n\in\mathbb{Z}\) for \(\mathcal{H}_{j}\).24 Footnote 24: There is no obvious parity symmetry here except for when \(\Delta_{1}=\Delta_{2}\) where one can define \[|\psi_{n,j}^{(\pm)}\rangle=|n,j-n\rangle\pm|j-n,n\rangle\] . 
Following equations (3.6) and (3.4), one finds that the matrix elements of Casimir \[\mathcal{Q}_{mn}^{(j)}\equiv\frac{(\psi_{m,j}|C^{\text{SO}(1,2)}| \psi_{n,j}\rangle)}{\sqrt{(\psi_{m,j}|\psi_{m,j}\rangle(\psi_{n,j}|\psi_{n,j} \rangle)}}\] (B.1) in \(\mathcal{H}_{j}\) is given by \[\mathcal{Q}_{nn}^{(j)}=2n(n-j)+\Delta_{1}\bar{\Delta}_{1}+\Delta_{2}\bar{ \Delta}_{2}\,\qquad\mathcal{Q}_{n+1,n}^{(j)}=\left(\mathcal{Q}_{n,n+1}^{(j)}\right)^{*}= \beta_{n,j}\] (B.2) where \[\beta_{n,j}=-\begin{cases}(n+\Delta_{1})(n-j+\Delta_{2}),&\mathcal{P}_{\Delta _{1}}\otimes\mathcal{P}_{\Delta_{2}}\\ (n+\Delta_{1})\sqrt{(n-j+\Delta_{2})(n-j+\bar{\Delta}_{2})},&\mathcal{P}_{ \Delta_{1}}\otimes\mathcal{C}_{\Delta_{2}}\\ (n-j+\Delta_{2})\sqrt{(n+\Delta_{1})(n+\Delta_{1})},&\mathcal{C}_{\Delta_{1}} \otimes\mathcal{P}_{\Delta_{2}}\\ \sqrt{(n+\Delta_{1})(n+\Delta_{1})(n-j+\Delta_{2})(n-j+\bar{\Delta}_{2})},& \mathcal{C}_{\Delta_{1}}\otimes\mathcal{C}_{\Delta_{2}}\end{cases}\] (B.3) A similar large \(n\) argument as in section 3.2.2, with the same \(\alpha_{\pm}(\lambda)\) in (3.36), shows that all the principal series representations appear in this tensor product as well. We may now diagonalize the \(\mathcal{Q}^{(j)}\) matrix introducing a cutoff \(-\mathcal{N}\leq n\leq\mathcal{N}\) and study the density of states corresponding to \(L_{0}=j\) subspace, \(\bar{\rho}_{\mathcal{N}}^{(j)}\), and its relative density \(\rho_{\text{rel}}^{(j)}\). Discrete series \(\mathcal{D}_{p}^{+}\) with \(0<p\leq j\) show up in the decomposition of subspace \(\mathcal{H}_{j}\). This can be seen in fig. (B.1) where the Casimir values of discrete series \(p(1-p)\) are apparent. In fig. (3.4), we observe a great agreement between \(\bar{\rho}_{\mathcal{N}}^{(j)}\) and hard-cutoff renormalized density of states \(\mathcal{K}_{\text{hc}}\) defined in (3.18) up to a shift. However, this is solely true in \(L_{0}=0\) sector. For \(L_{0}\neq 0\) sectors, in the small \(\lambda\) region the numerics do not agree with \(\mathcal{K}_{\text{hc}}\) as shown in fig. (B.2). On the other hand, the relative density of different \(L_{0}=j\) sectors defined as \[\rho_{\text{rel}}^{(j)}=\bar{\rho}_{\mathcal{N}}^{(j),\mathcal{P}_{\Delta_{1}} \otimes\mathcal{P}_{\Delta_{2}}}-\bar{\rho}_{\mathcal{N}}^{(j),\mathcal{P}_{ \Delta_{3}}\otimes\mathcal{P}_{\Delta_{4}}}\] (B.4) **Figure B.1**: First a few eigenvalues of Casimir \(\mathcal{Q}^{(j)}\). Left: tensor product of two principal series representations in the \(\mathcal{H}_{3}\) subspace. Right: tensor product of two complementary series representations in the \(\mathcal{H}_{1}\) subspace. Discrete series representations with \(\mathcal{D}_{p}^{\pm}\) with \(p\in\{1,2,\cdots,j\}\) also appear in the spectrum. The solid line \(\mathcal{Q}=2\mu(1-2\mu)\) in the right panel corresponds to the expected single complementary series representation \(\mathcal{C}_{2\mu}\) in the tensor \(\mathcal{C}_{\frac{1}{2}+\mu}\otimes\mathcal{C}_{\frac{1}{2}+\mu}\) when \(\frac{1}{4}<\mu<\frac{1}{2}\). **Figure B.2**: Left: Illustration of how the interpolation of \(\bar{\rho}_{\mathcal{N}}^{(j)}\) is derived. The raw data of \(\bar{\rho}_{\mathcal{N}}^{(j)}\) has oscillations (the red and blue dots). To derive a smooth \(\bar{\rho}_{\mathcal{N}}^{(j)}\) we first interpolate each red and blue points individually and then take an average of them to find the purple dashed line. The green line is the shifted \(\mathcal{K}_{\rm hc}\). 
Right: A plot of \(\bar{\rho}_{\mathcal{N}}^{(j)}\) for various values of \(j\) in small \(\lambda\) region. The dashed black line is the \(\mathcal{K}_{\rm hc}\) which is shifted by \(\sim+6.745\) in y-axis. The \(\bar{\rho}_{\mathcal{N}}^{(j)}\) is \(j\)-independent in large \(\lambda\) region but in small \(\lambda\) region they do not match as one varies \(j\). Only \(\bar{\rho}_{\mathcal{N}}^{(0)}\) agrees with \(\mathcal{K}_{\rm hc}\). converges to the \({\cal K}_{\rm rel}\) in (3.22). In addition to this numerical evidence for the claim that \(\rho_{\rm rel}\) is unique in all \(L_{0}\) sectors and matches with \({\cal K}_{\rm rel}\), in what follows we present an analytical argument why this is the case. The tensor product character in (3.15) can be computed in \(|n,m)\) basis as \[\sum_{j\in\mathbb{Z}}\sum_{n\in\mathbb{Z}}(n,j-n|q^{D}|n,j-n)=\sum_{j\in \mathbb{Z}}A_{j}(q)\] (B.5) where \[A_{j}(q)\equiv\sum_{n\in\mathbb{Z}}(n|q^{D}|n)(j-n|q^{D}|j-n)\] (B.6) and its dependence on \(\Delta_{1}=\frac{1}{2}+i\mu_{1}\) and \(\Delta_{2}=\frac{1}{2}+i\mu_{2}\) is implicit. Moreover, concerning the right side of (3.15), each character admits a similar expression: \[\frac{q^{\frac{1}{2}+i\lambda}+q^{\frac{1}{2}-i\lambda}}{1-q}=\sum_{j\in \mathbb{Z}}B_{j}(q)\,\qquad B_{j}(q)\equiv(j|q^{D}|j)\.\] (B.7) We would like to see whether, within each \(j\) sector, we can write an equation similar to (3.15): \[A_{j}(q)=\int_{\mathbb{R}}d\lambda\ {\cal K}_{j}(\lambda)B_{j}(q)\ +{\rm Discrete\ series}\.\] (B.8) To this end, let us write \(A_{j}(q)\) and \(B_{j}(q)\) in a \(\delta\)-function normalized basis \(|x\rangle\) of \({\cal P}_{\Delta}\) mentioned in [43] which satisfies the following relations: \[q^{D}|x\rangle=q^{\bar{\Delta}}\big{|}qx\rangle,\ \ \ \ \langle x|n\rangle=\frac{2^{ \Delta}}{\sqrt{2\pi}}\left(\frac{1-ix}{1+ix}\right)^{n}\frac{1}{(1+x^{2})^{ \Delta}}\.\] (B.9) Figure 3: A perfect match of \({\cal K}_{\rm rel}\) (the dashed black line) and \(\rho_{\rm rel}^{(j)}\)s. The \(\rho_{\rm rel}^{(j)}\) for different \(j\) converges to the same function in large \({\cal N}\). The agreement is good enough that puts them on top of each other in the plot and make them indistinguishable. Plugging in the identity operator into definition of, we find (114) where we define (115) Similarly, for, we have (116) where in the second line we perform the integral of the phase (117) over by noticing that (118) with support at. Equation (113) is formal since the integral representation for is divergent because of the singularity at. However, if we take a derivative with respect to one of the's (), then this equation becomes well defined. Of course, in this way we will lose information about the -independent part of including the discrete series. Consider derivative of (113): (119) where we introduce the superscript to remark the -dependence. Plugging in the integral representations of and introduced in (116) and (114), and using the fact that the integrals are well-defined, so we may interchange the integrals over and, one finds that both equations have the same dependence through and what is left is: (120) where. The left hand side of this equation is -independent which means that is also -independent. From this, one concludes that the difference of is also -independent. This completes the argument that within each sector - which corresponds to the tensor product character and - has to be -independent and illustrates results in fig. (114) where is -independent in large. 
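The truncation-and-diagonalization procedure described above is straightforward to reproduce. The following is a minimal numerical sketch (not the code used for the figures; it assumes numpy, and the values of \(\mu_{1},\mu_{2},j,\mathcal{N}\) are illustrative) that builds the matrix \(\mathcal{Q}^{(j)}\) of eqs. (B.1)-(B.3) for \(\mathcal{P}_{\Delta_{1}}\otimes\mathcal{P}_{\Delta_{2}}\) and extracts its low-lying eigenvalues:

```python
import numpy as np

def casimir_matrix(mu1, mu2, j, N):
    """Truncated SO(1,2) Casimir Q^(j) of eqs. (B.1)-(B.3) for the tensor product
    P_{1/2 + i mu1} x P_{1/2 + i mu2}, with the cutoff -N <= n <= N."""
    d1, d2 = 0.5 + 1j * mu1, 0.5 + 1j * mu2
    ns = np.arange(-N, N + 1)
    Q = np.zeros((ns.size, ns.size), dtype=complex)
    Q[np.diag_indices(ns.size)] = 2 * ns * (ns - j) + abs(d1) ** 2 + abs(d2) ** 2
    beta = -(ns[:-1] + d1) * (ns[:-1] - j + d2)      # beta_{n,j} for n = -N, ..., N-1
    Q += np.diag(beta, -1) + np.diag(np.conj(beta), 1)
    return Q

evals = np.linalg.eigvalsh(casimir_matrix(mu1=1.3, mu2=0.8, j=3, N=1000))
# for j = 3 the lowest eigenvalues should sit near p(1-p) = 0, -2, -6, cf. fig. (B.1),
# while the remaining ones accumulate above 1/4 (principal series)
print(np.sort(evals)[:5])
```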
#### CFT in dS\({}_{2}\)

There is a similar story for the case of CFT in dS\({}_{2}\). Given a primary representation \(\mathcal{R}_{\Delta,\ell}\) of SO\((2,2)\), let \(\mathcal{H}^{(s)}_{\Delta,\ell}\) be the subspace spanned by all descendants of spin \(s\in\mathbb{Z}\)25. It is shown in appendix D of [14] that the SO\((1,2)\) Casimir restricted to the subspace \(\mathcal{H}^{(s)}_{\Delta,\ell}\) can be represented by the following matrix: Footnote 25: The spin \(s\) is an eigenvalue of the generator \(L_{0}\in\text{SO}(1,2)\subset\text{SO}(2,2)\). So \(\mathcal{H}^{(s)}_{\Delta,\ell}\) is the counterpart of \(\mathcal{H}_{j}\) in the tensor product case. \[\begin{split}&\mathcal{Q}^{(s)}_{n,n}=2n(n+\ell-s)+\Delta(\ell+2n+1)-s(\ell+\Delta)\,\\ &\mathcal{Q}^{(s)}_{n+1,n}=\mathcal{Q}^{(s)}_{n,n+1}=-\sqrt{(n+1)(n+\Delta-s)(n+\ell-s+1)(n+\Delta+\ell)}\end{split} \tag{102}\] with \(n\geq|\text{Min}(0,\ell-s)|\).

Figure B.4: First five low-lying eigenvalues of the SO\((1,2)\) Casimir matrix restricted to the subspaces \(s=0,1,2,3\) for \(\mathcal{R}_{\Delta,\ell=3}\). The points above the dashed line belong to principal series. In addition, the discrete series with Casimir \(p(1-p)\) where \(p\in\{1,2,\cdots,s\}\) show up as discussed in the text.

In section 5.2.2, we have studied the spectral properties of \(\mathcal{Q}^{(0)}\) numerically. Now we show some observations for the \(s\neq 0\) case. As before, we truncate the size of \(\mathcal{Q}^{(s)}\) by a large number \(\mathcal{N}\) and diagonalize the truncated matrix numerically. As proven in [14], in the decomposition of the \(\mathrm{SO}(2,2)\) primary representation \(\mathcal{R}_{\Delta,\ell}\), the discrete series representations \(\{\mathcal{D}_{1}^{\mathrm{sign}(\ell)},\mathcal{D}_{2}^{\mathrm{sign}(\ell)},\cdots,\mathcal{D}_{\ell}^{\mathrm{sign}(\ell)}\}\) appear. In section 5.2.2, we were blind to them in the numerics as we considered the \(\mathrm{SO}(1,2)\) Casimir in the \(s=0\) subspace. In fig. (B.4), we identify these discrete series representations as certain eigenvalues of \(\mathcal{Q}^{(s)}\) for \(s\neq 0\). Analogous to eq. (3.23) and eq. (4.14), we can define a regularized version of \(\mathcal{K}(\lambda;\ell)\), formally defined in (5.17), by taking the limit \(\Delta^{\prime}=\Lambda\to\infty\) in eq. (5.19): \[\mathcal{K}_{\mathrm{PV}}(\lambda;\ell)=\frac{\log(\Lambda)}{\pi}-\frac{1}{2\pi}\sum_{\pm}\psi\left(\Delta-\frac{1}{2}\pm i\lambda\right)-\frac{1}{\pi}\sum_{k=1}^{|\ell|}\frac{k-\frac{1}{2}}{(k-\frac{1}{2})^{2}+\lambda^{2}}\.\] (B.18) Again, similar to what we see for the tensor products, the coarse-grained densities \(\bar{\rho}_{\mathcal{N}}^{(s)}\) extracted from \(\mathcal{Q}^{(s)}\) for various \(s\) differ in the small \(\lambda\) region, and \(\mathcal{K}_{\mathrm{PV}}(\lambda;\ell)\) matches with \(\bar{\rho}_{\mathcal{N}}^{(s=0)}\) up to fixing a value for \(\Lambda\) - see the left panel of fig. (B.5) - similar to the observation we had for the case of tensor products illustrated in fig. (3.4) and fig. (B.2). However, the relative density \(\rho_{\mathrm{rel}}\) has been observed to be \(s\)-independent in the large \(\mathcal{N}\) limit and it matches perfectly with \(\mathcal{K}_{\mathrm{rel}}(\lambda;\ell)\) (cf. eq. (5.19)), as shown in the right panel of fig. (B.5). 
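For completeness, the same truncation can be carried out for the matrix \(\mathcal{Q}^{(s)}\) quoted above. A minimal sketch (illustrative only; it assumes numpy, and the choice \(\Delta=3.8\), \(\ell=3\) is arbitrary apart from respecting the unitarity bound):

```python
import numpy as np

def cft_casimir_matrix(Delta, ell, s, N):
    """Truncated SO(1,2) Casimir in the spin-s subspace of the SO(2,2) primary
    representation R_{Delta, ell}, using the matrix elements quoted above."""
    n0 = abs(min(0, ell - s))
    n = np.arange(n0, n0 + N)
    diag = 2 * n * (n + ell - s) + Delta * (ell + 2 * n + 1) - s * (ell + Delta)
    m = n[:-1]
    off = -np.sqrt((m + 1) * (m + Delta - s) * (m + ell - s + 1) * (m + Delta + ell))
    return np.diag(diag) + np.diag(off, -1) + np.diag(off, 1)

for s in range(4):
    ev = np.linalg.eigvalsh(cft_casimir_matrix(Delta=3.8, ell=3, s=s, N=3000))
    print(s, ev[:4])     # to be compared with the spectra shown in fig. (B.4)
```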
In this section, we gave numerical and analytical evidence that for \(d=1\) and in the large \(\mathcal{N}\) limit, the _relative_ coarse-grained density extracted from diagonalizing the truncated Casimir matrix is the same for different subsectors labeled by \(L_{0}=j\) (for tensor product) or \(L_{0}=s\) (for CFT). Therefore, for convenience, we pick the \(L_{0}=0\) subspace in sections 3 and 5 for our numerical investigation. We conjecture this to be true in higher dimensions, although we did not explicitly check it. The matching of the relative density from the character analysis with the numerical results in sections 4 and 5, on the other hand, is an evidence for this conjecture. ## Appendix C Summing over \(\text{SO}(d)\) Weyl character In this appendix, we will give a proof of the following series involving \(\text{SO}(d)\) Weyl characters \[\sum_{s=0}^{\infty}\chi^{\text{SO}(d)}_{\mathbb{Y}_{s}}(\mathbf{x})\,q^{s}=\frac{1- q^{2}}{P_{d}(q,\mathbf{x})} \tag{102}\] where \(0<q<1\) and \(P_{d}(q,\mathbf{x})\) is defined in eq. (15). First we want to argue that it suffices to prove (102) for even \(d\). According to the branching rule, \(\text{SO}(2r+1)\) and \(\text{SO}(2r)\) characters are related by \[\chi^{\text{SO}(2r+1)}_{\mathbb{Y}_{s}}(\mathbf{x})=\sum_{m=0}^{s}\chi^{\text{SO}( 2r)}_{\mathbb{Y}_{m}}(\mathbf{x}) \tag{103}\] which implies \[\sum_{s=0}^{\infty}\chi^{\text{SO}(2r+1)}_{\mathbb{Y}_{s}}(\mathbf{x})\,q^{s}= \sum_{m\geq 0}\chi^{\text{SO}(2r)}_{\mathbb{Y}_{m}}(\mathbf{x})\sum_{s\geq m}\,q^{s}= \frac{1}{1-q}\sum_{m\geq 0}\chi^{\text{SO}(2r)}_{\mathbb{Y}_{m}}(\mathbf{x})q^{m} \tag{104}\] If (102) holds for even \(d=2r\), i.e. \(\sum_{m\geq 0}\chi^{\text{SO}(2r)}_{\mathbb{Y}_{m}}(\mathbf{x})q^{m}=\frac{1-q^{2}}{ P_{2r}(q,\mathbf{x})}\), then it is obviously correct for \(d=2r+1\) since \(P_{2r+1}(q,\mathbf{x})=(1-q)P_{2r}(q,\mathbf{x})\). To prove (102) for even \(d=2r\), we need an explicit expression of \(\chi^{\text{SO}(d)}_{\mathbb{Y}_{s}}(\mathbf{x})\), which is given by the famous Weyl character formula [64] \[\chi^{\text{SO}(d)}_{\mathbb{Y}_{s}}(\mathbf{x})=\frac{|x_{i}^{\ell_{j}}+x_{i}^{- \ell_{j}}|}{|x_{i}^{n-j}+x_{i}^{-(n-j)}|} \tag{105}\] Let's briefly explain the notations in this formula: * \(\ell_{j}\) is the \(j\)-th component of the vector \(\mathbf{\ell}=(s+r-1,r-2,r-3,\cdots,1,0)\). * The numerator \(|x_{i}^{\ell_{j}}+x_{i}^{-\ell_{j}}|\) means the determinant of the matrix \(N_{\rho}\) whose \((j,i)\) entry is \(x_{i}^{\ell_{j}}+x_{i}^{-\ell_{j}}\). * The denominator \(|x_{i}^{n-j}+x_{i}^{-(n-j)}|\) means the determinant of the matrix \(D_{\rho}\) whose \((j,i)\) entry is \(x_{i}^{n-j}+x_{i}^{-(n-j)}\). 
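Before going through the proof, both the Weyl character formula just quoted and the series (102) can be checked numerically for small rank. A minimal sketch for SO\((4)\) (i.e. \(r=2\)), assuming numpy, arbitrary torus angles, and \(P_{4}(q,\mathbf{x})=\prod_{i=1}^{2}\left(1-(x_{i}+x_{i}^{-1})q+q^{2}\right)\), which is the form that appears in the proof below:

```python
import numpy as np

def chi_SO4(s, x1, x2):
    """Single-row SO(4) character chi_{Y_s}(x1, x2) from the Weyl character formula,
    with l = (s+1, 0) for r = 2."""
    num = np.linalg.det(np.array([[x1**(s + 1) + x1**(-s - 1), x2**(s + 1) + x2**(-s - 1)],
                                  [2.0, 2.0]], dtype=complex))
    den = np.linalg.det(np.array([[x1 + 1 / x1, x2 + 1 / x2],
                                  [2.0, 2.0]], dtype=complex))
    return num / den

q = 0.3
x1, x2 = np.exp(0.7j), np.exp(-1.9j)
lhs = sum(chi_SO4(s, x1, x2) * q**s for s in range(200))          # truncated character sum
P4 = (1 - (x1 + 1/x1) * q + q**2) * (1 - (x2 + 1/x2) * q + q**2)  # assumed form of P_4(q, x)
print(lhs, (1 - q**2) / P4)                                       # the two sides should agree
```

As a cross-check, \(\chi^{\text{SO}(4)}_{\mathbb{Y}_{s}}\) computed this way reproduces the dimension \((s+1)^{2}\) of \(\mathbb{Y}_{s}\) in the limit of vanishing torus angles.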
It is well known that \(D_{\rho}\) can be alternatively expressed as \[|D_{\rho}|=2\,\Psi(x_{i}+x_{i}^{-1}),\quad\Psi(\xi_{i})\equiv\prod_{i<j}(\xi_ {i}-\xi_{j})\] (106) Since only the first row of \(N_{\rho}\) depends on \(s\), the sum \(\sum_{s\geq 0}|N_{\rho}|q^{s}\) effectively changes the first row in the following way: \[x_{i}^{s+r-1}+x_{i}^{1-r-s}\rightarrow\frac{x_{i}^{r-1}}{1-x_{i}q}+\frac{x_{i}^{ 1-r}}{1-x_{i}^{-1}q} \tag{111}\] Denote this new matrix by \(N_{\rho}^{\prime}\) and it is related to the series \(\sum_{s\geq 0}\chi_{\mathbb{Y}_{s}}^{\text{SO}(d)}(\mathbf{x})q^{s}\) by \[\sum_{s\geq 0}\chi_{\mathbb{Y}_{s}}^{\text{SO}(d)}(\mathbf{x})q^{s}=\frac{|N_{ \rho}^{\prime}|}{|D_{\rho}|} \tag{112}\] Then we add the rest rows to the first row of \(N_{\rho}^{\prime}\), with a weight \(q^{-(j-1)}\) for the \(j\)-th row (except the last row, for which the weight is \(\frac{1}{2}q^{1-r}\)), which yields \[\frac{x_{i}^{r-1}}{1-x_{i}q}+\frac{x_{i}^{1-r}}{1-x_{i}^{-1}q} \rightarrow \frac{x_{i}^{r-1}}{1-x_{i}q}+\frac{x_{i}^{1-r}}{1-x_{i}^{-1}q}+ q^{-1}(x_{i}^{r-2}+x_{i}^{2-r})+\cdots+q^{-(r-1)} \tag{113}\] \[= \frac{q^{2-r}\left(q^{-1}-q\right)}{1+q^{2}-(x_{i}+x_{i}^{-1})q} =\frac{q^{2-r}\left(q^{-1}-q\right)}{P(q,x_{i})}\] where \(P(q,x)\equiv 1-(x+x^{-1})q+q^{2}\). Altogether, we can rewrite \(|N_{\rho}^{\prime}|\) as \[|N_{\rho}^{\prime}|=2q^{2-r}\left(q^{-1}-q\right)\det\left(\begin{matrix}P(q, x_{1})^{-1}&P(q,x_{2})^{-1}&\cdots&P(q,x_{r})^{-1}\\ x_{1}^{r-2}+x_{1}^{2-r}&x_{2}^{r-2}+x_{2}^{2-r}&\cdots&x_{r}^{r-2}+x_{r}^{2-r} \\ \cdots&\cdots&\cdots&\cdots\\ x_{1}+x_{1}^{-1}&x_{2}+x_{2}^{-1}&\cdots&x_{r}+x_{r}^{-1}\\ 1&1&\cdots&1\end{matrix}\right) \tag{114}\] where the factor 2 appears because we have rescaled the last row. Now a crucial step is to replace \(x_{i}^{j}+x_{i}^{-j}\) by \((-q)^{-j}P(q,x_{i})^{j}\), which does not change the determinant. With this replacement, the matrix becomes essentially a Vandermonde matrix up to some reshuffling and rescaling \[|N_{\rho}^{\prime}| =\frac{2q^{2-r}\left(q^{-1}-q\right)}{(-q)^{\frac{(r-1)(r-2)}{2}} }\det\left(\begin{matrix}P(q,x_{1})^{-1}&P(q,x_{2})^{-1}&\cdots&P(q,x_{r})^{-1 }\\ P(q,x_{1})^{r-2}&P(q,x_{2})^{r-2}&\cdots&P(q,x_{r})^{r-2}\\ \cdots&\cdots&\cdots&\cdots\\ P(q,x_{1})&P(q,x_{2})&\cdots&P(q,x_{r})\\ 1&1&\cdots&1\end{matrix}\right) \tag{115}\] \[=\frac{2\left(-\right)^{r-1}q^{2-r}\left(q^{-1}-q\right)}{(-q)^{ \frac{(r-1)(r-2)}{2}}\prod_{i=1}^{r}P(q,x_{i})}\det V(P(q,x_{i}))\] where \(V(\xi_{i})\) is the \(r\times r\) Vandermonde matrix of \(\xi_{1},\xi_{2},\cdots,\xi_{r}\). Plugging in \(\det V(\xi_{i})=\prod_{i<j}(\xi_{j}-\xi_{i})\) yields \[|N_{\rho}^{\prime}| =\frac{2\left(-\right)^{r-1}q^{2-r}\left(q^{-1}-q\right)}{(-q)^{ \frac{(r-1)(r-2)}{2}}\prod_{i=1}^{r}P(q,x_{i})}\prod_{i<j}\left(P(q,x_{j})-P(q,x _{i})\right) \tag{116}\] \[=\frac{(-)^{r-1}(-q)^{\frac{r(r-1)}{2}}q^{2-r}\left(q^{-1}-q \right)}{(-q)^{\frac{(r-1)(r-2)}{2}}\prod_{i=1}^{r}P(q,x_{i})}\left(2\Psi(x_{i }+x_{i}^{-1})\right)\] Notice that the last term is exactly the denominator \(|D_{\rho}|\) and hence we obtain \[\sum_{s\geq 0}\chi_{s}^{\text{SO}(d)}(\mathbf{x})q^{s}=\frac{(-)^{r-1}(-q) ^{\frac{r(r-1)}{2}}q^{2-r}\left(q^{-1}-q\right)}{(-q)^{\frac{(r-1)(r-2)}{2}}\prod _{i=1}^{r}P(q,x_{i})}=\frac{1-q^{2}}{P_{d}(q,\mathbf{x})} \tag{112}\] This finishes our proof of (111). ## Appendix D Matrix elements of the noncompact generators in \(\text{SO}(1,d+1)\) The noncompact generators of \(\text{SO}(1,d+1)\) are denoted by \(L_{0a}\) where \(a=1,\cdots,d+1\). 
In particular, \(L_{0,d+1}=D\) is the dilatation operator. The quadratic Casimir of \(\text{SO}(1,d+1)\) can be expressed as \[C^{\text{SO}(1,d+1)}=-L_{0a}^{2}-C^{\text{SO}(d+1)} \tag{113}\] where \(C^{\text{SO}(d+1)}\) is the usual \(\text{SO}(d+1)\) Casimir, which equals \(n(n+d-1)\) for a spin-\(n\) representation. In this appendix, we will derive explicitly how \(L_{0a}\) acts on the \(\text{SO}(d+1)\) content of a continuous UIR \(\mathcal{F}_{\Delta}\). We will first focus on the principal series case, e.g. \(\Delta=\frac{d}{2}+i\mu\), and then show how the results can be generalized to complementary series. With the action of \(L_{0a}\) being derived, we will use eq. (113) to compute the matrix elements of \(C^{\text{SO}(1,d+1)}\) in certain subsectors of the tensor product \(\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\). As we have reviewed in section 2, the Hilbert space of \(\mathcal{P}_{\Delta}\) consists of all single-row representations of \(\text{SO}(d+1)\) with multiplicity one for each. An obvious orthogonal and normalizable basis is \[|n\rangle_{a_{1}\cdots a_{n}},\ \ \ n\in\mathbb{Z},\ \ 1\leq a_{i}\leq d+1 \tag{114}\] The indices \((a_{1},a_{2},\cdots,a_{n})\) are symmetric and traceless, and hence for a fixed \(n\), the different components \(|n\rangle_{a_{1}\cdots a_{n}}\) furnish the spin-\(n\) representation of \(\text{SO}(d+1)\). By construction, \(|n\rangle_{a_{1}\cdots a_{n}}\) is an eigenstate of \(L_{0a}^{2}\) \[L_{0a}^{2}|n\rangle_{a_{1}\cdots a_{n}}=-\lambda_{n}|n\rangle_{a _{1}\cdots a_{n}},\ \ \lambda_{n}=\Delta\bar{\Delta}+n(n+d-1) \tag{115}\] The normalization of inner product is chosen to be \[\text{With summation}:\ \ a_{1}\cdots a_{n}\langle n|n\rangle_{a_{1}\cdots a _{n}}=D_{n}^{d+1} \tag{116}\] where \(D_{n}^{d+1}=\frac{(d+2n-1)\Gamma(n+d-1)}{\Gamma(d)\Gamma(n+1)}\) denotes the dimension of the spin-\(n\) representation of \(\text{SO}(d+1)\). For example, when \(n=2\), the normalization in (116) yields \[{}_{a_{1}a_{2}}\langle 2|2\rangle_{b_{1}b_{2}}=\frac{1}{2}\left( \delta_{a_{1}b_{1}}\delta_{a_{2}b_{2}}+\delta_{a_{1}b_{2}}\delta_{a_{2}b_{1}}- \frac{2}{d+1}\delta_{a_{1}a_{2}}\delta_{b_{1}b_{2}}\right) \tag{117}\] In the index free formalism, all \(|n\rangle_{a_{1}\cdots a_{n}}\) can be encoded in \[|n,z\rangle\equiv|n\rangle_{a_{1}\cdots a_{n}}z^{a_{1}}\cdots z^{a _{n}} \tag{118}\] where \(z^{a}\) is an auxiliary null vector in \(\mathbb{C}^{d+1}\). Introducing a different null vector \(w^{a}\), the inner product (46) is equivalent to \[\langle n,\bar{w}|m,z\rangle=\delta_{n,m}(\bar{w}\cdot z)^{n} \tag{49}\] Stripping off \(z^{a}\) is realized by the so-called interior derivative \[\mathcal{D}_{a}=\partial_{z^{a}}-\frac{1}{d-1+2\,z\cdot\partial_{z}}z_{a} \partial_{z}^{2} \tag{50}\] Since \(L_{0a}\) transforms as a vector under \(\mathrm{SO}(d+1)\), \(L_{0a}|n\rangle_{a_{1}\cdots a_{n}}\) has the symmetry \(\mathbb{Y}_{1}\otimes\mathbb{Y}_{n}\), which can be decomposed into three irreducible components: \(\mathbb{Y}_{n-1}\), \(\mathbb{Y}_{n+1}\) and \(\mathbb{Y}_{n,1}\). The \(\mathbb{Y}_{n,1}\) part should vanish identically because it does not belong to the \(\mathrm{SO}(d+1)\) content of \(\mathcal{P}_{\Delta}\). 26 Therefore, the action of \(L_{0a}\) on \(|n\rangle_{a_{1}\cdots a_{n}}\) should take the following form27 Footnote 26: When \(d=2\), \(\mathbb{Y}_{n,1}\) is the same as \(\mathbb{Y}_{n}\) due to the totally antisymmetric tensor \(\epsilon_{abc}\). 
However, one can still show that the \(\mathbb{Y}_{n}\) component cannot enter eq. (49) if \(|n\rangle_{a_{1}\cdots a_{n}}\) belongs to \(\mathcal{F}_{\Delta}\). This is not true if \(|n\rangle_{a_{1}\cdots a_{n}}\in\mathcal{F}_{\Delta,s}\) where \(s\neq 0\). Footnote 27: More explicitly, trace \(=\frac{n-1}{d+2n-3}\delta_{(a_{1}a_{2}}|n-1\rangle_{a_{3}\cdots a_{n})a}\) in this equation. \[L_{0a}|n\rangle_{a_{1}\cdots a_{n}}=\alpha_{n}|n+1\rangle_{aa_{1} \cdots a_{n}}+\beta_{n}\left(\delta_{a(a_{1}}|n-1\rangle_{a_{2}\cdots a_{n})} -\mathrm{trace}\right) \tag{51}\] where the coefficients \(\alpha_{n}\) and \(\beta_{n}\) are to be fixed. In the index-free formalism, eq. (51) is equivalent to \[L_{0a}|n,z\rangle=\frac{\alpha_{n}}{n+1}\mathcal{D}_{a}|n+1,z \rangle+\beta_{n}z_{a}|n-1,z\rangle \tag{52}\] Acting \(\mathcal{D}_{a}\) on both sides of (52) and summing over \(a\) yields \[L_{0a_{1}}|n\rangle_{a_{1}\cdots a_{n}}=\beta_{n}\frac{(d+n-2)(d+2 n-1)}{n(d+2n-3)}|n-1\rangle_{a_{2}\cdots a_{n}} \tag{53}\] where we have used \(\mathcal{D}^{2}=0\) and \[\mathcal{D}_{a}(z_{a}|n-1,z\rangle)=\frac{(d+n-2)(d+2n-1)}{d+2n-3}| n-1,z\rangle \tag{54}\] Acting \(L_{0a}\) on both sides of (52), we obtain our first recurrence relation \[\frac{(d+n-1)(d+2n+1)}{(n+1)(d+2n-1)}\alpha_{n}\beta_{n+1}+\alpha _{n-1}\beta_{n}=-\lambda_{n} \tag{55}\] Imposing the initial condition \(\beta_{0}=0\), we can completely fix the product \(\alpha_{n}\beta_{n+1}\) \[\alpha_{n}\beta_{n+1}=-\frac{(n+1)(\Delta+n)(\bar{\Delta}+n)}{d+2n +1} \tag{56}\] To derive another recurrence relation, let's compute the norm of \(L_{0a}|n,z\rangle\) using (49) and (52) \[\langle n,\bar{w}|L_{0a}^{\dagger}L_{0a}|n,z\rangle=\lambda_{n}( \bar{w}\cdot z)^{n}=\frac{|\alpha_{n}|^{2}}{(n+1)^{2}}\mathcal{D}_{a}^{(z)} \mathcal{D}_{a}^{(\bar{w})}(\bar{w}\cdot z)^{n+1}+|\beta_{n}|^{2}(\bar{w}\cdot z )^{n} \tag{57}\] which yields \[|\alpha_{n}|^{2}\frac{(d+n-1)(d+2n+1)}{(n+1)(d+2n-1)}+|\beta_{n}|^{2 }=\lambda_{n} \tag{111}\] The two recurrence relations (110) and (111) are invariant under an \(n\)-dependent U(1) transformation: \(\alpha_{n}\to e^{i\theta_{n}}\alpha_{n}\) and \(\beta_{n+1}\to e^{-i\theta_{n}}\beta_{n+1}\). The angle \(\theta_{n}\) can always be absorbed into a redefinition of \(|n\rangle_{a_{1}\cdots a_{n}}\). We will fix this U(1) degree of freedom by choosing \(\alpha_{n}\propto\Delta+n\) and \(\beta_{n+1}\propto\bar{\Delta}+n\). In particular, \(\alpha_{0}=\frac{\Delta}{\sqrt{d+1}}\). Then the solution of (110) and (111) is uniquely determined \[\alpha_{n}=\sqrt{\frac{n+1}{d+2n+1}}(\Delta+n),\;\;\;\beta_{n}=- \sqrt{\frac{n}{d+2n-1}}(\bar{\Delta}+n-1) \tag{112}\] The eq. (112) requires a minor modification if we replace the principal series \(\mathcal{P}_{\Delta}\) by complementary series \(\mathcal{C}_{\Delta}\) with \(\Delta=\frac{d}{2}+\mu\). The recurrence relations (110) and (111) still hold in this case, but \(\alpha_{n}\propto\Delta+n\) is not a proper choice anymore. It is easy to check that \(\alpha_{0}=\frac{\Delta}{\sqrt{d+1}}\) does not satisfy the \(n=0\) case of eq. (111) when \(\Delta\) is real. Instead, we will fix the U(1) degree of freedom differently by choosing all \(\alpha_{n}\) to be real and positive. In particular, it means \(\alpha_{0}=\sqrt{\frac{\Delta\bar{\Delta}}{d+1}}\). The general expressions of \(\alpha_{n}\) and \(\beta_{n}\) in eq. 
(112) should change accordingly as \[\alpha_{n}=\sqrt{\frac{(n+1)(\Delta+n)(\bar{\Delta}+n)}{d+2n+1}},\;\;\;\beta_{n}=-\sqrt{\frac{n(\Delta+n-1)(\bar{\Delta}+n-1)}{d+2n-1}} \tag{113}\] For principal series or complementary series, the coefficients \(\alpha_{n}\) and \(\beta_{n}\) given by eq. (112) or eq. (113) are nonvanishing, and hence these representations are irreducible. Pictorially, this property is shown in fig.(10). If we formally take \(\Delta=d+p-1,p\in\mathbb{Z}_{+}\) in eq. (113), which is clearly out of the unitarity regime of complementary series, \(\beta_{p}\) vanishes and all the rest \(\alpha_{n},\beta_{n}\) are nonvanishing. Using the definition of \(\alpha_{n}\) and \(\beta_{n}\) given by eq. (104), the vanishing of \(\beta_{p}\) implies that \(\{|n\rangle_{a_{1}\cdots a_{n}},n\geq p\}\) span an invariant subspace of \(\mathcal{C}_{\Delta}\). See fig.(10) for the \(p=2\) example: Indeed, this invariant subspace together with the inner product (110) is the type I exceptional series representation \(\mathcal{V}_{p,0}\). This construction manifests the SO(\(d+1\)) content of \(\mathcal{V}_{p,0}\) given by eq. (7). Next, we apply the results above to the tensor product of \(\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\). The space of SO(\(d+1\)) singlets in \(\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\) is spanned by \[|\psi_{n}\rangle=\frac{1}{\sqrt{D_{n}^{d+1}}}|n,\Delta_{1}\rangle _{a_{1}\cdots a_{n}}|n,\Delta_{2}\rangle_{a_{1}\cdots a_{n}},\;\;\;n\in \mathbb{N} \tag{114}\] Figure D.1: The action of noncompact generators of SO(\(1,d+1\)) in scalar principal and complementary series representations. where we have introduced the factor \(\frac{1}{\sqrt{D_{n}^{d+1}}}\) such that \(|\psi_{n}\rangle\) is normalized to be 1. Since \(|\psi_{n}\rangle\) is an \(\mathrm{SO}(d+1)\) singlet, the action of \(C^{\mathrm{SO}(1,d+1)}\) on it is equivalent to \(-L_{0,a}^{2}\) due to eq. (101): \[C^{\mathrm{SO}(1,d+1)}|\psi_{n}\rangle =-\frac{1}{\sqrt{D_{n}^{d+1}}}\left(L_{0a}^{2}|n,\Delta_{1} \rangle_{a_{1}\cdots a_{n}}|n,\Delta_{2}\rangle_{a_{1}\cdots a_{n}}+|n,\Delta_ {1}\rangle_{a_{1}\cdots a_{n}}L_{0a}^{2}|n,\Delta_{2}\rangle_{a_{1}\cdots a_{n }}\right)\] \[-\frac{2}{\sqrt{D_{n}^{d+1}}}L_{0a}|n,\Delta_{1}\rangle_{a_{1} \cdots a_{n}}L_{0a}|n,\Delta_{2}\rangle_{a_{1}\cdots a_{n}} \tag{102}\] Using eq. (103), we can easily write down the diagonal entry \(\langle\psi_{n}|C^{\mathrm{SO}(1,d+1)}|\psi_{n}\rangle\), which corresponds to the first line of (102) \[\langle\psi_{n}|C^{\mathrm{SO}(1,d+1)}|\psi_{n}\rangle=\Delta_{1}\bar{\Delta} _{1}+\Delta_{2}\bar{\Delta}_{2}+2n(n+d-1) \tag{103}\] Combining (101), (104) and the second line of (102), we obtain the off-diagonal entry \[\langle\psi_{n+1}|C^{\mathrm{SO}(1,d+1)}|\psi_{n}\rangle=-2\sqrt{\frac{D_{n+1 }^{d+1}}{D_{n}^{d+1}}}\alpha_{n}(\Delta_{1})\alpha_{n}(\Delta_{2}) \tag{104}\] which is the complex conjugate of \(\langle\psi_{n}|C^{\mathrm{SO}(1,d+1)}|\psi_{n+1}\rangle\). When both \(\mathcal{F}_{\Delta_{i}}\) belong to principal series, plugging eq. (104) into eq. (104) yields \[\mathcal{P}_{\Delta_{1}}\otimes\mathcal{P}_{\Delta_{2}}:\quad \langle\psi_{n+1}|C^{\mathrm{SO}(1,d+1)}|\psi_{n}\rangle=-\sqrt{\frac{(n+1)(n +d-1)}{(n+\frac{d-1}{2})(n+\frac{d+1}{2})}}(\Delta_{1}+n)(\Delta_{2}+n) \tag{105}\] and when both \(\mathcal{F}_{\Delta_{i}}\) belong to complementary series, plugging eq. (105) into eq. 
(104) yields \[\mathcal{C}_{\Delta_{1}}\otimes\mathcal{C}_{\Delta_{2}}:\quad \langle\psi_{n+1}|C^{\mathrm{SO}(1,d+1)}|\psi_{n}\rangle=-\sqrt{\frac{(n+1)(n +d-1)(\bar{\Delta}_{1}+n)(\bar{\Delta}_{1}+n)(\bar{\Delta}_{2}+n)(\bar{\Delta} _{2}+n)}{(n+\frac{d-1}{2})(n+\frac{d+1}{2})}} \tag{106}\] Appendix E Absence of exceptional series in \(\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\) In this appendix, we will show that the tensor product \(\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\) does not contain any exceptional series, where \(\mathcal{F}_{\Delta_{i}}\) belong to either principal series or complementary series. The main computational tool involved in our proof is established in the previous appendix. Figure 2: The action of noncompact generators of \(\mathrm{SO}(1,d+1)\) in \(\mathcal{V}_{2,0}\). For the type I exceptional series, we will use \(\mathcal{V}_{1,0}\) as an example to illustrate the main idea of the proof, which can be easily generalized to any \(\mathcal{V}_{p,0}\). Its \(\mathrm{SO}(d+1)\) content are given by \[\left.\mathcal{V}_{1,0}\right|_{\mathrm{SO}(d+1)}=\bigoplus_{n\geq 1} \mathbb{Y}_{n} \tag{112}\] As a result of (112), any state \(|\psi\rangle_{a}\) in the \(\mathbb{Y}_{1}\) component should satisfy \[L_{0a}|\psi\rangle_{a}=0,\ \ \ L_{0a}|\psi\rangle_{b}-L_{0b}|\psi \rangle_{a}=0 \tag{113}\] We will show that such states do not exists in \(\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\). First, notice that a generic state with the \(\mathbb{Y}_{1}\) symmetry takes the following form \[|\psi\rangle_{a_{1}}=\sum_{n\geq 0}\left(c_{n}|n+1,\Delta_{1} \rangle_{a_{1}\cdots a_{n+1}}|n,\Delta_{2}\rangle_{a_{2}\cdots a_{n+1}}+d_{n} |n,\Delta_{1}\rangle_{a_{2}\cdots a_{n+1}}|n+1,\Delta_{2}\rangle_{a_{1}\cdots a _{n+1}}\right) \tag{114}\] Imposing the constraint \(L_{0[a_{1}}|\psi\rangle_{b_{1}]}=0\) yields \(c_{n}\alpha_{n}(\Delta_{2})=d_{n}\alpha_{n}(\Delta_{1})\). Imposing the other constraint \(L_{a_{1}}|\psi\rangle_{a_{1}}=0\) leads to \(c_{0}\beta_{1}(\Delta_{1})+d_{0}\beta_{1}(\Delta_{2})=0\) and \[c_{n-1}\alpha_{n-1}(\Delta_{2})+d_{n-1}\alpha_{n-1}(\Delta_{1}) +\left(c_{n}\beta_{n+1}(\Delta_{1})+d_{n}\beta_{n+1}(\Delta_{2})\right)\frac{ (d+n-1)(d+2n+1)}{(n+1)(d+2n-1)}=0 \tag{115}\] Using (113), we find \(\alpha_{0}(\Delta_{1})\beta_{1}(\Delta_{1})+\alpha_{0}(\Delta_{2})\beta_{1}( \Delta_{2})\neq 0\) which implies that \(c_{0},d_{0}\) vanish. Then it is easy to conclude that all \(c_{n}\) and \(d_{n}\) are identically zero. This excludes the exceptional series \(\mathcal{V}_{1,0}\). For the type II exceptional series, we will use \(\mathcal{U}_{1,0}\) as an example. As reviewed in section 2, the \(\mathrm{SO}(d+1)\) content of \(\mathcal{U}_{1,0}\) are \[\left.\mathcal{U}_{1,0}\right|_{\mathrm{SO}(d+1)}=\bigoplus_{n \geq 1}\mathbb{Y}_{n,1} \tag{116}\] Unlike the other spin 1 UIRs, e.g. \(\mathcal{F}_{\Delta,1}\), \(\mathcal{U}_{1,0}\) does not contain the vector representation of \(\mathrm{SO}(d+1)\). It implies that for any state \(|\chi\rangle_{a,b}\) with symmetry \(\mathbb{Y}_{1,1}\), i.e. \(|\chi\rangle_{a,b}=-|\chi\rangle_{b,a}\), \(L_{0b}|\chi\rangle_{a,b}\) should vanish since it carries the spin 1 representation of \(\mathrm{SO}(d+1)\). We will show that such \(|\chi\rangle_{a,b}\) does not exist in the tensor product \(\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\). 
Let's first write down a basis of states in \(\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\) that carry the two-form symmetry of \(\mathrm{SO}(d+1)\): \[|\chi_{n}\rangle_{a_{1},b_{1}}=|n,\Delta_{1}\rangle_{a_{1}a_{2} \cdots a_{n}}|n,\Delta_{2}\rangle_{b_{1}a_{2}\cdots a_{n}}-(a_{1}\leftrightarrow b _{1}),\ \ n\geq 1 \tag{117}\] Then a generic state \(|\chi\rangle_{a,b}\) belonging to \(\mathbb{Y}_{1,1}\) should be a linear combination of \(|\chi_{n}\rangle_{a,b}\), e.g. \[|\chi\rangle_{a,b}=\sum_{n\geq 1}c_{n}|\chi_{n}\rangle_{a,b} \tag{118}\] Computing \(L_{b}|\chi\rangle_{a,b}\) using eq. (111) and (112) yields \[\sum_{n\geq 1}c_{n}\left(\alpha_{n}(\Delta_{1})|n\!+\!1,\Delta_{1} \rangle_{a_{1}\cdots a_{n+1}}|n,\Delta_{2}\rangle_{a_{2}\cdots a_{n+1}}\!+\! \frac{(n\!+\!d\!-\!1)\beta_{n}(\Delta_{2})}{n}|n,\Delta_{1}\rangle_{a_{1} \cdots a_{n}}|n\!-\!1,\Delta_{2}\rangle_{a_{2}\cdots a_{n}}\right)\] \[-\sum_{n\geq 1}c_{n}\left(\alpha_{n}(\Delta_{2})|n,\!\Delta_{1} \rangle_{a_{2}\cdots a_{n+1}}|n\!+\!1,\!\Delta_{2}\rangle_{a_{1}\cdots a_{n+1}} \!+\!\frac{(n\!+\!d\!-\!1)\beta_{n}(\Delta_{1})}{n}|n\!-\!1,\!\Delta_{1} \rangle_{a_{2}\cdots a_{n}}|n,\!\Delta_{2}\rangle_{a_{1}\cdots a_{n}}\right) \tag{113}\] Requiring \(L_{b}|\chi\rangle_{a,b}=0\) gives us two recurrence relations \[c_{n}\alpha_{n}(\Delta_{1})+c_{n+1}\beta_{n+1}(\Delta_{2})\frac {n+d}{n+1}=0\] \[c_{n}\alpha_{n}(\Delta_{2})+c_{n+1}\beta_{n+1}(\Delta_{1})\frac {n+d}{n+1}=0 \tag{114}\] and one initial condition \(c_{1}=0\). Apparently, the only solution is \(c_{n}=0\) for \(n\geq 1\). Therefore \(\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\) does not contain \(\mathcal{U}_{1,0}\). Although we have excluded \(\mathcal{U}_{1,0}\) in \(\mathcal{F}_{\Delta_{1}}\otimes\mathcal{F}_{\Delta_{2}}\), let's still compute the norm of all \(|\chi_{n}\rangle_{a,b}\), since it is needed in subsection 4.4 for the tensor product of \(\mathcal{V}_{1,0}\otimes\mathcal{V}_{1,0}\). By definition, we have \[\frac{1}{2}\,{}_{a_{1},b_{1}}(\chi_{n}|\chi_{n}\rangle_{a_{1},b_{ 1}}= \,{}_{a_{1}c_{2}\cdots c_{n}}\langle n|n\rangle_{a_{1}a_{2}\cdots a _{n}}\,{}_{b_{1}c_{2}\cdots c_{n}}\langle n|n\rangle_{b_{1}a_{2}\cdots a_{n}}\] \[- \,{}_{a_{1}c_{2}\cdots c_{n}}\langle n|n\rangle_{b_{1}a_{2}\cdots a _{n}}\,{}_{b_{1}c_{2}\cdots c_{n}}\langle n|n\rangle_{a_{1}a_{2}\cdots a_{n}} \tag{115}\] where we have suppressed the label of scaling dimensions because they are irrelevant for the calculation here. To compute each term in (115), consider the state \[|S\rangle_{a_{1}b_{1},c_{1}\cdots c_{n}}\equiv\,{}_{b_{1}a_{2}\cdots a_{n}} \langle n|n\rangle_{c_{1}c_{2}\cdots c_{n}}|n\rangle_{a_{1}a_{2}\cdots a_{n}} \tag{116}\] in terms of which, eq. 
(115) can be rewritten in the following form \[\frac{1}{2}\,{}_{a_{1},b_{1}}(\chi_{n}|\chi_{n})_{a_{1},b_{1}}=\,{}_{a_{1}c_{2}\cdots c_{n}}\langle n|S\rangle_{a_{1}b_{1},b_{1}c_{2}\cdots c_{n}}-\,{}_{b_{1}c_{2}\cdots c_{n}}\langle n|S\rangle_{a_{1}b_{1},a_{1}c_{2}\cdots c_{n}} \tag{117}\] Without doing any explicit computation, one can almost fix \(|S\rangle_{a_{1}b_{1},c_{1}\cdots c_{n}}\) up to two unknown coefficients \(A,B\) \[|S\rangle_{a_{1}b_{1},c_{1}\cdots c_{n}}=A\left(\sum_{k=1}^{n}\delta_{b_{1}c_{k}}|n\rangle_{a_{1}c_{1}\cdots\hat{c}_{k}\cdots c_{n}}-B\sum_{1\leq k<\ell\leq n}\delta_{c_{k}c_{\ell}}|n\rangle_{a_{1}b_{1}c_{1}\cdots\hat{c}_{k}\cdots\hat{c}_{\ell}\cdots c_{n}}\right) \tag{118}\] Contracting the indices \(a_{1}\) and \(b_{1}\), \(|S\rangle\) should reduce to \(|n\rangle_{c_{1}\cdots c_{n}}\) because \(|n\rangle_{a_{1}\cdots a_{n}}\,{}_{a_{1}\cdots a_{n}}\langle n|\) is the identity operator. This condition fixes \(A=\frac{1}{n}\). Next, we impose the traceless condition among the \(c_{j}\) indices. For instance, if we contract \(c_{1}\) and \(c_{2}\), the first sum in eq. (116) yields \(2|n\rangle_{a_{1}b_{1}c_{2}\cdots c_{n}}\) and the second double sum yields \((d+2n-3)|n\rangle_{a_{1}b_{1}c_{2}\cdots c_{n}}\). Therefore \(B\) is equal to \(\frac{2}{d+2n-3}\). With \(A\) and \(B\) known, the remaining task is to contract \((a_{1},c_{1})\) and \((b_{1},c_{1})\). After a simple calculation, we find \[|S\rangle_{a_{1}b_{1},b_{1}c_{2}\cdots c_{n}} =\frac{(n+d-2)(2n+d-1)}{n(2n+d-3)}|n\rangle_{a_{1}c_{2}\cdots c_{n}}\] \[|S\rangle_{a_{1}b_{1},a_{1}c_{2}\cdots c_{n}} =\frac{d-1}{n(2n+d-3)}|n\rangle_{b_{1}c_{2}\cdots c_{n}} \tag{119}\] Plugging (E.14) into (E.12) and using the normalization condition (D.4), we find \[{}_{a_{1},b_{1}}(\chi_{n}|\chi_{n})_{a_{1},b_{1}}=\frac{2(n+d-1)}{n}D_{n}^{d+1}=4\left(1+\frac{d-1}{2n}\right)\binom{n+d-1}{d-1}\] (E.15) When \(d=3\), it becomes \[d=3:\quad{}_{a_{1},b_{1}}(\chi_{n}|\chi_{n})_{a_{1},b_{1}}=\frac{2(n+2)(n+1)^{2}}{n}\] (E.16)

## Appendix F A review of \(\text{SO}(2,d+1)\)

The Lie algebra of \(\text{SO}(2,d+1)\) is generated by \(L_{IJ}=-L_{JI},\ I,J=0,1,\cdots,d+2\), which are anti-hermitian operators satisfying the commutation relations \[[L_{IJ},L_{MN}]=\eta_{JM}L_{IN}-\eta_{IM}L_{JN}+\eta_{IN}L_{JM}-\eta_{JN}L_{IM}\] (F.1) where \(\eta_{IJ}=\text{diag}(-1,1,\cdots,1,-1)\). A convenient basis of \(\mathfrak{so}(2,d+1)\) is \[H=-iL_{0,d+2},\quad L_{a}^{\pm}=-iL_{a0}\mp L_{a,d+2},\quad M_{ab}=L_{ab}\] (F.2) where \(1\leq a,b\leq d+1\) and \(M_{ab}\) generate the \(\text{SO}(d+1)\) subgroup. Some nontrivial commutation relations are \[[H,L_{a}^{\pm}]=\pm L_{a}^{\pm},\quad[M_{ab},L_{c}^{\pm}]=\delta_{bc}L_{a}^{\pm}-\delta_{ac}L_{b}^{\pm},\quad[L_{a}^{-},L_{b}^{+}]=2\delta_{ab}H-2M_{ab}\] (F.3) The \(\text{SO}(2,d+1)\) quadratic Casimir is given by \[C^{\text{SO}(2,d+1)}=H(H-d-1)-L^{+}\cdot L^{-}-C^{\text{SO}(d+1)}\] (F.4) Consider an \(\text{SO}(1,d+1)\) subgroup, which is generated by all \(L_{AB}\) with \(0\leq A,B\leq d+1\). 
Following the convention used in section 2, the \(\text{SO}(1,d+1)\) Casimir can be expressed as \[C^{\text{SO}(1,d+1)}=-L_{0a}^{2}+C^{\text{SO}(d+1)}=\frac{1}{4}L^{+}\cdot L^{ +}+\frac{1}{4}L^{-}\cdot L^{-}+\left(\frac{1}{2}L^{+}\cdot L^{-}+\frac{d+1}{2 }H+C^{\text{SO}(d+1)}\right)\] (F.5) A primary representation \(\mathcal{R}_{\Delta,\ell}\) is built upon a primary state \(|\Delta,\ell\rangle_{a_{1}\cdots a_{\ell}}\) satisfying \[H|\Delta,\ell\rangle_{a_{1}\cdots a_{\ell}}=\Delta|\Delta,\ell \rangle_{a_{1}\cdots a_{\ell}},\quad L_{a}^{-}|\Delta,\ell\rangle_{a_{1}\cdots a _{\ell}}=0\] \[C^{\text{SO}(d+1)}|\Delta,\ell\rangle_{a_{1}\cdots a_{\ell}}=- \ell(\ell+d-1)|\Delta,\ell\rangle_{a_{1}\cdots a_{\ell}}\] (F.6) where the Casimir condition of \(\text{SO}(d+1)\) implies that the indices \((a_{1}\cdots a_{\ell})\) are symmetric and traceless. We choose the inner product such that \(|\Delta,\ell\rangle_{a_{1}\cdots a_{\ell}}\) is normalized as follows \[\text{With summation}:{}_{a_{1}\cdots a_{\ell}}\langle\Delta,\ell|\Delta, \ell\rangle_{a_{1}\cdots a_{\ell}}=D_{\ell}^{d+1}\] (F.7) where \(D_{\ell}^{d+1}\) is the dimension of the spin \(\ell\) representation of \(\text{SO}(d+1)\). The Hilbert space of \(\mathcal{R}_{\Delta,\ell}\) is spanned by the primary state and its descendants of the form \[L_{b_{1}}^{+}\cdots L_{b_{k}}^{+}|\Delta,\ell\rangle_{a_{1}\cdots a_{\ell}}\] (F.8) The unitarity of \(\mathcal{R}_{\Delta,\ell}\) requires \(\Delta\geq\frac{d-1}{2}\) when \(\ell=0\) and \(\Delta\geq d+\ell-1\) when \(\ell\geq 1\), known as the unitarity bound. When the unitarity bound is saturated, say \(\Delta=d+\ell-1\), the primary state becomes a conserved current in the sense \[L_{a_{1}}^{+}|\Delta\rangle_{a_{1}\cdots a_{\ell}}=0 \tag{112}\] Acting with \(C^{\mathrm{SO}(2,d+1)}\) on the primary state \(|\Delta,\ell\rangle_{a_{1}\cdots a_{\ell}}\) yields its value on any state in \(\mathcal{R}_{\Delta,\ell}\) \[C^{\mathrm{SO}(2,d+1)}\left(\mathcal{R}_{\Delta,\ell}\right)= \Delta(\Delta-d-1)+\ell(\ell+d-1) \tag{113}\] When \(d=1\), \(\ell\) is an eigenvalue of the \(\mathrm{SO}(2)\) subgroup and hence takes values in all integers, i.e. \[iM_{12}|\Delta,\ell\rangle=\ell|\Delta,\ell\rangle,\ \ \ \ell\in\mathbb{Z} \tag{114}\] A generic descendant can be written as \[|n,\bar{n}\rangle=\left(L_{1}^{+}-iL_{2}^{+}\right)^{n}\left(L_{ 1}^{+}+iL_{2}^{+}\right)^{\bar{n}}|\Delta,\ell\rangle \tag{115}\] It has scaling dimension \(\Delta+n+\bar{n}\) and spin \(\ell+n-\bar{n}\). Fixing the normalization \(\langle\Delta,\ell|\Delta,\ell\rangle=1\), then the norm of \(|n,\bar{n}\rangle\) becomes \[(n,\bar{n}|n,\bar{n})=4^{n+\bar{n}}n!\bar{n}!(\Delta+\ell)_{n}( \Delta-\ell)_{\bar{n}} \tag{116}\] ## Appendix G Various properties of \(|\phi_{n}^{s}\rangle\) In this appendix, we compute the action of various \(\mathrm{SO}(1,d+1)\) generators and their products on the states \(|\phi_{n}^{s}\rangle\), defined by eq. (102). Let's first start with the \(s=0\) case as a simple exercise. Consider \(L_{a}^{-}|\phi_{n}\rangle\) and it has to take the following form, by symmetry and level counting \[L_{a}^{-}|\phi_{n}\rangle=\alpha_{n}\,L_{a}^{+}|\phi_{n-1}\rangle \tag{117}\] where \(\alpha_{n}\) is an unknown coefficient. To obtain \(\alpha_{n}\), we act with \(L_{a}^{+}\) on both sides of eq. (117) \[\left(L^{+}\cdot L^{-}\right)|\phi_{n}\rangle=\alpha_{n}|\phi_{n}\rangle \tag{118}\] which implies that \(\alpha_{n}\) is nothing but the eigenvalue of \(|\phi_{n}\rangle\) with respect to \(L^{+}\cdot L^{-}\). 
This eigenvalue can be easily derived by using eq. (116) \[\alpha_{n}=(\Delta+2n)(\Delta+2n-d-1)-\Delta(\Delta-d-1)=4n\left( n+\Delta-\frac{d+1}{2}\right) \tag{119}\] Acting with \(L_{a}^{-}\) on both sides of eq. (117) yields \[\left(L^{-}\cdot L^{-}\right)|\phi_{n}\rangle=\alpha_{n}\left(L^{ -}\cdot L^{+}\right)|\phi_{n-1}\rangle \tag{120}\] With the commutation relation \([L_{a}^{-},L_{b}^{+}]=2\delta_{ab}H-2M_{ab}\), we are allowed to replace \(L^{-}\cdot L^{+}\) by \(L^{+}\cdot L^{-}+2(d+1)H\) and then we have \[\left(L^{-}\cdot L^{-}\right)|\phi_{n}\rangle =\alpha_{n}(\alpha_{n-1}+2(d+1)(\Delta+2n-2))|\phi_{n-1}\rangle\] \[=16\,n\,(n+\Delta-1)\left(n+\frac{d-1}{2}\right)\left(n+\Delta- \frac{d+1}{2}\right)|\phi_{n-1}\rangle \tag{111}\] where we have used the eigenvalue interpretation of \(\alpha_{n-1}\). The eq. (111) together with the normalization \((\phi_{0}|\phi_{0})=\langle\Delta|\Delta\rangle=1\), also fixes the norm of all \(|\phi_{n}\rangle\) \[(\phi_{n}|\phi_{n})=16^{n}\,n!\,(\Delta)_{n}\left(\frac{d+1}{2}\right)_{n} \left(\Delta-\frac{d-1}{2}\right)_{n} \tag{112}\] For any \(s\geq 1\), using the same symmetry argument, we can write down the most general form of \(L_{a}^{-}|\phi_{n}^{s}\rangle\) as \[L_{a}^{-}|\phi_{n}^{s}\rangle=\beta_{n}^{s}\,L_{a}^{+}\,|\phi_{n-1}^{s}\rangle+ \gamma_{n}^{s}\,z_{a}\,|\phi_{n}^{s-1}\rangle \tag{113}\] where \(\beta_{n}^{s}\) and \(\gamma_{n}^{s}\) are coefficients to be fixed. Acting with \(L_{a}^{+}\) on both sides of eq.(113) yields that \(|\phi_{n}^{s}\rangle\) is eigenstate of \(L^{+}\cdot L^{-}\) with eigenvalue \(\alpha_{n}^{s}\equiv\beta_{n}^{s}+\gamma_{n}^{s}\). The latter can be easily computed via eq. (110), noticing that \(|\phi_{n}^{s}\rangle\) has energy \(\Delta+s+2n\) and spin \(s\) \[\alpha_{n}^{s}=(\Delta+s+2n)(\Delta+s+2n-d-1)-\Delta(\Delta-d-1)+s(s+d-1) \tag{114}\] In particular, when \(n=0\), \(\beta_{0}^{s}\) vanishes and then \(\gamma_{n}^{s}=\alpha_{0}^{s}=2s(s+\Delta-1)\). The vanishing of \(\beta_{0}^{s}\) also implies that \(|\phi_{0}^{s}\rangle\) is annihilated by \(z\cdot L^{-}\). To obtain \(\beta_{n}^{s}\) and \(\gamma_{n}^{s}\) separately, we contract both sides of eq.(113) with \(z_{a}\): \(z\cdot L^{-}|\phi_{n}^{s}\rangle=\beta_{n}^{s}|\phi_{n-1}^{s+1}\rangle\). Then the remaining task is to compute \(z\cdot L^{-}|\phi_{n}^{s}\rangle\) explicitly \[z\cdot L^{-}|\phi_{n}^{s}\rangle =\sum_{m=0}^{n-1}\left(L^{+}\cdot L^{+}\right)^{n-m-1}\left(4\,z \cdot L^{+}\,H-4\,z_{a}\,L_{b}^{+}M_{ab}-2(d-1)z\cdot L^{+}\right)\left(L^{+} \cdot L^{+}\right)^{m}|\phi_{0}^{s}\rangle\] \[=\sum_{m=0}^{n-1}\left(4(\Delta+s+2m)-2(d-1)-4s\right)|\phi_{n-1} ^{s+1}\rangle=4n\left(n+\Delta-\frac{d+1}{2}\right)|\phi_{n-1}^{s+1}\rangle \tag{115}\] where we have used \(z\cdot L^{-}|\phi_{0}^{s}\rangle=0\) and \(z_{a}M_{ab}|\phi_{0}^{s}\rangle=sz_{b}|\phi_{0}^{s}\rangle\). 
Altogether, we get \[\beta_{n}^{s}=4n\left(n+\Delta-\frac{d+1}{2}\right),\quad\gamma_{n}^{s}= \alpha_{n}^{s}-\beta_{n}^{s} \tag{116}\] With \(\beta_{n}^{s}\) and \(\gamma_{n}^{s}\) derived, we can proceed to compute the action of \(L^{-}\cdot L^{-}\) on \(|\phi_{n}^{s}\rangle\) \[L^{-}\cdot L^{-}|\phi_{n}^{s}\rangle =\beta_{n}^{s}\left(2(d+1)H+L^{+}\cdot L^{-}\right)|\phi_{n-1}^{s }\rangle+\gamma_{n}^{s}\,z_{a}\left(\beta_{n}^{s-1}\,L_{a}^{+}\,|\phi_{n-1}^{s -1}\rangle+\gamma_{n}^{s}\,z_{a}\,|\phi_{n}^{s-2}\rangle\right)\] \[=\left[\beta_{n}^{s}\left(2(d+1)(\Delta+s+2n-2)+\alpha_{n-1}^{s} \right)+\gamma_{n}^{s}\beta_{n}^{s-1}\right]|\phi_{n-1}^{s}\rangle\] \[=16n(n+s+\Delta-1)\left(n+\Delta-\frac{d+1}{2}\right)\left(n+s+ \frac{d-1}{2}\right)|\phi_{n-1}^{s}\rangle \tag{117}\] which fixes the norm of \(|\phi_{n}^{s}\rangle\) up to an \(n\)-independent factor: \[\frac{(\phi_{n}^{s}|\phi_{n}^{s})}{(\phi_{0}^{s}|\phi_{0}^{s})}=16^{n}n!(s+ \Delta)_{n}\left(\Delta-\frac{d-1}{2}\right)_{n}\left(s+\frac{d+1}{2}\right)_{n} \tag{118}\] Various properties of \(|s,n\rangle_{a_{1}\cdots a_{s},b_{1}\cdots b_{s}}\) In this appendix, we derive the action of \(C^{\mathrm{SO}(1,d+1)}\) on the states \(|s,n\rangle\) (dropping spin indices for the simplicity of notation), defined in eq. (110), for \(s=0\) and \(s=1\). For \(C^{\mathrm{SO}(1,d+1)}\), we use eq. (109) \[C^{\mathrm{SO}(1,d+1)}=\frac{1}{4}L^{+}\cdot L^{+}+\frac{1}{4}L^{-}\cdot L^{-} +\left(\frac{1}{2}L^{+}\cdot L^{-}+\frac{d+1}{2}H+C^{\mathrm{SO}(d+1)}\right) \tag{121}\] where the first term maps \(|s,n\rangle\) to \(\frac{1}{4}|s,n+1\rangle\) and the last term admits \(|s,n\rangle\) as an eigenstate. To compute this eigenvalue, we can use the \(\mathrm{SO}(2,d+1)\) Casimir, c.f. eq. (108), which yields the action of \(L^{+}\cdot L^{-}\) on \(|s,n\rangle\) \[L^{+}\cdot L^{-}|s,n\rangle =\left(H(H-d-1)-C^{\mathrm{SO}(d+1)}(\mathbb{Y}_{ss})-C^{\mathrm{ SO}(2,d+1)}(\mathcal{R}_{\Delta,\ell})\right)|s,n\rangle\] \[=\left((2n+\ell)(2\Delta+2n+\ell-d-1)+C^{\mathrm{SO}(d+1)}( \mathbb{Y}_{\ell})-C^{\mathrm{SO}(d+1)}(\mathbb{Y}_{ss})\right)|s,n\rangle \tag{122}\] and hence \[\left(\frac{1}{2}L^{+}\cdot L^{-}+\frac{d+1}{2}H+C^{\mathrm{SO}(d+1)}\right)| s,n\rangle =\left[2n(n+\Delta+\ell)+\frac{(d+2\ell+1)\Delta-(d-1)\ell}{2}+C^{ \mathrm{SO}(d)}(\mathbb{Y}_{s})\right]|s,n\rangle \tag{123}\] where we have used \(C^{\mathrm{SO}(d+1)}(\mathbb{Y}_{\ell})=-\ell(\ell+d-1)\) and \(\frac{1}{2}C^{\mathrm{SO}(d+1)}(\mathbb{Y}_{ss})=C^{\mathrm{SO}(d)}(\mathbb{Y} _{s})=-s(s+d-2)\). So the only nontrivial task is \(L^{-}\cdot L^{-}|s,n\rangle\). We will compute it explicitly for \(s=0\) and \(s=1\). Before diving into the computation, we write down a commutation relation that will be used a lot \[[L_{a}^{-},L^{+}\cdot L^{+}] =2(\delta_{ab}H-M_{ab})L_{b}^{+}+2L_{b}^{+}(\delta_{ab}H-M_{ab})\] \[=4L_{a}^{+}H-4L_{b}^{+}M_{ab}-2(d-1)L_{a}^{+}. \tag{124}\] ### \(s=0\) With eq. (124), we first compute the action of \(L_{a}^{-}\) on \(|0,n\rangle\) \[L_{a}^{-}|0,n\rangle =\sum_{t=0}^{n-1}(L^{+}\cdot L^{+})^{n-t-1}\left(4L_{a}^{+}H-4L_{b }^{+}M_{ab}-2(d-1)L_{a}^{+}\right)|0,t\rangle+(L^{+}\cdot L^{+})^{n}L_{a}^{-} |0,0\rangle\] \[=\sum_{t=0}^{n-1}\left(4(\Delta+\ell+2t)-2(d-1)\right)L_{a}^{+}| 0,n-1\rangle+(L^{+}\cdot L^{+})^{n}L_{a}^{-}|0,0\rangle\] \[=4n\left(n+\Delta+\ell-\frac{d+1}{2}\right)L_{a}^{+}|0,n-1\rangle +(L^{+}\cdot L^{+})^{n}L_{a}^{-}|0,0\rangle \tag{125}\] where we have used \(M_{ab}|0,t\rangle=0\) because \(|0,t\rangle\) is an \(\mathrm{SO}(d+1)\) scalar. 
Further acting with another \(L_{a}^{-}\) yields \[L^{-}\cdot L^{-}|0,n\rangle =4n\left(n+\Delta+\ell-\frac{d+1}{2}\right)(L^{-}\cdot L^{+})|0,n-1\rangle\] \[+4n\left(n+\Delta+\ell-1-\frac{d+1}{2}\right)(L^{+}\cdot L^{+})^{ n-1}(L^{+}\cdot L^{-})|0,0\rangle\] \[-4n(L^{+}\cdot L^{+})^{n-1}L_{b}^{+}M_{ab}L_{a}^{-}|0,0\rangle+(L^ {+}\cdot L^{+})^{n}(L^{-}\cdot L^{-})|0,0\rangle \tag{126}\] In the first line of (111), we write \(L^{-}\cdot L^{+}\) as \(2(d+1)H+L^{+}\cdot L^{-}\), and notice that \(|0,n-1\rangle\) is an eigenstate of both \(H\) and \(L^{+}\cdot L^{-}\). The eigenvalue of any \(|s,n\rangle\) with respect to \(L^{+}\cdot L^{-}\), denoted by \(\alpha_{s,n}\), is given by eq. (110) \[\alpha_{s,n}=(\ell+2n)(\ell+2n+2\Delta-d-1)+2s(s+d-2)-\ell(\ell+d-1) \tag{112}\] The second line of eq. (111) is a special case of (112) corresponding to \(s=n=0\). In the third term of (111), we first replace \(M_{ab}L_{a}^{-}\) by the commutator \([M_{ab},L_{a}^{-}]=-dL_{b}^{-}\) and then it is becomes equivalent to the second line. The last term of (111) vanishes because \((L^{-}\cdot L^{-})|0,0\rangle\) is at level \(\ell-2\) and it is clear that such a state cannot be an \(\mathrm{SO}(d+1)\) singlet. Altogether, we have \[L^{-}\cdot L^{-}|0,n\rangle =4n\left(n+\Delta+\ell-\frac{d+1}{2}\right)(2(d+1)(\Delta+\ell+2n -2)+\alpha_{0,n-1})\,|0,n-1\rangle\] \[+4n\left(n+\Delta+\ell+\frac{d+1}{2}-2\right)\alpha_{0,0}|0,n-1\rangle\] \[=16n(n+\Delta+\ell-1)\left(n+\Delta-\frac{d+1}{2}\right)\left(n+ \ell+\frac{d-1}{2}\right)|0,n-1\rangle \tag{113}\] ### \(s=1\) For the \(s=1\) case, we also start with computing \(L_{c}^{-}|1,n\rangle_{a,b}\) \[L_{c}^{-}|1,n\rangle_{a,b} =4n\left(n+\Delta+\ell-\frac{d+1}{2}\right)L_{c}^{+}|1,n-1\rangle _{a,b}\] \[-4n(L^{+}\cdot L^{+})^{n-1}L_{c^{\prime}}^{+}M_{cc^{\prime}}|1,0 \rangle_{a,b}+(L^{+}\cdot L^{+})^{n}L_{c}^{-}|1,0\rangle_{a,b} \tag{114}\] Since \(|1,0\rangle_{a,b}\) carries the 2-form representation of \(\mathrm{SO}(d+1)\), we can get rid of the generator \(M_{cc^{\prime}}\) \[L_{c^{\prime}}^{+}M_{cc^{\prime}}|1,0\rangle_{a,b}=\left(L_{a}^{+}|1,0\rangle_ {c,b}-L_{b}^{+}|1,0\rangle_{c,a}\right)-\left(\delta_{ac}L_{c^{\prime}}^{+}|1, 0\rangle_{c^{\prime},b}-\delta_{bc}L_{c^{\prime}}^{+}|1,0\rangle_{c^{\prime},a}\right) \tag{115}\] where the first bracket is equal to \(L_{c}^{+}|1,0\rangle_{a,b}\), due to a Bianchi identity following from the identification of \(|1,0\rangle_{a,b}=L_{a}^{+}|\Delta\rangle_{b}-L_{b}^{+}|\Delta\rangle_{a}\) as a curvature and \(L_{a}^{+}\) as a derivative. Combining the first term of (114) and the first term of (115) leads to \[L_{c}^{-}|1,n\rangle_{a,b} =(L^{+}\cdot L^{+})^{n}L_{c}^{-}|1,0\rangle_{a,b}+4n\left(n+\Delta +\ell-\frac{d+3}{2}\right)L_{c}^{+}|1,n-1\rangle_{a,b}\] \[+4n\left(\delta_{ac}L_{c^{\prime}}^{+}|1,n-1\rangle_{c^{\prime}, b}-\delta_{bc}L_{c^{\prime}}^{+}|1,n-1\rangle_{c^{\prime},a}\right) \tag{116}\] and hence the action of \(L^{-}\cdot L^{-}\) can be written as a sum of the following three pieces \[I_{1} =4n\left(n+\Delta+\ell-\frac{d+3}{2}\right)L^{-}\cdot L^{+}|1,n-1 \rangle_{a,b}\] \[I_{2} =4n\left(L_{a}^{-}L_{c}^{+}|1,n-1\rangle_{c,b}-L_{b}^{-}L_{c}^{+ }|1,n-1\rangle_{c,a}\right)\] \[I_{3} =L_{c}^{-}(L^{+}\cdot L^{+})^{n}L_{c}^{-}|1,0\rangle_{a,b}. \tag{117}\] For the term \(I_{1}\), it suffices to use \(L^{-}\cdot L^{+}=2(d+1)H+L^{+}\cdot L^{-}\) and eq. 
(H.7) \[(L^{-}\cdot L^{+})|1,n-1)_{a,b}=(2(d+1)(\Delta+\ell+2n-2)+\alpha_{1,n-1})\,|1,n-1)_{a,b}\] (H.13) The term \(I_{2}\) can be computed as follows: \[L_{a}^{-}L_{c}^{+}|1,n-1)_{c,b} =2(\Delta+\ell+2n-2)|1,n-1)_{a,b}-2M_{ac}|1,n-1)_{c,b}+L_{c}^{+}L_{a}^{-}|1,n-1)_{c,b}\] \[=2(\Delta+\ell+2n-d-1)|1,n-1)_{a,b}+L_{c}^{+}L_{a}^{-}|1,n-1)_{c,b}\] (H.14) where \[L_{c}^{+}L_{a}^{-}|1,n-1)_{c,b}=(L^{+}\cdot L^{+})^{n-1}L_{c}^{+}L_{a}^{-}|1,0)_{c,b}+4(n-1)\left(n+\Delta+\ell-\frac{d+3}{2}\right)L_{c}^{+}L_{a}^{+}|1,n-2)_{c,b}\] (H.15) because of eq. (H.11). After antisymmetrizing \(a\) and \(b\), the second term of (H.15) is proportional to \(|1,n-1)_{a,b}\), again as a result of the Bianchi identity, and hence we have \[L_{a}^{-}L_{c}^{+}|1,n-1)_{c,b}-(a\leftrightarrow b) =4\left[\Delta+\ell+2n-d-1+(n-1)\left(n+\Delta+\ell-\frac{d+3}{2}\right)\right]|1,n-1)_{a,b}\] \[+(L^{+}\cdot L^{+})^{n-1}\left(L_{c}^{+}L_{a}^{-}|1,0)_{c,b}-L_{c}^{+}L_{b}^{-}|1,0)_{c,a}\right)\] (H.16) Using the notation \(|\Delta\rangle_{a_{1}\cdots a_{s}}\equiv L_{a_{s+1}}^{+}\cdots L_{a_{\ell}}^{+}|\Delta\rangle_{a_{1}\cdots a_{\ell}}\), then \(L_{a}^{-}|1,0)_{c,b}\) can be written as \[L_{a}^{-}|1,0)_{c,b}=2\delta_{ac}(\Delta+\ell-2)|\Delta\rangle_{b}+L_{c}^{+}L_{a}^{-}|\Delta\rangle_{b}-(b\leftrightarrow c)\] (H.17) Now a crucial step is to compute \(L_{a}^{-}|\Delta\rangle_{b}\), which can be significantly simplified with the aid of group theory. On the one side, treating \(L_{a}^{-}|\Delta\rangle_{b}\) as the tensor product of two fundamental representations of \(\mathrm{SO}(d+1)\), it should have the structure \(\bullet\oplus\mathbb{Y}_{2}\oplus\mathbb{Y}_{1,1}\). Altogether, (H.13), (H.18) and (H.19) yield \[(L^{-}\cdot L^{-})|1,n)_{a,b}=16n(n+\Delta+\ell-1)\left(n+\Delta-\frac{d+1}{2}\right)\left(n+\ell+\frac{d-1}{2}\right)|1,n-1)_{a,b}\] (H.20) which takes exactly the same form as eq. (H.8).

### Matrix elements

For \(s=0\) or \(1\), the action of \(C^{\mathrm{SO}(1,d+1)}\) on \(|s,n)\) can be summarized as (dropping the spin indices again) \[C^{\mathrm{SO}(1,d+1)}|s,n) =\left[2n(n\!+\!\Delta\!+\!\ell)\!+\!\frac{(d\!+\!2\ell\!+\!1)\Delta\!-\!(d\!-\!1)\ell}{2}\!+\!C^{\mathrm{SO}(d)}(\mathbb{Y}_{s})\right]|s,n)+\frac{1}{4}|s,n+1)\] \[+16n(n+\Delta+\ell-1)\left(n+\Delta-\frac{d+1}{2}\right)\left(n+\ell+\frac{d-1}{2}\right)|s,n-1)\] (H.21) Using eq. (H.8) and eq. (H.20), we find the \(n\)-dependent part of \((s,n|s,n)\) to be \[(s,n|s,n)\propto 16^{n}n!(\Delta+\ell)_{n}\left(\Delta-\frac{d-1}{2}\right)_{n}\left(\ell+\frac{d+1}{2}\right)_{n}\] (H.22) and hence the action of \(C^{\mathrm{SO}(1,d+1)}\) on the normalized states \(|s,n)=\frac{1}{\sqrt{(s,n|s,n)}}|s,n)\) is \[C^{\mathrm{SO}(1,d+1)}|s,n) =\left[2n(n\!+\!\Delta\!+\!\ell)\!+\!\frac{(d\!+\!2\ell\!+\!1)\Delta\!-\!(d\!-\!1)\ell}{2}\!+\!C^{\mathrm{SO}(d)}(\mathbb{Y}_{s})\right]|s,n)\] \[+\sqrt{(n+1)(n+\Delta+\ell)\left(n+\Delta-\frac{d-1}{2}\right)\left(n+\ell+\frac{d+1}{2}\right)}|s,n+1)\] \[+\sqrt{n(n+\Delta+\ell-1)\left(n+\Delta-\frac{d+1}{2}\right)\left(n+\ell+\frac{d-1}{2}\right)}|s,n-1)\] (H.23)
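The normalized form (H.23) gives a symmetric tridiagonal matrix that can be truncated and diagonalized numerically, as done for \({\cal Q}^{(0)}\) in section 5.2.2. A minimal sketch of such a diagonalization (illustrative only; it assumes numpy, uses \(C^{\mathrm{SO}(d)}(\mathbb{Y}_{s})=-s(s+d-2)\), and the values of \(d,\Delta,\ell\) are sample choices above the unitarity bound):

```python
import numpy as np

def casimir_Q(d, Delta, ell, s, N):
    """Truncated SO(1,d+1) Casimir in the normalized basis |s,n), built from (H.23)."""
    n = np.arange(N)
    diag = (2 * n * (n + Delta + ell)
            + ((d + 2 * ell + 1) * Delta - (d - 1) * ell) / 2
            - s * (s + d - 2))
    m = n[:-1]
    off = np.sqrt((m + 1) * (m + Delta + ell) * (m + Delta - (d - 1) / 2) * (m + ell + (d + 1) / 2))
    return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

d, Delta, ell = 3, 4.5, 2
ev = np.linalg.eigvalsh(casimir_Q(d, Delta, ell, s=0, N=3000))
print(ev[:3], d**2 / 4)   # low-lying eigenvalues, to be compared with d^2/4 as in fig. (5.5)
```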
2308.10438
Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks
In this paper, we propose a novel layer-adaptive weight-pruning approach for Deep Neural Networks (DNNs) that addresses the challenge of optimizing the output distortion minimization while adhering to a target pruning ratio constraint. Our approach takes into account the collective influence of all layers to design a layer-adaptive pruning scheme. We discover and utilize a very important additivity property of output distortion caused by pruning weights on multiple layers. This property enables us to formulate the pruning as a combinatorial optimization problem and efficiently solve it through dynamic programming. By decomposing the problem into sub-problems, we achieve linear time complexity, making our optimization algorithm fast and feasible to run on CPUs. Our extensive experiments demonstrate the superiority of our approach over existing methods on the ImageNet and CIFAR-10 datasets. On CIFAR-10, our method achieves remarkable improvements, outperforming others by up to 1.0% for ResNet-32, 0.5% for VGG-16, and 0.7% for DenseNet-121 in terms of top-1 accuracy. On ImageNet, we achieve up to 4.7% and 4.6% higher top-1 accuracy compared to other methods for VGG-16 and ResNet-50, respectively. These results highlight the effectiveness and practicality of our approach for enhancing DNN performance through layer-adaptive weight pruning. Code will be available on https://github.com/Akimoto-Cris/RD_VIT_PRUNE.
Kaixin Xu, Zhe Wang, Xue Geng, Jie Lin, Min Wu, Xiaoli Li, Weisi Lin
2023-08-21T03:22:47Z
http://arxiv.org/abs/2308.10438v2
# Efficient Joint Optimization of Layer-Adaptive Weight Pruning ###### Abstract In this paper, we propose a novel layer-adaptive weight-pruning approach for Deep Neural Networks (DNNs) that addresses the challenge of optimizing the output distortion minimization while adhering to a target pruning ratio constraint. Our approach takes into account the collective influence of all layers to design a layer-adaptive pruning scheme. We discover and utilize a very important additivity property of output distortion caused by pruning weights on multiple layers. This property enables us to formulate the pruning as a combinatorial optimization problem and efficiently solve it through dynamic programming. By decomposing the problem into sub-problems, we achieve linear time complexity, making our optimization algorithm fast and feasible to run on CPUs. Our extensive experiments demonstrate the superiority of our approach over existing methods on the ImageNet and CIFAR-10 datasets. On CIFAR-10, our method achieves remarkable improvements, outperforming others by up to 1.0% for ResNet-32, 0.5% for VGG-16, and 0.7% for DenseNet-121 in terms of top-1 accuracy. On ImageNet, we achieve up to 4.7% and 4.6% higher top-1 accuracy compared to other methods for VGG-16 and ResNet-50, respectively. These results highlight the effectiveness and practicality of our approach for enhancing DNN performance through layer-adaptive weight pruning. Code will be available on [https://github.com/Akimoto-Cris/RD_VIT_PRUNE](https://github.com/Akimoto-Cris/RD_VIT_PRUNE). ## 1 Introduction Deep Neural Networks (DNNs) [22, 34, 35, 17, 19] play a critical role in various computer vision tasks. However, to achieve high accuracy, DNNs typically require large number of parameters, which makes it very energy-consuming and is difficult to be deployed on resource-limited mobile devices [16, 15]. Pruning is one of the powerful ways to reduce the complexity of DNNs. By removing the redundant parameters, the operations can be significantly reduced (e.g., FLOPs), which leads to faster speed and less energy-consuming. Typically, pruning approaches can be divided into two categories: structured pruning [14, 1, 9, 31, 18, 30] and weight (unstructured) pruning [27, 32, 28, 39, 16, 15]. Structured pruning approaches consider a channel or a kernel as a basic pruning unit, while weight pruning approaches consider a weight as a basic pruning unit. The former is more hardware-friendly and the latter is able to achieve higher pruning ratio. In this paper, we focus on improving the weight pruning and propose a novel jointly-optimized layer-adaptive approach to achieve state-of-the-art results between FLOPs and accuracy. Recent discoveries [10, 13, 25] demonstrate that layer-adaptive sparsity is the superior pruning scheme. However, one drawback in prior layer-adaptive approaches is that they only consider the impact of a single layer when deciding the pruning ratio of that layer. The mutual impact between different layers is ignored. Moreover, another challenge is that the search space of the pruning ratio for each layer increases exponentially as the number of layers. In a deep neural network, the number of layers can be a hundred or even a thousand, which makes it very difficult to find the solution efficiently. In our approach, we define a joint learning objective to learn the layer-adaptive pruning scheme. We aim to minimize the output distortion of the network when pruning weights on all layers under the constraint of target pruning ratio. 
As the output distortion is highly related to accuracy, our approach is able to maintain accuracy even at high pruning ratios. We explore an important property of the output distortion and find that the additivity property [42, 41, 38] holds when we prune weights on multiple layers. In other words, the output distortion caused by pruning all layers' weights equals to the sum of the output distortion due to the pruning of each individual layer. We provide a mathematical derivation for the additivity property by using the Taylor series expansion. Moreover, utilizing the additivity property, we develop a very fast method to solve the optimization via dynamic programming, which has only linear time complexity. We rewrite the objective function as a combinatorial optimization problem. By defining the state function and the recursive equation between different states, we can decompose the whole problem into sub-problems and solve it via dynamic programming. In practice, our approach is able to find the solution in a few minutes on CPUs for deep neural networks. Note that different from other approximation algorithms, dynamic programming is able to find the global optimal solution, which means that our approach provides optimal pruning scheme with minimal output distortion. We summarize the main contributions of our paper as follows: * We propose a novel layer-adaptive pruning scheme that jointly minimizes the output distortion when pruning the weights in all layers. As the output distortion is highly related to the accuracy, our approach maintains high accuracy even when most of the weights are pruned. We also explore an important additivity property for the output distortion based on Taylor series expansion. * We develop a fast algorithm to solve the optimization via dynamic programming. The key idea is to rewrite the objective function as a combinatorial optimization problem and then relax the whole problem into tractable sub-problems. Our method can find the solution of a deep neural network in a few minutes. * Our approach improves state-of-the-arts on various deep neural networks and datasets. The rest of our paper is organized as follows. We discuss the related works in section 2. In section 3, we develop our approach in detail. We present the objective function, the optimization method, and the time complexity analysis of the algorithm. In the last section, we provide the comprehensive experimental results. ## 2 Related Works Our focus of this work generally falls into the magnitude-based pruning (MP) track within model compression of neural networks, with early works such as OBD [24]. MP is done by ranking or penalizing weights according to some criterion (_e.g._ magnitude) and removing low-ranked weights. Many efforts have been made ever since under the context of [24, 16], which can be roughly divided into the following approaches depending on their timing of pruning embedded in the network training. **Post-training Pruning.** Post-training pruning scheme prunes out network parameters after standard network training, _i.e._ prunes from a pretrained converged model. Under this context, parameters can be pruned out at once to achieve target sparsity constraint (ont-shot pruning), or pruned out gradually during the sparse model fine-tuning (iterative pruning). [16] proposed an iterative pruning scheme that determines layerwise sparsity using layer statistics heuristic. [45, 10] adopted a global pruning threshold throughout all layers in the network to meet the model sparsity constraint. 
[5][33] pooled all layers together and determined pruning thresholds for different layers in an integrated fashion. [12] proposed to rewind the weights from previous iterative pruning phase based on the lottery ticket hypothesis. LAMP[25] derived a closed-form layerwise sparsity selection from a relaxed layerwise \(l2\) distortion minimization problem that is compatible with various post-training pruning schemes including iterative and one-shot pruning. PGMPF [4] adopted simple \(l2\)-based layerwise Figure 1: An example of additivity property collected on ResNet-32 on CIFAR-10. The vertical axis shows the output distortion when pruning only two consecutive layers. The horizontal axis shows the sum of the output distortion due to the pruning the involved two layers individually. Sub-figures display the situations when all layers in the model are assigned with the corresponding sparsity. pruning criterion and improved the weight masking and updating rules during finetuning. [6] adopted a one-shot pruning method by leveraging zero-invariant groups. [23] proposed to re-calibrate the biases and variances of model weights and activations, similar to the widely adopted bias correction in model quantization [11, 2]. [32] presented an iterative-pruning method that leverage taylor expansion of model loss and derived a gradient based pruning criteria. Our method leverages taylor expansion on output distortion parametrized by layer weights, which is fundamentally different from [32]. SuRP [20] recursively applies triangular inequality and assumes laplacian distribution to approximate output distortion to achieve joint-optimization similar to us. However, our approximation is more straight-forward and do not need any assumptions on the distribution. **Pruning at Initialization.** In contrast to the previous scheme, there is an emerging line of work that aims to remove connections or neurons from scratch at the initialization of training, with the merit of avoiding pretraining and complex pruning schedules. SNIP [26] prunes parameters only once at the initialization phase of training. The normalized magnitudes of the derivatives of parameters are defined as the pruning criterion. [7] presented a modified saliency metric based on SNIP [26], allowing for calculating saliences of partially pruned networks. [36] engineered the gradient flow when training sparse networks from scratch to achieve better convergence. Since pruning at initialization out of our research scope, one may refer to related surveys [37] for more comprehensive introduction. **Other Pruning Schemes.**[3] interleaves the pruning in between normal training course, gradually pruning out more connections and neurons from the networks. This scheme is similar to the previous iterative pruning, however, here the model is trained from scratch. ProbMask [43] similarly leverages projected gradient descent with progressive pruning strategy to directly train sparse networks. [40] integrates supermask training with gradient-drive sparsity for training sparse networks. Since our main contribution is the improvement of the pruning criteria, we mainly evaluate our method under post-training unstructured pruning paradigms, such as iterative pruning and one-shot pruning. Although our method may have equal potential effectiveness on other sparsity structures and pruning schemes like Pruning at Initialization, we leave such validations for future works. ## 3 Approach In this section, we present our approach in detail. 
We first give the formulation of our objective function and then provide the optimization method. An additivity property is derived based on Taylor series approximation. The implementation details of the dynamic programming and the analysis of the time complexity are also provided. ### Objective Function Following the notations in [25], let \(f\) denote a neural network, define \(W^{(1:l)}=\big{(}W^{(1)},...,W^{(l)}\big{)}\) as all the parameters of \(f\), where \(l\) is the number of layers and \(W^{(i)}\) is the weights in layer \(i\). When we prune part of the parameters of \(f\), we will receive a modified neural network with the new parameter set \(\tilde{W}^{(1:l)}\). We view the impact of pruning as the distance between the network outputs \(f(x;W^{(1:l)})\) and \(f(x;\tilde{W}^{(1:l)})\). The learning objective is to minimize the output distortion caused by pruning under the constraint of the pruning ratio, \[\min\ \|f(x;W^{(1:l)})-f(x;\tilde{W}^{(1:l)})\|^{2}\ \ s.t.\ \frac{\|\tilde{W}^{(1:l)}\|_{0}}{\|W^{(1:l)}\|_{0}} \leq R, \tag{1}\] where \(R\) denotes the pruning ratio for the entire network. An important property we discover is that the expectation of output distortion, caused by pruning all layers' weights, equals the sum of expectation of output distortion due to the pruning of each individual layer, \[E\left(\|f(x;W^{(1:l)})-f(x;\tilde{W}^{(1:l)})\|^{2}\right)=\sum_{i=1}^{l}E( \delta_{i}), \tag{2}\] where \(\delta_{i}\) denotes the output distortion when only pruning the weights in layer \(i\). ### Analysis We provide a mathematical derivation for the additivity property. We make the following two assumptions for the proof of additivity property: **Assumption 1**: _Taylor first order expansion: The neural network \(f\) parametrized by \(W^{(1:l)}\) when given a small perturbation \(\Delta W^{(1:l)}\) resulting in \(\tilde{W}^{(1:l)}=W^{(1:l)}+\Delta W^{(1:l)}\) can be expanded as the following:_ \[f(x;\tilde{W}^{(1:l)})=f(x;W^{(1:l)})+\sum_{i=1}^{l}\frac{\partial f}{ \partial W^{(i)}}\Delta W^{(i)}. \tag{3}\] **Assumption 2**: _I.d.d. weight perturbation across layers [44]: \(\forall 0<i\neq j<L,E(\Delta W^{(i)})E(\Delta W^{(j)})=0\)._ According to Eq. (3), \(\delta=\|f(x;W^{(1:l)})-f(x;\tilde{W}^{(1:l)})\|^{2}\) can be written as \[\delta=\Big{(}\sum_{i=1}^{l}\Delta{W^{(i)}}^{\top}\frac{\partial f}{\partial W ^{(i)}}^{\top}\Big{)}\Big{(}\sum_{j=1}^{l}\frac{\partial f}{\partial W^{(j)} }\Delta W^{(j)}\Big{)}. \tag{4}\] When we take the expectation of Eq. (4) for both sides, the right hand side can be opened up into additive terms (vector transpose is agnostic inside expectation): \[E(\delta)=\sum_{1\leq i,j\leq l}E\left(\Delta W^{(i)}\frac{\partial f}{ \partial W^{(i)}}\right)E\left(\Delta W^{(j)}\frac{\partial f}{\partial W^{(j) }}\right)\!. \tag{5}\] Further, since the derivative \(\frac{\partial f}{\partial W^{(i)}}\) is a constant as we consider trained fixed network weights, we can derive the following from Assumption 2: \[E\left(\Delta W^{(i)}\frac{\partial f}{\partial W^{(i)}}\right)E\left(\Delta W^{ (j)}\frac{\partial f}{\partial W^{(j)}}\right)=0. \tag{6}\] Therfore, the cross terms (\(i\neq j\)) in Eq. (5) disappear, obtaining: \[E(\delta)=\sum_{i=1}^{l}E\left(\|\frac{\partial f}{\partial W^{(i)}}\Delta W^{ (i)}\|^{2}\right). \tag{7}\] Eq. 
(7) is the result we want because, again, according to Assumption 1, \[\begin{split}\frac{\partial f}{\partial W^{(i)}}\Delta W^{(i)}& =f(x;W^{(1:i-1)},\tilde{W}^{(i)},W^{(i+1,l)})\\ &-f(x;W^{(1;l)}).\end{split} \tag{8}\] Therefore, the left hand side of Eq. (7) becomes the real output distortion \(\delta\) when all layers are pruned, and the right hand side becomes the sum of the output distortion due to the individual pruning of each single layer's weights, which can be used to approximate the output distortion. We have done an empirical examination of our theoretically proposed additivity property on real network. As shown in Fig. 1, when we examine the cases where only pruning two adjacent layers each time in a pretrained model, contributing to the right hand side addable distortion terms while other layers contributing zero to the approximation, we observe that the additivity holds quite well with marginal residuals, where almost all observation points sit close to the identity line. ### Optimization via Dynamic Programming By utilizing the additivity property, we can rewrite the objective function as a combinatorial optimization problem and solve it efficiently using dynamic programming. The objective function is re-written as, \[\min\ \delta_{1}+\delta_{2}+...+\delta_{l}\ \ s.t.\ \ t_{1}+t_{2}+...+t_{l}=T, \tag{9}\] where \(T\) denotes the total number of weights to prune and \(t_{i}\) denotes the number of weights to prune in layer \(i\). We solve (9) by decomposing the whole problem into sub-problems. The basic idea is that we define a state function and find the recursive equation between the states. The problem is solved based on the recursive equation. Specifically, define \(g\) as the state function, in which \(g_{i}^{j}\) means the minimal distortion caused when pruning \(j\) weights at the first \(i\) layers. Our goal is to calculate \(g_{l}^{T}\). For initialization, we have, \[g_{1}^{j}=\delta_{1}(j),\ for\ \ 1\leq j\leq T, \tag{10}\] where \(\delta_{i}(j)\) denotes the distortion caused when pruning \(j\) weights at layer \(i\). Then we have the recursive equation between the states \(g_{i}\) and \(g_{i-1}\), which is, \[g_{i}^{j}=\min\{g_{i-1}^{j-k}+\delta_{i}(k)\},\ where\ \ 1\leq k\leq j. \tag{11}\] The state functions are calculated based on equation (11) in a bottom-up manner from \(g_{1}\) to \(g_{l}\). In practice, we need another variable \(s\) to store the decision of each state to know the number of weights pruned in each layer. \(s\) is defined as \[s_{i}^{j}=\operatorname*{arg\,min}_{k}\{g_{i}^{j}=g_{i-1}^{j-k}+\delta_{i}(k)\}. \tag{12}\] Algorithm 1 shows the pseudo-codes to calculate the state function and find the pruning solution. ### Time complexity analysis The time complexity of the optimization algorithm using dynamic programming is \(O(l\times T^{2})\), as we have \(l\times T\) different states, and each state needs to enumerate the number of weights pruned in a layer. In practice, this algorithm is very fast which costs just a few seconds on CPUs for deep neural networks. We show the detailed results of the speed in the experimental section. ``` 0: Output distortion \(\delta_{i}(j)\) when pruning \(j\) weights in single layer \(i\), for \(1\leq i\leq l\) and \(1\leq j\leq T\). 0: The number of weights \(p_{i}\) pruned in layer \(i\). Initialize minimal output distortion \(g_{i}^{j}=0\) when pruning \(j\) weights in the first \(i\) layers. 
Initialize state function \(s_{i}^{j}=-1\) where \(s_{i}^{j}\) denotes the number of weights pruned in layer \(i\) when pruning \(j\) weights in the first \(i\) layers. for\(i\) from \(1\) to \(l\)do for\(j\) from \(0\) to \(T\)do If \(i=1\): \(g_{1}^{j}=\delta_{1}(j)\), \(s_{1}^{j}=j\). Else: \(g_{i}^{j}=\min\{g_{i-1}^{j-k}+\delta_{i}(k)\}\), \(s_{i}^{j}=\operatorname*{arg\,min}_{k}\{g_{i}^{j}\}\). endfor endfor The number of weights pruned in layer \(l\) is \(p_{l}=s_{l}^{T}\). Update \(T=T-s_{l}^{T}\). for\(i\) from \(l-1\) to \(1\)do The number of weights pruned in layer \(i\) is \(p_{i}=s_{i}^{T}\). Update \(T=T-s_{i}^{T}\). endfor ``` **Algorithm 1** Optimization via dynamic programming. ## 4 Experiment Results **Implementation Details.** As our contribution to the existing pruning schemes is on the layer-wise sparsity selection, we evaluate our rate-distortion-based pruning method under different experimental settings, including iterative pruning and one-shot pruning, as well as on multiple network \begin{table} \begin{tabular}{c|c|c|c c|c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Arch} & \multirow{2}{*}{Method} & Sparsity & Remaining & \multirow{2}{*}{Top-1 (\%) \(\uparrow\)} & Top-1 \\ & & & (\%) & FLOPs (\%) & & drop (\%) \(\downarrow\) \\ \hline \multirow{8}{*}{CIFAR-10} & \multirow{8}{*}{ResNet-32} & LAMP [25] & \(79.03\) & \(36.02\) & \(92.58\pm 0.25\) & \(1.41\) \\ & & Ours & \(58.65\) & \(34.5\) & \(\mathbf{93.62\pm 0.23}\) & \(\mathbf{0.37}\) \\ \cline{2-6} & \multirow{8}{*}{ResNet-32} & LAMP [25] & \(89.3\) & \(21.66\) & \(91.94\pm 0.24\) & \(2.05\) \\ & & Ours & \(73.76\) & \(22.21\) & \(\mathbf{93.34\pm 0.10}\) & \(\mathbf{0.65}\) \\ \cline{2-6} & \multirow{8}{*}{(Dense: \(93.99\))} & LAMP [25] & \(95.5\) & \(11\) & \(90.04\pm 0.13\) & \(3.95\) \\ & & Ours & \(86.57\) & \(11.25\) & \(\mathbf{92.56\pm 0.20}\) & \(\mathbf{1.43}\) \\ \cline{2-6} & & LAMP [25] & \(98.85\) & \(3.29\) & \(83.66\pm 0.29\) & \(10.33\) \\ & & Ours & \(95.5\) & \(3.59\) & \(\mathbf{90.83\pm 0.24}\) & \(\mathbf{3.16}\) \\ \cline{2-6} & \multirow{8}{*}{CIFAR-10} & \multirow{8}{*}{CIFAR-16} & E-R ker. [10] & \(95.6\) & / & \(91.99\pm 0.14\) & \(-0.79\) \\ & & DPF [29] & \(95\) & / & \(93.87\pm 0.15\) & \(-0.13\) \\ & & LAMP [25] & \(95.6\) & \(15.34\) & \(92.06\pm 0.21\) & \(-0.86\) \\ & & SuRP [20] & \(95.6\) & / & \(92.13\) & \(-0.93\) \\ & & Ours & \(95.6\) & \(6.83\) & \(\mathbf{92.59\pm 0.17}\) & \(\mathbf{-0.88}\) \\ \cline{2-6} & \multirow{8}{*}{(Dense: \(91.71\))} & Global [33] & \(98.85\) & / & \(81.56\pm 3.73\) & \(9.64\) \\ & & Uniform [45] & \(98.85\) & / & \(55.68\pm 12.20\) & \(35.52\) \\ & & Uniform+ [13] & \(98.85\) & / & \(87.85\pm 0.26\) & \(3.35\) \\ & & E-R ker. 
[10] & \(98.85\) & / & \(90.55\pm 0.19\) & \(0.65\) \\ & & LAMP [25] & \(98.85\) & 6.65 & \(91.07\pm 0.4\) & \(0.13\) \\ & & SuRP [20] & \(98.84\) & / & \(91.21\) & \(-0.01\) \\ & & Ours & \(98.85\) & \(3.43\) & \(\mathbf{92.14\pm 0.18}\) & \(\mathbf{-0.43}\) \\ \cline{2-6} & \multirow{8}{*}{DenseNet-121} & PGMPF [4] & / & \(33\) & \(\mathbf{93.6}\) & \(0.08\) \\ & & LAMP [25] & \(86.58\) & \(33.53\) & \(92.22\pm 0.05\) & \(-0.51\) \\ & & Ours & \(67.21\) & \(35.49\) & \(92.76\pm 0.18\) & \(\mathbf{-1.05}\) \\ \hline \multirow{8}{*}{DenseNet-121 (Dense: \(91.14\))} & LAMP [25] & \(95.5\) & \(6.45\) & \(90.11\pm 0.13\) & \(1.03\) \\ & & SuRP [20] & \(95.5\) & / & \(90.75\) & \(0.39\) \\ & & Ours & \(95.5\) & \(6.72\) & \(\mathbf{91.49\pm 0.21}\) & \(\mathbf{-0.35}\) \\ \cline{2-6} & \multirow{8}{*}{(Dense: \(91.81\))} & Global [33] & \(98.85\) & / & \(45.30\pm 27.75\) & \(45.84\) \\ & & Uniform [45] & \(98.85\) & / & \(66.46\pm 18.72\) & \(24.68\) \\ & & Uniform+ [13] & \(98.85\) & / & \(69.25\pm 19.28\) & \(21.89\) \\ & & E-R ker. [10] & \(98.85\) & / & \(59.06\pm 25.61\) & \(32.08\) \\ & & LAMP [25] & \(98.85\) & \(1.71\) & \(85.13\pm 0.31\) & \(6.01\) \\ & & SuRP [20] & \(98.56\) & / & \(86.71\) & \(4.43\) \\ & & Ours & \(98.85\) & \(2.02\) & \(\mathbf{87.7\pm 0.24}\) & \(\mathbf{3.44}\) \\ \hline \multirow{8}{*}{ImageNet} & \multirow{8}{*}{VGG-16-BN} & LAMP [25] & \(95.5\) & \(37.16\) & \(64.63\) & \(8.73\) \\ & & Ours & \(95.5\) & \(9.12\) & \(\mathbf{66.9}\) & \(6.47\) \\ \cline{1-1} & & Ours & \(73.54\) & \(34.95\) & \(\mathbf{69.35}\) & \(\mathbf{4.02}\) \\ \cline{1-1} \cline{2-6} & \multirow{8}{*}{(Dense: \(73.37\))} & LAMP [25] & \(98.85\) & \(16.73\) & \(51.59\) & \(21.78\) \\ \cline{1-1} & & Ours & \(89.3\) & \(17.71\) & \(\mathbf{68.88}\) & \(\mathbf{4.49}\) \\ \cline{1-1} & & Ours & \(98.85\) & \(3.51\) & \(\mathbf{59.41}\) & \(\mathbf{13.96}\) \\ \cline{1-1} \cline{2-6} & \multirow{8}{*}{ResNet-50 (Dense: \(76.14\))} & PGMPF [4] & / & \(53.5\) & \(75.11\) & \(0.52\) \\ \cline{1-1} & & Ours & \(41\) & \(53.5\) & \(\mathbf{75.90}\) & \(\mathbf{0.24}\) \\ \cline{1-1} \cline{2-6} & \multirow{8}{*}{(Dense: \(76.14\))} & LAMP [25] & \(89.3\) & \(26.1\) & \(72.56\) & \(3.58\) \\ \cline{1-1} & & Ours & \(67.22\) & \(28.52\) & \(\mathbf{73.47}\) & \(\mathbf{2.67}\) \\ \cline{1-1} \cline{2-6} & \multirow{8}{*}{(Dense: \(76.14\))} & LAMP [25] & \(95.5\) & \(15.47\) & \(66.04\) & \(10.1\) \\ \cline{1-1} & & Ours & \(95.5\) & \(2.85\) & \(\mathbf{66.06}\) & \(\mathbf{10.08}\) \\ \cline{1-1} & & Ours & \(79.01\) & \(16.58\) & \(\mathbf{72.26}\) & \(\mathbf{3.88}\) \\ \cline{1-1} \cline{2-6} & \ architectures and image classification datasets. We consider 3 models on CIFAR-10 dataset [21], _i.e._, VGG-16 following the adapted architectures in [25], ResNet-32 [17], DenseNet-121 [19], while on ImageNet dataset [8], we evaluate VGG-16 with BatchNorm [34] and ResNet-50 [17]. On CIFAR-10, following the baseline method [25], we perform _five independent trials_ for each method, and we report the averages and standard deviations among the trials. On the much larger scaled ImageNet, we only perform one trial for each method. For other implementation details, please refer to the supplementary material. 
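For reference, the layer-adaptive allocation of Algorithm 1 can be prototyped in a few lines. The sketch below is a minimal illustration rather than the released implementation: it assumes the per-layer distortion curves \(\delta_{i}(j)\) (output distortion when pruning \(j\) weights in layer \(i\) alone) have already been sampled, and it also allows \(k=0\), i.e. pruning nothing in a given layer.

```
def allocate_sparsity(delta, T_budget):
    """Dynamic-programming solution of Eq. (9).
    delta[i][j]: output distortion when pruning j weights in layer i alone.
    Returns p[i], the number of weights to prune in layer i, summing to T_budget."""
    L = len(delta)
    INF = float("inf")
    # g[i][j]: minimal total distortion when pruning j weights in the first i+1 layers
    g = [[INF] * (T_budget + 1) for _ in range(L)]
    s = [[0] * (T_budget + 1) for _ in range(L)]   # s[i][j]: weights pruned in layer i
    for j in range(min(T_budget, len(delta[0]) - 1) + 1):
        g[0][j], s[0][j] = delta[0][j], j          # initialization, Eq. (10)
    for i in range(1, L):
        for j in range(T_budget + 1):
            for k in range(min(j, len(delta[i]) - 1) + 1):   # recursion, Eq. (11)
                cand = g[i - 1][j - k] + delta[i][k]
                if cand < g[i][j]:
                    g[i][j], s[i][j] = cand, k
    # backtrack the stored decisions, Eq. (12)
    p, remaining = [0] * L, T_budget
    for i in range(L - 1, -1, -1):
        p[i] = s[i][remaining]
        remaining -= p[i]
    return p

# Toy usage: three layers with quadratic distortion curves of different sharpness.
delta = [[(j / 10) ** 2 * c for j in range(11)] for c in (1.0, 0.2, 3.0)]
print(allocate_sparsity(delta, T_budget=15))  # prunes most weights from the "cheap" layer
```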
**Details when generating rate-distortion curves.** In the experimentations, we need to generate rate-distortion curves for every layer to enable sparsity optimization, where points on the curves are a pair of sparsity level and the model output distortion when certain layer is pruned to that sparsity. For non-data-free scheme, the curves are sampled on a randomly selected calibration set from training dataset, while it is also possible to make it data-free by leveraging synthesized data sampled from certain distribution, _e.g._ standard normal distribution. The size of calibration set is set to \(1024\) samples for CIFAR-10 and \(256\) for ImageNet respectively. However, rate-distortion curves obtained by the above process may be interfered by real-world factors resulting in noisy curves. Therefore we designed various strategies to refine the raw rate-distortion curves and better aid the optimization thereafter. Specifically, (1) **Worst case sampling**: inspired by LAMP [25], we calculate the distortion as the _maximum_ squared norm error among all calibration samples instead of calculating the _MSE_ for the whole calibration set; (2) **Outliers filtering**: treat the local maxima points on the curves that break monotonicity as outlier noises and remove them in order to facilitate Algorithm 1, especially to effectively perform Eq. (12). We provide ablation studies in the later Sec. 4.4 to discuss the individual effects of these strategies. ### Iterative Pruning Results In the iterative pruning scheme, one starts with a pretrained full-capacity model. During the finetuning process of the pretrained model, we gradually prune out parameters from the model by a certain amount at each iterative stage. Figure 2: Iterative pruning process of various classification models and datasets. Under different stages of iterative pruning, we get a set of sparse models with gradually increasing sparsity and decreasing computation complexity (FLOPs). Following the iterative pruning settings in LAMP [25], we prune out \(20\%\) of the remaining parameters from the model each time after a round of finetuning. The hyper-parameters setup of the finetuning is detailed in the supplementary material. Tab. 1 compares the results of model accuracies produced during the iterative pruning process by our method and other pruning method counterpart. Given non-standardized adoption of CNN models for experimentations in post-training pruning works, we examined as most models as appeared in various literature and add those as baselines in our comparison, including Global [33], Uniform [45], Uniform+ [13], LAMP [25], E-R ker. [10], which is an extended Erdos-Renyi method for CNNs pruning, where layer-wise sparsity is selected by a closed-form criterion dependent on merely the layer architecture (e.g., the numbers of input and output channels, the convolutional kernel sizes). Fig. 2 further demonstrates the detailed iterative pruning procedures of different methods, where the remaining FLOPs (X-axis) gradually decreases in the course of finetuning. **Results on CIFAR.** From Tab. 1, we observe that our method consistently produces pruned models with higher test performance and less test accuracy drop for the same computational complexity (FLOPs) compared to other methods. Fig. 2 further verifies that this observation holds throughout the pruning procedure. 
For example for ResNet-32 on CIFAR-10, our method obtains Top-1 accuracy at \(92.56\) on average with \(11.25\%\) remaining FLOPs, while baseline result [25] is only \(90.04\) at \(11\%\) FLOPs; When remaining FLOPs is around \(3\%\), we even improve the accuracy by \(7.17\), _i.e._ only \(3.16\) accuracy drop with only \(4.4\%\) survived parameters. For VGG-16 on CIFAR-10, we also observe similar results, where our method achieves the least accuracy drop among various counterparts, _e.g._, when FLOPs are within the range of \(33\pm 2\%\), without the advance design of soft gradient masking and weight updating strategies adopted in [4], ours achieves \(-1.05\%\) drop of Top-1 at \(35.49\%\) FLOPs, which means that the pruned network performs better by \(1.05\%\) than the unpruned one. PGMPF [4] achieves a higher accuracy score than us on VGG-16 model with \(33\%\) remaining FLOPs, which was obtained from a higher performance unpruned model, but still underperforms us regarding the accuracy drop (Top-1 dropped by \(0.08\%\)). **Results on ImageNet.** On the larger scale dataset ImageNet, we also observe similar behaviors from our approach. For VGG16-BN, we outperform others on both \(35\pm 2\%\) and \(16\pm 2\) FLOPs groups. Noticeably, when model sparsity is as high as \(98.85\%\), _i.e._ only \(1.15\%\) surviving parameters, our method still has \(59.41\%\) accuracy, while LAMP already drops to around \(52\). This is also observed on ResNet-50, where we outperform LAMP by a large margin at \(6\%\) FLOPs group. From Fig. 2c, there is a minor observation that although consistently higher test accuracy with \(<50\%\) FLOPs, VGG-16-BN performs slightly lower within the \(30\)\(50\%\) FLOPs range before going up again in the following finetuning iterations. It is speculated that VGG-16-BN is more sensitive to large structural changes for post-train pruning. In all, for both datasets, we observe that our method generates higher accuracy sparse models given either the same FLOPs or sparsity constraint. ### One-shot Pruning Results In one-shot pruning scheme, we directly prune the model to the target computation or parameter constraint, followed by a one-time finetuning. Tab. 2 summarizes the one-shot pruning results using various unstructured pruning algorithms. We carry out comparison on ResNet-50 on ImageNet. The result verifies that our method still fits in the one-shot pruning scheme, with higher accuracy at \(34.5\%\) FLOPs than both baselines [25, 6]. ### Zero-data Pruning To evaluate whether our method is compatible with zero-data pruning scheme, which is promising to achieve better generalizability than standard prun \begin{table} \begin{tabular}{c|c c|c c} \hline \hline \multirow{2}{*}{Method} & Sparsity & Remaining & Top-1 & Top-1 \\ & (\%) & FLOPs (\%) & (\%) & drop(\%) \\ \hline Unpruned & \(0\) & \(100\) & \(76.14\) & - \\ \hline LAMP [25] & \(64.5\) & \(55\) & \(75.43\) & \(0.71\) \\ OTO [6] & \(64.5\) & \(34.5\) & \(75.1\) & \(1.04\) \\ Ours & \(58\) & \(34.5\) & \(\mathbf{75.59}\) & \(\mathbf{0.55}\) \\ \hline \hline \end{tabular} \end{table} Table 2: One-shot pruning results of ResNet-50 on ImageNet. 
\begin{table} \begin{tabular}{c|c c|c c} \hline \hline \multirow{2}{*}{Method} & Sparsity & Remaining & Top-1 & Top-1 \\ & (\%) & FLOPs (\%) & (\%) & drop(\%) \\ \hline Unpruned & \(0\) & \(100\) & \(76.14\) & - \\ \hline [23] & \(50\) & / & \(73.89\) & \(2.16\) \\ LAMP [25] & \(50\) & \(67.05\) & \(74.9\) & \(1.24\) \\ Ours* & \(50\) & \(42.48\) & \(\mathbf{75.13}\) & \(\mathbf{1.01}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Zero-data one-shot pruning results of ResNet-50 on ImageNet dataset. **Ours*** denotes the zero-data alternative of our method by using white noise data to generate rate-distortion curves. usually data dependant, we attempt to adopt our method to zero-data pruning, by replacing the calibration images set that is sampled from real test set with white noise (pixels in each color channel are independently generated by the same distribution required by the classification model, _e.g._, standard normalized distribution \(\mathcal{N}(0,1)\)). Tab. 3 summarizes the results of zero-data variant of our approach compared to the baseline [23] result with the same data synthesize strategy (white-noise). We also include LAMP [25] in the comparison since it happens to require no calibration set to obtain layerwise pruning thresholds for good. Our approach still achieves superior results under zero-data scenario, with only \(1.01\%\) performance drop. This is within our expectation since our rate-distortion theory-based algorithm does not depend on any specific input data distribution. ### Ablation Studies Since our major contribution is the joint-optimization strategy, we first conducted a comparison with the case not using joint-optimization where we directly solve layer-wise sparsity on the output features of each layer, resulting in the performance shown in Tab. 4. As indicated in the table, we observe deteriorated performances for such single-layer optimization on both tested models, showing that our joint-optimization strategy is optimal. We also evaluate the individual effectiveness of the aforementioned rate-distortion curves refining strategies. We first perform ablation on CIFAR-10 dataset. From Tab. 5, we observe that at the same model sparsity \(89\%\), which is relatively high for one-shot pruning scheme, both strategies are shown to work positively for our approach. Therefore, we included both strategies to conduct experiments of main results. We also observe the same on ImageNet dataset, as shown in Tab. 6. Particularly, Outlier filtering strategy brings slightly more improvement on both CIFAR-10 and ImageNet, where Worst case sampling makes no difference at this particular sparsity target. ### Other discussions There is also an interesting observation from Tab. 1 that with the same model sparsity, our method constantly reduces more FLOPs from the model. To better analyze this phenomenon, we take a closer look into the layerwise sparsity solution given by different approaches. As shown in Fig. 3, our method prunes out more parameters from deeper layers than LAMP [25]. Since activations in deeper layers in CNNs usually have more channels and features than shallow layers, pruning out more parameters from deep layers will reduce more operations, resulting in less remaining FLOPs. From Fig. 3, another observation is that both methods prune out more parameters from the last layer of ResNet-32 which is the fully-connected layer, implying that parameters of the last layer contain large redundancy. 
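To make the two curve-refinement strategies compared above concrete, a single layer's rate-distortion curve could be sampled and post-processed roughly as follows. This is an illustrative sketch under stated assumptions: `prune_layer_to` is a hypothetical context manager that temporarily magnitude-prunes one layer, `calib_loader` and `ref_outputs` are an assumed calibration loader and the corresponding dense-model outputs, worst-case sampling keeps the maximum squared output error over calibration samples, and outlier filtering enforces monotonicity with a backward running minimum (one simple way to drop the local maxima mentioned earlier).

```
import numpy as np
import torch

def rd_curve(model, layer, calib_loader, ref_outputs, sparsities, prune_layer_to):
    """Sample one layer's rate-distortion curve over a grid of sparsity levels."""
    curve = []
    for sp in sparsities:
        with prune_layer_to(model, layer, sp), torch.no_grad():
            worst = 0.0
            for x, ref in zip(calib_loader, ref_outputs):
                out = model(x)
                # worst-case sampling: max squared norm error over calibration samples
                worst = max(worst, (out - ref).pow(2).sum().item())
        curve.append(worst)
    # outlier filtering: remove points that break monotonicity
    # by taking a backward running minimum (curve becomes non-decreasing)
    curve = np.minimum.accumulate(np.array(curve)[::-1])[::-1]
    return curve
```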
Meanwhile, we observe that DenseNet-121 on CIFAR-10 does not display the above phenomenon, where our method reduces the same level of FLOPs compared with LAMP under the same sparsity. We elaborate this in the supplementary material. ### Time Complexity We provide the empirical optimization time complexity analysis in Tab. 7. In practice, we use ternary search algorithm to search the solution of \(s_{i}^{j}\) in Eq. (12), which has logarithmic time complexity given the search range. On small datasets like CIFAR-10, with \(35\) layers, our method takes less than a second to calculate the layerwise sparsity, \begin{table} \begin{tabular}{c|c|c|c c} \hline \hline \multirow{2}{*}{WCS} & \multirow{2}{*}{OF} & \multirow{2}{*}{Sparsity (\%)} & Top-1 & Top-1 \\ & & & (\%) & drop(\%) \\ \hline \multirow{2}{*}{ResNet-50} & Ours & \(58\) & \(\mathbf{75.59}\) & \(\mathbf{0.55}\) \\ \cline{2-5} & Ours* & \(60\) & \(74.89\) & \(1.45\) \\ \hline \multirow{2}{*}{VGG-16-BN} & Ours & \(60\) & \(\mathbf{69.01}\) & \(\mathbf{4.36}\) \\ \cline{2-5} & Ours* & \(59\) & \(62.50\) & \(10.87\) \\ \hline \hline \end{tabular} \end{table} Table 4: Comparison of joint-optimization objective and vanilla (single-layer) optimization (denoted by Ours*). \begin{table} \begin{tabular}{c|c|c|c|c} \hline \hline \multirow{2}{*}{Arch} & \multirow{2}{*}{Method} & Sparsity & Top-1 & Top-1 \\ & & (\%) & (\%) & drop(\%) \\ \hline \multirow{2}{*}{ResNet-50} & Ours & \(58\) & \(\mathbf{75.59}\) & \(\mathbf{0.55}\) \\ \cline{2-5} & Ours* & \(60\) & \(74.89\) & \(1.45\) \\ \hline \multirow{2}{*}{VGG-16-BN} & Ours & \(60\) & \(\mathbf{69.01}\) & \(\mathbf{4.36}\) \\ \cline{2-5} & Ours* & \(59\) & \(62.50\) & \(10.87\) \\ \hline \hline \end{tabular} \end{table} Table 6: Different post-processing strategies of RD curves on ResNet-50 on ImageNet with one-shot pruning scheme. Test accuracy of the model **before** finetuning is reported. \begin{table} \begin{tabular}{c c|c|c c} \hline \hline \multirow{2}{*}{WCS} & \multirow{2}{*}{OF} & \multirow{2}{*}{Sparsity (\%)} & Top-1 & Top-1 \\ & & & (\%) & drop(\%) \\ \hline Unpruned & \(0\) & \(93.99\) & - \\ \hline \multirow{4}{*}{\(\surd\)} & \(89.3\) & \(91.15\) & \(2.84\) \\ & & \(89.2\) & \(91.31\) & \(2.68\) \\ \cline{1-1} & \(\surd\) & \(89\) & \(91.51\) & \(2.58\) \\ \cline{1-1} \cline{2-5} \(\surd\) & \(\surd\) & \(89.3\) & \(\mathbf{92.3}\) & \(\mathbf{1.69}\) \\ \hline \hline \end{tabular} \end{table} Table 5: Different post-processing strategies of RD curves on ResNet-32 on CIFAR-10 with iterative pruning scheme. **WCS**: Worst case sampling, **OF**: Outlier filtering. while on larger ImageNet, our method still only takes a few seconds. We also analyze the curve generation costs. For each layer, we traverse all calibration set samples to calculate output distortion at all sparsity levels to generate a rate-distoriton curve. Therefore, the cost of generating rate-distortion curves becomes \(O(lSN)\), where \(l\) is the number of layers, \(S\) is the number of sparsity levels (we set \(S=100\) in practice), and \(N\) is the size of calibration set. We provide the actual time costs for two CIFAR-10 models in Tab. 8. In practice, we used optimized dataloader and parallelized curve generation of different layers to cut down the inference time per sample. ### Analysis of Approximation Error Given the locality nature of taylor expansion, we expect an increasing discrepancy of the taylor approximation under large distortion. We analyze the empirical approximation error in Fig. 4. 
The left figure visualizes the relations between the taylor-based approximated output distortion (X-axis) and the real output distortion (Y-axis), we notice that the data points in the figure are very close to the diagonal. The right figure plots the approximation error at different sparsity levels. The approximation error inflates at large sparsities, e.g. \(>50\%\). ## 5 Conclusions We have presented a new rate-distortion based unstructured pruning criterion. We revealed the output distortion additivity of CNN models unstructured pruning, supported by theory and experiments. We exploited this property to simplify the NP-hard layerwise sparsity optimization problem into a fast pruning criterion with only \(O(l\times T^{2})\) complexity. Benefiting from the direct optimization on the output distortion, our proposed criterion shows superiority over existing methods in various post-training pruning schemes. Our criterion prefer to prune deep and large layers, leading to significant model size and FLOPs reductions. ## Acknowledgement This research is supported by the Agency for Science, Technology and Research (A*STAR) under its Funds (Project Number A1892b0026 and C211118009 and MTC Programmatic Funds (Grant No. M23L7b0021)). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the A*STAR. \begin{table} \begin{tabular}{c|c|c|c} \hline \hline Configuration & No. & Sparsity & Time (s) \\ \cline{2-4} & layers & (\%) & \\ \hline ResNet-32@CIFAR-10 & \(35\) & \(20\) & \(0.46\pm 0.09\) \\ \hline ResNet-50@ImageNet & \(54\) & \(50\) & \(2.08\pm 0.21\) \\ \hline \hline \end{tabular} \end{table} Table 7: Time spent on layerwise sparsity optimization of our method. \begin{table} \begin{tabular}{c|c|c} \hline \hline Configuration & Curve & Optimize (s) \\ \hline ResNet-18@CIFAR-10 & 1052.64 & 0.84 \\ \hline VGG-16@CIFAR-10 & 664.19 & 2.20 \\ \hline \hline \end{tabular} \end{table} Table 8: Comparison of time cost of RD curve generations and optimization. Figure 4: Empirical Approximation Error Analysis. Figure 3: Layer-wise sparsity statistics of ResNet-32 on CIFAR-10 of different methods during iterative pruning. Height of bars denotes the pruning rate with \(\{0.36,0.74,0.89,0.96,0.98\}\) model sparsities.
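Finally, the additivity property at the heart of the method (Eq. 2, examined empirically in Fig. 1 and Fig. 4) can be probed on a toy model in a few lines. The sketch below is not taken from the paper's codebase: the architecture, the random calibration batch, and the magnitude-pruning helper are all illustrative, and the sparsity is kept moderate, where the Taylor approximation is expected to hold.

```
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 64), nn.ReLU(),
                      nn.Linear(64, 10))
x = torch.randn(256, 64)          # stand-in for a calibration batch
with torch.no_grad():
    ref = model(x)

def distortion(pruned_layers, sparsity=0.2):
    """Mean squared output change after magnitude-pruning the given Linear layers."""
    saved = {m: m.weight.data.clone() for m in pruned_layers}
    for m in pruned_layers:
        w = m.weight.data
        thresh = w.abs().flatten().kthvalue(int(sparsity * w.numel())).values
        w[w.abs() <= thresh] = 0.0
    with torch.no_grad():
        d = (model(x) - ref).pow(2).sum(dim=1).mean().item()
    for m, w in saved.items():
        m.weight.data.copy_(w)
    return d

layers = [m for m in model if isinstance(m, nn.Linear)]
joint = distortion(layers[:2])                               # prune layers 1 and 2 together
separate = distortion([layers[0]]) + distortion([layers[1]])  # sum of individual prunings
print(f"joint: {joint:.4f}   sum of individual: {separate:.4f}")
# Eq. (2) predicts these two numbers to be comparable; as discussed above,
# the match degrades at large sparsities.
```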
2309.00502
How heat propagates in liquid $^3$He
In Landau's Fermi liquid picture, transport is governed by scattering between quasi-particles. The normal liquid $^3$He conforms to this picture but only at very low temperature. Here, we show that the deviation from the standard behavior is concomitant with the fermion-fermion scattering time falling below the Planckian time, $\frac{\hbar}{k_{\rm B}T}$, and that the thermal diffusivity of this quantum liquid is bounded by a minimum set by fundamental physical constants and observed in classical liquids. This points to collective excitations (a sound mode) as carriers of heat. We propose that this mode has a wavevector of 2$k_F$ and a mean free path equal to the de Broglie thermal length. This would provide an additional conducting channel with a $T^{1/2}$ temperature dependence, matching what is observed in experiments. The experimental data from 0.007 K to 3 K can be accounted for, with a margin of 10\%, if thermal conductivity is the sum of two contributions: one by quasi-particles (varying as the inverse of temperature) and another by sound (following the square root of temperature).
Kamran Behnia, Kostya Trachenko
2023-09-01T14:50:13Z
http://arxiv.org/abs/2309.00502v4
# How heat propagates in 'non-Fermi liquid' \({}^{3}\)He ###### Abstract In Landau's Fermi liquid, transport is governed by scattering between quasi-particles. The normal liquid \({}^{3}\)He conforms to this picture, but only when T\(<0.02\) T\({}_{F}\). Here, we observe that the deviation from the standard behavior is concomitant with the fermion-fermion scattering time falling below the Planckian time, \(\frac{\hbar}{k_{F}T}\). The thermal diffusivity of this quantum liquid is bounded by a minimum set by fundamental physical constants and earlier observed in classical liquids. This implies that collective excitations of the liquid (a sound mode) are carrying heat. We argue that if heat is carried by 2k\({}_{F}\) hydrodynamic sound mode, both the amplitude and the hitherto unexplained \(T^{1/2}\) temperature dependence of thermal conductivity find an explanation with no other adjustable parameter. The impact of Landau's Fermi liquid (FL) theory [1] in condensed matter physics of the twentieth century can not be exaggerated. Before its formulation, the success of Sondheimer's picture of electrons in metals as a degenerate Fermi gas was a mystery [2]. Even in a simple alkali metal such as Na, the Coulomb interaction between electrons is larger than the Fermi energy. Why wasn't this a problem for understanding physics of metallic solids? Landau solved this mystery by proposing that a strongly interacting system of fermions can be mapped to an ideal system consisting of "quasi-particles" without interaction, or rather with an interaction weak enough to be considered as a perturbation [2; 3]. His theory was inspired by the less common isotope of helium, namely \({}^{3}\)He [4], the first experimental platform for testing the theory. Decades later, the theory was also applied to strongly correlated metals, known as heavy-fermion systems [5]. Leggett, reviewing liquid \({}^{3}\)He [6], writes that it is "historically the first strongly interacting system of fermions of which we have been able to obtain a semi-quantitative description in the low-temperature limit." He also adds that the theory "seems to agree quantitatively with experiment only for \(T\lesssim 100\) mK". Compared to the degeneracy temperature of non-interacting fermions of the system (\(\sim 5\) K), this is a very low temperature. The properties of the ground state (and its evolution with pressure) have been the subject of numerous theoretical papers [7; 8; 9; 10; 11; 12; 13; 14]. In contrast, the breakdown of the Fermi liquid picture at very low temperatures, earlier noted by Emery [15] and Anderson [16], ceased to be widely debated afterwards. Here, we begin by recalling that the most comprehensive set of thermal conductivity data by Greywall [17] shows that as low as \(T\approx 0.01\) K, the Fermi liquid picture does not hold. Thermal conductivity, \(\kappa\), deviates from the expected \(T^{-1}\) behavior and the extracted scattering time, \(\tau_{\kappa}\), is not \(\propto T^{-2}\). We show that this 'non-Fermi-liquid' (NFL) regime emerges when the fermion-fermion scattering time becomes comparable or shorter the Planckian time [18; 19], the time scale often invoked in the context of "strange" metallicity [20]. Remarkably, in this regime, the thermal diffusivity, \(D_{th}\), of quantum liquid \({}^{3}\)He matches the minimum empirically observed [21] and theoretically justified [22; 23] in classical liquids. 
This implies that collective excitations play a role in heat transport comparable to the role played by phonons in solids and in classical liquids. We find that the magnitude and the temperature dependence of the thermal conductivity can be accounted for if heat is carried by a phonon-like zero sound mode with a 2k\({}_{F}\) wave-vector and an evanescence set by the thermal thickness of the Fermi surface in the momentum space [24]. This phononic mechanism of heat propagation is distinguished from all those previously identified in solids and liquids, either classical or quantum. On the other hand, when the temperature becomes of the order of the Fermi temperature, its expression becomes another version of the Bridgman formula [25] for classical liquids. Fig. 1 reproduces figures reported by Greywall, who performed the most extensive study of thermal transport in normal liquid \({}^{3}\)He [17]. Samples with different molar volumes correspond to different pressures in the \(T=0\) limit. As seen in Fig. 1a, thermal conductivity, \(\kappa\), at low temperature is inversely proportional to temperature, as expected in the Fermi liquid picture. But the temperature window for this behavior, already narrow at zero pressure, shrinks with increasing pressure. By the melting pressure, the Fermi Liquid (FL) regime has almost vanished. The breakdown is even more visible in Fig. 1b. It shows the temperature dependence of the inverse of of \(\tau_{\kappa}T^{2}\), the scattering time extracted from thermal conductivity multiplied by the square of temperature, which should be constant in the Fermi liquid picture. A deviation is visible even at 8 mK and shoots up with increasing pressure. In Fig. 2a, we compare the temperature dependence of \(\tau_{\kappa}\) according to Greywall's data (Fig. 1b)) with the Planckian time, \(\tau_{P}=\frac{\hbar}{k_{B}T}\)[18; 19]. One can see that at zero pressure \(\tau_{\kappa}\) becomes of the order of \(\tau_{P}\) at \(T\approx 0.1\) K. At 3 MPa, near the melting pressure, \(\tau_{\kappa}\) falls below at \(\approx 0.043\) K. As seen in Fig. 2b, which reproduces the phase diagram of \({}^{3}\)He [4; 10; 14], the crossover between the FL and the NFL regions of the phase diagram is concomitant with the passage from \(\tau_{\kappa}\gg\tau_{P}\) to \(\tau_{\kappa}\lesssim\tau_{P}\). The possibility of a Planckian bound on dissipation is a subject hotly debated in condensed matter physics [18]. An important clue is provided by the temperature dependence of thermal diffusivity, \(D_{th}\), obtained from \(\kappa\)[17] and specific heat [26]. As seen in Fig. 3a, it shows a minimum, both at zero pressure and at 3 MPa. In all classical fluids, thermal diffusivity goes through a minimum at the intersection between a liquid-like regime, where it decreases with temperature and a gas-like regime where it increases with temperature [27]. We illustrate this in Fig. 3b, which shows the temperature dependence of \(D_{th}\) in two fluids with slightly different atomic or molecular masses, namely H\({}_{2}\) and \({}^{4}\)He [27]. In all three cases, there is a minimum of thermal diffusivity, consistent with other classical liquids [21; 27] (See the supplement for more details). Although the minima are seen at different temperatures, the minimum \(D_{th}\) is very similar: Expressed in mm\({}^{2}\)s\({}^{-1}\), \(D_{min}\), is 0.063 in \({}^{3}\)He, \(\sim 0.049\) in \({}^{4}\)He and \(\sim 0.065\) in H\({}_{2}\). 
The observation implies a role played by phonon-like collective excitations of \({}^{3}\)He in heat transport. In a classical fluid, diffusion at high temperature is driven by the random walk of the particles. Cooling lowers the diffusion constant by decreasing the velocity and the mean free path. Below a give temperature, collective excitations (in other word, sounds) begin to operate. In a quantum liquid, this process occurs along the opposite direction: The diffusion constant is dominated by quasiparticles at low temperature. With warming, the mean path decreases, pulling down the diffusion constant towards a minimum, above which sound becomes relevant. Using Landau's own words,"sonic excitations in the gas of quasi-particles (phonons of the 'zeroth sound')" [28], are to become the main carriers of heat above this minimum. A collision time shorter than the Planckian time means that the frequency of thermally excited zero sound (which increases linearly with temperature (\(\omega_{zes}=\frac{k_{B}T}{\hbar}\))) becomes smaller than the scattering rate (which increases quadratically with temperature). This inequality (\(\omega_{zes}\tau_{\kappa}<1\)) means that the thermally excited zero sound is in the hydrodynamic limit, where the distinction between zero sound and first sound fades away [3]. In liquid \({}^{3}\)He, this occurs at a remarkably low temperature, because \(\tau_{\kappa}T^{2}\ll\frac{\hbar E_{F}}{k_{B}^{2}}\)[29] (See the supplement). As seen in Fig. 1a, Greywall observed, without commenting on it, that above 0.5 K, \(\kappa\propto T^{1/2}\). Fig. 4a shows Figure 1: **The narrow validity of the Fermi liquid picture to the thermal conductivity in \({}^{3}\)He:** a) Thermal conductivity, \(\kappa\) as a function of temperature. b) The extracted scattering time \(\tau_{\kappa}\) extracted from the same data and specific heat, times the square of temperature, \(T^{2}\). Contrary to what is expected in the standard Fermi liquid theory, \(\tau_{\kappa}T^{2}\) is never constant. The figures are from Ref. [17]. The horizontal red solid line in panel (a) represents what is expected for a classical liquid according to Bridgman’s formula. the temperature dependence of \(\kappa/T\) in normal liquid \({}^{3}\)He. Greywall's data at \(T<1\) K [17], measured at constant molar volume of 36.68 cm\({}^{3}\)/mol (corresponding to zero pressure in the low temperature limit) is in rather good agreement with the data reported by Murphy and Meyer [30], measured at saturating vapor pressure (SVP) above 1.2 K (Fig.4a). Despite the imperfect agreement (unsurprising given the change in the molar volume at SVP), one can see that Murphy and Meyer [30] roughly confirm Greywall's observation about the asymptotic tendency of thermal conductivity: \(\kappa\propto T^{1/2}\). This temperature dependence is distinct from what is known to occur in different regimes of phonon thermal conductivity in crystals and glasses (see Table 1) and also from the Bridgman formula (\(\kappa_{B}=rk_{B}n^{2/3}v_{s}\)) [31; 25; 32; 33] for classical liquids, which does not contain any temperature dependence. To find the source of the temperature dependence of thermal conductivity, let us turn to Landauer's picture of conduction as transmission [43]. When heat is transmitted by a wave, thermal conductivity becomes [44; 45; 24]: \[\frac{\kappa}{T}=\frac{\pi^{2}}{3}\frac{k_{B}^{2}}{h}\mathcal{T} \tag{1}\] Here, \(\frac{\pi^{2}}{3}\frac{k_{B}^{2}}{h}\) is the quantum of thermal conductance [46]. 
The transmission coefficient \(\mathcal{T}\) is set by the number of carrier modes and their mean free path. Its units depends on dimensions (meters in 1D, dimensionless in 2D and meters\({}^{-1}\) in 3D). In three dimensions, transmission by a wave indexed \(\nu\) follows \(\mathcal{T}_{q,\nu}\propto q_{\nu}^{2}\ell_{q,\nu}\), where \(q_{\nu}\), and \(\ell_{q,\nu}\) are the wave-vector and the mean free path of mode \(\nu\). At very low temperature, the response to temperature gradient is dominated by quasi-particles (Fig.4b). These are plane waves within a thermal window of the Fermi level which can carry heat. \(\kappa/T\) decreases quadratically with temperature, due to the temperature dependence of the mean free path of quasi-particles, \(\ell_{qp}\). In order to identify the collective transport mode leading to the \(T^{1/2}\) temperature dependence dominant above 0.5 K, let us compare it with two other cases. In crystals, the cubic temperature dependence of phonons at low temperature reflects the temperature dependence of the volume of the Debye sphere when the mean-free-path is saturated to a constant a value. In glasses, the asymptotic low temperature dependence of thermal conductivity by 'propagons' is close to quadratic. The slower temperature dependence \(\kappa\) is due to an increase in mean-free-path with cooling. In both cases, the presence of long wavelength carriers leads to a superlinear exponent (between 2 and 3) in the temperature dependence. Our case requires a scenario circumventing the cubic temperature dependence of a Debye sphere. A collective transmission by the whole Fermi surface will meet this requirement. This would be a sound mode with a wave-vector fixed at twice the Fermi radius ((Fig. 4c). There are two reasons for distinguishing 2k\({}_{F}\) as a wavevector. The first is theoretical. The Lindhard function, which quantifies the susceptibility of a fermionic gas to an external perturbation has a singularity at q=2k Figure 2: **Scattering time, Planckian bound and the FL-NFL cross-over:** a) Fermion-fermion scattering time, \(\tau_{\kappa}\) as a function of temperature for two molar volumes [17]. The blue solid line represents the Planckian time: \(\tau_{P}=\frac{h}{k_{B}T}\). Note that \(\tau_{\kappa}\) tends to fall below \(\tau_{P}\) at sufficiently high temperature. b) The phase diagram of \({}^{3}\)He [10] and the fuzzy border between the Fermi liquid (FL) and the non-Fermi liquid (NFL) regimes. Also shown are temperatures below which \(\tau_{\kappa}>\tau_{P}\) or \(\tau_{\kappa}>2\tau_{P}\). [35]. The second is experimental. Inelastic X-ray scattering experiments [47; 48] find that the dispersion of second sound has a pronounced anomaly near q=2k\({}_{F}\) (See the supplement). What is the Landauer transmission rate of such a heat-carrying mode? It is simply set by the number of \(2k_{F}\) modes in three dimensions and their evanescence. The latter can be assimilated to the thermal fuzziness of the Fermi surface in the momentum space \(:\simeq\frac{2\pi}{\Lambda}\)[24]. Here \(\Lambda=\frac{\hbar}{\sqrt{2\pi m^{*}k_{B}T}}\) is the de Broglie thermal length (m\({}^{*}\) is the effective mass). This yields a heat transmission rate of : \[\mathcal{T}_{s}\simeq\frac{4}{3}k_{F}^{2}\frac{\hbar}{\sqrt{2\pi m^{*}k_{B}T}} \tag{2}\] In the real space, this collective mode is a visco-elastic [4; 49] phonon (Fig. 4c) with a real angular frequency and a complex wave-vector. Injecting this \(\mathcal{T}\) in Eq. 
1 gives: \[\kappa_{s}\simeq rk_{B}n^{2/3}\sqrt{\frac{2k_{B}T}{m}} \tag{3}\] Here \(n=\frac{k_{B}^{3}}{3\pi^{2}}\) is the particle density of the liquid and \(r=\frac{\pi^{1/6}}{34/3}\simeq 1.87\). \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline System & sonic heat carriers & Temperature dependence & Driver & reference \\ \hline \hline Crystal (T\(\rightarrow\) 0) & small-\(q\) phonons & \(\kappa\propto T^{3}\) & Boundary scattering & [34; 35] \\ \hline Crystal (T\(\sim\)T\({}_{D}\)) & \(q\sim 2\pi/a\) & \(\kappa\propto T^{-1}\) & Umklapp scattering & [34; 36; 37] \\ \hline Glass (T\(\rightarrow\) 0) & ‘diffusons’ & \(\kappa\propto T^{\sim 2}\) & Rayleigh scattering & [38; 39; 40] \\ \hline Glass (high T) & ‘propagons’ & \(\kappa\propto T^{0}\) & Minimum mean free path & [40; 41; 42] \\ \hline Classical liquid & \(q\sim 2\pi/a\) phonons & \(\kappa\propto T^{0}\) & Minimum mean free path & [25] \\ \hline Quantum liquid \({}^{3}\)He & \(q\sim 2k_{F}\) phonons & \(\kappa\propto T^{1/2}\) & Thermal fuzziness of the Fermi radius & This work \\ \hline \end{tabular} \end{table} Table 1: **Different cases of phonon thermal conductivity:** Comparison between the present case of heat propagation by collective excitations with other and better-understood regimes of phonon thermal conductivity. Figure 3: **A bound to thermal diffusivity:** a) Thermal diffusivity, \(D_{th}=\kappa/C\), of \({}^{3}\)He, as a function of temperature. The Fermi liquid regime (\(\propto T^{-2}\)) is restricted to low temperatures. It is followed by a saturation and a minimum. b) Thermal diffusivity as a function of temperature in two classical fluids. In all three cases, \(D_{th}\) has the same value at the minimum, shown by the arrows. Equation 3 has two parameters, the particle density, n, and the effective mass, m\({}^{*}\). In Fig. 4a, it is plotted using the effective mass (m\({}^{*}\)=2.7 m\({}_{3}\)[26]= 1.35 \(\times 10^{-26}\) kg) and the zero-pressure carrier density (n= 1.64 \(\times 10^{28}m^{3}\), corresponding to a molar volume of 36.68 cm\({}^{3}\)/mol [17]) of normal liquid \({}^{3}\)He. At 1 (2) K, the experimentally measured thermal conductivity is 11 (20) percent larger what is expected by our equation. Given the simplicity of the picture drawn here and the absence of any adjustable parameter, this is a surprisingly good agreement. Let us recall that the in the case of the S.V.P. data by Murphy and Meyer [30], particle density increases with warming. The growing discrepancy with warming can be accounted for by the temperature-induced change in the density. This picture provides also an explanation for the very narrow validity of the standard Fermi liquid approach. As the red line in Fig. 4a shows, at any arbitrary temperature between 0.01 K and 3 K, the experimentally measured \(\kappa/T\) can be described by a sum of \(T^{-2}\) and \(T^{-1/2}\) terms. Thus, assuming a 2k\({}_{F}\) heat-carrying sound mode can account for [90 percent of] the amplitude of \(\kappa\) at 1 K (with no adjustable parameter), its \(T^{1/2}\) temperature dependence as well as the precocious deviation from the Fermi liquid regime at \(T\ll T_{F}\). Eq. 3 has a striking resemblance to the Bridgman formula for classical liquids [25; 31; 32; 33] (See the supplement), with the speed of sound replaced by a group ve Figure 4: **Two channels for the conduction of heat:** a) Thermal conductivity divided by temperature as reported by Greywall [17] (at zero pressure) and by Murphy and Meyer [30] (at saturating vapor pressure). 
At very low temperatures (\(T\sim 0.01\) K) in the Fermi liquid regime, \(\kappa/T\propto T^{-2}\) (purple line). Above 0.5 K, \(\kappa/T\propto T^{-1/2}\). The blue line represents what is expected from Eq. 3, using the effective mass and the carrier density of \({}^{3}\)He. The red solid line represents a fit to the experimental data in the whole temperature range assuming that \(\kappa/T\) consists of the sum of a \(T^{-1/2}\) (sound transmission) term and a \(T^{-2}\) (quasi-particle transmission) term. b) At low temperature, transmission by quasi-particles is dominant. Collisions in momentum space between quasi-particles lead to a \(\propto T^{-2}\) transmission. c) At high temperature, transmission is due to the collective response of the Fermi surface. The amplitude of transmission, set by the square of the Fermi radius and the de Broglie thermal wavelength, which determines the thermal thickness of the Fermi surface, is \(\propto T^{-1/2}\). In real space, this is a visco-elastic mode.

This can be accounted for by noticing that the time scale for randomness in a quantum liquid is set by the ratio of the Fermi velocity to the de Broglie thermal length (and not by the ratio of the sound velocity to the inter-particle distance, as in a classical liquid or a glass). Interestingly, the two equations become similar at the classical/quantum boundary, that is, when the de Broglie thermal length becomes equal to the Fermi wavelength (see the supplement). Arguably, normal liquid \({}^{3}\)He is the cleanest and the simplest known Fermi liquid. If collective excitations play such a central role in its transport properties across such a wide temperature range, what about other strongly interacting collections of fermions residing beyond Landau's paradigm [50]? We leave this question for future studies.
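As a quick numerical sanity check of Eq. 3 (not part of the original analysis), the prediction can be evaluated directly from the quantities quoted above. The sketch below uses the effective mass and zero-pressure carrier density given in the text and prints \(\kappa_{s}\) at a few temperatures for an order-of-magnitude comparison with Fig. 4a.

```python
import numpy as np

k_B = 1.380649e-23   # Boltzmann constant [J/K]
m_eff = 1.35e-26     # effective mass of 3He quasi-particles quoted in the text [kg]
n = 1.64e28          # zero-pressure carrier density quoted in the text [m^-3]
r = 1.87             # dimensionless prefactor quoted after Eq. 3

def kappa_s(T):
    """Sound-channel conductivity of Eq. 3: kappa_s = r k_B n^(2/3) sqrt(2 k_B T / m)."""
    return r * k_B * n ** (2.0 / 3.0) * np.sqrt(2.0 * k_B * T / m_eff)

for T in (0.5, 1.0, 2.0):
    print(f"T = {T:3.1f} K  ->  kappa_s ~ {kappa_s(T) * 1e3:5.2f} mW/(K m)")
```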
2304.01838
BugNIST -- a Large Volumetric Dataset for Object Detection under Domain Shift
Domain shift significantly influences the performance of deep learning algorithms, particularly for object detection within volumetric 3D images. Annotated training data is essential for deep learning-based object detection. However, annotating densely packed objects is time-consuming and costly. Instead, we suggest training models on individually scanned objects, causing a domain shift between training and detection data. To address this challenge, we introduce the BugNIST dataset, comprising 9154 micro-CT volumes of 12 bug types and 388 volumes of tightly packed bug mixtures. This dataset is characterized by having objects with the same appearance in the source and target domains, which is uncommon for other benchmark datasets for domain shift. During training, individual bug volumes labeled by class are utilized, while testing employs mixtures with center point annotations and bug type labels. Together with the dataset, we provide a baseline detection analysis, with the aim of advancing the field of 3D object detection methods.
Patrick Møller Jensen, Vedrana Andersen Dahl, Carsten Gundlach, Rebecca Engberg, Hans Martin Kjer, Anders Bjorholm Dahl
2023-04-04T14:44:06Z
http://arxiv.org/abs/2304.01838v3
# BugNIST: A New Large Scale Volumetric 3D Image Dataset ###### Abstract Progress in 3D volumetric image analysis research is limited by the lack of datasets and most advances in analysis methods for volumetric images are based on medical data. However, medical data do not necessarily resemble the characteristics of other volumetric images such as \(\mu\)CT. To promote research in 3D volumetric image analysis beyond medical data, we have created the BugNIST dataset and made it freely available. BugNIST is an extensive dataset of \(\mu\)CT scans of 12 types of bugs, such as insects and larvae. BugNIST contains 9437 volumes where 9087 are of individual bugs and 350 are mixtures of bugs and other material. The goal of BugNIST is to benchmark classification and detection methods, and we have designed the detection challenge such that detection models are trained on scans of individual bugs and tested on bug mixtures. Models capable of solving this task will be independent of the context, i.e., the surrounding material. This is a great advantage if the context is unknown or changing, as is often the case in \(\mu\)CT. Our initial baseline analysis shows that current state-of-the-art deep learning methods classify individual bugs very well, but has great difficulty with the detection challenge. Hereby, BugNIST enables research in image analysis areas that until now have missed relevant data -- both classification, detection, and hopefully more._ ## 1 Introduction We contribute BugNIST1 - a comprehensive and freely available 3D volumetric image dataset aimed at benchmarking deep learning algorithms. The name, BugNIST, is inspired by the influential MNIST dataset [23]. MNIST has been a driver for developing and testing deep learning methods, especially because of the small size and large number of images. BugNIST is made up of 3D \(\upmu\)CT scans of 12 types of bugs. Besides being 3D, the scans are also significantly larger than the small MNIST images, but due to modern hardware, BugNIST volumes are still easily accessible. The bugs we have scanned include insects, larvae, pupae, and woodlice, both as individual bugs and in mixtures. With 9437 volumes, BugNIST is, to our knowledge, the first large-size non-medical volumetric image dataset targeted for benchmarking deep learning methods. Hereby, BugNIST can be a foundation for advancing the field of volumetric 3D deep learning methods. Footnote 1: The BugNIST dataset is available from: [https://qim.dk/bubinst/](https://qim.dk/bubinst/) There is currently a shortage of volumetric 3D data for benchmarking deep neural networks [31], and the most available 3D data is medical. This limits the options in developing new deep learning-based methods for analyzing 3D volumes, especially on non-medical data. There is, however, a great need for efficient techniques to analyze volumetric imaging data [42, 45]. BugNIST is created to meet this need and is developed specifically for object classification and detection in 3D volumetric images. The 12 groups of individually scanned bugs in BugNIST are evenly distributed with 700-800 volumes of each group, see fig. 1. In the additional 350 volumes with mixtures of bugs and other materials, we have counted the number of bugs and their type. For 45 of the mixtures, we have also annotated the center positions of the bugs. For the individual bugs, the challenge is to classify volumes into the respective groups, and for the mixed volumes, the challenge is to detect, classify, and count the bugs. 
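To make the classification task concrete, the sketch below shows a minimal 3D convolutional classifier in PyTorch for the downsized single-bug volumes. It is a toy baseline written for illustration only, not one of the models benchmarked later in the paper; the input size (one channel, \(64\times 32\times 32\)) and the 12 classes follow the dataset description, while the layer widths are arbitrary choices.

```python
import torch
import torch.nn as nn

class TinyBugClassifier(nn.Module):
    """Toy 3D CNN for 1x64x32x32 volumes with 12 bug classes (illustrative only)."""
    def __init__(self, n_classes: int = 12):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # 64x32x32 -> 32x16x16
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                      # 32x16x16 -> 16x8x8
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),              # global average pooling
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = TinyBugClassifier()
dummy = torch.randn(4, 1, 64, 32, 32)   # a batch of four x64 volumes
print(model(dummy).shape)                # -> torch.Size([4, 12])
```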
The detection and counting challenge that we pose here is special in that the detection algorithm should be trained on the scans of individual bugs alone and tested on counting and detecting bugs in the mixtures. This makes BugNIST unique and the first to pose a detection challenge in volumetric images using this approach with such an extensive set of training data. The benefit of posing the challenge this way is model generalizability. When solving the BugNIST detection problem, methods will be forced to learn the object's appearance and not the appearance of the context surrounding the objects, which will make the detection invariant to changes in the surrounding context. As we will show, this dramatically challenges state-of-the-art deep learning detection methods. Data is a driver for developing machine learning and especially deep learning methods. Since research in data-driven methods is dominating computer vision, most advances are seen in areas with good datasets [24]. This also means that less attention is given to areas with limited or no data available. The generic nature of deep learning allows models created for one type of data to be applied to other data types. E.g. methods developed for solving problems related to ImageNet [37] or COCO [24] have been applied to volumetric data. However, this transfer of domains is sub-optimal as the characteristics of volumetric data are different from photographs. Volumetric data is obtained by scanning and the voxel size is typically given by the scanning setup. In most cases, there will also be much knowledge about the sample and expected voxel intensities. This is not the case in photographs, where illumination and scale typically vary. In this respect, volumes are simpler than photos, but the fact that the volumes are 3D adds complexity. Furthermore, volumetric data is often limited by noise and resolution. From a user perspective, the 3D property makes volumetric images complex to visualize and inspect. Considering these characteristics when developing volumetric analysis methods has the potential for better-performing methods. The majority of volumetric image datasets come from the medical domain, especially from clinical 3D CT and MRI scanners. Deep learning methods for the analysis of volumetric images are primarily developed for medical problems and there is extensive research in deep learning for medical volumetric images [50]. Much medical data is difficult to use due to privacy concerns making data-handling cumbersome. To some extent, one would assume that the methods developed for volumetric medical data also can be applied to other types of 3D data. But it turns out that the use of deep learning for non-medical volumetric data is limited [45], and the question is why this is the case. There are important differences in structures and analysis problems between medical data and for example \(\upmu\)CT data, which might be one reason for the limited use of deep learning. Medical data is acquired following specific protocols that allow for investigating and diagnosing patients, whereas \(\upmu\)CT or electron microscopy is often used for inspecting new structures, making this type of data unique. To provide solutions for such problems based on deep learning, there is a need for a new type of volumetric datasets that will allow for developing deep learning methods targeting new types of problems. There is already much volumetric image data available, and it is easy to find datasets on platforms like Zenodo2 or TomoBank3. 
However, most of this data is not in a processed format nor of a type that makes it easy to use for deep learning. Often this data comes with no clear problem formulation or well-defined tasks that allow for a comparison of methods, and the data is typically very large volumes that are difficult to access. It can also be difficult to judge the complexity of the data, and analysis might either be very easy or extremely difficult. This makes it less suitable to use for developing new deep learning methods, and therefore much less attractive to employ for developing new analysis methods compared to standard benchmark data. Footnote 3: [https://tomobank.readthedocs.io/en/latest/index.html](https://tomobank.readthedocs.io/en/latest/index.html) Many 2D computer vision datasets address challenges for classification, detection, segmentation, etc., but for 3D volumetric data, most datasets focus on segmentation [2, 40]. Segmentation has the advantage that it allows for solving many problems, but it typically comes with the price of expensive annotations. But similar to 2D computer vision, one would also like to answer questions such as if an object is present and where it is. Datasets for object detection in volumetric data are less common than segmentation datasets, especially for non-medical data. The BugNIST dataset, which is the main contribution of this paper, provides a challenge for classification and detection, and there are other uses such as segmentation, geometric analysis, 3D generative models (GANs), etc. that we have not explored in this paper. Often, classification and detection tasks are useful for pretraining models, and the size of BugNIST could be useful for that. ## 2 Related work In computer vision, the term 3D images cover different types of data and include point clouds, surfaces, volumes, color+depth, etc. Here we are only concerned with 3D volumes, i.e. images represented on a voxel grid. BugNIST is created to test classification and object counting and detection. Classification is a well-defined problem with a clear goal, but for volumetric data, classification is less common than detection and especially segmentation. In, for example, medical image datasets, classification makes up 10% of tasks, whereas segmentation accounts for 70% of tasks [29]. In volumetric medical data, classification is used, for example, for diagnosis [7, 8, 14, 22, 38]. One classification example outside the medical field is baggage security screening [11, 43, 44] where object classification of prohibited items is used. When it is possible to formulate an analysis problem as a classification task, performance is usually high and often with an accuracy above 90%. However, object detection is difficult to formulate as a classification problem. Instead, we need to employ a model that is targeted for object detection. Counting is done by assessing the number and type of the objects whereas, for object detection, each object must also be located. Object detection is extensively explored for 2D images, where it has numerous applications. Deep learning models are dominating, and most models either use a two-stage detector such as R-CNN [12, 13, 35], with a region proposal and a classifier to choose among the proposed regions, or a one-stage detector such as YOLO [34], where regions and classes are proposed in a single network. Object detection in volumetric images is less common. Figure 2: Illustration of the setup for data acquisition. 
The left image shows the volume of straws containing the bugs, here grasshoppers, which are segmented into the volumes shown as small volumes. The cotton is seen as a gray shadow. The middle right image shows the setup with mixtures and how they are segmented into individual volumes shown far right. In the example shown here, the mixtures are made of only cotton and bugs. 3D object detection can be formulated as a segmentation task using, for example, a 3D U-Net [9] where each connected component of the foreground label class is a detection [19]. An example is lung nodule detection in [46]. Employing segmentation for object detection comes with the problem of separating out objects that may be connected or combining separated segments into one object. To avoid this problem, Jaeger et al. [19] propose the Retina U-Net: a one-shot 3D detection algorithm that combines the Retina Net [25] and U-Net [36] giving detections with 3D bounding boxes. Despite the good performance of Retina U-Net, changing to a new 3D detection problem requires a time-consuming method configuration. This has been addressed in nnDetection [5]. Inspired by the nnU-Net (no new U-Net) [18], they propose a no new 3D detection method. On the BugNIST data, we test detection with 3D U-Net [9] and the nnDetection method [5]. Most 3D datasets for benchmarking image analysis algorithms are medical. A lightweight dataset is MedMNIST by Yang et al. [48, 49], which, similar to MNIST [23], is comprised of \(28\times 28(\times 28)\) images. Six of the 18 datasets are 3D and have 1370 and 1909 volumes in 2-11 classes. There are other datasets, such as the PROSTATEx dataset [4] containing 720 MRI scans and tissue samples, the COVID-CT-MD dataset [1] containing 305 CT scans, the Mossmeddata [32] containing 1110 CT scans of covid lungs, and more classifications challenges with 2-4 classes such as health, covid19, or pneumonia as is the case for the COVID-CT-MD dataset. For 3D object detection, Baumgartner et al. [5] use 13 datasets to develop the nnDetection method. Two of these datasets were originally targeted 3D detection and include the LUNA16 for nodule detection [39], which is a subset of the LIDC dataset [3] and contains 888 CT scans of lungs, and the ADAM dataset [41] for detecting aneurysms in MRI scans. ADAM is made up of 254 MRI scans. The rest of the detection datasets are medical segmentation datasets such as data from the Medical Segmentation Decathlon [40], the rib fracture detection [21], the PROSTATEx dataset [4], and others. The segmentation datasets have been transformed into detection data by identifying center positions and bounding boxes for the segments. ## 3 Dataset The BugNIST dataset has a classification and detection task. It consists of 9087 volumes of individual bugs of 12 types (species and stages) and an additional 350 volumes with mixtures of bugs, leaves, cotton, shredded paper, and wood shaving, where the number of bugs and other materials varies. We have used the dataset to investigate the problems of classifying species of scans that contain individual bugs, as well as counting, classifying and localizing species in the mixtures. To classify, count, and localize the bugs in the mixtures, the task is to learn the bug appearance from the scans of individual bugs and use this on the mixtures. Our data is obtained using a laboratory \(\upmu\)CT scanner (Nikon metrology scanner) that records volumes of 2000 voxels cubed with an isotropic voxel size of 26.62 \(\upmu\)m. 
These large volumes have allowed us to scan multiple bugs at once, and then crop them into smaller volumes. To scan individual bugs, we made a setup with the bugs placed in plastic straws spaced with cotton. Each straw was 6 cm long and contained 2-3 bugs with cotton in between. By bundling the straws, we scanned between 40 and 150 bugs at once. We used two types of straws, one with a diameter of 5 mm and another with a diameter of 9 mm, depending on the size of each bug type. The cylindrical shape of the straws was easy to segment using a circular Hough-transform [47] and sparse layered graphs [20], leaving us with a cylinder of only bugs and cotton. The X-ray absorption of cotton is low, which makes it almost transparent compared to the bugs, so we just kept the cotton and air part in the cylinder. A similar setup was applied for the mixtures, but instead of straws, we used test tubes with a diameter of 14 mm. With this setup we can obtain 14 mixtures from each \(\upmu\)CT scan. To vary the difficulty of the bug counting and detection problem in the mixtures, we added either leaves, shredded paper, or wood shavings to some of the mixtures together with cotton, which was used for spacing. We also varied the number of bugs, their species, and how densely they were packed. The final volumes of scans of individual bugs are \(450\times 450\times 900\) voxels. They were obtained by placing the segmented cylinder centered in the volume and along the full \(900\)-voxel length of the volume. Areas outside the cylinder were set to zero. The mixtures were segmented in the same way and the resulting volumes are \(650\times 650\times 900\) voxels. All the volumes are stored as 8-bit unsigned integers, but with their original spatial resolution (physical voxel size). The scans are illustrated in fig. 2. The bugs included in the BugNIST data are bred as fodder for pets and have been frozen before being packed for scanning. Despite careful packing, the scanned bugs may appear damaged, e.g. some of the crickets have broken-off antennae or a missing leg. We have chosen to accept these defects because this also reflects what you would expect to meet in a normal population, and it adds to the challenge of the dataset. In the 5 mm straws, i.e., for the two types of fly pupae, buffalo beetle larvae, maggots, curly-wing flies, and woodlice, we packed three bugs into each straw. This means that bugs were placed close together, and due to variations in the packing, some volumes have parts of the bug above or below the cropped-out volume. To create lightweight datasets for classification, we downsized the volumes to the datasets x64, x128, x256, and x512, where x64 is of size \(64\times 32\times 32\), x128 is \(128\times 64\times 64\), etc. These datasets have further been split into training (60%), validation (10%), and test sets (30%). We also made another variation of the datasets where the bugs were aligned and resized to approximately the same size. The axis alignment was done by aligning the bugs along the principal directions of thresholded bug volumes. The size of the bugs was determined by the lengths of the principal axes. For the mixtures, we counted the bugs that were placed in the mixtures when we packed them. This gives a ground truth for the models trained to predict the number of bugs in a mixture and their species. This can, however, not be used to determine where in the volume the bugs are detected.
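Returning to the axis alignment mentioned above, the principal-direction alignment of an individual bug volume can be sketched with standard tools. The snippet below is our own hedged reconstruction, not the released preprocessing code: the foreground threshold is an assumed value, and the transform simply rotates the volume so that its principal axes coincide with the array axes.

```python
import numpy as np
from scipy import ndimage

def align_to_principal_axes(vol, threshold=30):
    """Rotate a bug volume so its principal axes line up with the array axes.

    The foreground `threshold` separating bug from background is an assumed value."""
    coords = np.argwhere(vol > threshold).astype(float)
    center = coords.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov((coords - center).T))
    order = np.argsort(eigvals)[::-1]        # longest principal axis first
    R = eigvecs[:, order]                    # columns are principal directions
    out_center = np.array(vol.shape) / 2.0
    # affine_transform maps output coordinates to input coordinates:
    # x_in = R @ x_out + offset, so the principal axes become the array axes.
    aligned = ndimage.affine_transform(
        vol, R, offset=center - R @ out_center,
        output_shape=vol.shape, order=1)
    extents = 4.0 * np.sqrt(eigvals[order])  # rough bug size along each axis
    return aligned, extents
```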
To facilitate that the BugNIST data can be used for detection, we have manually marked the center positions of the bugs in a subset of 45 mixed volumes. We have only marked the center position and not bounding boxes because some of the bugs, e.g. mealworms, are very elongated. If such a bug lies diagonal to the volume axis, its bounding box could take up most of the volume, and therefore contain much more than that bug. Manually annotating the center position of the bugs is time-consuming, which is the reason for only annotating a smaller fraction of the mixtures. Since the annotations are used for testing, and not for training and validation, this number of annotations is sufficient for evaluating a method. ## 4 Experiments To illustrate the use of the BugNIST dataset, we have made baseline results for the proposed tasks including classification, counting, and detection. To classify scans of individual bugs, we have selected a set of high-performing classification methods. The detection problem has been solved based on a multi-label segmentation approach. ClassificationThe investigated classification methods are DenseNet [17], ResNet [15], SEResNet50 [16] and Vision Transformers [10] from MONAI [6] and Swin Transformers [26, 27] and ConvNext [28] from torchvision [30]. The torchvision models were adapted to 3D by us. These models are selected because they are well-known for their high performance and span both older tried-and-tested methods and new state-of-the-art models. We train the models on the non-mix volumes to predict the bug species. After every epoch, we evaluate the models on the validation set and keep the best model. After training, we evaluate the model on the test set. We provide the remaining details on the training setup in the supplementary materials. DetectionFor detection, we investigate two methods. First, a simple baseline using U-Net [9] from MONAI [6] which is trained to segment bugs in the classification data. We extract semantic segmentation masks by first smoothing each image with a Gaussian kernel of std. 2 voxels, thresholding, and then adding the known label to the mask. Thus, each voxel has one of 13 labels: background or one of the twelve bugs. We train the model on the training and validation set, and evaluate on the test set after every epoch. After training, we apply the best model to the mixture images where each connected component is counted as a detection. Full details are in the supplementary materials. The second method is the nnDetection (nnDet) framework [5] using a Retina U-Net [19]. This is trained to detect bugs in the classification data by outputting bounding boxes. We extract segmentation masks as before, and then also extracts a bounding box for each foreground connected component for training. We then train the model on the training and validation set, and evaluate on the test set after every epoch. After training, we apply the best model to the mixture images. Again, full details are in the supplementary materials. We evaluate the mixture data detections on the following metrics: Precision and recall, detection-L1, and count error. Precision and recall are computed by matching detections with center point annotations based on their distances. For U-Net, we use the distances from an annotation to the nearest voxel in the connected component of the detection. For nnDet, we use the distance between an annotation and the bounding box centroid. Munkres' algorithm [33] then computes the optimal matching. 
For U-Net, we allow matches with distances under 8 voxels and for nnDet we require that an annotation must be contained in the detection bounding box. Precision is then computed as the ratio of matched detections and total annotations and recall as the ratio of matched annotations and total annotations. For detection-L1, we compute the absolute error in the number of detections for each class and then sum these. We then normalize this error by the total number of bugs. Finally, count error is simply absolute error on the total number of detected bugs normalized by the true number of bugs. Notice that precision and recall only asses the location of each detection but not the classes whereas detection-L1 assess the detected classes but not their location. Together, they allow a useful assessment of the detection quality with low annotation effort. ## 5 Results ClassificationWe show accuracy and AUC metrics for the classification results in table 1. Due to computational limitations, we only trained a few models on the largest versions. However, the smaller datasets still give a good assessment of the models, as they perform consistently over the dataset sizes. We also show a confusion matrix in fig. 3 for a selection of models on the BugNISTx64 data. As shown in table 1, all models perform well. It is only the fine grained classification of brown and black crickets (classes AC and BC) that cause issues. Looking at just these classes, the accuracy is between 0.70 and 0.86. DetectionWe show the performance of the detection methods in table 2, illustrate detection examples on the classification data in fig. 4, and show detection examples in fig. 5. U-Net outperforms nnDet for localizing the bugs in the mixtures but has is slightly worse when it comes to counting. Furthermore, nnDet has more misclassifications than the U-Net method, despite being good at localizing in the classification images. ## 6 Discussion and Conclusion ClassificationWhen creating the BugNIST dataset, the initial idea was to make a lightweight dataset for 3D volumetric image classification to create interest in volumetric data outside the medical domain. Our aim is to promote the development of methods for volumetric data with a different appearance than typical clinical data. Non-medical data is becoming more frequent and has received less attention than volumetric data from medical scanners. When planning the data acquisition, we wanted to make sure that the data spanned a wide variety of shapes with natural variation. Therefore, we chose to scan bugs, because their morphology is both varied and relatively complex, and they have good contrast and small details like legs and antennae. We also wanted to ensure that the dataset was sufficiently large to capture the complexity and variance of the chosen types of bugs, and therefore we settled on approximately 750 individuals of each type. We also wanted to utilize the capability of the \(\upmu\)CT scanner to ensure that the specimens were imaged at a resolution that was high enough to capture the morphological details of the bugs, such as wings, legs, antennas, hairs, etc., but with a scan time that was not too long. Therefore, we settled with the described setup of scanning multiple bugs together and segmenting \begin{table} \begin{tabular}{l c c c c c c c c c c c c} \hline \hline BugNIST & x64 & x128 & x256 & x512 & \_Ax64 & \_Ax128 & \_Ax256 & \_Ax512 \\ Model & Acc. AUC & Acc. AUC & Acc. AUC & Acc. AUC & Acc. AUC & Acc. AUC & Acc. AUC & Acc. AUC & Acc. 
AUC \\ \hline DenseNet121 [17] & 0.94 0.998 & 0.96 0.999 & 0.98 0.999 & 0.97 0.999 & 0.95 0.998 & 0.97 0.999 & 0.98 1.000 & 0.97 0.999 \\ DenseNet169 [17] & 0.95 0.998 & 0.97 0.999 & 0.97 0.999 & 0.98 0.999 & 0.94 0.997 & 0.97 0.999 & 0.97 0.999 & 0.98 0.999 \\ DenseNet201 [17] & 0.95 0.998 & 0.96 0.999 & 0.98 0.999 & - & - & 0.94 0.997 & 0.96 0.999 & 0.98 0.999 & - & - \\ \hline ResNet18 [15] & 0.94 0.998 & 0.96 0.998 & 0.97 0.999 & - & - & 0.94 0.998 & 0.97 0.999 & 0.98 1.000 & - & - \\ ResNet34 [15] & 0.94 0.997 & 0.97 0.999 & 0.98 0.999 & - & - & 0.94 0.997 & 0.97 0.999 & 0.98 0.999 & - & - \\ ResNet50 [15] & 0.95 0.998 & 0.96 0.999 & 0.98 0.999 & - & - & 0.95 0.997 & 0.97 0.999 & 0.97 0.999 & - & - \\ \hline SERest34 [16] & 0.93 0.997 & 0.94 0.997 & 0.97 0.998 & - & - & 0.92 0.995 & 0.94 0.997 & 0.97 0.999 & - & - \\ SERest50 [16] & 0.92 0.996 & 0.96 0.998 & 0.97 0.999 & - & - & 0.91 0.995 & 0.95 0.998 & 0.97 0.999 & - & - \\ \hline ConvNext-T [28] & 0.82 0.983 & 0.90 0.993 & - & - & - & - & 0.77 0.976 & 0.90 0.992 & - & - & - & - \\ ConvNext-B [28] & 0.84 0.988 & 0.90 0.994 & - & - & - & - & 0.78 0.980 & 0.89 0.992 & - & - & - & - \\ \hline ViT [10] & 0.86 0.979 & 0.87 0.980 & - & - & - & - & 0.81 0.964 & 0.82 0.965 & - & - & - & - \\ SwinT-v2-T [27] & 0.88 0.987 & 0.86 0.984 & - & - & - & - & 0.84 0.982 & 0.82 0.976 & - & - & - & - \\ SwinT-v2-S [27] & 0.90 0.992 & 0.90 0.993 & - & - & - & - & 0.88 0.989 & 0.90 0.992 & - & - & - & - \\ \hline \hline \end{tabular} \end{table} Table 1: Classification results on the BugNIST test data. BugNISTxN refers to the non-aligned data and BugNIST_AxN to the aligned data. \begin{table} \begin{tabular}{l c c c} \hline \hline Model & Precision\(\uparrow\) & Recall\(\uparrow\) & Det-L1\(\downarrow\) & Count Err.\(\downarrow\) \\ \hline U-Net [9] & \(0.70\pm 0.26\) & \(0.39\pm 0.21\) & \(1.37\pm 0.45\) & \(0.55\pm 0.32\) \\ nnDet [5] & \(0.15\pm 0.09\) & \(0.16\pm 0.09\) & \(1.71\pm 0.55\) & \(0.42\pm 0.35\) \\ \hline \hline \end{tabular} \end{table} Table 2: Detection results on BugNIST mixtures. Figure 3: Confusion matrix for BugNISTx64. them out to individual volumes afterwards. We classified the individual bugs at varying resolutions with various models, and in all cases, the number of correctly classified bugs was surprisingly high. It seems that the task is easier than we expected. Our initial suspicion was that the different-sized straws and orientation of the bugs were determining the classification. To counteract that, the bugs were aligned and rescaled to have approximately the same size. This did not influence the results much, and we still obtained high classification rates. Taking a closer look at the chosen bug types, there are significant morphological differences between most bugs. The most similar bugs are the brown and black cricket, the mealworm and the buffalo beetle larvae, and the two types of fly pupae. This was, however, only apparent from the classification of the crickets, which has an accuracy of around 80%. From this, we can conclude that a classification challenge for volumetric data must be more fine-grained for current state-of-the-art classification models to be challenged. DetectionTo challenge the current deep learning models, we created a detection dataset. It is made up of volumes that contain mixtures of several bugs of the same or different species and are combined with other materials to ensure a variation in appearance. Here, the challenge is to determine how many bugs there are in a mixture and their location. 
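Both the segmentation-based baseline (segment, then count connected components) and the matching-based evaluation described in Section 4 can be sketched with standard SciPy tools. The snippet below is a hedged reconstruction rather than the authors' evaluation code: the foreground threshold is an assumed value, the 8-voxel matching distance follows the U-Net evaluation described above, and precision is computed with the usual convention of matched detections over total detections.

```python
import numpy as np
from scipy import ndimage
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def make_training_mask(vol, class_id, sigma=2.0, threshold=30):
    """Mask for an individual-bug scan: Gaussian smoothing (std. 2 voxels),
    thresholding, then the known class label; 0 is background, 1-12 are bugs."""
    smoothed = ndimage.gaussian_filter(vol.astype(np.float32), sigma)
    mask = np.zeros(vol.shape, dtype=np.uint8)
    mask[smoothed > threshold] = class_id
    return mask

def detections_from_labels(pred, n_classes=12):
    """Treat each connected component of a predicted class as one detection."""
    detections = []
    for c in range(1, n_classes + 1):
        comp, n = ndimage.label(pred == c)
        if n == 0:
            continue
        centers = ndimage.center_of_mass(pred == c, comp, range(1, n + 1))
        detections += [(c, np.asarray(ctr)) for ctr in centers]
    return detections                          # list of (class_id, center point)

def detection_scores(detections, annotations, n_classes=12, max_dist=8.0):
    """Precision/recall from optimally matched center points, plus detection-L1
    and count error; detections and annotations are lists of (class, center)."""
    det_pts = np.array([p for _, p in detections]).reshape(-1, 3)
    gt_pts = np.array([p for _, p in annotations]).reshape(-1, 3)
    matched = 0
    if len(det_pts) and len(gt_pts):
        d = cdist(det_pts, gt_pts)
        rows, cols = linear_sum_assignment(d)  # Munkres / Hungarian matching
        matched = int(np.sum(d[rows, cols] <= max_dist))
    precision = matched / max(len(det_pts), 1)
    recall = matched / max(len(gt_pts), 1)
    det_cls = [c for c, _ in detections]
    gt_cls = [c for c, _ in annotations]
    det_l1 = sum(abs(det_cls.count(c) - gt_cls.count(c))
                 for c in range(1, n_classes + 1)) / max(len(gt_cls), 1)
    count_err = abs(len(det_pts) - len(gt_pts)) / max(len(gt_pts), 1)
    return precision, recall, det_l1, count_err
```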
But instead of training the model for detection and counting directly on the mixtures, the dataset is designed such that the models must be trained on the volumes of individual bugs and applied to the mixtures. This shift in domain significantly increases how difficult the analysis problem is. Methods that can solve this problem will, however, also be applicable to solving a wider range of detection problems, where it is essential to learn the appearance of the features that characterize the indi Figure 4: Example detections in the BugNIST classification images used for training the detection models. Rows show (top to bottom) the image, the training mask, the U-Net output and the nnDet output. Only the bounding box with the highest confidence is shown for nnDet. All images show a maximum projection of the 3D volume. Figure 5: Example detection results on the mixture data. Each image shows a maximum projection of the 3D volume. Center point annotations are shown as black dots. See fig. 4 for color codes. vidually scanned items. Furthermore, this task forces the model to learn different image features than what is needed for classifying the volumes of individually scanned bugs. In the classification problem, the bugs are placed in a volume with practically empty space around them. Here, the models can utilize the shape of the empty space as a feature for the classification. This is not possible in mixtures where several bugs are densely packed and mixed with material that is not present in the training data. Therefore, the detection model must focus on the appearance of the imaged bug and cannot rely on features of the surrounding space. As an initial investigation for the detection problem, we have tested a multi-class segmentation using a U-Net [9, 36]. Since the individual bug scans are trivially segmented using a threshold and a little image smoothing, the basis for training a U-Net on these images can easily be obtained. Training a U-Net turned out to be significantly more difficult than training a classifier, which is not that surprising since the model must now make pixel-wise labeling using the 13 labels (12 bugs and a background label). We also attempted to use the U-Net to detect bugs in the mixtures. Here, we saw that the model could segment the bugs, but often the label was wrong, and we also ran into the problem of multiple bugs being segmented as one connected component, which was difficult to separate into individual bugs. Employing the nnDetect [5] as an alternative to U-Net, we saw a decrease in performance. This model is not designed for the type of problem posed here with the built-in domain shift. Based on the results from the detection and counting problem, we can conclude that current state-of-the-art models cannot solve a detection problem that is set up the way that we propose. Use of BugNISTOur findings are quite surprising. We did not expect that the classification of 3D data would be so easy, and we did not expect the counting and detection of 3D data would be so difficult. For a human, 3D volumes are typically difficult to handle. Whether you visualize them as slices, volume- or surface renderings, it requires much manual interaction and handling to get a complete understanding. This is not the case for a 3D deep learning-based classifier that can utilize the full 3D information. 
The complexity of the individual bug scans is therefore much less than one would expect, meaning that the variation within a class is much smaller than between classes, which is demonstrated by the high classification rates obtained for a range of standard methods. From the observation of how easy classifying 12 species of bugs is, we expect that more fine-grained classification of volumetric data can also be solved. By more fine-grained, we mean specimens that have high similarity but still belong to different classes. Fine-grained classification would also entail more classes with fewer training samples from each class than we have recorded in the BugNIST dataset. This will, however, require that objects can be scanned as individual specimens, which is possible for objects like bugs. The most common situation, however, is that you need to detect objects in 3D volumes with other materials. In these cases, it can be a great advantage to have models that can learn to focus on the object of interest and ignore the other materials, like we have set up the challenge for the mixtures in the BugNIST dataset. Other datasets do not pose the detection challenge this way. In other volumetric datasets, the detection is learned in the same domain. The objects of interest are annotated in volumes and the task is to detect objects with the appearance of the context where they are placed. Therefore, the detection models will have both the object's appearance and its context as input for learning the detection. Separating the two will ensure that the detection model learns the appearance of the object. As we have shown, this is surprisingly difficult, but with BugNIST it is now possible to start investigating and developing models that can handle this challenge. BugNIST is an extensive dataset that can have other valuable use cases than what we have explored in this paper. It could be used for exploring segmentation methods, generative models, or image registration. The fact that we have scanned many specimens of each species also makes it possible to investigate methods for analyzing the morphological variation of the scanned bugs. Despite the limited scientific interest of the scanned bugs, the methodological developments can be significant. There might be many other uses, that add to the value of the BugNIST dataset. ConclusionCreating the BugNIST dataset and investigating its use for classification and detection has given important insights that set the direction for future research. Current state-of-the-art methods are able to classify 3D volumes of individual objects to a high degree of precision, and the expected methodological improvements for classification models that can be obtained from BugNIST will be marginal. This shows that there is a need for a fine-grained volumetric classification dataset. We also find that object detection in volumetric data is much more difficult than classification. This is especially the case when posing the detection problem as we have done in the BugNIST dataset, where there is a set of individually scanned bugs for training the detector and another set of mixed volumes for detecting the bugs. This illustrates that for volumetric image analysis, there are still many unsolved problems and a great need for standard benchmark data for promoting this research. BugNIST is a dataset, that fills out some of this gap and hereby promotes research in deep learning-based methods for 3D volumetric images.
2305.11902
Assurance for Autonomy -- JPL's past research, lessons learned, and future directions
Robotic space missions have long depended on automation, defined in the 2015 NASA Technology Roadmaps as "the automatically-controlled operation of an apparatus, process, or system using a pre-planned set of instructions (e.g., a command sequence)," to react to events when a rapid response is required. Autonomy, defined there as "the capacity of a system to achieve goals while operating independently from external control," is required when a wide variation in circumstances precludes responses being pre-planned, instead autonomy follows an on-board deliberative process to determine the situation, decide the response, and manage its execution. Autonomy is increasingly called for to support adventurous space mission concepts, as an enabling capability or as a significant enhancer of the science value that those missions can return. But if autonomy is to be allowed to control these missions' expensive assets, all parties in the lifetime of a mission, from proposers through ground control, must have high confidence that autonomy will perform as intended to keep the asset safe to (if possible) accomplish the mission objectives. The role of mission assurance is a key contributor to providing this confidence, yet assurance practices honed over decades of spaceflight have relatively little experience with autonomy. To remedy this situation, researchers in JPL's software assurance group have been involved in the development of techniques specific to the assurance of autonomy. This paper summarizes over two decades of this research, and offers a vision of where further work is needed to address open issues.
Martin S. Feather, Alessandro Pinto
2023-05-16T18:24:12Z
http://arxiv.org/abs/2305.11902v1
# Assurance for Autonomy - JPL's past research, lessons learned, and future directions ###### Abstract Robotic space missions have long depended on automation, defined in the 2015 NASA Technology Roadmaps as "the automatically-controlled operation of an apparatus, process, or system using a pre-planned set of instructions (e.g., a command sequence)," to react to events when a rapid response is required. Autonomy, defined there as "the capacity of a system to achieve goals while operating independently from external control," is required when a wide variation in circumstances precludes responses being pre-planned, instead autonomy follows an on-board deliberative process to determine the situation, decide the response, and manage its execution. Autonomy is increasingly called for to support adventurous space mission concepts, as an enabling capability or as a significant enhancer of the science value that those missions can return. But if autonomy is to be allowed to control these missions' expensive assets, all parties in the lifetime of a mission, from proposers through ground control, must have high confidence that autonomy will perform as intended to keep the asset safe to (if possible) accomplish the mission objectives. The role of mission assurance is a key contributor to providing this confidence, yet assurance practices honed over decades of spaceflight have relatively little experience with autonomy. To remedy this situation, researchers in JPL's software assurance group have been involved in the development of techniques specific to the assurance of autonomy. This paper summarizes over two decades of this research, and offers a vision of where further work is needed to address open issues. _Keywords-- assurance, autonomy, testing, validation, verification_ ## I Introduction Autonomy in the context of this paper is defined as the capacity of a system to achieve goals while operating independently from external control [1]. The NASA Autonomous Systems and Robotics roadmap [2] presents several advanced robotics and spacecraft autonomy technologies that will be need in the future to enable or augment space science and exploration missions. Applications include the ability to attend the operations inside habitats and spacecraft, explore extreme environments such as deep interiors, steep slopes, and long drives, build surface infrastructure, perform multi-spacecraft science experiments, and deliver cost effective long range and continuous worksite operations. Explicit mention of autonomous capabilities can also be found in the recent Planetary Science Decadal Survey [3] to enable missions such as Endurance-A, Enceladus Orbilander, and Uranus Orbiter and Probe. To enable infusion of autonomous systems in space missions, stakeholders need to understand the risk posture which requires several new advances in the area of autonomy assurance. For more than two decades, JPL has made use of on-board autonomy to control spacecraft. One area where autonomy is essential is fault protection. Often the urgency of a response precludes interaction with ground control to determine the situation and respond in time. Autonomy must itself protect the spacecraft in the event of faults in its hardware or software, or off-nominal conditions in its environment. Typically, this is done with monitor/response pairs, where a monitor that detects the symptoms of a fault or off-nominal condition is coupled with an appropriate response. 
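To make the monitor/response idea concrete, here is a purely illustrative Python sketch; it is not JPL flight software, and the monitor, threshold, and response in the example are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class MonitorResponse:
    """One fault-protection pair: a symptom check coupled with a recovery action."""
    name: str
    monitor: Callable[[Dict[str, float]], bool]   # returns True when symptoms are seen
    response: Callable[[], None]                  # pre-planned recovery action

def run_fault_protection(telemetry: Dict[str, float], table: List[MonitorResponse]) -> None:
    for pair in table:
        if pair.monitor(telemetry):
            pair.response()   # autonomy acts without waiting for ground control

# Hypothetical example pair: an over-temperature monitor triggering a safing response.
table = [MonitorResponse(
    name="battery_over_temp",
    monitor=lambda t: t.get("battery_temp_C", 0.0) > 45.0,
    response=lambda: print("enter safe mode, shed non-essential loads"))]

run_fault_protection({"battery_temp_C": 47.2}, table)
```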
Such spacecraft fault management was the topic of a NASA workshop held in 2008, results of which are summarized in [4]. This led to development of a draft Fault Management Handbook, [5], and the challenges of providing software assurance of fault protection were broached in a breakout session at the 2012 NASA Independent Verification and Validation (IV&V) Annual Workshop and reported in [6]. As spacecraft become more complex, and as their missions become more ambitious, fault protection's complexity increases dramatically, exacerbating the challenges of its assurance. Much of the research described in Section III has targeted this problem. Addressing the challenges of future missions such as the ones mentioned in the first paragraph, however, requires a broader range of techniques and an ambitious research and development program which we outline in Section IV. ## II Background ### _Software Assurance_ NASA defines software assurance as "the level of confidence that software is free from vulnerabilities, either intentionally designed into the software or accidentally inserted at any time during its life cycle, and that the software functions in an intended manner" [7]. Confidence is gained through a combination of activities, including following recommendations and guidelines (e.g., architectural principles, coding standards), appropriate training of software personnel, performing reviews
2306.08903
Two-Way Semantic Transmission of Images without Feedback
As a competitive technology for 6G, semantic communications can significantly improve transmission efficiency. However, many existing semantic communication systems require information feedback during training of the coding networks, resulting in significant communication overhead. In this article, we consider a two-way semantic communication (TW-SC) system, where information feedback can be omitted by exploiting the weight reciprocity in the transceiver. Particularly, the channel simulator and semantic transceiver are implemented on both TW-SC nodes and the channel distribution is modeled by a conditional generative adversarial network. Simulation results demonstrate that the proposed TW-SC system performs close to state-of-the-art one-way semantic communication systems while requiring no feedback between transmitter and receiver during training.
Kaiwen Yu, Qi He, Gang Wu
2023-06-15T07:17:12Z
http://arxiv.org/abs/2306.08903v1
# Two-Way Semantic Transmission of Images without Feedback ###### Abstract As a competitive technology for 6G, semantic communications can significantly improve transmission efficiency. However, many existing semantic communication systems require information feedback during the training coding process, resulting in a significant communication overhead. In this article, we consider a two-way semantic communication (TW-SC) system, where information feedback can be omitted by exploiting the weight reciprocity in the transceiver. Particularly, the channel simulator and semantic transceiver are implemented on both TW-SC nodes and the channel distribution is modeled by a conditional generative adversarial network. Simulation results demonstrate that the proposed TW-SC system performs closing to the state-of-the-art one-way semantic communication systems but requiring no feedback between the transceiver in the training process. Semantic communications, deep learning, generative adversarial network. ## I Introduction With the rapid development of industrial internets and internets of vehicles, the demand on content and control information transmission has increased dramatically [1]. Different from traditional communications that focus on recovering the transmitted symbols as accurate as possible, semantic communication recovers the semantic meaning of the transmitted content [2, 3], which requires much less communication resource and is expected to break through the bottleneck of the existing systems and become a very competitive 6G technology [4]. The existing research on semantic communications mainly focuses on the construction of a one-way or unidirectional communication system while the two-way semantic communications are required in many situations. For example, heterogeneous robots collaborate to complete a task or vehicles cooperate with each other for autonomous driving [5]. Bidirectional intelligent communication tasks inspire us to study two-way semantic communications. Assisted by deep learning methods, semantic communication systems can perform effectively in text transmission [6, 7, 8], speech transmission [9], image transmission [10, 11], intelligence tasks [12, 13], etc. Particularly, with the joint source-channel coding (JSCC) scheme, the transceiver can be learned in an end-to-end manner to fit the current channel condition, and achieve a better and more robust performance than modular-based systems [10]. However, in the training process of JSCC, the gradients of neural networks (NNs) should be back propagated from the receiver to the transmitter, which incurs high amount of data feedback. The offline training and online deployment strategy is an attractive option [6, 7, 8], while a non-negligible degradation will arise if the channel state used for offline training does not match the current situation [10, 14]. Taking image transmission as an example, the key metric, peak signal-to-noise ratio (PSNR), drops about 4 dB compared with online training if the current channel condition is 10 dB while the model trained under 0 dB [14]. Therefore, many existing works have studied online training to obtain the optimal transceiver or just adapt to the current channel condition to improve performance by fine-tuning [15, 16, 17], which however could introduce extra computational and communication overhead [12]. Generative adversarial networks (GANs) have been proposed to achieve efficient bit transmission under unknown channel models [18, 19]. 
By obtaining an equivalent NN of a random channel using GAN, the optimal parameter weights of the transmitter are learned at the receiver and sent back to the transmitter. Thus, another reliable reversed communication link is required in training. Simply extending the JSCC in one-way transmission technique to the two-way communication would bring in significant communication and delay overhead. Thus, efficient and practical two-way semantic communication techniques need to be further explored. This article explores the two-way semantic communication, named TW-SC, where transceiver NNs and channel NN are jointly designed to perform the bidirectional image transmission. Specifically, we use the conditional GAN (CGAN) as the channel simulator to model the wireless channel distribution for image transmission, where the received semantic pilot is treated as additional information to CGAN, referred to as SP-CGAN. By utilizing the _weight reciprocity_ between both sides, TW-SC can be learned locally and efficiently without any gradient exchange. It is worthy noting that the TW-SC framework can be easily extended to the bidirectional transmission of other types of data. Based on our simulation results, the proposed TW-SC system performs closing to that of the state-of-the-art JSCC-based one-way image transmission systems without requiring the overhead on data exchange for training. The codes are available at: [https://github.com/Kiven-ykw/TW-SemanticComm](https://github.com/Kiven-ykw/TW-SemanticComm). ## II System and Learning Models In this section, we introduce the proposed two-way semantic communication system, including communication model, channel modeling, and TW-SC. ### _Communication Model_ We consider that nodes A and B exchange image messages \(\mathbf{M}^{A},\mathbf{M}^{B}\in\mathbb{R}^{H\times W\times C}\), where \(H\), \(W\) and \(C\) are the image height, width, and channel, respectively. i.e., node A sends \(\mathbf{M}^{A}\) to B, and node B sends \(\mathbf{M}^{B}\) to A. The channel simulator is used to learn the wireless channel locally. The transmitter consists of semantic encoding and channel encoding, wherein semantic encoding converts the input image, \(\mathbf{M}^{J},J\in\{A,B\}\), into symbols \(\mathbf{\pi}^{J}\), that are coded by the channel encoding and transmitted over the physical wireless channel. Take the transmission from node A to node B as an example, \[\mathbf{x}^{A}=\mathbf{T}_{c,\beta}^{A}\left(\mathbf{T}_{s,\alpha}^{A}\left(\mathbf{M} ^{A}\right)\right), \tag{1}\] where \(\mathbf{T}_{c,\beta}^{A}\left(\cdot\right)\) and \(\mathbf{T}_{s,\alpha}^{A}\left(\cdot\right)\) denote the channel and semantic encoding NN with weight \(\beta\) and \(\alpha\), respectively. The received signal is represented by \[\mathbf{y}^{B}=\mathbf{H}\mathbf{x}^{A}+\mathbf{n}, \tag{2}\] where \(\mathbf{H}\) is the wireless physical channel and \(n\in\mathcal{N}\left(0,\sigma^{2}\right)\) is the additive circular complex Gaussian noise with zero mean and variance \(\sigma^{2}\). We assume that the channels between nodes A and B are completely reciprocal such as in the time division duplexing (TDD) communications, that is the channel from node A to node B is identical to that from B to A, e.g. \(\mathbf{H}_{B}=\mathbf{H}_{A}\). Denote the complex channel output as \(\mathbf{y}\), that is \(\mathbf{y}=\mathbf{H}_{A,B}\mathbf{x}+\mathbf{n}\). Similarly, the receiver is also composed of semantic decoding and channel decoding, which aims to recover the original image. 
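Before turning to the receiver, a minimal PyTorch sketch of the channel step in Eqs. (1)-(2), \(\mathbf{y}=\mathbf{H}\mathbf{x}+\mathbf{n}\), is given below. It is illustrative only and not the authors' implementation: the SNR value and the symbol-block shape are arbitrary choices, and a single complex gain stands in for the reciprocal TDD channel reused in both directions.

```python
import torch

def reciprocal_channel(x: torch.Tensor, snr_db: float = 10.0, rayleigh: bool = True):
    """Apply y = H x + n to encoded symbols x (last dim = 2: real/imag parts).

    Returns the received symbols and the channel gain H, which a reciprocal
    (TDD) link would reuse for the reverse direction."""
    xc = torch.complex(x[..., 0], x[..., 1])
    if rayleigh:
        h = torch.complex(torch.randn(1), torch.randn(1)) / (2.0 ** 0.5)
    else:
        h = torch.complex(torch.ones(1), torch.zeros(1))   # AWGN-only channel
    signal_power = xc.abs().pow(2).mean()
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = torch.complex(torch.randn_like(xc.real), torch.randn_like(xc.real))
    noise = noise * torch.sqrt(noise_power / 2.0)
    yc = h * xc + noise
    return torch.stack([yc.real, yc.imag], dim=-1), h

# Usage: the same gain h would be applied for both the A->B and B->A transmissions.
x = torch.randn(8, 128, 2)               # a batch of encoded symbol blocks
y, h = reciprocal_channel(x, snr_db=10.0)
print(y.shape, h)
```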
As a result, the recovered signal can be expressed as, \[\hat{\mathbf{M}}^{B}=\mathbf{R}_{s,\sigma}^{B}\left(\mathbf{R}_{c,\omega}^{B} \left(\mathbf{y}^{B}\right)\right), \tag{3}\] where \(\mathbf{R}_{c,\omega}^{B}\left(\cdot\right)\) and \(\mathbf{R}_{s,\sigma}^{B}\left(\cdot\right)\) denote the channel and semantic decoding NN with weight \(\omega\) and \(\sigma\), respectively. Assuming there is no mutual interference between the two links, such as assigned with different time slots, the optimal goal of the designed TW-SC is to minimize the semantic errors in two-way communications, which can be formulated as \[\min_{\mathbf{T}_{c,\beta}^{J},\mathbf{T}_{s,\alpha}^{J},\mathbf{R}_{c, \omega}^{J},\mathbf{R}_{s,\sigma}^{J}}\frac{1}{2}\left(\underbrace{\left\|\hat {\mathbf{M}}^{A}-\mathbf{M}^{B}\right\|_{2}^{F}}_{\text{Node B to A}}+\underbrace{ \left\|\hat{\mathbf{M}}^{B}-\mathbf{M}^{A}\right\|_{2}^{F}}_{\text{Node A to B}} \right). \tag{4}\] ### _Modeling channel distribution_ Motivated by [19], we use CGAN to model the distribution of wireless channel \(\mathbf{H}\). The received semantic features are used as _semantic pilot_ as extra information to the generator and discriminator, dubbed SP-CGAN. The loss functions of the generator and discriminator are depicted as \[\mathcal{L}_{g}=\min_{\mathbf{G}_{\theta}}\mathbb{E}[\max\{1- \mathbf{D}_{\eta}(\mathbf{G}_{\theta}(\mathbf{z}),0)\}], \tag{5}\] \[\mathcal{L}_{d}=\min_{\mathbf{D}_{\eta}}\mathbb{E}[-\mathbf{D}_{ \eta}(\mathbf{H})-\max\{1-\mathbf{D}_{\eta}(\mathbf{G}_{\theta}(\mathbf{z})),0 \}], \tag{6}\] where \(\mathbf{G}_{\theta}\) and \(\mathbf{D}_{\eta}\) denote the generator NN with weight \(\theta\) and the discriminator NN with weight \(\eta\), respectively. \(\mathbf{D}_{\eta}(\mathbf{H})\) and \(\mathbf{D}_{\eta}(\mathbf{G}_{\theta}(\mathbf{z}))\) denote the probabilities that the discriminator considers that the sample is from a real channel or generated by a NN, respectively. \(\mathbf{z}\) denotes the sample noise. The \(\mathbf{G}_{\theta}\) and \(\mathbf{D}_{\eta}\) can be optimized by modeling as a two-player maximum minimization game problem, which is depicted as \[\min_{\mathbf{G}_{\theta}}\max_{\mathbf{D}_{\eta}}\mathbb{E}\left[\mathbf{D}_{ \eta}\left(\mathbf{H}\right)\right]+\mathbb{E}\left[\max\left\{1-\mathbf{D}_{ \eta}\left(\mathbf{G}_{\theta}\left(\mathbf{z}\right)\right),0\right\}\right]. \tag{7}\] Fig. 1: The structure of proposed TW-SC. The training algorithm for modeling the wireless channel distribution with the designed SP-CGAN is shown in Fig. 2 and Algorithm 1, where the NNs of the generator and the discriminator are trained iteratively in each iteration, that is, the parameters of one model will be fixed while training the other model. With the learned transmitter and receiver, the encoded semantic information from the transmitter through the actual channel can be used to obtain the true data whilst the encoded semantic information through the channel generator is used to obtain the fake data in the receiver. According to the loss functions of equations (5) and (6), the parameters of the generator and discriminator are updated, respectively. ### _TW-SC training without information feedback_ The proposed TW-SC algorithm is conducted in two stages, as illustrated in Fig. 1. In the first stage, represented by the solid red lines, transmitters at nodes A and B send training data to train the channel simulator and receiver to ensure convergence. 
In the second stage, represented by the dotted orange lines, the transmitters send training data inside the local node to train the transmitter, avoiding gradient back propagation over the wireless channel. The training data known at nodes A and B is represented by \(\mathbf{M}\). In the NN training, we adopt mean-squared error (MSE) as the end-to-end loss function, which can be expressed as, \[\mathcal{L}_{R}^{A}(\mathbf{M}^{B},\hat{\mathbf{M}}^{A})=\frac{1}{hw}\sum_{i=1}^{h} \sum_{j=1}^{w}\left(\mathbf{M}_{i,j}^{B}-\hat{\mathbf{M}}_{i,j}^{A}\right)^{2}, \tag{8}\] \[\mathcal{L}_{R}^{B}(\mathbf{M}^{A},\hat{\mathbf{M}}^{B})=\frac{1}{hw}\sum_{i=1}^{h} \sum_{j=1}^{w}\left(\mathbf{M}_{i,j}^{A}-\hat{\mathbf{M}}_{i,j}^{B}\right)^{2}, \tag{9}\] respectively, where \(\mathbf{M}_{i,j}^{J},j\in\{A,B\}\) denotes the pixel value at position \((i,j)\) of the image, \(h\times w\) is the image size. Note that the receiver's gradient calculation is performed locally by the gradient descent approach. Without the back propagation of gradients, the transmitter cannot obtain the loss information. We investigate the characteristics of TW-SC in order to get answers to this issue. **Remark 1**: **Weight Reciprocity:** _When the channels are reciprocal and the NNs of node A and node B have the same hyper parameters, the receivers of node A and node B have the same gradient, which helps us to complete the training of transmitters locally and does not require inter-node feedback, thus effectively reducing the communication overhead and delay._ The gradient of the transmitter can be determined locally using the _Weight Reciprocity_ and SP-CGAN. The loss function at the transmitter can be written as \[\mathcal{L}_{T}^{A}=\mathbb{E}\left\{l\left(\mathbf{M},\mathbf{R}_{s,\sigma}^{A} \left(\mathbf{R}_{c,\omega}^{A}\left(\mathbf{G}_{\theta}^{A}\left(\mathbf{C} ^{B},\mathbf{z}\right)\right)\right)\right)\right\}, \tag{10}\] \[\mathcal{L}_{T}^{B}=\mathbb{E}\left\{l\left(\mathbf{M},\mathbf{R}_{s,\sigma}^{B} \left(\mathbf{R}_{c,\omega}^{B}\left(\mathbf{G}_{\theta}^{B}\left(\mathbf{C} ^{A},\mathbf{z}\right)\right)\right)\right)\right\}, \tag{11}\] where \(l\) denotes the MSE loss function, \(\mathbf{C}^{J}=\mathbf{T}_{c,\beta}^{J}\left(\mathbf{T}_{s,\alpha}^{J}\left( \mathbf{M}^{J}\right)\right),J\in\{A,B\}\). Transmitters A and B are able to train the weights via their own internal receivers by minimizing the loss functions, (10) and (11), inside nodes A and B. The gradient loss can be internally back propagated to achieve convergence using the channel simulator SP-CGAN. Thus, the proposed method completely avoids random wireless channel, saves communication overhead, and reduces training latency, and is applicable to both online real-time and offline training. The proposed TW-SC algorithm is summarized in Algorithm 2. ``` 1:Initialize: Initialize the weights \(\mathbf{T}_{c,\beta}^{J}\left(\right)\), \(\mathbf{T}_{s,\alpha}^{J}\left(\right)\), \(\mathbf{R}_{c,\omega}^{J}\left(\right)\), \(\mathbf{R}_{s,\sigma}^{J}\), \(J\in\{A,B\}\); 2:Input: The node A and B transmit images \(\mathbf{M}\). 
3:Output: The whole network \(\mathbf{T}_{c,\beta}^{J}\left(\right)\), \(\mathbf{T}_{s,\alpha}^{J}\left(\right)\), \(\mathbf{R}_{c,\omega}^{J}\left(\right)\), \(\mathbf{R}_{s,\sigma}^{J}\), \(J\in\{A,B\}\); 4:Training SP-CGAN and receivers cross nodes: 5:Semantic encoding: \(\mathbf{T}_{j,\alpha}^{J}\left(\mathbf{M}\right)\rightarrow\mathbf{s}\), 6:Channel encoding: \(\mathbf{T}_{c,\beta}^{T}\left(\mathbf{s}\right)\rightarrow\mathbf{x}\), 7:Though the real channel: \(\mathbf{y}^{J}=\mathbf{H}\mathbf{x}^{J}+\mathbf{n}\), 8:Channel decoding: \(\mathbf{R}_{c,\omega}^{J}\left(\mathbf{y}^{J}\right)\rightarrow\mathbf{\hat{s}}\) 9:Semantic decoding: \(\mathbf{R}_{s,\sigma}^{J}\left(\mathbf{\hat{s}}\right)\rightarrow\mathbf{\hat{M}}\) 10:Update the SP-CGAN by Algorithm 1 11:Update the receivers by minimizing the loss function (8) and (9) 12:Training transmitters at local node: 13:Semantic encoding: \(\mathbf{T}_{s,\alpha}^{J}\left(\mathbf{M}\right)\rightarrow\mathbf{s}\), 14:Channel encoding: \(\mathbf{T}_{c,\beta}^{J}\left(\mathbf{s}\right)\rightarrow\mathbf{x}\), 15:Though the channel simulator SP-CGAN: \(\mathbf{G}_{\theta}^{J}\left(\mathbf{x}\right)\rightarrow\mathbf{y}\); 16:Channel decoding: \(\mathbf{R}_{c,\omega}^{J}\left(\mathbf{y}\right)\rightarrow\mathbf{\hat{s}}\) 17:Semantic decoding: \(\mathbf{R}_{s,\sigma}^{J}\left(\mathbf{\hat{s}}\right)\rightarrow\mathbf{\hat{M}}\) 18:Update the transmitters by minimizing the loss function (10) and (11) ``` **Algorithm 2** TW-SC Training Algorithm. ## III Numerical Results In this part, we evaluate the proposed TW-SC in terms of transmission performance. We use the MNIST handwritten Fig. 2: The details of training SP-CGAN. digit dataset for the experiments, where the training set consists 60,000 elements and the test set includes 10,000 elements. In the experiment, the semantic encoding consists of five two-dimensional convolutional layers (Conv2D) with 4, 8, 8, 16, and 16 filters. The channel encoding consists of four Conv2D with 16, 16, 32, and 32 filters. The Conv2Ds are with a \(3\times 3\) kernels and followed by an ELU activation function. The channel decoding consists of four Transpose Conv2Ds (TransConv2D) with 32, 32, 16, and 16 filters and the semantic decoding consists of five TransConv2Ds with 8, 8, 4, 4, and 1 filters. The TransConv2Ds are also with a \(3\times 3\) kernels and followed by an ELU activation function. For the SP-CGAN, the generator consists of four Conv1Ds with 256, 128, 64, and 2 filters. The four Conv1Ds are with \(5\times 5\), \(3\times 3\), \(3\times 3\), and \(3\times 3\) kernels. Similarly, the discriminator consists of four Conv1Ds with 256, 128, 64, and 16 filters and with \(5\times 5\), \(3\times 3\), \(3\times 3\), and \(3\times 3\) kernels and a dense layer with 100 filters. The ReLU function is selected as the activation function. For all networks, the learning rate is \(\delta=1e^{-3}\) and learning rate decay factor is \(\delta_{d}=1e^{-4}\). The weights of models are updated by the Adam optimizer and the epoch for training is 100. Moreover, we set the training batch size to 128 and compare the proposed scheme with the two semantic communication algorithms under the AWGN channel and Rayleigh fading channel, respectively, * One-way end-to-end JSCC: It is trained under real channels and is used as the optimal algorithm. 
* One-way end-to-end semantic communication with unknown channels: This method simulates the channel using GAN and then conducts transceiver training according to the simulated channel, dubbed GAN-SC. Semantic pilot is not used by GAN as extra data. To make a fair comparison with the two benchmarks, we average the link performance of the proposed TW-SC scheme. In addition, the widely recognized evaluation metrics in computer vision, namely structure similarity (SSIM) and PSNR, are used to measure the performance of the three algorithms. The suffixes '-AWGN' and '-Rayleigh' indicate cases trained under the AWGN and Rayleigh channels, respectively. Fig. 3 shows the relationship between the PSNR score and SNR over the AWGN channel. For the AWGN channel, the suggested TW-SC-AWGN performs well and gets close to the optimal algorithm at the low SNR range, which verifies the efficiency of _weight reciprocity_ in the two-way system. The GAN-SC, based on the GAN simulated channel, performs poorest. We also examine the performance of methods in comparison developed under the Rayleigh channel to confirm the algorithm's generalization ability. The TW-SC trained under the AWGN channels outperforms the model trained under the Rayleigh channels. From the figure, the model trained under a different channel from the test channel suffers from some performance degradation. Fig. 4 shows the relationship between the PSNR score and SNR over the Rayleigh fading channel. Obviously, Rayleigh fading channel has more negative effect than the AWGN channel for the three methods. When SNR is lower than 10 dB, the proposed TW-SC trained under the Rayleigh channels has a higher PSNR performance than the models trained in the AWGN channels. Besides, the proposed TW-SC trained under the AWGN channels has a wonderful performance close to the optimal JSCC, which proves the validity of the proposed algorithm. On the other hand, the GAN-SC algorithm also has the worst performance since the GAN generates random data that obeys that distribution. In contrast, SP-CGAN is able to generate channel-specific distributions with the aid of semantic pilots, which demonstrates the necessity of pilot as additional information and the effectiveness of the proposed SP-CGAN algorithm. Fig. 5 demonstrates the relationship between the SNR and SSIM scores over the AWGN channel. From the figure, when Fig. 3: The PSNR score versus SNR in AWGN channel. Fig. 4: The PSNR score versus SNR in Rayleigh channel. SNR is lower than 6 dB, the models trained under the Rayleigh fading channels get the high SSIM performance, which is in line with existing studies that training under the fading channel can improve the robustness of models over various channel types [9]. The JSCC-AWGN schemes yield higher SSIM score than other schemes, when SNR is higher than 6 dB. Given the semantic pilot as additional information for CGAN, the proposed TW-SC performs steadily when coping with different fading channels and SNRs. However, for the GAN-SC, the performance is quite poor under dynamic channel conditions, especially in the low SNR regime. Fig. 6 compares the SSIM performance between TW-SC and the two benchmarks over the Rayleigh channel. The JSCC-Rayleigh gets the optimal performance. The proposed TW-SC achieves a very high SSIM score that is close to the optimal JSCC algorithm. As the SNR increases, the performance of the proposed algorithm almost overlaps with the optimal algorithm. Fig. 
7 demonstrates the relationship between the training epoch and the SSIM score over the AWGN channel, which provides a more direct verification of the reliability of the weight reciprocity and of the fast convergence of TW-SC. From the figure, all three curves converge within 20 epochs. Moreover, the convergence speeds of the link from node A to node B and of the link from node B to node A are very similar and closely track the ideal JSCC algorithm, which shows that the proposed training scheme can train the transmitter without feedback while achieving performance close to the optimal algorithm. ## IV Conclusion In this article, we have studied the two-way semantic communication system and proposed a scheme that trains the semantic transceiver locally by learning the channel distribution, thereby enabling transceiver training without information feedback or model transfer. The simulation results show that the proposed two-way semantic communication scheme performs close to a bidirectional scheme consisting of two independent, optimal one-way semantic communication systems, while eliminating the feedback overhead during training.
2310.04607
A Comprehensive Performance Study of Large Language Models on Novel AI Accelerators
Artificial intelligence (AI) methods have become critical in scientific applications to help accelerate scientific discovery. Large language models (LLMs) are being considered as a promising approach to address some of the challenging problems because of their superior generalization capabilities across domains. The effectiveness of the models and the accuracy of the applications is contingent upon their efficient execution on the underlying hardware infrastructure. Specialized AI accelerator hardware systems have recently become available for accelerating AI applications. However, the comparative performance of these AI accelerators on large language models has not been previously studied. In this paper, we systematically study LLMs on multiple AI accelerators and GPUs and evaluate their performance characteristics for these models. We evaluate these systems with (i) a micro-benchmark using a core transformer block, (ii) a GPT- 2 model, and (iii) an LLM-driven science use case, GenSLM. We present our findings and analyses of the models' performance to better understand the intrinsic capabilities of AI accelerators. Furthermore, our analysis takes into account key factors such as sequence lengths, scaling behavior, sparsity, and sensitivity to gradient accumulation steps.
Murali Emani, Sam Foreman, Varuni Sastry, Zhen Xie, Siddhisanket Raskar, William Arnold, Rajeev Thakur, Venkatram Vishwanath, Michael E. Papka
2023-10-06T21:55:57Z
http://arxiv.org/abs/2310.04607v1
# A Comprehensive Performance Study of Large Language Models on Novel AI Accelerators ###### Abstract Artificial intelligence (AI) methods have become critical in scientific applications to help accelerate scientific discovery. Large language models (LLMs) are being considered as a promising approach to address some of the challenging problems because of their superior generalization capabilities across domains. The effectiveness of the models and the accuracy of the applications is contingent upon their efficient execution on the underlying hardware infrastructure. Specialized AI accelerator hardware systems have recently become available for accelerating AI applications. However, the comparative performance of these AI accelerators on large language models has not been previously studied. In this paper, we systematically study LLMs on multiple AI accelerators and GPUs and evaluate their performance characteristics for these models. We evaluate these systems with (i) a micro-benchmark using a core transformer block, (ii) a GPT-2 model, and (iii) an LLM-driven science use case, GenSLM. We present our findings and analyses of the models' performance to better understand the intrinsic capabilities of AI accelerators. Furthermore, our analysis takes into account key factors such as sequence lengths, scaling behavior, sparsity, and sensitivity to gradient accumulation steps. ## I Introduction The incorporation of Artificial Intelligence (AI) for Science has gained increasing interest in scientific research institutes and supercomputing facilities, with the goal of accelerating scientific discovery with novel approaches involving AI. This synergy has increased interest in the adoption of novel AI-driven techniques to help develop solutions to complex scientific problems such as protein-structure prediction [1, 2], cosmology parameter prediction [3], neutrino particle detection [4], drug design for precision medicine [5], genome-scale foundation model [6] and weather forecasting model [7]. Some of the most commonly used AI techniques include convolutional neural networks, recurrent neural networks, graph neural networks, and large language models (LLMs). These techniques, with their unique architectural characteristics, have become invaluable to assist scientists in their research. Within the AI landscape, the domain of Natural language processing (NLP) has experienced a massive surge in growth. fostering the development of LLMs to use in various tasks such as question answering, text summarization, and language translation. These models are becoming increasingly critical in scientific machine learning applications. LLMs, such as Generative Pre-trained Transformers (GPT) GPT-3 [8], LLaMA [9], Llama 2 [10], and Bloom [11] have seen a massive improvement in their complexity along with the quality of results for these tasks. This growth has been driven in part by the rapid emergence of transformer-based models as the _de-facto_ architecture for both traditional applications and a potent tool for scientific use cases. Transformer-based architectures have found a multitude of applications, from accelerating drug discovery to understanding genetic sequences. For example, GenSLM [6] provides an LLM-based foundation model to predict Sars-CoV2 variants of concern. Its strength lies in its capacity to inform the design of effective antiviral drugs. 
The GensLM model is trained on an extensive dataset of over 110 million raw nucleotide sequences and with model scales between 25 million to 25 billion trainable parameters. However, training GPT-variant LLMs with large model parameters and longer sequence lengths necessitates specialized computing resources and innovative implementation techniques and optimizations in the software stack. To address these requirements, AI accelerators, designed on non-traditional architectures, such as dataflow, have emerged. These accelerators are custom-built to efficiently support AI workloads with their powerful hardware compute engines and novel software optimizations. They are proven to effectively train several AI models, with a special focus on LLMs. With their unique system characteristics, these AI accelerators are empowered to tackle the challenges posed by LLMs. Besides their capability in training, these accelerators are able to run some of the largest GPT models, besides providing a suite of pre-trained GPT models [12, 13, 14, 15, 16, 17, 18, 19]. These models demonstrate the versatility and scalability of AI accelerators. With the increase in the size and complexity of LLMs, innovative training techniques have become critical, such as sparsity [20, 21, 22, 23], to further enhance the training of LLMs with billions of parameters. Evaluation of LLMs on diverse hardware platforms is of crucial importance to understand the capabilities and limitations of both traditional and non-traditional architectures. Prior work has studied LLMs on leadership class supercomputers [24] and with traditional deep learning benchmarks [25, 26] providing detailed insights into their capabilities. However, no comprehensive evaluation has been performed across a variety of AI accelerators, especially for LLMs. This paper aims to address this gap with a detailed performance evaluation of language models on multiple AI accelerators and is the first of its kind benchmarking study to our best knowledge. The major contributions of this paper are: * A systematic evaluation of LLMs on state-of-the-art AI accelerators. * Focus on a transformer block micro-benchmark that is a core component in GPT-based models. * Comprehensive evaluation of GPT-2 XL 1.5B parameter model to glean insights into model performance across all systems. * Porting and evaluation of a science application: GenSLM, a foundation model for gene sequencing. * Studying the impact of sequence lengths, sparsity, and gradient accumulation steps on model throughput. We present an overview of LLMs and the various AI accelerators in Section II, followed by details of the evaluated models, namely the transformer block micro-benchmark, GPT-2 XL, and GenSLM application in Section III. We describe in Section IV the implementation of the LLMs on different AI accelerators. We present experimental results in Section V, followed by conclusions in Section VI. ## II Overview of Large Language Models and AI Accelerators Large language models are a type of artificial intelligence system that use deep learning algorithms to process and generate natural language text. These models have become increasingly popular in recent years due to their ability to perform a wide range of language-related tasks, such as machine translation, text summarization, and question answering. The development of large language models has been driven by advances in deep learning techniques, particularly in the area of transformer models. 
These models use self-attention mechanisms to process text input, allowing them to capture complex patterns and relationships within language data. They are also pretrained on large datasets using unsupervised learning techniques, such as masked language modeling and next-sentence prediction, which help them to learn a broad range of language features and structures. Fine-tuning on specific tasks further improves their performance and adaptability. One of the most well-known LLMs is the GPT (Generative Pretrained Transformer) series, which was developed by OpenAI and is used to answer questions, translate languages, and generate text. These tasks are realized by generative pre-training of a language model on a diverse data corpus of unlabeled text, followed by discriminative fine-tuning on a specific task. Task-aware input transformations during fine-tuning help achieve effective transfer with minimal changes to the model architecture. Owing to the neural architectural differences, GPT models can broadly be classified into GPT [27] GPT-2 [28], GPT-3 [8], and more recently GPT-4 [29]. AI accelerators comprised of GPU-based systems and novel non-traditional AI hardware, such as dataflow architectures, have proven to boost the efficiency of diverse AI models. Below we describe the accelerators used in this study, while the configurations are listed in Table I. _**Nvidia A100:**_ The A100 GPU consists of 6912 CUDA cores and 432 Tensor cores to accelerate parallel workloads. On a single DGX node, there are 4 A100 GPUs, interconnected with NVLink. We use a forked implementation of Microsoft's Megatron-DeepSpeed6 framework for our evaluations. In doing so, we can take full advantage of DeepSpeed's various optimizations and convenient features such as ZeRO offloading and automatic metric tracking (with communication + FLOP profiling). All experiments were conducted on the Polaris [31] \begin{table} \begin{tabular}{l|l|l|l|l|l|l|l} \hline \hline **Feature** & **Nvidia A100** & **SambaNova** & **Cerebras CS-2** & **Graphcore Bow-Pod64** & **Habana Gaudi2** & **AMD MI250** \\ \hline System Size1 & 64 \(\left(=16\times 4\right)\) & 64 \(\left(=8\times 8\right)\) & 2 \(\left(=1\times 2\right)\) & 64 \(\left(=4\times 16\right)\) & 8 \(\left(=1\times 8\right)\) & 4 \(\left(=1\times 4\right)\) \\ \hline Memory (/node) & 160 GB & 8 TB & 1 TB & 3.6 GB/128 GB 2 & 768 GB & 512 GB \\ Memory (/device) & 40 GB & 1 TB & 1 TB & 900 MB/32 GB & 96 GB & 128 GB \\ Interconnect & NVLink & **Ellemnet-based** & **Ellemnet-based** & **IPU Link** & **Roche2** & **AMD CDNA** & **AMD CDNA** \\ Software Stack4 & 7, PT, Cnnx, & & & & & \\ & TMNET, CUDA & PT, TF & SDK & & PP, PT, CNNX, & Synapse AI, & TF, PT, ROCm \\ Precision5 & TF32, FP32, & FP32, & FP32, & FP32, & FP32, & FP32, & FP64, FP32, & FP64, FP32, & \\ & FP16, BP16 & Int132, & FP16, BP16, & & & & & \\ & Int16, Int8 & cblioat & & & & & \\ \hline \hline \end{tabular} \end{table} TABLE I: Features of evaluated AI accelerators supercomputer at the Argonne Leadership Computing Facility (ALCF) with 4 A100 GPUs and 40 GB memory per node. _Cerebras CS-2:_ The Cerebras CS-2 is a wafer-scale deep learning accelerator comprising 850,000 processing cores, each providing 48KB of dedicated SRAM memory for an on-chip total of 40 GB and interconnected to optimize bandwidth and latency. The system has been scaled to two CS-2 wafer-scale engines nodes interconnected by the SwarmX fabric and with the MemoryX memory subsystem to enable large models. 
The wafer-scale cluster supports a weight streaming execution mode [32] where each model layer is loaded one by one. This feature enables users to run large language models in which each layer's weights fit in memory, but not the entire model. The software platform integrates popular machine learning frameworks such as PyTorch. With a single Cerebras CS-2, the largest model supported is the GPT-3 (175B parameter model) and the CS-2 can support sequence lengths up to 58k. _SambaNova SN30:_ The SambaNova DataScale system uses the second-generation Reconfigurable Dataflow Unit (RDU) processor for optimal dataflow processing and acceleration. Each RDU has 1280 Pattern Compute Units (PCU) and 1 TB of off-chip memory. The system consists of eight nodes, each with eight RDUs interconnected to enable model and data parallelism. SambaFlow, its software stack, extracts, optimizes and maps dataflow graphs to the RDUs from the PyTorch machine learning framework. SN30 can train models up to 176 B parameters on 4 RDUs. _Graphcore Bow Pod64_: The Graphcore 22 petaflops Bow Pod64 system is the latest next-generation accelerator from Graphcore. It is a one-rack system consisting of 64 Bow-class IPUs with a custom interconnect. The Graphcore software stack includes the Poplar SDK and has support for TensorFlow and PyTorch. The Bow system currently supports GPT-3 175B parameter model with 256 IPUs. _Habana Gaudi2_: The Habana Gaudi2 processor features two Matrix Multiplication Engines (MME), 24 fully programmable VLIW SIMD tensor processor cores, integrating 24 100 GbE ports of RDMA over Converged Ethernet (RoCE) into each processor chip to efficiently scale training. The Gaudi system consists of one HLS-2 server with eight Gaudi2 HL-225H cards. The software stack comprises the SynapseAI stack and supports TensorFlow and PyTorch. It supports the existing deep learning optimization library DeepSpeed and the customized library Optimum Habana, which is the interface between the transformer library and Habana's Gaudi processor (HPU). On the Gaudi system, the largest model currently validated is the GPT-3 (175B parameter model) running on 384 Gaudi2 cards. _AMD MI250:_ The AMD MI250 GPU is based on CDNA2 architecture and consists of 13,312 stream processors distributed across 208 compute units coupled with 128 GB of dedicated HBM2e memory with 3.276 TB/s of memory bandwidth. It is able to achieve 362.1 TFlops of FP16 and 45.3 TFlops of FP32 peak performance. Each GPU, comprising of two tiles, is connected to the host using PCIe Gen4 and uses InfiniBand for to inter-node communication. AMD ROCm open software platform supports the common DL stack, including Tensorflow and PyTorch, in addition to libraries such as rocBLAS, rocSPARSE, rocFFT, and RCCL (ROCm Collective Communication Library). ## III Evaluation In this work, we primarily focused on evaluating (i) transformer benchmarks, (ii) the GPT 2-XL model, and (iii) a science application, GenSLM [6], a foundation model for genome sequencing. (1) **Transformer micro-benchmark**: To evaluate the performance of a transformer benchmark on AI accelerators, several key factors must be considered. First, it is important to choose appropriate micro-benchmarks that reflect the workload of the transformer models being used. Once the suitable micro-benchmarks have been selected, it is necessary to collect performance metrics, such as throughput, which can be done by measuring the number of input tokens processed per second to complete a certain number of iterations. 
Additionally, it is important to monitor the utilization of hardware resources, such as memory and compute units. Finally, it is recommended to compare the performance of traditional NVIDIA GPUs against other AI accelerators to gain a more comprehensive understanding of their strengths and weaknesses. By carefully evaluating these factors, it is possible to effectively predict the performance of transformer models on AI accelerators. A _transformer block_ (Figure 1) is a widely recognized and established micro-benchmark for transformer models. The transformer block is an ideal choice for a micro-benchmark due to several reasons. First, it is a widely used building block in natural language processing tasks such as language modeling and text generation, making it a relevant benchmark for many LLM applications. Second, a transformer block is relatively simple and small in size compared to larger transformer models, which makes it easier to run and evaluate. This also allows for faster experimentation in evaluating new hardware architectures. Finally, the transformer block includes many of the key components of transformer models, including self-attention and feedforward layers, making it a suitable representative of transformer models in general. Overall, the transformer block is a well-established and widely accepted micro-benchmark, making it an excellent choice for evaluating the performance of LLM models. A transformer block consists of a multi-head self-attention layer followed by a feedforward neural network layer. The input to the transformer block is a sequence of tokens. In this work, the length of the input sequence we evaluated is 1024. To calculate the FLOPs of the transformer block, we need to consider the number of operations required for each layer. The self-attention layer requires \(O(n^{2}d)\) FLOPs, where \(n\) is the sequence length and \(d\) is the hidden dimension. The feedforward neural network layer requires \(O(ndk)\) FLOPs, where \(k\) is the size of the hidden layer. Therefore, the total number of FLOPs for the transformer block is \(O(n^{2}d+ndk)\). (2) **GPT-2 XL**: For this study, we used the GPT-2 XL 1.5B parameter model for pre-training experimentation to analyze how performant the accelerators are in running large language models. Though the evaluated systems can individually support much larger models and varied configurations as stated in Section II, we chose GPT-2 XL because it can be easily implemented on each of the systems in a timely manner for a fair comparison. Also, the memory and compute requirements of a GPT-2-sized model fit well with the minimum unit of compute node on each of the systems; hence the insights gained here can be extended to help drive the decisions in choosing accelerators that can yield optimal performance for any given large transformer-based model architecture. The dataset used in this evaluation is Open Web Text (OWT) [34] which is an open-source version of the WebText corpus. It consists of 38 GB of text data from 8,013,769 documents, which constitute content extracted from URLs shared on Reddit with at least three upvotes. For this model, we measured the model throughput across a scale of devices on each system. Additionally, we evaluated the impact of sequence lengths, gradient accumulation steps (GAS), and sparsity on the model performance. (3) **GenSLM (Science Use Case)**: In addition to the benchmarks described above, we are also interested in evaluating how these models perform in real-world use cases. 
GenSLM [35, 6] is a genome-scale foundation model that can be generalized to other models. The goal of this model is to identify and classify different virus variants of concern, which can be then extended to gene or protein synthesis. It adapts GPT-based large language models with different parameters (25M-25B) with a 1.2B raw nucleotides dataset and is aimed at larger sequence lengths to help better capture the context and are generalizable to learn the evolution. Figure 2 shows the overview of the GenSLM model that takes SARS-CoV-2 genomes (nucleotide sequences encoded at the codon level where every three nucleotides represent a codon) as input, which are fed to a series of transformer layers. The intermediate layers learn the semantic embedding of individual codons, which can be mapped to 29 individual virus-encoded protein sequences. In this study, we focus on the evaluation of the GenSLM GPT model with 13B parameters on Nvidia A100, SambaNova SN30, and a 25B GPT3-XL model on Cerebras CS-2. **Metrics:** Evaluating the performance of large language models goes beyond traditional metrics and includes considerations of computational efficiency and hardware utilization. As these models have grown in size and complexity, it is essential to assess their ability to process and generate text efficiently, especially considering the computational resources required for training and inference. Throughput is a key performance metric that measures the rate at which a large language model can process a given amount of input data. It is often quantified in terms of tokens or sentences processed per second. Higher throughput values indicate better efficiency and the ability to handle large-scale language processing tasks effectively. In this work, we present the throughput in the number of tokens per second across the evaluated systems. Hardware utilization is another important metric in evaluating large language models, as it assesses how effectively the computational resources are utilized during model training and inference. It involves multiple design choices, such as model parallelism, memory management, and efficient data processing techniques. Profiling the models to extract hardware utilization across the evaluated systems is ongoing and will be included in the final version. ## IV Implementation on AI Accelerators The implementations of the evaluated models on each of the systems vary due to different software stacks and scales. Here we describe the implementation details on each system for the three cases: transformer micro-benchmark, GPT-2 XL pre-training, and the GenSLM science application. ### _Transformer micro-benchmark_ The evaluation of the transformer micro-benchmark involved a meticulous implementation process aimed at assessing the computational efficiency and performance characteristics of this kernel on different AI accelerators. The transformer micro-benchmark is designed to simulate the core Fig. 1: GPT transformer block [33] Fig. 2: Overview of GenSLM models for predictive modeling of SARS-CoV-2 evolution [6] operation in the transformer models, which is widely utilized in various natural language processing tasks. The transformer micro-benchmark utilizes a layer in a standardized GPT-2 XL model with an input sequence of 1024, ensuring consistent and comparable results across different platforms. The implementation is carried out using the same deep learning framework, PyTorch, tailored to the unique capabilities of each platform. 
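As a rough illustration of such a harness, the sketch below times the forward and backward passes of one GPT-2 XL-sized transformer layer in PyTorch and reports throughput in tokens per second. Here `nn.TransformerEncoderLayer` is used as a convenient stand-in for the GPT-2 block, and the iteration count and FLOP estimate are simplifying assumptions rather than the exact benchmark code used in this study.

```python
import time
import torch
import torch.nn as nn

# GPT-2 XL-sized layer: hidden width 1600, 25 heads (64-dim each), 6400-wide FFN.
# nn.TransformerEncoderLayer is a stand-in for the actual GPT-2 XL block.
seq_len, d_model, n_heads, d_ff, batch = 1024, 1600, 25, 6400, 8

block = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                   dim_feedforward=d_ff, batch_first=True)
x = torch.randn(batch, seq_len, d_model)
# On an accelerator one would first move block and x to the device.

def tokens_per_second(n_iters=3):
    start = time.perf_counter()
    for _ in range(n_iters):
        y = block(x)                      # forward pass
        y.sum().backward()                # backward pass
        block.zero_grad(set_to_none=True)
    elapsed = time.perf_counter() - start
    return n_iters * batch * seq_len / elapsed

# Order-of-magnitude FLOP count per sequence from the O(n^2 d + n d k) estimate.
flops_per_seq = seq_len ** 2 * d_model + seq_len * d_model * d_ff
print(f"throughput: {tokens_per_second():.1f} tokens/s, "
      f"~{flops_per_seq / 1e9:.1f} GFLOPs per sequence (forward only)")
```

On an accelerator, the same loop is run after moving the layer and input to the device, sweeping the batch sizes of 8 to 64 used in this evaluation.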
The workload is used to exploit parallelism and hardware-specific optimizations and to achieve optimal throughput and computational speed. Careful attention is paid to factors such as batch size to avoid bottlenecks and fully utilize the available hardware resources. We use batch sizes of 8, 16, 32, and 64 for different configurations. Performance metrics, such as TFLOPS, are collected to quantify the capabilities of each hardware platform in handling the demanding computations required by the transformer models. This evaluation provides valuable insights into the strengths and weaknesses of different hardware when dealing with transformer-based workloads. ### _GPT-2 XL pre-training_ As part of the GPT-2 XL study, we pre-train the model on the OWT dataset [34] where the raw data is pre-processed and tokenized for a given sequence length. The details of the tokenization and model implementation for each of the systems are described below. _Nvidia A100:_ We ran the models with varying micro-batch sizes, sequence lengths, and _tensor-parallel_ degree on different node counts, up to 64 A100 GPUs. These are implemented with Megatron-DeepSpeed [36] using ZeRO Stage 1 with fp16 precision. In these experiments, flash attention flash-attn [37, 38] was enabled, which is known to relieve memory pressure and generally improve throughput. BPE-based tokenizer with a vocabulary size of was used. The sequence lengths up to 2k were used. Experiments with Nvidia's NeMo framework for generative AI [39] are part of planned future work. _Cerebras CS-2:_ On the Cerebras CS-2 system, we ran the GPT-2 XL model implemented in the PyTorch framework with sequence lengths of 1024 and 2048 with a batch size of 112 on a single CS-2 engine. The implementation uses a custom GPT-2 tokenizer based on byte-level Byte-Pair-Encoding (BPE) with a vocab size of 50257. The model is trained using mixed precision and precision opt level [40] of 1 (default) and 2. It uses an AdamW optimizer with a weight decay rate of 0.011. The model is trained with both dense and sparse configurations. In the sparsity approach, all the weights of the dense layers are sparsified based on the degree provided. We ran the GPT-3 6.7B and GPT-3 30B models with various sparsity values. Here, a sparsity value of 0.3 means that 30% of weights are pruned. The impact of sparsity on model throughput and loss are discussed in Section V-B4. _SambaNova SN30:_ We evaluated pre-training performance on the OWT dataset on the SambaNova next-generation datascale SN30, which has 8 RDUs per node and 8 tiles per RDU. We used the SN reference implementation of the GPT-1.5B model that fits on 4 tiles or half an RDU. This implementation is based on the PyTorch-based SambaFlow framework and uses mixed precision (16 bit(i.e. bf16) multipliers and 32-bit accumulators It uses a BPE-based GPT-2 tokenizer with a vocab size of 50260. We use data parallelism to scale across multiple tiles and nodes. We scale for a maximum of 8 nodes (which corresponds to 128 instances of data-parallel runs) with each instance having a micro-batch size of 16. _Graphcore Bow Pod64:_ On the Bow Pod64, we leverage 64 IPUs to train the evaluated models. The GPT-2 XL 1.5B model implemented in the PyTorch framework can be model-sharded across 4 IPUs. As part of the data pre-processing, the implementation uses Nvidia's Megatron library for generating the training samples with a BPE tokenizer and a vocab size of 50272. 
The Poplar SDK uses the mapping on a Multiple Instruction Multiple Data (MIMD) fashion to leverage the IPUs and overlap computation and communication. We used a local batch size of 1 in FP16, combined with large gradient accumulation step values of 128 and 1024. Such large values help to minimize the communication overhead as it mimics a larger global batch size. We used a replication factor [41], of 1, 2, 4, and 16 to achieve better scaling, especially for smaller GAS values. _Habana Gaudi2:_ We ran the GPT-2 XL model implemented in PyTorch with a sequence length of 1024 with a local batch size of 32 on each HPU device. The training of the GPT-2 XL model on Habana Gaudi2 represents a powerful and cutting-edge combination of software and hardware capabilities. The data format used in training is BF16. The training samples are generated using a BPE tokenizer. _AMD MI250:_ On the AMD MI250 system, we evaluated the performance of GPT-2 with absolute position embeddings trained with a causal language modeling (CLM) objective on the OWT dataset for up to 8 GPUs. We leverage GPT-2 tokenizer based on byte-level Byte-Pair-Encoding. We used the reference PyTorch 2.0 implementation from Hugging Face to realize this. The performance is evaluated for sequence lengths of 1024 on batch sizes of 16 and 32 per GPU as well as GAS values of 1 and 4. ### _GenSLM_ We implemented the GenSLM science application on Nvidia A100, SambaNova SN30, and Cerebras CS-2. The model is implemented using a PyTorch framework on all three systems. It uses the genomic sequence dataset and tokenizes it using a codon-level tokenizer that splits genomes into blocks of 3 nucleic acids. As the GenSLM application includes models with different number of model parameters ranging from 25M to 25B, we used two different model parameter sizes in this exercise. The SN30 implementation for GenSLM is based on the GPT-2 13B parameter model that uses a context length of 1024 with 44 layers, 64 attention heads, an embedding size of 4992, and a vocabulary size of 71. This GPT-2 13B parameter model with a batch size of 32 can be mapped within 4 tiles or equivalently half an RDU. The model used on Cerebras is a GPT-3 XL, a 1.5B parameter model that has 24 hidden layers and 16 attention heads. The model uses a vocabulary size of 70 and an embedding size of 2048. The model is trained for the genomic sequences of sequence length 10240 and a local size of 27. The other model parameters are similar to the GPT-2 XL implementation details as listed above. The GPT3-XL model is scaled across the two CS-2s to provide a global batch size of 54. On Nvidia A100, we use an identical GPT-2 13B model consisting of 40 layers of hidden dimension 5120 and 40 attention heads. ## V Results In this section, we present the experimental results for the three evaluation cases. We start with the transformer micro-benchmark evaluated with three precision in Section V-A. Next, we present the results of the GPT-2 XL 1.5B parameter model, with a focus on scalability in Section V-B1, GAS study in Section V-B3, sequence length analysis in Section V-B2, and sparsity studies in Section V-B4. Later, we detail our experimental results on the GenSLM model on three systems with models of three sizes: 1.5B, 13B, and 25B parameters in SectionV-C. 
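Before turning to the results, the sketch below makes the GenSLM setup described above concrete: codon-level splitting of a nucleotide sequence into blocks of three bases, and a GPT-2 configuration populated with the quoted 13B-parameter dimensions. The `to_codons` helper and the use of Hugging Face's `GPT2Config` are illustrative assumptions (the SambaFlow and Cerebras stacks use their own model definitions), and the mapping from codons to the 71-token vocabulary is omitted.

```python
from transformers import GPT2Config

def to_codons(genome: str):
    """Split a nucleotide sequence into codons (blocks of 3 bases)."""
    genome = genome.upper().replace("\n", "")
    return [genome[i:i + 3] for i in range(0, len(genome) - 2, 3)]

print(to_codons("ATGGCGTTTAAA"))   # ['ATG', 'GCG', 'TTT', 'AAA']

# GPT-2 13B-style configuration with the dimensions quoted above:
# 44 layers, 64 attention heads, embedding size 4992, context length 1024,
# and a 71-token codon-level vocabulary.
config = GPT2Config(
    vocab_size=71,
    n_positions=1024,
    n_embd=4992,
    n_layer=44,
    n_head=64,
)
# Instantiating GPT2LMHeadModel(config) would materialize the ~13B-parameter model.
```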
### _Transformer micro-benchmark_ The results of the transformer micro-benchmark evaluation on a single NVIDIA A100 GPU, SambaNova SN30 RDU, Graphcore Bow IPU, Habana Gaudi2, and AMD MI2507 are shown in Figure 3 and 4, which illustrate the throughput of forward and backward passes in three precision formats: FP32, FP16, and BF16. Mi250 has a higher memory bandwidth (3276.8 GB/s) compared with that (2,039 GB/s) of A100, the higher bandwidth might be used to improve the performance of single precision. But the total efficiency of MI250 is lower than A100. Footnote 7: Evaluation of transformer micro-benchmark on CS-2 is under progress. It is observed that NVIDIA A100 GPU demonstrates the baseline throughput, capitalizing on its advanced tensor cores and parallel processing capabilities. The throughput with FP16 and BF16 precision is 4x higher than FP32. A100 exhibits 2x theoretical performance for single precision compared to half precision. Additional performance improvement could arise due to a reduction in memory access using half-precision and this transformer is memory intensive kernel. SambaNova SN30, with its reconfigurable Dataflow architecture, exhibits impressive performance with BF16 precision, showcasing its potential for handling complex transformer workloads using the half-precision format. Due to the pipelined/fusion execution on RDUs, there is naturally a warmup and cooldown phase as in any pipeline. More samples in a batch lead to longer steady-state behavior and higher effective throughput. Graphcore Bow IPU, powered by IPUs, demonstrates exceptional performance with FP32 and FP16 precision, highlighting its suitability for NLP tasks. Meanwhile, Habana Gaudi2 exhibits robust performance with all three formats, emphasizing its prowess in efficiently executing various transformer computations. In the backward pass, we think it's due to higher Fig. 4: Throughput evaluation of transformer micro-benchmark in the backward pass with various precision Fig. 3: Throughput evaluation of transformer micro-benchmark in the forward pass with various precision utilization of hardware, and that brings higher throughput. AMD MI250, capitalizing on its dedicated Tensor Processing Core array, exhibited remarkable acceleration and consistent throughput in the backward pass. ### _Gpt-2 XL_ For this model, we present throughput for the pre-training of the different configurations of the number of devices as a scaling study. Later we discuss the sensitivity of sequence length and gradient accumulation steps on the model throughput. #### Iv-B1 Scaling study Here we present our findings from scaling the GPT-2 XL model on the different systems. The number of devices used in the scaling study differs from one system to another due to the availability of resources. In particular, we used 64 Nvidia A100 GPUs, 2 CS-2 engines, 64 SambaNova SN30 RDUs, 64 Graphcore Bow IPUs, 4 AMD MI250 GPUs, and 64 Habana Gaudi2 HPUs. Figure 5 shows the impact of model throughput (in log scale) across an increasing number of devices. 8. It is to be noted that the precision used on each system is different and the batch sizes are tuned for this configuration on each system. Footnote 8: On Bow, replica size 4 is not supported with 32 IPUs mesh topology due to an address limitation The speedups across the number of devices on each evaluated system along with the scaling efficiencies are listed in Table II. 
A striking observation from this study showed that the model trained on 16 SN30 RDUs, 2 CS-2s, and 16 IPUs outperformed the runs on 64 A100s.9 Additionally, Gaudi2 reported the highest scaling efficiency of 104%. This scaling behavior is due to the optimizations from the Synapse software stack that help minimize the overhead in the sharding of the model and data across multiple HPUs. This is followed by Bow Pod64 which attains scaling efficiency of 100%. The superlinear scaling is enabled by the use of the replicated tensor sharding - as the scale is increased, the pressure on DRAM I/O is reduced for weight loading and IPU-Links are leveraged to exchange shards of weight tensors. Additionally, Bow-IPU has 900 MB of SRAM and currently, it does not use DRAM for running this model hence, we would not be able to fit it into a single IPU due to the SRAM size limitation. We use 4 IPUs and run pipeline-parallel, with model layers distributed across the IPUs. Cerebras CS-2's scales at 95.7% efficiency, which demonstrates the efficiency of the weight streaming [42] technique with dedicated nodes, MemoryX for saving all model weights, and SwarmX to stream these weights to the compute engine. It is interesting to note that the SN30 and MI250 have scaling efficiencies around 80% which is higher than the A100 number at 75.8%. Footnote 9: We aim to arrive at a fair comparison across the systems to the greatest possible extent. The results show that all the evaluated accelerators demonstrate increased throughput as the models are run across an increased number of devices. Though computationally expensive, models with a larger number of parameters can be trained better with more devices. As model sizes scale up in order of trillion parameters, it may not be feasible to run them on increased device count, thereby stressing the need to implement new approaches to fit larger models on fewer devices with the goal of improving the time to train a model given a computational budget.10 Footnote 10: Note that the A100 and IPU results begin at 4 devices, and CS-2 stops after 2. #### Iv-B2 Impact of sequence length The need for scaling the sequence length in the large language model tasks has been highlighted in recent works [43] with scaling to more than 1B tokens. Large context lengths are extremely important in synthesizing genomic data as seen in the GenSLM science applications. Hence we study the impact of scaling the sequence lengths for the GPT-2 model and present the results on Nvidia A100, SambaNova SN30, and Cerebras CS-2 and exclude others either due to limited support or a work in progress. For this study, we use the GPT-2 XL 1.5B parameter model on both A100 and Cerebras system and both the Cerebras CS-2 and SambaNova SN30 use the GPT-13B parameter model. As we can see from Table III, Nvidia A100 needs a minimum of 4 devices to fit a dataset of smaller sequence length. On the other hand, SambaNova SN30 and Cerebras CS-2, owing to their large local memory and compute architecture, can fit models with longer sequence lengths on a single compute device. SambaNova SN30 can fit a 13B parameter model with varying sizes of sequence lengths ranging from 1K to 64K, all on a single RDU. The results are presented when we run 8 instances of the model on a node in data parallel mode. We see a predictable decline in the throughput with an increase in the sequence length. Moreover, from a sequence length of 32k, the implementation uses segmented softmax implementation. 
We see a similar trend of sequence lengths impacting the throughput when the sequence length increases from 1K to 8K for the CS-2 system for a GPT-2 XL model. #### Iv-B3 Impact of gradient accumulation steps Large language models, especially those with billions of parameters, consume a significant amount of memory during training. Gradient accumulation allows for the accumulation of gradients over multiple mini-batches before updating the model's weights. This reduces the memory requirements compared to updating the model after processing each individual mini-batch. This is particularly important when working with hardware with limited memory capacity. In this study, we studied the impact of increasing GAS values on the model performance. \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline **System** & \begin{tabular}{l} **minimum** \\ **\#devices** \\ \end{tabular} & \begin{tabular}{l} **maximum** \\ **\#devices** \\ \end{tabular} & \begin{tabular}{l} **scaling efficiency** \\ **improvement** \\ \end{tabular} & \begin{tabular}{l} **speedup** \\ **improvement** \\ \end{tabular} \\ \hline Gaudi2 & 1 & 64 & 104\% & 66.4x \\ \hline Bow & 4 & 64 & 100.1\% & 16x \\ Pod64 & & & & \\ \hline CS-2 & 1 & 2 & 95.7\% & 1.8x \\ \hline SN30 & 1 & 64 & 82.6\% & 52.8x \\ \hline MI250 & 1 & 4 & 80\% & 3.2x \\ \hline A100 & 4 & 64 & 75.8\% & 12.1x \\ \hline \end{tabular} \end{table} TABLE II: Scaling behavior study with the GPT-2 XL model Figure 6(a) shows the sensitivity of the model throughput on GAS values on the A100 and MI250 GPUs. For this study, we test performance for GAS values of 1 and 4 while we keep batch size and sequence length constant at 16 and 1024 respectively. We observe throughput improvement for larger GAS values before it saturates as the number of devices increases. We see that when the GAS value is increased from 1 to 4 on the GPT-2 XL model with micro-batch size 1 and tensor parallel degree 1, the model throughput is increased by 1.74x across 64 A100 GPUs. For MI250 GPUs, the model throughput is increased by 1.2x across 4 GPUs. The increased performance can be accounted for by the capacity to process more samples with increased GAS value. Also, for the A100 GPUs when we vary the GAS values from 1 through 32 for local batch size 32, we observed a 1.74x increase in the model throughput, thus confirming the source for this increase to larger GAS value at a relatively lower batch size. A detailed study on the trade-off between local batch size and GAS values for A100 and MI250s is a work in progress. Figure 6(b) presents the throughput in samples per second for GAS values 1, 4, 8, and 16 on the SambaNova SN30 system. Though these results are provided for 16 instances of model training that trains across a full 8 RDU node, we can see similar behavior when extended to a full 8 racks. We observe that the throughput gain is significant with increasing GAS values when coupled with a smaller local batch size. On the contrary, increasing the GAS values has Fig. 5: GPT2-XL scaling study showing throughput in log scale with an increasing number of devices per accelerator with a sequence length 1K. Fig. 
6: Impact of Gradient Accumulation Step (GAS) value on model throughput \begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline **System (model Size)** & **Seq Length** & **Devices** & \begin{tabular}{c} **Throughput (tokens/s)** \\ \end{tabular} \\ \hline A100 (1.5B) & 1024 & 4 & 134,144 \\ & 2048 & 4 & 124,928 \\ \hline CS-2 (1.5B) & 1024 & 1 & 133,069 \\ & 2048 & 1 & 114,811 \\ & 4096 & 1 & 63,488 \\ & 8192 & 1 & 16,302 \\ \hline CS-2 (13B) & 1024 & 1 & 20,685 \\ & 2048 & 1 & 20,173 \\ & 4096 & 1 & 17,531 \\ & 8192 & 1 & 15,237 \\ & 16384 & 1 & 11,796 \\ & 32768 & 1 & 7537 \\ & 51200 & 1 & 5120 \\ \hline SN30 (13B) & 1024 & 8 & 22,135 \\ & 2048 & 8 & 21,684 \\ & 4096 & 8 & 17,000 \\ & 8192 & 8 & 10,581 \\ & 16384 & 8 & 4936 \\ & 32768 & 8 & 5021 \\ & 65536 & 8 & 1880 \\ \hline \end{tabular} \end{table} TABLE III: Impact of Sequence length on model throughput less or no effect on the throughput, which saturates at larger local batch sizes. This observation can be attributed to the fact that at larger batch sizes, the additional task of loading the gradient in case of the backward pass computations is significantly time-consuming compared to the time saved by reducing the number of optimization steps. Figure 6(c) shows the sensitivity of the model throughput on GAS values on the Graphcore POD16. For this study, we consider a replication factor of 4 indicating that a single instance of the model (that is sharded over 4 IPUs) is replicated 4 times to span an entire POD16 system. The results can be extended to a full rack of POD64. As we can see, Graphcore can support very large GAS values ranging from 16 to 2048. This is made possible by processing multiple batches and aggregating gradients into accumulator tensors without increasing memory usage. The SambaNova SN30 can also technically support significantly large GAS values, though its impact on the model throughput is yet to be investigated. Cerebras gradient accumulation steps (GAS) are determined automatically by the Cerebras software stack to drive the best throughput given the requested global batch size and number of CS-2s. Given the limited user functionality to tune this parameter, we excluded CS-2 in this study. #### V-B4 Sensitivity of weight sparsity on model throughput Sparsity is important for training large language models primarily owing to several factors such as enhanced memory efficiency by storing only non-zero parameters, improved training speed due to a reduction of a number of parameters updated in each optimization step, and effective scalability. It also helps with faster inference as the computations are limited to non-zero weights. Here, we conducted a study of sensitivity of model sparsity on the throughput of the Cerebras CS-2. The sparsity degree reflects the percentage of weights that are pruned, in order to accelerate the training. From Figure 7 it can be observed that the model throughput increases from a dense model (s=0) to a highly sparse model (s=0.9). For the GPT-2 XL model, we observed a throughput speedup of 1.8x to 2x with extreme sparsity (s=0.9) when compared to a completely dense model (s=0) 11. Additionally, the sparsity degree has a higher impact on the throughput improvement with larger models. For the GPT-3 6.7B, a sparsity of 0.9 yields 2.1x over a dense model on 1 CS-2, and for GPT-3 30B, a sparsity of 0.9 yields 3.79x over a dense model on 1 CS-2. 
Further scaling out, this speedup factor is improved by 7.75x, and 15.49x for the 6.7B model on 8 and 16 CS-2s compared to 1 CS-2. This experiment demonstrated that model sparsity can significantly boost throughput and can potentially help to fit larger models on a relatively smaller number of devices.12 Footnote 11: 2-3x speedups with larger models Additionally, Figure 8 shows the loss curves for a GPT-3 model trained on 3B tokens, highlighting a 15% loss in the model accuracy with sparsity s=0.9 as compared to a dense model. During pre-training, there is a degradation in the training loss proportional to the sparsity value. However, dense finetuning of the model can recover such a gap, as past work by Cerebras on the sparse pre-training and dense fine-tuning framework has demonstrated. A 1.3B GPT-3XL model pre-trained with 75% sparsity resulting in no significant loss in downstream task accuracy while reducing 2.5x pre-training FLOPs [citation]. We are currently exploring the sparse pretraining dense finetuning technique [21] for large LLM models on CS-2. Studies assessing the impact of sparsity on training on the SambaNova SN30 with techniques developed [20] and Nvidia A100s are ongoing and will be included in the final version of the paper. Due to limited or no support of sparsity on the other systems, they were excluded from this study. ### _GenSLM_ Table IV presents the throughput of the GenSLM model, measured in tokens per second, across Nvidia A100, SambaNova SN30, and Cerebras CS-2. We present the throughput with three model sizes: A100 and CS-2 with a 1.5B parameter model, A100, SN30, and CS-2 with a 13B parameter model, and CS-2 with 25B parameter model. In the case of A100, SN30, and CS-2, all implementations utilize the GPT-2 13B Fig. 8: Loss curve for sparse model implementation on CS-2 Fig. 7: Effect of various sparsity levels (0, 0.6, 0.8, and 0.9) on model throughput on CS-2 parameter model and are trained with a sequence length of 1024. Both A100 and the SN30 use a micro-batch size of 32, but since the SN30 can fit the 13B model of 32 micro-batch size on 4 tiles, it can accommodate 16 instances of the model in parallel over 8 devices13. Different batch sizes were used for SN30 and A100 for the same model used in Table III. Footnote 13: GenSLM runs on SN30 are not optimized at the time of writing this paper Our observations reveal that when we consider the same number of compute units or devices, the SN30 demonstrates a remarkable 5.7x throughput improvement over the same number of A100s. We also find that a single Cerebras CS-2 demonstrates a massive 19x speedup for the same model when compared to eight A100s. When we further compare the performance of the 1.5B parameter GenSLM model trained on Nvidia A100 and Cerebras CS-2, an interesting trend emerges. The CS-2 exhibits the capability to handle sequence lengths that are ten times longer than those run on the A100 GPUs, all while achieving a noteworthy 2x speedup improvement over the A100s. This rigorous evaluation underscores the significant contribution of these accelerators in addressing large-scale real-world scientific applications. They play a pivotal role in accelerating the time to accuracy in the realm of large-scale scientific applications. ### _Observations:_ This comprehensive benchmarking evaluation of one of the most pivotal AI workloads has yielded several noteworthy observations. Below, we present a few interesting and valuable insights. 
* The capacity to accommodate a sizable model within a single device is contingent upon the available computational resources and memory capacity. Even with the usage of powerful computational engines, the employment of innovative techniques aimed at minimizing memory consumption, particularly parameters such as weights and activations, is of paramount significance. * Facilitating the execution of open-source models and streamlining the extension of models from Hugging Face are important to leverage these AI accelerators for a plethora of emerging AI for science applications. * Achieving a fair comparison among AI accelerators presents notable challenges. Discrepancies arise from variations in local/micro-batch sizes, GAS values, the number of model replicas, and related factors. It is imperative to devise methodologies that facilitate a fair comparison. * Notably, increasing GAS values does not invariably translate to performance improvement beyond a certain threshold. This method, combined with a judicious choice of micro-batch size, enables for run at larger batch sizes. * Supporting longer sequence lengths is important to capture context, handle long-range dependencies, and excel in a wide range of tasks. * With upcoming models with trillions of parameters and the requirement to cater to longer sequence lengths, hardware and software design must be tailored to maximize computational capabilities while simultaneously minimizing memory usage. ## VI Conclusions In this paper, we performed an extensive and comprehensive evaluation of generative AI models on non-traditional hardware with a focus on GPT models, an in-depth analysis of a core transformer block, and a science application (GenSLM). Additionally, we explored the scaling behavior of the GPT-2 model to gain insights into its performance characteristics. One of our important observations is the memory limitations inherent to the hardware that restrict the feasibility of the size of models that can fit on a single device. It further requires a distributed implementation, such as data parallel, with an increased number of devices. A near-linear scaling is also observed with an increased number of devices. Furthermore, the adoption of optimizations, such as weight sparsity, helps effectively reduce the communication overhead in distributed implementations. We plan to continue this evaluation with a focus on longer sequence lengths and benchmark representative models for the emerging generative AI for science applications. We also would like to extend this comprehensive benchmarking study with larger models such as 175B parameter models and with generative models with distinct architectures such as Llama. It has been observed that to facilitate effective training of significantly larger models, such as trillion parameters in AI for science applications, leveraging non-traditional AI hardware would be pivotal. Optimizing other metrics besides model throughput, such as power consumption and I/O in training, particularly with increased computation and the utilization of larger data sets, will also be essential. ## VII Acknowledgment This work was supported by the Argonne Leadership Computing Facility, a U.S. Department of Energy (DOE) Office of Science User Facility, and by Laboratory Directed Research and Development (LDRD) funding from Argonne National Laboratory, provided by the Director, Office of Science, of the U.S. DOE under Contract No. DE-AC02-06CH11357. 
We also extend our sincere thanks to the staff from Cerebras, SambaNova, Graphcore, and Habana who helped us to perform this study. \begin{table} \begin{tabular}{|l|l|l|l|} \hline **System** & **Devices** & **Sequence Length** & **Throughput (tokens/sec)** \\ \hline A100 (1.5B) & 4 & 1024 & 41,891.84 \\ CS-2 (1.5B) & 2 & 10240 & 173,875.2 \\ \hline A100 (13B) & 8 & 1024 & 1150.20 \\ SN30 (13B) & 8 & 1024 & 6561.22 \\ CS-2 (13B) & 1 & 1024 & 21,811.2 \\ CS-2 (13B) & 1 & 10240 & 14,643.2 \\ \hline CS-2 (25B) & 1 & 10240 & 5324.8 \\ \hline \end{tabular} \end{table} TABLE IV: GenSLM model performance evaluation
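For readers relating the figures in Table IV to their own runs, tokens-per-second throughput of this kind is conventionally derived from the global batch size, the sequence length, and the measured time per training step. A minimal sketch under that assumption is shown below; the helper and variable names are illustrative and are not part of the paper's benchmarking harness:

```python
# Hypothetical helper (not from the paper): estimate training throughput in
# tokens/sec from measured per-step wall-clock times in seconds, assuming each
# step processes global_batch_size sequences of seq_len tokens.
def tokens_per_second(step_times_s, global_batch_size, seq_len):
    tokens_per_step = global_batch_size * seq_len
    avg_step_time = sum(step_times_s) / len(step_times_s)
    return tokens_per_step / avg_step_time

# Example: micro-batch 32, gradient accumulation 4, 8 devices -> global batch 1024
print(tokens_per_second([0.95, 0.97, 0.96], global_batch_size=32 * 4 * 8, seq_len=1024))
```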
2310.10672
Hybrid Quantum-Classical Machine Learning for Sentiment Analysis
The collaboration between quantum computing and classical machine learning offers potential advantages in natural language processing, particularly in the sentiment analysis of human emotions and opinions expressed in large-scale datasets. In this work, we propose a methodology for sentiment analysis using hybrid quantum-classical machine learning algorithms. We investigate quantum kernel approaches and variational quantum circuit-based classifiers and integrate them with classical dimension reduction techniques such as PCA and Haar wavelet transform. The proposed methodology is evaluated using two distinct datasets, based on English and Bengali languages. Experimental results show that after dimensionality reduction of the data, performance of the quantum-based hybrid algorithms were consistent and better than classical methods.
Abu Kaisar Mohammad Masum, Anshul Maurya, Dhruthi Sridhar Murthy, Pratibha, Naveed Mahmud
2023-10-08T05:45:22Z
http://arxiv.org/abs/2310.10672v1
# Hybrid Quantum-Classical Machine Learning for Sentiment Analysis ###### Abstract The collaboration between quantum computing and classical machine learning offers potential advantages in natural language processing, particularly in the sentiment analysis of human emotions and opinions expressed in large-scale datasets. In this work, we propose a methodology for sentiment analysis using hybrid quantum-classical machine learning algorithms. We investigate quantum kernel approaches and variational quantum circuit-based classifiers and integrate them with classical dimension reduction techniques such as PCA and Haar wavelet transform. The proposed methodology is evaluated using two distinct datasets, based on English and Bengali languages. Experimental results show that after dimensionality reduction of the data, performance of the quantum-based hybrid algorithms were consistent and better than classical methods. Quantum Machine Learning, Haar Transform, SVM, Sentiment Analysis. ## I Introduction Sentiment analysis or opinion mining is the process of analyzing digital text to determine the emotions and opinions embedded in the text, and helps shed light on important issues of human opinion [1]. By examining data from sources such as emails, tweets, chat transcripts, reviews, etc., sentiment analysis tools provide valuable insights into public opinions. It is a reliable method for predicting human behavior and has essential use cases such as improving customer service, brand monitoring, and product market research. In recent years, researchers have demonstrated the efficiency of machine learning techniques for the correct classification of sentiments [2]. However, due to the computational limitations of classical machine learning [3], researchers are investigating the possibilities of newer, more efficient paradigms of computing, such as quantum computing. Quantum computers can take advantage of quantum mechanical effects such as superposition and entanglement to provide exponential speedup in specific tasks [4] compared to state-of-the-art classical computers. The rapid advancement of noisy intermediate-scale (NISQ) quantum hardware has driven research into quantum algorithm development for a variety of multidisciplinary fields such as Quantum Machine Learning (QML) [5]. Integrating machine learning with quantum computing can lead to greater benefits such as improved text classification [6]. Recent studies have introduced a quantum support vector machine (QSVM) for conventional data, to improve the performance of the traditional support vector machine (SVM) [7]. A QSVM uses the quantum kernel approach [8], which is essentially a hybrid approach that uses a classical SVM as the classifier. A type of quantum classifier known as the Variational Quantum Classifier (VQC) [9] has also been introduced that uses parameterized quantum circuits. The limited scalability of current NISQ devices makes it challenging to encode high-dimensional data for QML tasks. Dimension reduction techniques can be used to decrease the data dimensionality by identifying the most pertinent features from data and/or demoting the input number of data points. Principal component analysis, also referred to as PCA, is a widely known approach for obtaining the principal features of the data [10]. Another data compression technique widely used in signal processing is the Wavelet Transform, which preserves the spatial locality of data, and compared to other transforms, also provides better computation speeds [11]. 
In this work, we investigate the use of quantum machine learning techniques for sentiment analysis of textual data, and we propose dimension reduction techniques to reduce the dimensionality of the textual data. We present a methodology combining conventional Natural Language Processing (NLP) techniques with QML models such as QSVM and VQC. The proposed methodology also integrates techniques such as PCA and Wavelet transform to reduce data dimensionality, making it convenient to fit the classical data onto the quantum models and reducing the algorithm execution time. Our methodology is evaluated by employing textual data from two languages: English and Bengali. NLP for English text has been explored, for instance, Zhang et al. [12] investigated English text classification. The structure of the Bengali text is complex and different from other languages, especially in _pre-processing_ and _word embedding_[13] phases. This work presents efficient models for _pre-processing_ and _word embedding_ in support of both Bengali and English language-based text. In addition, we investigate the efficiency of quantum models for feature mapping and classification, and compare their performance with classical methods, when applied for sentiment analysis. An important contribution of this research is the use of Wavelet transform for dimensionality reduction of textual data. Wavelet-based feature extraction enables a meaningful representation of the data with reduced dimensionality, and combined with hybrid quantum-classical ML models, leads to improved sentiment analysis. ## II Related Work In this section, we discuss the state-of-the-art methods (quantum or hybrid) that have been proposed for sentiment analysis. Ganguly et al. used the lambeq toolkit for sentiment analysis [14]. They performed noisy simulations on an IBM backend, achieving 83.33% accuracy. However, they used a very small dataset (130 sentences), and their test accuracy could be improved for noisy quantum simulation by performing more iterations. Morales et al. [15] proposed a framework for quantum natural language processing (QNLP) and implemented their work on near-term quantum computers. In their work, they leveraged variational quantum circuits in QNLP models. Further research and empirical studies are necessary to address the challenges involved and understand the potential benefits and limitations of applying quantum computing in natural language processing. In their paper, Meichanetzidis et al. [16] presented a pipeline for QNLP. The framework used is compositional distributional semantics (DisCoCat), which extends the compositional structure of group grammars. Overall, the paper provides an overview of the proposed QNLP pipeline, but it lacks empirical evidence, comparative analysis, and in-depth discussions on the feasibility and limitations of the approach. Existing quantum sentiment analysis models primarily rely on rule-based techniques to process and analyze sentiment data. These models utilize quantum machine learning algorithms on a small amount of text data to extract sentiment. On the other hand, our hybrid quantum-classical sentiment analysis model follows automatic approaches for combining both quantum and classical computing techniques. Instead of solely relying on quantum algorithms, we leverage the strengths of classical algorithms alongside quantum algorithms to enhance sentiment analysis performance. 
## III Proposed Methodology
In this work, we present a methodology for performing sentiment analysis of languages by combining classical and quantum machine learning techniques. At first, the input data undergoes pre-processing; then a word embedding algorithm converts the pre-processed data into word vectors. After that, we apply dimension reduction algorithms to reduce the data dimensionality. The resulting data is then fed either to classical SVM models or to quantum models for training and classification of sentiment. The workflow of the proposed methodology is shown in Fig. 1, and we discuss its details in the rest of this section.
Fig. 1: Workflow for the proposed hybrid quantum-classical methodology for sentiment analysis.
### _Sentiment Analysis Data_
In this work, we have used two datasets for evaluating the methodology. The Bengali dataset [17] consists of news data where each line of text reflects human opinion. Scripts were used to collect the data from Bengali news portals. Furthermore, positive and negative sentiments have been separated into two categories of data. The dataset is structured as two columns, Bengali text and its corresponding class, and contains a total of 1619 data points and 65 distinct features. Each line of text contains vital words that represent the text context. Another dataset, _"Twitter US Airline Sentiment"_ [18], has been used to measure the performance of the implemented model. The dataset has a total of 728 data points and 2027 unique features. For the Bengali and Twitter datasets, two separate word vector models subsequently converted them into two sets of feature vectors.
### _Text Pre-processing_
Pre-processing Bengali text differs from pre-processing text in other languages. Bengali words have a complex format. Therefore, additional study is needed to develop rule-based grammar systems for the Bengali language, and we primarily focused on basic text preparation and automated systems. We have implemented a function for pre-processing the Bengali text which includes techniques such as sentence-to-word splitting [19], regular expressions [20], and stop-word removal [21]. The text is processed from raw data using regular expressions, and then each sentence is split into words. At the same time, words appearing in a predefined stop-word list are removed. Consequently, the processed text is sent to the next stage, i.e., word embedding, see Fig. 1. Fig. 2 gives a model diagram for the text pre-processing phase for Bengali text. For the Twitter dataset, we have applied available pre-processing approaches such as tokenization [19]. To facilitate the training of the data, the output column must be converted into binary labels. To achieve this, a label encoder based on scikit-learn is utilized to transform the output column into binary labels.
### _Word Embedding_
We have leveraged the count vectorization technique [22] as a method of text data representation. By definition, count vectorization is a type of Boolean concept formulation in document labeling terminology, where documents can be represented via a set of words. In count vectorization, the dimension of the word vectors is determined by the length of the vocabulary. Whenever a new word occurs, the number of words and the dimension are simultaneously incremented by one. A count vectorizer's fundamental objective is to comprehend and accommodate every single word in the lexicon. Fig. 3 shows a model of the count vectorizer used in our approach. A document-phrase matrix has been constructed on the basis of the new lexicon that was developed.
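To make the pre-processing and count-vectorization stages concrete, a minimal scikit-learn sketch is given below; the regular expression, the stop-word list, and the example sentences are placeholders rather than the ones used for the paper's Bengali or Twitter corpora:

```python
import re
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import LabelEncoder

STOP_WORDS = {"the", "is", "a", "this"}  # placeholder stop-word list

def preprocess(sentence):
    # regular-expression tokenization followed by stop-word removal
    tokens = re.findall(r"\w+", sentence.lower())
    return " ".join(t for t in tokens if t not in STOP_WORDS)

docs = ["The service is great!", "This is a terrible flight."]   # placeholder text
labels = ["positive", "negative"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(preprocess(d) for d in docs)        # document-term count matrix
y = LabelEncoder().fit_transform(labels)                         # binary labels, as described above
print(vectorizer.get_feature_names_out(), X.toarray(), y)
```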
In our approach, every data row comprises a collection of words that signifies specific information. All the words within these data rows convey either negative or positive sentiments [23].
### _Dimension Reduction_
For the proposed sentiment analysis methodology, we integrate two different techniques for data dimensionality reduction: Principal Component Analysis for dataset feature reduction and the Haar Wavelet Transform for data compression.
Principal Component Analysis: In our PCA model, linear arrays are used at the beginning to determine the major vocabulary elements from the text datasets. The first principal component captures the portion of the data with the highest variance, while the second principal component captures the portion of the remaining data with the most variation. The process continues sequentially, with each subsequent principal component representing the portion of the remaining data having the highest level of variance [24]. The most influential phrases are subsequently subjected to feature extraction procedures. \(M\) represents the overall count of principal components, while \(P\) denotes the substantial percentage of principal elements present in each principal component. However, \(P\) is commonly understood as the total count of principal elements present in the \(M\)-dimensional spaces. This represents the highest percentage of variable value pairs. The PCA principles provide clear evidence that \(P\leq M\). This supports the data dimension reduction approach for our datasets. Nevertheless, PCA is the most useful data compression method for classical systems, where the method depends on the corresponding vector dimension of the text.
Haar Wavelet Transformation: The Haar transform holds significant prominence in the realms of signal and image processing [25]. The Haar transform makes it possible to decrease the number of data points while preserving the locality of the data and the relationships between neighboring values. The motivation behind utilizing this compression technique in our approach is to reduce the dimensionality of the data, leading to faster quantum circuit execution times in the subsequent stages of our proposed methodology. The inverse Haar transform [26] can also be used to recover the original data points after dimension reduction. The Haar transform can be generalized as the \(n\)-Dimensional (\(n\)-D) Haar transform, where \(n\) is the dimensionality of the data. Detailed algorithms for the 1-D and 2-D Haar Transform can be found in [26]. In this work, we employed the 1-D discrete Haar Transform due to the discrete nature of the dataset and to reduce the dimensionality of the dataset. The 1-D Haar Transform can be performed as a multi-level, decomposable operation [26]. Each level of the operation involves dividing the set of data points into two distinct non-overlapping segments. This division is achieved by computing the average and difference between neighboring values. The resulting difference values are generally close to zero or significantly smaller in comparison to the averaging values. Consequently, these negligible difference values can be safely discarded. By discarding them, the transformed dataset retains only the high-amplitude values, effectively reducing the dimension(s) of the original dataset by exactly half. For \(l\) decomposition levels, the dimension(s) will be reduced by a factor of \(2^{l}\).
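The following sketch illustrates this stage: PCA reduces the feature dimension, and a multi-level 1-D Haar decomposition along the sample axis keeps only the averaging coefficients, so each level halves the number of data points. It is a simplified reading of the procedure described above (the averaging convention and the random stand-in data are assumptions), not the authors' code:

```python
import numpy as np
from sklearn.decomposition import PCA

def haar_1d(signal, levels=1):
    """Multi-level 1-D Haar decomposition that keeps only the averaging
    coefficients and discards the (near-zero) difference coefficients,
    halving the length of the signal at each level."""
    out = np.asarray(signal, dtype=float)
    for _ in range(levels):
        if len(out) % 2:                       # pad to an even length if needed
            out = np.append(out, out[-1])
        out = out.reshape(-1, 2).mean(axis=1)  # pairwise averages
    return out

X = np.random.rand(1619, 65)                               # stand-in for the word-count matrix
X_pca = PCA(n_components=2).fit_transform(X)               # 65 features -> 2 principal components
X_haar = np.column_stack([haar_1d(X_pca[:, j], levels=2)   # 1619 data points -> ~405
                          for j in range(X_pca.shape[1])])
print(X_pca.shape, X_haar.shape)
```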
Fig. 2: Bengali text pre-processing method for sentiment analysis. Fig. 3: Count vectorization approach for word embedding.
### _Classical SVM Classifier_
SVM is a classical machine learning method that allocates labels to data objects after training on example data [27]. For example, an SVM can be used for negative and positive sentiment analysis on a text-based dataset. The SVM method works by statistical classification, where a separating hyperplane and the maximum margin between the two separated classes of objects serve as base points. The linear SVM classifier is considered the most fundamental and quickest SVM, supposing a straight-line separation between classes. It was defined as the most effective technique for text categorization [28]. Our sentiment analysis methodology employs a traditional SVM classifier to accommodate each data point within \(n\)-dimensional spaces. Within these spaces, the feature values are extracted based on the corresponding data point's coordinates.
### _Quantum Feature Mapping_
Quantum computers harness the power of an exponentially large quantum state space, offering great potential for handling large datasets and mapping them to higher dimensions. Quantum feature mapping serves as a means to non-linearly map classical data onto quantum states, represented as vectors in Hilbert space, \(\left|f(x)\right\rangle\left\langle f(x)\right|\), where \(f\) is a data mapping function and \(x\) is the feature data point [9]. A quantum circuit that is classically hard to simulate is important to gain a quantum advantage, as it provides a mapping of the data that is hard to compute classically. Entanglements between qubits in such circuits take account of nonlinear interactions between features [9]. In our work, we have employed a generalized second-order Pauli feature mapping, discussed below.
Pauli feature map: A generalized method for feature mapping that utilizes the Pauli basis gate set [4]. Based on the mapping function \(f\), we design a unitary utilizing the Pauli gate set {X, Y, Z} and an entanglement scheme, which is typically either _linear_ or _full_ entanglement. The full entanglement feature map circuit has full connectivity between qubits, accounting for \(n(n-1)/2\) feature interactions, where \(n\) is the number of qubits in the circuit. The linear entanglement scheme accounts for interactions with only adjacent qubits. Thus, the full entanglement feature map has a higher circuit depth than the linear entanglement scheme. A classically-hard-to-simulate [29] Pauli feature map unitary for the gate pair \(Z\) and \(XX\) is shown in equation (1), accounting for two-qubit interactions. \[U_{f(x)}=\left(\exp\left(i\sum_{q_{j},q_{k}}f_{q_{j},q_{k}}(x)\,X_{q_{j}}\otimes X_{q_{k}}\right)\times\exp\left(i\sum_{q_{j}}f_{q_{j}}(x)\,Z_{q_{j}}\right)H^{\otimes n}\right)^{r} \tag{1}\] Here \(X\) and \(H\) are the conventional NOT and Hadamard gates respectively, \(XX\) is the tensor product of two \(X\) gates, and \(r\) is the number of circuit repetitions. The unitary \(U_{f(x)}\) is applied to \(\left|0\right\rangle\), the initial state of the qubits. The \(H\) gates put the circuit into a superposition state, and the phase gates (rotation gates in the Pauli basis) manipulate the superposition state based on the chosen mapping function \(f(x)\). In (1), the exponent involving the tensor operation (\(X\otimes X\)) generates the entangled portion of the feature map circuit, and the remaining exponent terms contribute to the non-entangled part of the circuit.
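A feature-map circuit of this form can be assembled directly from qiskit's circuit library. The snippet below is a minimal sketch; the argument names follow recent qiskit releases and may differ across versions, and the data point used for binding is arbitrary:

```python
from qiskit.circuit.library import PauliFeatureMap

# Second-order Pauli feature map over the {Z, XX} gate pair of equation (1):
# 2 features -> 2 qubits, linear entanglement, r = 2 repetitions.
feature_map = PauliFeatureMap(feature_dimension=2,
                              paulis=["Z", "XX"],
                              entanglement="linear",
                              reps=2)
print(feature_map.decompose().draw())

# Bind a concrete feature vector x = (x[0], x[1]) to the circuit parameters.
bound_circuit = feature_map.assign_parameters([0.3, 1.2])
```

In the qiskit versions we are aware of, the default data-mapping function of this class coincides with the functions given in equations (2) and (3) below, so no custom data map is needed for this choice.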
### _Quantum Classifiers_
SVM classifier with quantum kernel: An SVM can handle linearly separable data points, but non-linearly separable data can be classified by employing kernel functions such as linear, polynomial, and sigmoid functions, which map the data to higher dimensions. The expression for the quantum kernel matrix elements is \(K_{i,j}=|\langle f(x_{i})|f(x_{j})\rangle|^{2}\), where \(K_{i,j}\) is a measure of the distance between each pair of data points in the dataset. After computing the kernel matrix, it can be utilized as input for any standard classical machine-learning model to train a hyperplane.
Variational Quantum Classifier (VQC): A quantum algorithm for classification known as the variational quantum classifier (VQC) was introduced in [9]. In the VQC algorithm, a variable parameterized quantum circuit, also known as an ansatz, is used in conjunction with quantum feature maps for classification. In VQC, the output of the quantum feature map is passed through the ansatz, which typically consists of parameterized rotation gates and CNOT gates [9]. Consequently, measurement of the circuit produces a classical bitstring that is mapped to a binary label and then matched with the corresponding binary label of the feature set encoded in the feature map. This mapping is then utilized by a classical system which tunes the rotation parameters by optimizing the cost function \(\left\langle f(x)\right|U^{\dagger}\hat{M}U\left|f(x)\right\rangle\geq\Delta\) [9], where \(U\) is the randomly initialized unitary of the ansatz and \(\hat{M}\) is the measurement operator, typically in the Z basis. The value of \(\Delta\) decides the separation between the labels. In our work, we have used a real-amplitudes circuit as an ansatz with a linear entanglement scheme. The real-amplitudes circuit is based on the \(Y\)-rotation gate (\(R_{y}\)) and only affects the real components of the quantum state.
## IV Experiment Results And Analysis
The experiments to evaluate the proposed hybrid methodology were conducted using qiskit [30]. The experiments were performed on the noisy _qasm_simulator_ [31] backend from IBM. For evaluation we used two distinct datasets containing English and Bengali text data. We classified the text data into two classes, positive and negative, based on the sentiment of the data. To establish a baseline, we first performed sentiment analysis on the datasets using fully classical SVM models without any dimension reduction. The obtained results are presented in Table I. For the Bengali dataset, the classical accuracy on test data is 72%, while for the English dataset, the accuracy is 84%. In this work, we have used the following quantum-classical methods: the SVM classifier with a quantum kernel and the Variational Quantum Classifier (VQC), a parameterized quantum circuit classifier. The VQC circuit parameters were optimized using the ADAM optimizer [32]. For comparison, we used a third method called the Quantum Support Vector Classifier (QSVC), another variant of the SVM classifier with a quantum kernel that is directly imported from qiskit's library. The QSVC is a pre-built qiskit function which takes a feature map as input and trains an SVM classifier. We implemented quantum circuit-based, classically-hard-to-simulate feature mapping by synthesizing (1).
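For reference, the classifier families described above can be assembled in qiskit roughly as follows. This is a sketch assuming the qiskit-machine-learning add-on; module and class names have moved between releases, and the snippet is not the authors' exact configuration:

```python
from qiskit.circuit.library import PauliFeatureMap, RealAmplitudes
from qiskit_machine_learning.kernels import FidelityQuantumKernel
from qiskit_machine_learning.algorithms import QSVC, VQC
from qiskit_algorithms.optimizers import ADAM

feature_map = PauliFeatureMap(feature_dimension=2, paulis=["Z", "XX"],
                              entanglement="linear", reps=2)

# (a) SVM with a quantum kernel: K_ij = |<f(x_i)|f(x_j)>|^2
kernel = FidelityQuantumKernel(feature_map=feature_map)
qsvc = QSVC(quantum_kernel=kernel)

# (b) Variational quantum classifier: real-amplitudes ansatz (R_y rotations),
#     linear entanglement, parameters tuned by the ADAM optimizer.
ansatz = RealAmplitudes(num_qubits=2, entanglement="linear", reps=2)
vqc = VQC(feature_map=feature_map, ansatz=ansatz, optimizer=ADAM(maxiter=100))

# X_train, y_train, X_test, y_test are the PCA/Haar-reduced features and labels.
# qsvc.fit(X_train, y_train); print(qsvc.score(X_test, y_test))
# vqc.fit(X_train, y_train);  print(vqc.score(X_test, y_test))
```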
For our experiments, the data mapping functions used are: \[f_{q_{j},q_{k}}(x)=\left(\pi-x[j]\right)\left(\pi-x[k]\right) \tag{2}\] \[f_{q_{m}}(x)=x[m] \tag{3}\] Equation (2) is utilized for an entangled qubit pair \(q_{j}\) and \(q_{k}\), where \(x\) represents the feature data point, while (3) applies to non-entangled qubits. The entanglement arrangement we used in the experiments is linear entanglement. In Table II, we present the performance of the implemented quantum-classical methods when using Pauli feature maps. We evaluate their effectiveness on the Bengali dataset reduced to 2, 4, and 6 feature sets after PCA. The test accuracy consistently falls within the range of 71% to 72% for all feature sets. We also measured the training times for each technique, and VQC required the least training time for each feature set compared to the other algorithms, see Table II. The results in Table III for the Twitter dataset using Pauli feature maps demonstrate consistent accuracy for both training (69%-71%) and testing (70%-71%). Notably, when training the feature sets, the QSVC and VQC algorithms performed with higher test and train accuracies compared to the SVM classifier with quantum kernel, see Table III. Furthermore, VQC exhibits shorter training times compared to the other algorithms when applied to this dataset as well. With the number of features reduced to two using PCA, we applied the Haar transform to further compress the number of data points in the datasets. This significantly reduced the training time of the subsequent quantum methods. Up to five levels of Haar decomposition were performed. In addition to the quantum-classical methods, we also evaluate a classical SVM classifier for accuracy comparison with the quantum-classical methods. Table IV shows that only one level of decomposition reduced the classical SVM test accuracy to about 58%, with further degradation at higher decomposition levels. On the other hand, the quantum-classical algorithms provide better and steady classification accuracy of 71.23%. This implies that the Haar transform in combination with the quantum-classical methods is more effective than when combined with the classical SVM. The VQC algorithm takes less time to train than the other methods on average and maintains consistent classification accuracy up to the maximum number of decomposition levels applied. For the Bengali dataset (see Table V), the classical accuracy is constant up to the maximum decomposition level because the dataset used in the experiment is balanced.
Quantum-classical algorithms for the Bengali dataset after \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Number of**} & \multicolumn{3}{c|}{**SVM Classifier with Quantum Kernel**} & \multicolumn{3}{c|}{**QSVC**} & \multicolumn{3}{c|}{**VQC (00 iterations)**} & \multicolumn{3}{c|}{**Classical Accuracy**} \\ \cline{2-10} & **Number of** & \multirow{2}{*}{**Number of**} & \multirow{2}{*}{**Number of**} & \multirow{2}{*}{**Time Ensemble**} & \multirow{2}{*}{**Test (\%)**} & \multirow{2}{*}{**Time Ensemble**} & \multirow{2}{*}{**Test (\%)**} & \multirow{2}{*}{**Time Ensemble**} & \multirow{2}{*}{**Test (\%)**} & \multirow{2}{*}{**Time Ensemble**} \\ \cline{2-10} & **Training** & & & & & & & & & & \\ \multirow{-2}{*}{**Levts (HARAR)**} & **Data** & **Accuracy** & **Accuracy** & **Accuracy** & **Accuracy** & **Training** & **Accuracy** & **Training** & **Accuracy** & **Accuracy** \\ \hline \multirow{2}{*}{1} & 1295 & 72.22 & 67.56 & 2534.04 & 72.22 & 67.56 & 2286.49 & 72.22 & 67.57 & 775.32 & 72.22 & 67.56 \\ \cline{2-10} & 1 & 647 & 72.62 & 65.58 & 1178.75 & 72.22 & 65.83 & 122.72 & 65.52 & 367.6 & 72.22 & 65.53 \\ \hline 2 & 323 & 72.22 & 65.94 & 136.57 & 72.22 & 65.94 & 136.39 & 72.22 & 65.94 & 95.08 & 72.22 & 65.94 \\ \hline 3 & 161 & 72.22 & 68.32 & 33.41 & 72.22 & 68.94 & 33.1 & 72.22 & 65.94 & 94.76 & 72.22 & 71.42 \\ \hline 4 & 80 & 72.22 & 73.75 & 8.500 & 72.22 & 73.75 & 8.06 & 72.22 & 71.25 & 23.8 & 72.22 & 80 \\ \hline 5 & 40 & 72.22 & 65.5 & 2.24 & 72.22 & 65.5 & 2.04 & 72.22 & 65.5 & 11.32 & 72.22 & 87.5 \\ \hline \end{tabular} \end{table} TABLE V: Performance comparison using Pauli feature Map and PCA + Haar compression on Bengali Dataset. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Number of**} & \multicolumn{3}{c|}{**SVM Classifier with Quantum Kernel**} & \multicolumn{3}{c|}{**QSVC**} & \multicolumn{3}{c|}{**VQC(100 iterations)**} \\ \cline{2-10} & **Number of** & \multirow{2}{*}{**Number of**} & \multirow{2}{*}{**Test (\%)**} & \multirow{2}{*}{**Train (\%)**} & \multirow{2}{*}{**Time Ensemble**} & \multirow{2}{*}{**Test (\%)**} & \multirow{2}{*}{**Train (\%)**} & \multirow{2}{*}{**Time Ensemble**} & \multirow{2}{*}{**Test (\%)**} & \multirow{2}{*}{**Train (\%)**} \\ \cline{2-10} & **Data** & **Accuracy** & **Accuracy** & **Accuracy** & **Accuracy** & **Accuracy** & **Accuracy** & **Accuracy** & **Training (sec)** \\ \hline 2 & & 71 & 69 & 3,455.40 & 72 & 69 & 4,456 & 72 & 68 & 618.6 \\ \hline 4 & 1619 & 71 & 69 & 6,631.60 & 72 & 69 & 7,216 & 72 & 68 & 1866.6 \\ \hline 6 & & 71 & 69 & 8,007 & 72 & 69 & 8,679 & 72 & 68 & 3,984 \\ \hline \end{tabular} \end{table} TABLE II: Performance comparison of quantum-classical methods using Pauli feature map and PCA on Bengali Dataset. 
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Number of**} & \multicolumn{3}{c|}{**SVM Classifier with Quantum Kernel**} & \multicolumn{3}{c|}{**QSVC**} & \multicolumn{3}{c|}{**VQC(100 iterations)**} \\ \cline{2-10} & **Number of** & \multirow{2}{*}{**Test (\%)**} & \multirow{2}{*}{**Train (\%)**} & \multirow{2}{*}{**Time Ensemble**} & \multirow{2}{*}{**Test (\%)**} & \multirow{2}{*}{**Train (\%)**} & \multirow{2}{*}{**Time Ensemble**} & \multirow{2}{*}{**Test (\%)**} & \multirow{2}{*}{**Train (\%)**} \\ \cline{2-10} & **Training** & & & & & & & \\ \multirow{-2}{*}{**Levts (HARAR)**} & **Data** & **Accuracy** & **Accuracy** & **Accuracy** & **Accuracy** & **Training** & **Accuracy** & **Accuracy** \\ \hline 0 & 382 & 71.23 & 76.44 & 120.88 & 71.23 & 70.44 & 22.236 & 71.23 & 303.90 & 84.93 & 99.65 \\ \hline 1 & 291 & 71.23 & 67.69 & 409.54 & 72.37 & 67.09 & 100.91 & 71.23 & 67.70 & 155.27 & 58.21 & 100 \\ \hline 2 & 145 & 71.23 & 68.27 & 165.42 & 71.23 & 68.27 & 51.75 & 71.23 & 68.28 & 88.21 & 67.12 & 100 \\ \hline 3 & 72 & 71.23 & 66.66 & 62.55 & 71.23 & 83.11 & 12.15 & 71.23 & 66.67 & 46.56 & 36.37 & 100 \\ \hline 4 & 35 & 71.23 & 62.55 & 28.48 & 71.23 & 68.25 & 3.43 & 71.23 & 62.86 & 23.63 & 54.79 & 100 \\ \hline 5 & 17 & 71.23 & 64.7 & 13.58 & 71.23 & 64.7 & 1.2 & 71.23 & 64.71 & 12.8 & 28.76 & 100 \\ \hline \end{tabular} \end{table} TABLE IV: Performance comparison using Pauli feature Map and PCA + Haar compression on Twitter Dataset. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Number of**} & \multicolumn{3}{c|}{**SVM Classifier with Quantum Kernel**} & \multicolumn{3}{c|}{**QSVC**} & \multicolumn{3}{c|}{**VQC(100 iterations)**} \\ \cline{2-10} & **Number of** & \multirow{2}{*}{**Number of**} & \multirow{2}{*}{**Test (\%)**} & \multirow{2}{*}{**Train (\%)**} & \multirow{2}{*}{**Time Ensemble**} & \multirow{2}{*}{**Test (\%)**} & \multirow{2}{*}{**Train (\%)**} & \multirow{2}{*}{**Time Ensemble**} & \multirow{2}{*}{**Test (\%)**} & \multirow{2}{*}{**Train (\%)**} \\ \cline{2-10} & **Data** & **Accuracy** & **Accuracy** & **Accuracy** & **Accuracy** & **Accuracy** & **Accuracy** & **Training** (sec)** \\ \hline 2 & & 70 & 69 & dimensionality reduction show the same level of performance as classical systems without dimension reduction, achieving an accuracy of 72.22%, see Tables I, and V. This implies that for the Bengali dataset, our quantum-classical methods in combination with dimension reduction was highly effective, leading to efficient sentiment analysis on a large dataset. ## V Conclusion and Future Work Our research demonstrates the potential of quantum-classical hybrid algorithms in sentiment analysis of Bengali and English datasets. We also highlighted the importance of considering language-specific characteristics, particularly in Bengali sentiment analysis. We investigated hybrid quantum-classical algorithms that included quantum feature maps and quantum classifiers. Moreover, we integrated dimension reduction techniques to facilitate encoding classical data on to the quantum models. Moving forward, there are several areas for future work in quantum-classical hybrid sentiment analysis research. Firstly, expanding the analysis to include other languages would provide a more comprehensive understanding of the effectiveness of hybrid techniques across different linguistic contexts. 
Additionally, exploring error mitigation and noise correction, and the translation of quantum sentiment analysis into practical quantum applications are important areas for further investigation.
2308.14886
On the LS-category of homomorphism of almost nilpotent groups
We prove the equality $\cat(\phi)=\cd(\phi)$ for epimorphisms $\phi:\Gamma\to \Lambda$ between torsion-free, finitely generated almost nilpotent groups $\Gamma$ and $\Lambda$. In addition, we prove the equality $\cat(\phi)=\cd(\phi)$ for homomorphisms $\phi:\Gamma\to \Lambda$ between torsion-free, finitely generated virtually nilpotent groups.
Nursultan Kuanyshov
2023-08-28T20:15:01Z
http://arxiv.org/abs/2308.14886v1
# On the LS-category of homomorphism of almost nilpotent groups ###### Abstract. We prove the equality \(\operatorname{cat}(\phi)=\operatorname{cd}(\phi)\) for epimorphisms \(\phi:\Gamma\to\Lambda\) between torsion-free, finitely generated almost nilpotent groups \(\Gamma\) and \(\Lambda\). In addition, we prove the equality \(\operatorname{cat}(\phi)=\operatorname{cd}(\phi)\) for homomorphisms \(\phi:\Gamma\to\Lambda\) between torsion-free, finitely generated virtually nilpotent groups. Key words and phrases:Lusternik-Schnirelmann category, group homomorphism 2020 Mathematics Subject Classification: Primary 55M30, 20J06; Secondary 55M10, 22E20, 22E40 ## 1. Introduction The _LS-category, denoted as \(\operatorname{cat}(X)\), of a topological space \(X\)_ is defined as the minimum number \(k\) such that \(X\) admits an open cover \(U_{0},U_{1},\dots,U_{k}\) by \(k+1\) contractible sets in X. This concept provides a lower bound on the number of critical points for smooth real-valued functions on closed manifolds [LS], [CLOT]. Since LS-category is a homotopy invariant, it can be extended to discrete groups \(\Gamma\) as \(\operatorname{cat}(\Gamma)=\operatorname{cat}(B\Gamma)\), where \(B\Gamma\) is a classifying space. Computing the LS-category of spaces, even when considering nice spaces such as manifolds, presents significant challenges [DS]. In the case of groups in the 1950s, Eilenberg and Ganea [EG] established the equality between LS-category and cohomological dimension, \(\operatorname{cat}(\Gamma)=\operatorname{cd}(\Gamma)\), for discrete groups \(\Gamma\). We recall the _cohomological dimension \(\operatorname{cd}(\Gamma)\) of a group \(\Gamma\)_ is defined as the supremum of \(k\) such that \(H^{k}(\Gamma,M)\neq 0\), where \(M\) is a \(\mathbb{Z}\Gamma\)-module [Br]. Dranishnikov and Rudyak [DR] showed that \(\operatorname{cd}(\Gamma)=\max\{k\mid(\beta_{\Gamma})^{k}\neq 0\}\), where \(\beta_{\Gamma}\in H^{1}(\Gamma,I(\Gamma))\) is the Berstein-Schwarz class of \(\Gamma\) [Be]. For a map \(f:X\to Y\), _the LS-category of the map, denoted as \(\operatorname{cat}(f)\),_ is the minimum number \(k\) such that \(X\) can be covered by \(k+1\) open sets \(U_{0},U_{1},\dots,U_{k}\) with nullhomotopic restrictions \(f|_{U_{i}}:U_{i}\to Y\) for all \(i\). We define the LS-category \(\operatorname{cat}(\phi)\) for a group homomorphism \(\phi:\Gamma\to\Lambda\) as \(\operatorname{cat}(f)\), where the map \(f:B\Gamma\to B\Lambda\) induces the homomorphism \(\phi\) on fundamental groups. Mark Grant [Gr] introduced the cohomological dimension \(\operatorname{cd}(\phi)\) of a group homomorphism \(\phi:\Gamma\to\Lambda\) as the maximum of \(k\) such that there exists a \(\mathbb{Z}\Lambda\)-module \(M\) with a non-zero induced homomorphism \(\phi\colon H^{k}(\Lambda,M)\to H^{k}(\Gamma,M)\). In light of the universality of the Berstein-Schwarz class [DR], we have \(\operatorname{cd}(\phi)=\max\{k\mid\phi(\beta_{\Lambda})^{k}\neq 0\}\). Combining this with the cup-length lower bound for LS-category, we obtain the inequality \(\operatorname{cd}(\phi)\leqslant\operatorname{cat}(\phi)\) for all group homomorphisms. Considering the Eilenberg-Ganea equality \(\operatorname{cd}(\Gamma)=\operatorname{cat}(\Gamma)\), a natural conjecture arises: **Conjecture 1.1**.: For any group homomorphism \(\phi:\Gamma\to\Lambda\), it is always the case that \(\operatorname{cat}(\phi)=\operatorname{cd}(\phi)\). 
Jamie Scott [Sc] investigated this conjecture for geometrically finite groups and proved it for monomorphisms of arbitrary groups, as well as for homomorphisms of free and free abelian groups. However, Tom Goodwillie [Gr] provided a counterexample presenting an epimorphism \(\phi:G\to\mathbb{Z}^{2}\) from an infinitely generated group \(G\) with \(\operatorname{cd}(\phi)=1\), thus disproving the conjecture. In our joint work with Dranishnikov [DK], we reduced the conjecture from arbitrary homomorphisms to epimorphisms and presented a counterexample with an epimorphism between geometrically finite groups. However, we also managed to prove the conjecture for epimorphisms between finitely generated, torsion-free nilpotent groups. It is worth exploring whether the torsion-free restriction can be removed from our result. While dealing with torsion, we investigated a related question: whether it is possible to have a homomorphism of torsion groups \(\phi:\Gamma\to\Lambda\) with \(\operatorname{cd}(\phi)<\infty\). In our recent paper [Ku], we provide a negative answer to this question. In this paper, we prove the conjecture for finitely generated, torsion-free almost nilpotent groups. We recall that an almost nilpotent group is an extension of the infinite cyclic group by a torsion-free nilpotent group. More explicitly, one can think of it as the semidirect product of a torsion-free nilpotent group with the infinite cyclic group, \(\mathbb{Z}\ltimes_{\phi}\Gamma\), where \(\phi:\mathbb{Z}\to Aut(\Gamma)\) is a one-parameter group. **Theorem 1.2**.: _(Theorem 3.5) Let \(f:\mathbb{Z}\ltimes_{\phi}\Gamma\to\mathbb{Z}\ltimes_{\nu}\Lambda\) be an epimorphism between torsion-free almost nilpotent groups such that \(f|_{\mathbb{Z}}=Id.\) Then \(\operatorname{cat}(f)=\operatorname{cd}(f)=\operatorname{cd}(\Lambda)+1.\)_ Furthermore, we confirm that the conjecture holds for finitely generated, torsion-free virtually nilpotent groups. **Theorem 1.3**.: _(Theorem 4.2) Let \(f:M^{m}\to N^{n}\) be a map between infranilmanifolds that induces an epimorphism \(\phi:\pi_{1}(M)\to\pi_{1}(N).\) Then \(\operatorname{cat}(\phi)=\operatorname{cd}(\phi).\) In particular, \(\operatorname{cat}(\phi)=\operatorname{cd}(\phi)=n.\)_ ## 2. Preliminaries We will begin by recalling some classical theorems in group theory and Lie group theory. Discrete groups will be denoted by Greek letters, while Lie groups will be denoted by Latin letters. **Lemma 2.1**.: _Let \(\Gamma\) be a group and let \(\Gamma^{\prime}\) be a subgroup of finite index. Then there exists a normal subgroup \(\Gamma^{\prime\prime}\) of \(\Gamma\) such that \(\Gamma^{\prime\prime}\) is of finite index in \(\Gamma\) and \(\Gamma^{\prime\prime}\leq\Gamma^{\prime}\)._ Proof.: Let \(X\) be the set of left cosets of \(\Gamma^{\prime}.\) Consider \(\phi:\Gamma\to Sym(X)\) given by \(\phi(g)(a\Gamma^{\prime})=(ga)\Gamma^{\prime}.\) Then \(\phi\) is a homomorphism. Take now \(\Gamma^{\prime\prime}=\ker\phi.\) Then \(\Gamma^{\prime\prime}\) is a normal subgroup of \(\Gamma\) contained in \(\Gamma^{\prime}\). Finally, \(\Gamma/\Gamma^{\prime\prime}\) is isomorphic to a subgroup of \(Sym(X)\), which has order \(n!\), where \(n=[\Gamma:\Gamma^{\prime}].\) Thus, \([\Gamma:\Gamma^{\prime\prime}]\) is finite. A group \(\Gamma\) is called _virtually nilpotent_ if it has a nilpotent subgroup \(\Gamma^{\prime}\) of finite index. By Lemma 2.1, we can always assume that the finite index subgroup is in fact a finite index normal subgroup.
### Nilpotent groups The upper central series of a group \(\Gamma\) is a chain of subgroups \[e=Z_{0}\leq Z_{1}\leq\dots\leq Z_{n}\leq\dots\] where \(Z_{1}=Z(\Gamma)\) is the center of the group, and \(Z_{i+1}\) is the preimage under the canonical epimorphism \(\Gamma\to\Gamma/Z_{i}\) of the center of \(\Gamma/Z_{i}\). A group \(\Gamma\) is _nilpotent_ if \(Z_{n}=\Gamma\) for some \(n.\) The least such \(n\) is called _the nilpotency class_ of \(\Gamma,\) denoted \(\mathrm{nil}(\Gamma)\). Note that the groups of nilpotency class one are exactly the abelian groups. The lower central series of a group \(\Gamma\) is a chain of subgroups \[\Gamma=\gamma_{0}(\Gamma)\geqslant\gamma_{1}(\Gamma)\geqslant\gamma_{2}(\Gamma)\geqslant\dots\] defined by \(\gamma_{i+1}(\Gamma)=[\gamma_{i}(\Gamma),\Gamma]\). It is known that for nilpotent groups \(\Gamma\) the nilpotency class \(\mathrm{nil}(\Gamma)\) equals the least \(n\) for which \(\gamma_{n}(\Gamma)=1.\) **Proposition 2.2**.: _Let \(\phi:\Gamma\to\Gamma^{\prime}\) be an epimorphism. Then \(\phi(Z(\Gamma))\subset Z(\Gamma^{\prime})\) and \(\phi(\gamma_{i}(\Gamma))=\gamma_{i}(\Gamma^{\prime})\) for all \(i\)._ Proof.: Straightforward (see for example [B], Theorem 5.1.3). ### Construction of infranilmanifolds, nilmanifolds, and solvmanifolds _An infranilmanifold_ is a closed manifold diffeomorphic to the orbit space \(G/\Gamma\) of a simply-connected nilpotent Lie group G under the action of a discrete torsion-free subgroup \(\Gamma\) of the semidirect product \(G\rtimes K\), where K is a maximal subgroup of \(Aut(G).\) If \(\Gamma\) lies in the \(G\) factor, then the infranilmanifold is called _a nilmanifold_. Every infranilmanifold \(G/\Gamma\) is finitely covered by the nilmanifold \(G/(\Gamma\cap G).\) Let \(\mathfrak{g}\) be the Lie algebra of a simply-connected nilpotent Lie group G. It is well-known that the exponential map \(exp:\mathfrak{g}\to G\) is a global diffeomorphism and the quotient map \(G\to G/\Gamma\) is the universal covering map. Hence, every infranilmanifold and every nilmanifold is an Eilenberg-MacLane space \(K(\Gamma,1).\) By the Mal'cev Theorem [Ma2] every torsion-free, finitely generated nilpotent group \(\Gamma\) can be realized as the fundamental group of some nilmanifold. The corresponding simply-connected nilpotent Lie group \(G\) is obtained as the Mal'cev completion of \(\Gamma\). Moreover, \(\Gamma\) is a lattice in G. In this paper, a lattice in \(G\) is a cocompact discrete subgroup. A _solvmanifold_ is a closed manifold diffeomorphic to the quotient space \(G/\Gamma\) of a simply-connected solvable Lie group G by a discrete cocompact subgroup \(\Gamma\) of G. It is known that every solvmanifold \(G/\Gamma\) can be naturally fibered over a torus with a nilmanifold as fiber: \(N/\Gamma_{N}=(N\Gamma)/\Gamma\to G/\Gamma\to G/(N\Gamma)=T^{k}\) where \(N\) is the nilradical of G and \(\Gamma_{N}:=\Gamma\cap N\) is a lattice [Ra]. This is known as the Mostow fiber bundle. From the Mostow fiber bundle, we obtain that the fundamental group of a solvmanifold \(G/\Gamma\) fits into the short exact sequence: \[1\to\Gamma_{N}\to\Gamma\to\mathbb{Z}^{k}\to 1.\] ### Solvable groups of a special type: R type A Lie group \(G\) is _completely solvable_ if the corresponding Lie algebra \(\mathfrak{g}\) satisfies the property that any adjoint linear operator \(\mathrm{ad}\,S:\mathfrak{g}\to\mathfrak{g}\), \(S\in\mathfrak{g}\), has only real eigenvalues. In particular, any nilpotent Lie group is completely solvable [Ma1].
**Theorem 2.3**.: _([Sa]) Let \(G\) and \(H\) be simply-connected completely solvable Lie groups. Suppose G contains a lattice \(\Gamma.\) For any homomorphism \(\phi:\Gamma\to H,\) there exists a unique extension to a homomorphism \(\bar{\phi}:G\to H.\)_ ### Berstein-Schwarz cohomology class The Berstein-Schwarz class of a discrete group \(\Gamma\) is the first obstruction \(\beta_{\Gamma}\) to a lift of \(B\Gamma=K(\Gamma,1)\) to the universal covering \(E\Gamma\). Note that \(\beta_{\Gamma}\in H^{1}(\Gamma,I(\Gamma))\) where \(I(\Gamma)\) is the augmentation ideal of the group ring \(\mathbb{Z}\Gamma\) [Be],[Sch]. **Theorem 2.4** (Universality [DR],[Sch]).: _For any cohomology class \(\alpha\in H^{k}(\Gamma,L)\), there is a homomorphism of \(\Gamma\)-modules \(I(\Gamma)^{k}\to L\) such that the induced homomorphism for cohomology takes \((\beta_{\Gamma})^{k}\in H^{k}(\Gamma,I(\Gamma)^{k})\) to \(\alpha\), where \(I(\Gamma)^{k}=I(\Gamma)\otimes\cdots\otimes I(\Gamma)\) and \((\beta_{\Gamma})^{k}=\beta_{\Gamma}\smile\cdots\smile\beta_{\Gamma}\)._ ### Main Lemma The main results of this paper rely on the following: **Lemma 2.5**.: _[DK]_ _For every locally trivial bundle of closed aspherical manifolds \(f:M^{m}\to N^{n}\) with compact connected fiber \(F\) the induced homomorphism_ \[f^{*}:H^{n}(N;\mathbb{Z}\Gamma)\to H^{n}(M;\mathbb{Z}\Gamma)\] _is nonzero, where \(\Gamma=\pi_{1}(N)\)._ In this paper, we use the notation \(H^{*}(\Gamma,A)\) for the cohomology of a group \(\Gamma\) with coefficients in a \(\Gamma\)-module \(A\). The cohomology groups of a space \(X\) with fundamental group \(\Gamma\) are denoted by \(H^{*}(X;A)\). Thus, \(H^{*}(\Gamma,A)=H^{*}(B\Gamma;A)\) where \(B\Gamma=K(\Gamma,1)\). ## 3. Homomorphisms of almost nilpotent groups In this section, we prove the conjecture for special types of solvable groups, namely almost nilpotent groups. We recall that _an almost nilpotent Lie group_ is a non-nilpotent Lie group with a codimension 1 nilpotent normal subgroup. \(N\) is the nilradical of a simply-connected Lie group G if the Lie algebra of N, \(\mathfrak{n}\), is the largest nilpotent ideal contained in the Lie algebra of G, \(\mathfrak{g}\). **Lemma 3.1**.: _Let G be a simply connected Lie group, and let N be the nilradical of G. If \(\dim G/N=1,\) then the fundamental group \(\Gamma\) of the solvmanifold \(G/\Gamma\) is given by the semidirect product \(\Gamma=\mathbb{Z}\ltimes_{\phi}\Gamma_{N}.\) Further, the simply connected Lie group G has the form \(G=R\ltimes_{\bar{\phi}}N.\)_ Proof.: Since \(\Gamma\) is a lattice in G, we define \(\Gamma_{N}:=\Gamma\cap N.\) It is known that \(\Gamma_{N}\) is a lattice in N [Ra]. We have the following commutative diagram with exact horizontal rows such that \(G/N=R^{s}\) and \(\Gamma/\Gamma_{N}=\mathbb{Z}^{s}.\) Since \(\dim G/N=1,\) we get \(s=1.\) Since \(\Gamma/\Gamma_{N}=\mathbb{Z}\) is free, the lower row in the diagram admits a section \(s:\mathbb{Z}\to\Gamma.\) Hence \(\Gamma\cong\mathbb{Z}\ltimes_{\phi}\Gamma_{N}.\) Since \(\Gamma\subset G\), we can uniquely extend the section \(s:\mathbb{Z}\to\Gamma\) to a map \(\bar{s}:R=G/N\to G\) by Theorem 2.3. We claim that \(\bar{s}\) is a section to \(\bar{\pi}:G\to G/N=R.\) This is because \(\bar{\pi}\circ\bar{s}\) is the extension of the homomorphism \(\pi\circ s:\Gamma/\Gamma_{N}\to\Gamma/\Gamma_{N}\), by the Saito theorem (Theorem 2.3) again.
Since \(\pi\circ s=Id|_{\Gamma/\Gamma_{N}}\) and \(\bar{Id}\) is the extension of the identity homomorphism \(Id:\Gamma/\Gamma_{N}\to\Gamma/\Gamma_{N}\), the uniqueness part of the Saito theorem gives us \(\bar{\pi}\circ\bar{s}=\bar{Id}\). It follows that \(G=R\ltimes_{\bar{\phi}}N\) and \(\bar{\pi}|_{\Gamma}=\pi\). **Lemma 3.2**.: _Let \(\Gamma=\mathbb{Z}\ltimes_{\phi}\Gamma_{N}\) and \(\Lambda=\mathbb{Z}\ltimes_{\nu}\Lambda_{N^{\prime}}\) be finitely generated, torsion-free almost nilpotent groups. Any epimorphism \(f:\mathbb{Z}\ltimes_{\phi}\Gamma_{N}\to\mathbb{Z}\ltimes_{\nu}\Lambda_{N^{\prime}}\) such that \(f|_{\mathbb{Z}}=Id:\mathbb{Z}\to\mathbb{Z}\) can be extended uniquely to a homomorphism \(\bar{f}:G\to H,\) where \(G:=R\ltimes_{\bar{\phi}}N\) and \(H=R\ltimes_{\bar{\nu}}N^{\prime}.\)_ Proof.: We have the following commutative diagram where the rows are short exact sequences. Since \(\Lambda\subset H,\) the epimorphism \(f:\Gamma\to\Lambda\) extends uniquely to a homomorphism \(\bar{f}:G\to H\) by the Saito theorem. Similarly \(Id:\mathbb{Z}\to\mathbb{Z}\) extends to a map \(\bar{Id}:R\to R.\) Since \(Id=\pi_{\Lambda}\circ f\circ s_{\Gamma},\) we get \(\bar{Id}=\bar{\pi_{\Lambda}}\circ\bar{f}\circ\bar{s_{\Gamma}}\) by applying the Saito theorem twice. By construction, \(\bar{f}\) restricted to the discrete group \(\Gamma\) gives the epimorphism \(f:\Gamma\to\Lambda.\) **Lemma 3.3**.: _Let \(\Gamma=\mathbb{Z}\ltimes_{\phi}\Gamma_{N}\) and \(\Lambda=\mathbb{Z}\ltimes_{\nu}\Lambda_{N^{\prime}}\) be finitely generated, torsion-free almost nilpotent groups. Then every epimorphism \(f:\mathbb{Z}\ltimes_{\phi}\Gamma_{N}\to\mathbb{Z}\ltimes_{\nu}\Lambda_{N^{\prime}}\) such that \(f|_{\mathbb{Z}}=Id:\mathbb{Z}\to\mathbb{Z}\) can be realized as a locally trivial bundle of solvmanifolds with the fiber a nilmanifold._ Proof.: By Lemma 3.2, the kernel of an epimorphism \(f:\mathbb{Z}\ltimes_{\phi}\Gamma_{N}\to\mathbb{Z}\ltimes_{\nu}\Lambda_{N^{\prime}}\) such that \(f|_{\mathbb{Z}}=Id:\mathbb{Z}\to\mathbb{Z}\) equals the kernel of the restricted epimorphism \(f|:\Gamma_{N}\to\Lambda_{N^{\prime}}.\) Similarly, \(ker(\bar{f})=ker(\bar{f}|_{N}).\) Let us denote \(K:=ker(\bar{f}|_{N})\) and \(\pi:=ker(f|_{\Gamma_{N}}).\) Note that \(\pi\) is a torsion-free, finitely generated nilpotent group since it is a subgroup of the finitely generated, torsion-free nilpotent group \(\Gamma_{N}.\) By the Mal'cev Theorem [Ma2], \(\pi\) can be realized as the fundamental group of a nilmanifold \(K/\pi,\) where \(K\) is a simply connected, nilpotent Lie group, i.e. the Mal'cev completion of the torsion-free nilpotent group \(\pi\). Since \(G,H,\) and \(K\) are simply connected completely solvable Lie groups, we can apply the Saito theorem to obtain the following commutative diagram. Here \(G=R\ltimes_{\bar{\phi}}N\) and \(H=R\ltimes_{\bar{\nu}}N^{\prime}\) are connected, simply connected almost nilpotent Lie groups, whereas \(N\) and \(N^{\prime}\) are their nilradicals. We claim that the solvmanifold \(G/\Gamma\) is the total space of a locally trivial bundle over the solvmanifold \(H/\Lambda\) with fiber the nilmanifold \(K/\pi\). Since G is a principal \(K\)-bundle over \(H\) [Co] and the projection \(p_{\Lambda}:H\to H/\Lambda\) is the universal covering map, we can pick a sufficiently small neighborhood \(U\) of a point \(x\in H/\Lambda\) such that \(p_{\Lambda}^{-1}(U)\) is evenly covered by \(\{\bar{U}_{\lambda}\}_{\lambda\in\Lambda}\).
In addition, we require that for each \(\lambda\), \(\bar{U}_{\lambda}\times K\) is a local trivialization of the fiber bundle \(K\to G\to H\). Let \[X:=\coprod_{\lambda\in\Lambda}\bar{U}_{\lambda}\times K.\] Note that the action of the group \(\Gamma\) on \(X\) translates the summands. The orbit space \(X/\Gamma\) is homeomorphic to \[(X/\pi)/\Lambda=\left(\left(\coprod_{\lambda\in\Lambda}\bar{U}_{\lambda}\times K\right)/\pi\right)/\Lambda=\left(\coprod_{\lambda\in\Lambda}\bar{U}_{\lambda}\times K/\pi\right)/\Lambda\cong U\times K/\pi.\] Hence, the preimage \(\hat{f}^{-1}(U)\) is homeomorphic to \(U\times K/\pi\). Since the point \(x\in H/\Lambda\) is arbitrary, we get that the solvmanifold \(G/\Gamma\) is a locally trivial fiber bundle over \(H/\Lambda\) with fiber \(K/\pi\). **Corollary 3.4**.: _[DK]_ _Let \(\Gamma\) and \(\Lambda\) be finitely generated, torsion-free nilpotent groups. Then every epimorphism \(\phi:\Gamma\to\Lambda\) can be realized as a locally trivial bundle of nilmanifolds with the fiber a nilmanifold._ **Theorem 3.5**.: _Let \(f:\mathbb{Z}\ltimes_{\phi}\Gamma_{N}\to\mathbb{Z}\ltimes_{\nu}\Lambda_{N^{\prime}}\) be an epimorphism between torsion-free almost nilpotent groups such that \(f|_{\mathbb{Z}}=Id.\) Then_ \[\operatorname{cat}(f)=\operatorname{cd}(f)=\operatorname{cd}(\Lambda_{N^{\prime}})+1.\] Proof.: It is clear for dimensional reasons that \[\operatorname{cd}(f)\leq\operatorname{cat}(f)\leq\operatorname{cd}(\Lambda_{N^{\prime}})+1.\] We prove that \(\operatorname{cd}(f)=\operatorname{cd}(\Lambda_{N^{\prime}})+1.\) By Lemma 2.5, it suffices to show that \(f\) can be realized as a fiber bundle over a closed aspherical manifold with compact fiber. Indeed, by Lemma 3.3 the map \(f\) can be realized as a locally trivial fiber bundle over the solvmanifold \((R\rtimes_{\bar{\nu}}N^{\prime})/(\mathbb{Z}\rtimes_{\nu}\Lambda_{N^{\prime}})\) with fiber the nilmanifold \(K/\pi\). This finishes the proof of the theorem. ## 4. Homomorphisms of virtually nilpotent groups Let \(f:M\to N\) be a map between aspherical manifolds with fundamental groups \(\Gamma\) and \(\Lambda\), respectively. We denote by \(\phi:\Gamma\to\Lambda\) the induced homomorphism \(\phi:=f_{*}:\pi_{1}(M)\to\pi_{1}(N).\) **Lemma 4.1**.: _There is no map from a nilmanifold to an infranilmanifold that induces an epimorphism of the fundamental groups._ Proof.: Suppose there is a map \(f\) from a nilmanifold \(N\) to an infranilmanifold \(M\) that induces an epimorphism \(\phi:\pi_{1}(N)\to\pi_{1}(M).\) Since \(\pi_{1}(N)\) is a nilpotent group, the image \(\phi(\pi_{1}(N))\) is a nilpotent group by Proposition 2.2. Since \(\phi\) is surjective, \(\pi_{1}(M)\) must be nilpotent. This is a contradiction, since \(\pi_{1}(M)\) is a virtually nilpotent group. **Theorem 4.2**.: _Let \(f:M^{m}\to N^{n}\) be a map between infranilmanifolds that induces an epimorphism \(\phi:\pi_{1}(M)\rightarrow\pi_{1}(N).\) Then \(\operatorname{cat}(\phi)=\operatorname{cd}(\phi)=n.\)_ Proof.: Using the well-known inequalities on \(\operatorname{cat}\) [CLOT] we obtain: \[\operatorname{cat}(\phi)\leqslant\min\{\operatorname{cat}(M),\operatorname{cat}N\}\leqslant\dim N=n.\] Since \(\operatorname{cd}(\phi)\leqslant\operatorname{cat}(\phi)\), to obtain the equality \(\operatorname{cd}(\phi)=\operatorname{cat}(\phi)\), it suffices to show that \(\operatorname{cd}(\phi)=n\). We consider two steps: _Step 1._ Let \(B\Lambda\) be a nilmanifold.
Since \(\Gamma\) is a virtually nilpotent group, there exists a nilpotent subgroup \(\Gamma^{\prime}\) such that the index \(|\Gamma:\Gamma^{\prime}|\) is finite. By Lemma 2.1, we can assume \(\Gamma^{\prime}\) to be normal. We have the following commutative diagrams: where \(h:=f|_{B\Gamma^{\prime}}\). Since the map \(h:B\Gamma^{\prime}\rightarrow B\Lambda\) induces an epimorphism \(h_{*}:\Gamma^{\prime}\rightarrow\Lambda\), by Corollary 3.4 there is a fiber bundle of nilmanifolds \(B\Gamma^{\prime}\to B\Lambda\) with a compact fiber. By Lemma 2.5, \(\operatorname{cd}(h_{*})=n\). We claim that \(\operatorname{cd}(\phi)=n\) since \(\operatorname{cd}(h_{*})=n\). Let us pick an element \(a\in H^{n}(B\Lambda;\mathbb{Z}\Lambda)\) with \(h^{*}(a)\neq 0\). Suppose, to the contrary, that \(\operatorname{cd}(\phi)<n\); then \(f^{*}(a)=0.\) This is a contradiction, since \[0=i^{*}(f^{*}(a))=h^{*}(a)\neq 0.\] Thus, we obtain \(\operatorname{cd}(\phi)=n\). _Step 2._ Let \(B\Lambda\) be a pure infranilmanifold. In view of Lemma 4.1, \(B\Gamma\) is also a pure infranilmanifold. Since \(\Lambda\) is a virtually nilpotent group, there exists a nilpotent subgroup \(\Lambda^{\prime}\) such that the index \(|\Lambda:\Lambda^{\prime}|\) is finite. Therefore, the induced and the co-induced modules coincide, i.e., \[Ind^{\Lambda}_{\Lambda^{\prime}}M=Coind^{\Lambda}_{\Lambda^{\prime}}M\] where \(M\) is a \(\mathbb{Z}\Lambda^{\prime}\)-module. By the Shapiro Lemma [Br][Proposition 6.2, p. 73], we have the following isomorphisms \[H^{*}(B\Lambda^{\prime};\mathbb{Z}\Lambda^{\prime})\cong H^{*}(B\Lambda;Coind^{\Lambda}_{\Lambda^{\prime}}\mathbb{Z}\Lambda^{\prime})=H^{*}(B\Lambda;Ind^{\Lambda}_{\Lambda^{\prime}}\mathbb{Z}\Lambda^{\prime})=H^{*}(B\Lambda;\mathbb{Z}\Lambda),\] since \(Ind^{\Lambda}_{\Lambda^{\prime}}\mathbb{Z}\Lambda^{\prime}=\mathbb{Z}\Lambda\otimes_{\mathbb{Z}\Lambda^{\prime}}\mathbb{Z}\Lambda^{\prime}\cong\mathbb{Z}\Lambda\). Let \[\alpha:\operatorname{Coind}^{\Lambda}_{\Lambda^{\prime}}\mathbb{Z}\Lambda^{\prime}=Hom_{\Lambda^{\prime}}(\mathbb{Z}\Lambda,\mathbb{Z}\Lambda^{\prime})\rightarrow\mathbb{Z}\Lambda^{\prime}\] denote the canonical \(\mathbb{Z}\Lambda^{\prime}\)-homomorphism, defined for \(g:\mathbb{Z}\Lambda\rightarrow\mathbb{Z}\Lambda^{\prime}\) as \(\alpha(g)=g(1)\). We define the coefficient homomorphism \(\beta\) as the composition of coefficient homomorphisms \[\beta:\mathbb{Z}\Lambda\cong\mathbb{Z}\Lambda\otimes_{\mathbb{Z}\Lambda^{\prime}}\mathbb{Z}\Lambda^{\prime}\cong Hom_{\mathbb{Z}\Lambda^{\prime}}(\mathbb{Z}\Lambda,\mathbb{Z}\Lambda^{\prime})\stackrel{{\alpha}}{{\to}}\mathbb{Z}\Lambda^{\prime}.\] One can define \(\beta\) explicitly: given \(\gamma\in\Lambda\), \(\beta(\gamma)=\gamma\) if \(\gamma\in\Lambda^{\prime}\) and zero otherwise. We get the following commutative diagram, where \(f^{*}(B\Lambda^{\prime})\) is the pull-back of the maps \(f:B\Gamma\to B\Lambda\) and \(i:B\Lambda^{\prime}\to B\Lambda\). Since the map \(f:B\Gamma\to B\Lambda\) induces an epimorphism of the fundamental groups of the manifolds, the pull-back space \(f^{*}(B\Lambda^{\prime})\) is path-connected. Furthermore, \(f^{*}(B\Lambda^{\prime})\) is an infranilmanifold since \(B\Lambda^{\prime}\to B\Lambda\) is a regular covering. Hence, we get that \[f^{\prime*}:H^{n}(B\Lambda^{\prime};\mathbb{Z}\Lambda^{\prime})\to H^{n}(f^{*}(B\Lambda^{\prime});\mathbb{Z}\Lambda^{\prime})\] is a nonzero homomorphism by _Step 1_, so \(\operatorname{cd}(f^{\prime})=n\).
We claim \[f^{*}:H^{n}(B\Lambda;\mathbb{Z}\Lambda)\to H^{n}(B\Gamma;\mathbb{Z}\Lambda)\] is a nonzero homomorphism. Suppose the contrary, and let us pick an element \(b\in H^{n}(B\Lambda^{\prime};\mathbb{Z}\Lambda^{\prime})\) with \(f^{\prime*}(b)\neq 0.\) Since \[\beta_{*}j^{*}:H^{*}(B\Lambda;\mathbb{Z}\Lambda)\to H^{*}(B\Lambda^{\prime};\mathbb{Z}\Lambda^{\prime})\] is an isomorphism by the Shapiro Lemma, there exists a nonzero element \(a\in H^{n}(B\Lambda;\mathbb{Z}\Lambda)\) such that \(\beta_{*}j^{*}(a)=b.\) Then \(f^{\prime*}j^{*}(a)=j^{*}f^{*}(a)=j^{*}(0)=0\) since \(\operatorname{cd}(f)<n.\) This is a contradiction, as we have the commutative diagram \[0\neq f^{\prime*}(b)=f^{\prime*}\beta_{*}j^{*}(a)=\beta_{*}f^{\prime*}j^{*}(a)=\beta_{*}j^{*}f^{*}(a)=\beta_{*}j^{*}(0)=0.\] Hence, we have proved the claim, which implies \(\operatorname{cd}(\phi)=n\). ## 5. Acknowledgement The author thanks his advisor Alexander Dranishnikov for extremely helpful discussions and his support.
2302.00670
Stable Target Field for Reduced Variance Score Estimation in Diffusion Models
Diffusion models generate samples by reversing a fixed forward diffusion process. Despite already providing impressive empirical results, these diffusion models algorithms can be further improved by reducing the variance of the training targets in their denoising score-matching objective. We argue that the source of such variance lies in the handling of intermediate noise-variance scales, where multiple modes in the data affect the direction of reverse paths. We propose to remedy the problem by incorporating a reference batch which we use to calculate weighted conditional scores as more stable training targets. We show that the procedure indeed helps in the challenging intermediate regime by reducing (the trace of) the covariance of training targets. The new stable targets can be seen as trading bias for reduced variance, where the bias vanishes with increasing reference batch size. Empirically, we show that the new objective improves the image quality, stability, and training speed of various popular diffusion models across datasets with both general ODE and SDE solvers. When used in combination with EDM, our method yields a current SOTA FID of 1.90 with 35 network evaluations on the unconditional CIFAR-10 generation task. The code is available at https://github.com/Newbeeer/stf
Yilun Xu, Shangyuan Tong, Tommi Jaakkola
2023-02-01T18:57:01Z
http://arxiv.org/abs/2302.00670v2
# Stable target field for reduced variance score estimation in diffusion models ###### Abstract Diffusion models generate samples by reversing a fixed forward diffusion process. Despite already providing impressive empirical results, these diffusion models algorithms can be further improved by reducing the variance of the training targets in their denoising score-matching objective. We argue that the source of such variance lies in the handling of intermediate noise-variance scales, where multiple modes in the data affect the direction of reverse paths. We propose to remedy the problem by incorporating a reference batch which we use to calculate weighted conditional scores as more stable training targets. We show that the procedure indeed helps in the challenging intermediate regime by reducing (the trace of) the covariance of training targets. The new stable targets can be seen as trading bias for reduced variance, where the bias vanishes with increasing reference batch size. Empirically, we show that the new objective improves the image quality, stability, and training speed of various popular diffusion models across datasets with both general ODE and SDE solvers. When used in combination with EDM (Karras et al., 2022), our method yields a current SOTA FID of 1.90 with 35 network evaluations on the unconditional CIFAR-10 generation task. The code is available at [https://github.com/Newbeer/stf](https://github.com/Newbeer/stf) ## 1 Introduction Diffusion models (Sohl-Dickstein et al., 2015; Song and Ermon, 2019; Ho et al., 2020) have recently achieved impressive results on a wide spectrum of generative tasks, such as image generation (Nichol et al., 2022; Song et al., 2021), 3D point cloud generation (Luo and Hu, 2021) and molecular conformer generation (Shi et al., 2021; Xu et al., 2022). These models can be subsumed under a unified framework in the form of Ito stochastic differential equations (SDE) (Song et al., 2021). The models learn time-dependent score fields via score-matching (Hyvarinen and Dayan, 2005), which then guides the reverse SDE during generative sampling. Popular instances of diffusion models include variance-exploding (VE) and variance-preserving (VP) SDE (Song et al., 2021). Building on these formulations, EDM (Karras et al., 2022) provides the best performance to date. We argue that, despite achieving impressive empirical results, the current training scheme of diffusion models can be further improved. In particular, the variance of training targets in the denoising score-matching (**DSM**) objective can be large and lead to suboptimal performance. To better understand the origin of this instability, we decompose the score field into three regimes. Our analysis shows that the phenomenon arises primarily in the intermediate regime, which is characterized by multiple modes or data points exerting comparable influences on the scores. In other words, in this regime, the sources of the noisy examples generated in the course of the forward process become ambiguous. We illustrate the problem in Figure 1(a), where each stochastic update of the score model is based on disparate targets. We propose a generalized version of the denoising score-matching objective, termed the _Stable Target Field_ (**STF**) objective. The idea is to include an additional _reference batch_ of examples that are used to calculate weighted conditional scores as targets. We apply self-normalized importance sampling to aggregate the contribution of each example in the reference batch. 
Although this process can substantially reduce the variance of training targets (Figure 1(b)), especially in the intermediate regime, it does introduce some bias. However, we show that the bias together with the trace-of-covariance of the STF training targets shrinks to zero as we increase the size of the reference batch. Experimentally, we show that our STF objective achieves new state-of-the-art performance on CIFAR-10 unconditional generation when incorporated into EDM (Karras et al., 2022). The resulting FID score (Heusel et al., 2017) is \(1.90\) with \(35\) network evaluations. STF also improves the FID/Inception scores for other variants of score-based models, _i.e._, VE and VP SDEs (Song et al., 2021), in most cases. In addition, it enhances the stability of converged score-based models on CIFAR-10 and CelebA \(64^{2}\) across random seeds, and helps avoid generating noisy images in VE. STF accelerates the training of score-based models (\(3.6\times\) speed-up for VE on CIFAR-10) while obtaining comparable or better FID scores. To the best of our knowledge, STF is the first technique to accelerate the training process of diffusion models. We further demonstrate the performance gain with increasing reference batch size, highlighting the negative effect of large variance. Our contributions are summarized as follows: **(1)** We detail the instability of the current diffusion models training objective in a principled and quantitative manner, characterizing a region in the forward process, termed _the intermediate phase_, where the score-learning targets are most variable (Section 3). **(2)** We propose a generalized score-matching objective, _stable target field_, which provides more stable training targets (Section 4). **(3)** We analyze the behavior of the new objective and prove that it is asymptotically unbiased and reduces the trace-of-covariance of the training targets by a factor pertaining to the reference batch size in the intermediate phase under mild conditions (Section 5). **(4)** We illustrate the theoretical arguments empirically and show that the proposed STF objective improves the performance, stability, and training speed of score-based methods. In particular, it achieves the current state-of-the-art FID score on the CIFAR-10 benchmark when combined with EDM (Section 6). ## 2 Background on diffusion models In diffusion models, the forward process1 is an SDE with no learned parameter, in the form of: Footnote 1: For simplicity, we focus on the version where the diffusion coefficient \(g(t)\) is independent of \(\mathbf{x}(t)\). \[\mathrm{d}\mathbf{x}=\mathbf{f}(\mathbf{x},t)\mathrm{d}t+g(t)\mathrm{d} \mathbf{w},\] where \(\mathbf{x}\in\mathbb{R}^{d}\) with \(\mathbf{x}(0)\sim p_{0}\) being the data distribution, \(t\in[0,1]\), \(\mathbf{f}\colon\mathbb{R}^{d}\times[0,1]\to\mathbb{R}^{d}\), \(g\colon[0,1]\to\mathbb{R}\), and \(\mathbf{w}\in\mathbb{R}^{d}\) is the standard Wiener process. It gradually transforms the data distribution to a known prior as time goes from 0 to 1. Sampling of diffusion models is done via a corresponding reverse-time SDE (Anderson, 1982): \[\mathrm{d}\mathbf{x}=\left[\mathbf{f}(\mathbf{x},t)-g(t)^{2}\nabla_{\mathbf{x }}\log p_{t}(\mathbf{x})\right]\mathrm{d}\bar{t}+g(t)\mathrm{d}\bar{\mathbf{w}},\] where \(\bar{\cdot}\) denotes time traveling backward from 1 to 0. Song et al. (2021) proposes a probability flow ODE that induces the same marginal distribution \(p_{t}(\mathbf{x})\) as the SDE: \(\mathrm{d}\mathbf{x}=\mathrm{d}\mathbf{x}\). 
Figure 1: Illustration of differences between the DSM objective and our proposed STF objective. The “destroyed” images (in blue box) are close to each other while their sources (in red box) are not. Although the true score in expectation is the weighted average of \(\mathbf{v}_{i}\), the individual training updates of the DSM objective have a high variance, which our STF objective reduces significantly by including a large reference batch (yellow box). \(\left[\mathbf{f}(\mathbf{x},t)-\frac{1}{2}g(t)^{2}\nabla_{\mathbf{x}}\log p_{t}( \mathbf{x})\right]\mathrm{d}\tilde{t}\). Both formulations progressively recover \(p_{0}\) from the prior \(p_{1}\). We estimate the score of the transformed data distribution at time \(t\), \(\nabla_{\mathbf{x}}\log p_{t}(\mathbf{x})\), via a neural network, \(\mathbf{s}_{\theta}(\mathbf{x},t)\). Specifically, the training objective is a weighted sum of the denoising score-matching (Vincent, 2011): \[\min_{\theta}\ \mathbb{E}_{t\sim q_{t}(t)}\lambda(t)\mathbb{E}_{\mathbf{x} \sim p_{0}}\mathbb{E}_{\mathbf{x}(t)\sim p_{t|0}(\cdot|\mathbf{x})}\left[\| \mathbf{s}_{\theta}(\mathbf{x}(t),t)-\nabla_{\mathbf{x}(t)}\log p_{t|0}( \mathbf{x}(t)|\mathbf{x})\|_{2}^{2}\right], \tag{1}\] where \(q_{t}\) is the distribution for time variable, _e.g._, \(\mathcal{U}[0,1]\) for VE/VP (Song et al., 2021b) and a log-normal distribution for EDM Karras et al. (2022), and \(\lambda(t)=\sigma_{t}^{2}\) is the positive weighting function to keep the time-dependent loss at the same magnitude (Song et al., 2021b), and \(p_{t|0}(\mathbf{x}(t)|\mathbf{x})\) is the transition kernel denoting the conditional distribution of \(\mathbf{x}(t)\) given \(\mathbf{x}\)2. Specifically, diffusion models "destroy" data according to a diffusion process utilizing Gaussian transition kernels, which result in \(p_{t|0}(\mathbf{x}(t)|\mathbf{x})=\mathcal{N}(\mathbf{\mu}_{t},\sigma_{t}^{2} \mathbf{I})\). Recent works (Xu et al., 2022b; Rissanen et al., 2022) have also extended the underlying principle from the diffusion process to more general physical processes where the training objective is not necessarily score-related. Footnote 2: We omit “\((0)\)” from \(\mathbf{x}(0)\) when there is no ambiguity. ## 3 Understanding the training target in score-matching objective The vanilla denoising score-matching objective at time \(t\) is: \[\ell_{\text{DSM}}(\theta,t)=\mathbb{E}_{p_{0}(\mathbf{x})}\mathbb{E}_{p_{t|0} (\mathbf{x}(t)|\mathbf{x})}[\|\mathbf{s}_{\theta}(\mathbf{x}(t),t)-\nabla_{ \mathbf{x}(t)}\log p_{t|0}(\mathbf{x}(t)|\mathbf{x})\|_{2}^{2}], \tag{2}\] where the network is trained to fit the individual targets \(\nabla_{\mathbf{x}(t)}\log p_{t|0}(\mathbf{x}(t)|\mathbf{x})\) at \((\mathbf{x}(t),t)\) - the "influence" exerted by clean data \(\mathbf{x}\) on \(\mathbf{x}(t)\). We can swap the order of the sampling process by first sampling \(\mathbf{x}(t)\) from \(p_{t}\) and then \(\mathbf{x}\) from \(p_{0|t}(\cdot|\mathbf{x}(t))\). Thus, \(\mathbf{s}_{\theta}\) has a closed form minimizer: \[\mathbf{s}_{\mathbf{x}}^{\text{s}}(\mathbf{x}(t),t)=\mathbb{E}_{p_{0|t}( \mathbf{x}|\mathbf{x}(t))}[\nabla_{\mathbf{x}(t)}\log p_{t|0}(\mathbf{x}(t)| \mathbf{x})]=\nabla_{\mathbf{x}(t)}\log p_{t}(\mathbf{x}(t)). \tag{3}\] The score field is a conditional expectation of \(\nabla_{\mathbf{x}(t)}\log p_{t|0}(\mathbf{x}(t)|\mathbf{x})\) with respect to the posterior distribution \(p_{0|t}\). In practice, a Monte Carlo estimate of this target can have high variance (Owen, 2013; Elvira and Martino, 2021). 
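For concreteness, the per-sample target in Equation 2 takes a particularly simple form for Gaussian transition kernels \(p_{t|0}(\mathbf{x}(t)|\mathbf{x})=\mathcal{N}(\mathbf{x},\sigma_{t}^{2}\mathbf{I})\), namely \((\mathbf{x}-\mathbf{x}(t))/\sigma_{t}^{2}\). The following minimal PyTorch sketch spells out the resulting DSM loss; the VE-style schedule, the placeholder `score_net`, and the flat-vector data layout are illustrative assumptions, not the released implementation.

```python
import torch

def sigma(t, sigma_min=0.01, sigma_max=50.0):
    # assumed VE-style schedule sigma_t = sigma_min * (sigma_max / sigma_min)**t
    return sigma_min * (sigma_max / sigma_min) ** t

def dsm_loss(score_net, x0):
    """Vanilla denoising score matching (Eqs. 1-2) with a Gaussian transition kernel."""
    t = torch.rand(x0.shape[0], device=x0.device)            # t ~ U[0, 1]
    s = sigma(t).view(-1, *([1] * (x0.dim() - 1)))           # broadcastable sigma_t
    xt = x0 + s * torch.randn_like(x0)                       # x(t) ~ N(x0, sigma_t^2 I)
    target = (x0 - xt) / s**2                                 # grad_{x(t)} log p_{t|0}(x(t) | x0)
    per_example = ((score_net(xt, t) - target) ** 2).flatten(1).sum(dim=1)
    weight = sigma(t) ** 2                                    # lambda(t) = sigma_t^2
    return (weight * per_example).mean()
```

Each stochastic update therefore regresses \(\mathbf{s}_{\theta}\) onto the target produced by a single source \(\mathbf{x}\), which is exactly the quantity whose variability we quantify next.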
In particular, when multiple modes of the data distribution have comparable influences on \(\mathbf{x}(t)\), \(p_{0|t}(\cdot|\mathbf{x}(t))\) is a multi-mode distribution, as also observed in Xiao et al. (2022). Thus the targets \(\nabla_{\mathbf{x}(t)}\log p_{t|0}(\mathbf{x}(t)|\mathbf{x})\) vary considerably across different \(\mathbf{x}\) and this can strongly affect the estimated score at \((\mathbf{x}(t),t)\), resulting in slower convergence and worse performance in practical stochastic gradient optimization (Wang et al., 2013). To quantitatively characterize the variations of individual targets at different time, we propose a metric - the average trace-of-covariance of training targets at time \(t\): \[V_{\text{DSM}}(t) =\mathbb{E}_{p_{t}(\mathbf{x}(t))}\left[\mathrm{Tr}(\mathrm{Cov}_ {p_{0|t}(\mathbf{x}|\mathbf{x}(t))}(\nabla_{\mathbf{x}(t)}\log p_{t|0}( \mathbf{x}(t)|\mathbf{x})))\right]\] \[=\mathbb{E}_{p_{t}(\mathbf{x}(t))}\mathbb{E}_{p_{0|t}(\mathbf{x }|\mathbf{x}(t))}\left[\|\nabla_{\mathbf{x}(t)}\log p_{t|0}(\mathbf{x}(t)| \mathbf{x}))-\nabla_{\mathbf{x}(t)}\log p_{t}(\mathbf{x}(t))\|_{2}^{2}\right]. \tag{4}\] We use \(V_{\text{DSM}}(t)\) to define three successive phases relating to the behavior of training targets. As shown in Figure 2(a), the three phases partition the score field into near, intermediate, and far regimes (Phase 1\(\sim\)3 respectively). Intuitively, \(V_{\text{DSM}}(t)\) peaks in the intermediate phase (Phase 2), where multiple distant modes in the data distribution have comparable influences on the same noisy perturbations, resulting in unstable targets. In Phase 1, the posterior \(p_{0|t}\) concentrates around one single mode, thus low variation. In Phase 3, the targets remain similar across modes since \(\lim_{t\to 1}p_{t|0}(\mathbf{x}(t)|\mathbf{x})\approx p_{1}\) for commonly used transition kernels. We validate this argument empirically in Figure 2(b), which shows the estimated \(V_{\text{DSM}}(t)\) for a mixture of two Gaussians as well as a subset of CIFAR-10 dataset (Krizhevsky et al., 2009) for a more Figure 2: **(a)**: Illustration of the three phases in a two-mode distribution. **(b)**: Estimated \(V_{\text{DSM}}(t)\) for two distributions. We normalize the maximum value to 1 for illustration purposes. realistic setting. Here we use VE SDE, _i.e_., \(p_{t|0}(\mathbf{x}(t)|\mathbf{x})=\mathcal{N}\left(\mathbf{x},\sigma_{m}^{2}( \frac{\sigma_{M}}{\sigma_{m}})^{2t}\mathbf{I}\right)\) for some \(\sigma_{m}\) and \(\sigma_{M}\)(Song et al., 2021b). \(V_{\text{DSM}}(t)\) exhibits similar phase behavior across \(t\) in both toy and realistic cases. Moreover, \(V_{\text{DSM}}(t)\) reaches its maximum value in the intermediate phase, demonstrating the large variations of individual targets. We defer more details to Appendix C. ## 4 Treating score as a field The vanilla denoising score-matching approach (Equation 3) can be viewed as a Monte Carlo estimator, _i.e_., \(\nabla_{\mathbf{x}(t)}\log p_{t}(\mathbf{x}(t))=\mathbb{E}_{p_{0|t}(\mathbf{x} |\mathbf{x}(t))}[\nabla_{\mathbf{x}(t)}\log p_{t|0}(\mathbf{x}(t)|\mathbf{x})] \approx\frac{1}{n}\sum_{i=1}^{n}\nabla_{\mathbf{x}(t)}\log p_{t|0}(\mathbf{x} (t)|\mathbf{x}_{i})\) where \(\mathbf{x}_{i}\) is sampled from \(p_{0|t}(\cdot|\mathbf{x}(t))\) and \(n=1\). The variance of a Monte Carlo estimator is proportional to \(\frac{1}{n}\), so we propose to use a larger batch (\(n\)) to counter the high variance problem described in Section 3. 
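When \(p_{0}\) is an empirical distribution over a finite dataset, both the posterior \(p_{0|t}(\cdot|\mathbf{x}(t))\) and the true score are available in closed form, so \(V_{\text{DSM}}(t)\) in Equation 4 can be estimated directly. The NumPy sketch below does this for a small synthetic point cloud; the data, the VE-style schedule and the sample sizes are illustrative stand-ins for the experiments behind Figure 2, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)
data = 10 * rng.standard_normal((64, 2))                     # toy empirical p0: a few well-separated points
sigma = lambda t, lo=0.01, hi=50.0: lo * (hi / lo) ** t      # assumed VE-style schedule

def v_dsm(t, n_samples=2000):
    """Monte Carlo estimate of V_DSM(t) (Eq. 4) for an empirical data distribution."""
    s = sigma(t)
    x0 = data[rng.integers(len(data), size=n_samples)]
    xt = x0 + s * rng.normal(size=x0.shape)                  # forward perturbation, so (x0, x(t)) ~ joint
    d2 = ((xt[:, None, :] - data[None, :, :]) ** 2).sum(-1)
    w = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / (2 * s**2))
    w /= w.sum(axis=1, keepdims=True)                        # exact posterior p_{0|t}(x_i | x(t))
    score = (w[..., None] * (data[None] - xt[:, None]) / s**2).sum(1)   # true score of p_t
    target = (x0 - xt) / s**2                                # single-sample DSM target
    return ((target - score) ** 2).sum(-1).mean()

for t in np.linspace(0.1, 1.0, 10):
    print(round(float(t), 2), v_dsm(float(t)))
```

The estimate vanishes at both ends of the schedule and is largest where \(\sigma_{t}\) is comparable to typical inter-point distances, i.e., where several sources exert comparable influence; such an exact evaluation is only possible here because \(p_{0}\) is a small finite set.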
Since sampling directly from the posterior \(p_{0|t}\) is not practical, we first apply importance sampling with the proposal distribution \(p_{0}\). Specifically, we sample a large reference batch \(\mathcal{B}_{L}=\{\mathbf{x}_{i}\}_{i=1}^{n}\sim p_{0}^{n}\) and get the following approximation: \[\nabla_{\mathbf{x}(t)}\log p_{t}(\mathbf{x}(t))\approx\frac{1}{n}\sum_{i=1}^{ n}\frac{p_{0|t}(\mathbf{x}_{i}|\mathbf{x}(t))}{p_{0}(\mathbf{x}_{i})}\nabla_{ \mathbf{x}(t)}\log p_{t|0}(\mathbf{x}(t)|\mathbf{x}_{i}).\] The importance weights can be rewritten as \(p_{0|t}(\mathbf{x}|\mathbf{x}(t))/p_{0}(\mathbf{x})=p_{t|0}(\mathbf{x}(t)| \mathbf{x})/p_{t}(\mathbf{x}(t))\). However, this basic importance sampling estimator has two issues. The weights now involve an unknown normalization factor \(p_{t}(\mathbf{x}(t))\) and the ratio between the prior and posterior distribution can be large in high dimensional spaces. To remedy these problems, we appeal to self-normalization techniques (Hesterberg, 1995) to further stabilize the training targets: \[\nabla_{\mathbf{x}(t)}\log p_{t}(\mathbf{x}(t))\approx\sum_{i=1}^{n}\frac{p_ {t|0}(\mathbf{x}(t)|\mathbf{x}_{i})}{\sum_{j=1}^{n}p_{t|0}(\mathbf{x}(t)| \mathbf{x}_{j})}\nabla_{\mathbf{x}(t)}\log p_{t|0}(\mathbf{x}(t)|\mathbf{x}_{ i}). \tag{5}\] We term this new training target in Equation 5 as _Stable Target Field (STF)_. In practice, we sample the reference batch \(\mathcal{B}_{L}=\{\mathbf{x}_{i}\}_{i=1}^{n}\) from \(p_{0}^{n}\) and obtain \(\mathbf{x}(t)\) by applying the transition kernel to the "first" training data \(\mathbf{x}_{1}\). Taken together, the new STF objective becomes: \[\ell_{\text{STF}}(\theta,t)=\mathbb{E}_{\{\mathbf{x}_{i}\}_{i=1} ^{n}\sim p_{0}^{n}}\mathbb{E}_{\mathbf{x}(t)\sim p_{t|0}(\cdot|\mathbf{x}_{i})} \\ \left[\left\|\mathbf{s}_{\theta}(\mathbf{x}(t),t)-\sum_{k=1}^{n} \frac{p_{t|0}(\mathbf{x}(t)|\mathbf{x}_{k})}{\sum_{j=1}^{n}p_{t|0}(\mathbf{x}( t)|\mathbf{x}_{j})}\nabla_{\mathbf{x}(t)}\log p_{t|0}(\mathbf{x}(t)|\mathbf{x}_{k}) \right\|_{2}^{2}\right]. \tag{6}\] When \(n=1\), STF reduces to the vanilla denoising score-matching (Equation 2). When \(n>1\), STF incorporates a reference batch to stabilize training targets. Intuitively, the new weighted target assigns larger weights to clean data with higher influence on \(\mathbf{x}(t)\), _i.e_., higher transition probability \(p_{t|0}(\mathbf{x}(t)|\mathbf{x})\). Similar to our analysis in Section 3, we can again swap the sampling process in Equation 6 so that, for a perturbation \(\mathbf{x}(t)\), we sample the reference batch \(\mathcal{B}_{L}=\{\mathbf{x}_{i}\}_{i=1}^{n}\) from \(p_{0|t}(\cdot|\mathbf{x}(t))p_{0}^{n-1}\), where the first element involves the posterior, and the rest follow the data distribution. Thus, the minimizer of the new objective (Equation 6) is (derivation can be found in Appendix B.1) \[\mathbf{s}_{\text{STF}}^{\mathbf{x}}(\mathbf{x}(t),t)=\mathbb{E}_{\mathbf{x}_{1 }\sim p_{0|t}(\cdot|\mathbf{x}(t))}\mathbb{E}_{\{\mathbf{x}_{i}\}_{i=2}^{n} \sim p_{0}^{n-1}}\left[\sum_{k=1}^{n}\frac{p_{t|0}(\mathbf{x}(t)|\mathbf{x}_{k} )}{\sum_{j}p_{t|0}(\mathbf{x}(t)|\mathbf{x}_{j})}\nabla_{\mathbf{x}(t)}\log p _{t|0}(\mathbf{x}(t)|\mathbf{x}_{k})\right]. \tag{7}\] Note that although STF significantly reduces the variance, it introduces bias: the minimizer is no longer the true score. 
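In the Gaussian-kernel case the self-normalised weights in Equation 5 reduce to a softmax over squared distances, so the STF target is a drop-in replacement for the single-sample DSM target. The PyTorch sketch below shows one such update; the reference-batch handling, the VE-style schedule and `score_net` are illustrative assumptions rather than the released code (available at the repository linked in the abstract).

```python
import torch

def sigma(t, sigma_min=0.01, sigma_max=50.0):
    return sigma_min * (sigma_max / sigma_min) ** t          # assumed VE-style schedule

def stf_loss(score_net, ref_batch, small_batch=32):
    """STF objective (Eqs. 5-6) for a Gaussian kernel; ref_batch holds n flattened examples."""
    x0 = ref_batch[:small_batch]                             # examples actually perturbed (each plays x_1)
    t = torch.rand(x0.shape[0], device=ref_batch.device)
    s = sigma(t).view(-1, 1)
    xt = x0 + s * torch.randn_like(x0)                       # x(t) ~ p_{t|0}(. | x_1)
    d2 = torch.cdist(xt, ref_batch) ** 2                     # squared distances to the whole reference batch
    w = torch.softmax(-d2 / (2 * s**2), dim=1)               # self-normalised importance weights
    target = (w.unsqueeze(-1) * (ref_batch.unsqueeze(0) - xt.unsqueeze(1))).sum(1) / s**2
    weight = sigma(t) ** 2                                    # lambda(t) = sigma_t^2
    return (weight * ((score_net(xt, t) - target) ** 2).sum(dim=1)).mean()
```

With \(n=1\) the softmax weight collapses to one and the single-sample DSM target is recovered; for \(n>1\) the weighted target is no longer an unbiased estimate of the score, since the weights are normalised within the batch.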
Nevertheless, in Section 5, we show that the bias converges to \(0\) as \(n\rightarrow\infty\), while reducing the trace-of-covariance of the training targets by a factor of \(n\) when \(p_{0|t}\approx p_{0}\). We further instantiate the STF objective (Equation 6) with transition kernels in the form of \(p_{t|0}(\mathbf{x}(t)|\mathbf{x})=\mathcal{N}(\mathbf{x},\sigma_{t}^{2}\mathbf{I})\), which includes EDM (Karras et al., 2022), VP (through reparameterization) and VE (Song et al., 2021b): \[\mathbb{E}_{\mathbf{x}_{1}\sim p_{0|t}(\cdot|\mathbf{x}(t))}\mathbb{E}_{\{\mathbf{x}_{i}\}_{i=2}^{n}\sim p_{0}^{n-1}}\left[\left\|\mathbf{s}_{\theta}(\mathbf{x}(t),t)-\frac{1}{\sigma_{t}^{2}}\sum_{k=1}^{n}\frac{\exp\left(-\frac{\|\mathbf{x}(t)-\mathbf{x}_{k}\|_{2}^{2}}{2\sigma_{t}^{2}}\right)}{\sum_{j}\exp\left(-\frac{\|\mathbf{x}(t)-\mathbf{x}_{j}\|_{2}^{2}}{2\sigma_{t}^{2}}\right)}(\mathbf{x}_{k}-\mathbf{x}(t))\right\|_{2}^{2}\right].\] To aggregate the time-dependent STF objective over \(t\), we sample the time variable \(t\) from the training distribution \(q_{t}\) and apply the weighting function \(\lambda(t)\). Together, the final training objective for STF is \(\mathbb{E}_{t\sim q_{t}(t)}\left[\lambda(t)\ell_{\text{STF}}(\theta,t)\right]\). We summarize the training process in Algorithm 1. The small batch size \(|\mathcal{B}|\) is the same as the normal batch size in the vanilla training process. We defer specific use cases of STF objectives combined with various popular diffusion models to Appendix A.

```
Input: Training iterations \(T\), initial model \(\mathbf{s}_{\theta}\), dataset \(\mathcal{D}\), learning rate \(\eta\).
for \(t=1\dots T\) do
    Sample a large reference batch \(\mathcal{B}_{L}\) from \(\mathcal{D}\), and subsample a small batch \(\mathcal{B}=\{\mathbf{x}_{i}\}_{i=1}^{|\mathcal{B}|}\) from \(\mathcal{B}_{L}\)
    Sample the times \(\{t_{i}\}_{i=1}^{|\mathcal{B}|}\sim q_{t}(t)^{|\mathcal{B}|}\)
    Obtain the batch of perturbed samples \(\{\mathbf{x}_{i}(t_{i})\}_{i=1}^{|\mathcal{B}|}\) by applying the transition kernel \(p_{t|0}\) on \(\mathcal{B}\)
    Calculate the stable target field of \(\mathcal{B}_{L}\) for all \(\mathbf{x}_{i}(t_{i})\): \(\mathbf{v}_{\mathcal{B}_{L}}(\mathbf{x}_{i}(t_{i}))=\sum_{\mathbf{x}\in\mathcal{B}_{L}}\frac{p_{t|0}(\mathbf{x}_{i}(t_{i})|\mathbf{x})}{\sum_{\mathbf{y}\in\mathcal{B}_{L}}p_{t|0}(\mathbf{x}_{i}(t_{i})|\mathbf{y})}\nabla_{\mathbf{x}_{i}(t_{i})}\log p_{t|0}(\mathbf{x}_{i}(t_{i})|\mathbf{x})\)
    Calculate the loss: \(\mathcal{L}(\theta)=\frac{1}{|\mathcal{B}|}\sum_{i=1}^{|\mathcal{B}|}\lambda(t_{i})\|\mathbf{s}_{\theta}(\mathbf{x}_{i}(t_{i}),t_{i})-\mathbf{v}_{\mathcal{B}_{L}}(\mathbf{x}_{i}(t_{i}))\|_{2}^{2}\)
    Update the model parameters: \(\theta=\theta-\eta\nabla\mathcal{L}(\theta)\)
end for
return \(\mathbf{s}_{\theta}\)
```
**Algorithm 1** Learning the stable target field

## 5 Analysis

In this section, we analyze the theoretical properties of our approach. In particular, we show that the new minimizer \(\mathbf{s}_{\text{STF}}^{*}(\mathbf{x}(t),t)\) (Equation 7) converges to the true score asymptotically (Section 5.1). Then, we show that the proposed STF reduces the trace-of-covariance of training targets by a factor proportional to the reference batch size in the intermediate phase, under mild conditions (Section 5.2).

### Asymptotic behavior

Although in general \(\mathbf{s}_{\text{STF}}^{*}(\mathbf{x}(t),t)\neq\nabla_{\mathbf{x}(t)}\log p_{t}(\mathbf{x}(t))\), the bias shrinks toward \(0\) with increasing \(n\). 
In the following theorem we show that the minimizer of STF objective at \((\mathbf{x}(t),t)\), _i.e._, \(\mathbf{s}_{\text{STF}}^{*}(\mathbf{x}(t),t)\), is asymptotically normal when \(n\to\infty\). **Theorem 1**.: _Suppose \(\forall t\in[0,1],0<\sigma_{t}<\infty\), then_ \[\sqrt{n}\left(\mathbf{s}_{\text{STF}}^{*}(\mathbf{x}(t),t)-\nabla_{\mathbf{x} (t)}\log p_{t}(\mathbf{x}(t))\right)\xrightarrow{d}\mathcal{N}\left(\mathbf{0 },\frac{\text{Cov}(\nabla_{\mathbf{x}(t)}p_{t|0}(\mathbf{x}(t)|\mathbf{x}))}{p _{t}(\mathbf{x}(t))^{2}}\right) \tag{8}\] We defer the proof to Appendix B.2. The theorem states that, for commonly used transition kernels, \(\mathbf{s}_{\text{STF}}^{*}(\mathbf{x}(t),t)-\nabla_{\mathbf{x}(t)}\log p_{t} (\mathbf{x}(t))\) converges to a zero mean normal, and larger reference batch size \((n)\) will lead to smaller asymptotic variance. As can be seen in Equation 8, when \(n\to\infty\), \(\mathbf{s}_{\text{STF}}^{*}(\mathbf{x}(t),t)\) highly concentrates around the true score \(\nabla_{\mathbf{x}(t)}\log p_{t}(\mathbf{x}(t))\). ### Trace of Covariance We now highlight the small variations of the training targets in the STF objective compared to the DSM. As done in Section 3, we study the trace-of-covariance of training targets in STF: \[V_{\text{STF}}(t)=\mathbb{E}_{p_{t}(\mathbf{x}(t))}\left[\text{Tr}\left(\text{ Cov}_{p_{0|t}(\cdot|\mathbf{x}(t))p_{0}^{n-1}}\left(\sum_{k=1}^{n}\frac{p_{t|0}( \mathbf{x}(t)|\mathbf{x}_{k})}{\sum_{j}p_{t|0}(\mathbf{x}(t)|\mathbf{x}_{j}) }\nabla_{\mathbf{x}(t)}\log p_{t|0}(\mathbf{x}(t)|\mathbf{x}_{k})\right)\right) \right].\] In the following theorem we compare \(V_{\text{STF}}\) with \(V_{\text{DSM}}\). In particular, we can upper bound \(V_{\text{STF}}(t)\) by **Theorem 2**.: _Suppose \(\forall t\in[0,1],0<\sigma_{t}<\infty\), then_ \[V_{\text{STF}}(t)\leq\frac{1}{n-1}\left(V_{\text{DSM}}(t)+\frac{\sqrt{3}d}{ \sigma_{t}^{2}}\sqrt{\mathbb{E}_{p_{t}(\mathbf{x}(t))}D_{f}\left(p_{0}(\mathbf{ x})\parallel p_{0|t}(\mathbf{x}|\mathbf{x}(t))\right)}\right)+O\left(\frac{1}{n^{2}} \right),\] _where \(D_{f}\) is an f-divergence with \(f(y)=\begin{cases}(1/y-1)^{2}&(y<1.5)\\ 8y/27-1/3&(y\geq 1.5)\end{cases}\). Further, when \(n\gg d\) and \(p_{0|t}(\mathbf{x}|\mathbf{x}(t))\approx p_{0}(\mathbf{x})\) for all \(\mathbf{x}(t)\), \(V_{\text{STF}}(t)\leq\frac{V_{\text{DSM}}(t)}{n-1}\)._ We defer the proof to Appendix B.3. The second term that involves \(f\)-divergence \(D_{f}\) is necessary to capture how the coefficients, _i.e._, \(p_{t|0}(\mathbf{x}(t)|\mathbf{x}_{k})/\sum_{j}p_{t|0}(\mathbf{x}(t)|\mathbf{x} _{j})\) used to calculate the weighted score target, vary across different samples \(\mathbf{x}(t)\). This term decreases monotonically as a function of \(t\). In Phase 1, \(p_{0|t}(\mathbf{x}|\mathbf{x}(t))\) differs substantially from \(p_{0}(\mathbf{x})\) and the divergence term \(D_{f}\) dominates. In contrast to the upper bound, both \(V_{\text{STF}}(t)\) and \(V_{\text{DSM}}(t)\) have minimal variance at small values of \(t\) since the training target is always dominated by one \(\mathbf{x}\). The theorem has more relevance in Phase 2, where the divergence term decreases to a value comparable to \(V_{\text{DSM}}(t)\). In this phase, we empirically observe that the ratio of the two terms in the upper bound ranges from 10 to 100. Thus, when we use a large reference batch size (in thousands), the theorem implies that STF offers a considerably lower variance (by a factor of 10 or more) relative to the DSM objective. 
In Phase 3, the second term vanishes to 0, as \(p_{t}\approx p_{t|0}\) with large \(\sigma_{t}\) for commonly used transition kernels. As a result, STF reduces the average trace-of-covariance of the training targets by at least \(n-1\) times in the far field. Together, we demonstrate that the STF targets have diminishing bias (Theorem 1) and are much more stable during training (Theorem 2). These properties make the STF objective more favorable for diffusion models training with stochastic gradient optimization. ## 6 Experiments In this section, we first empirically validate our theoretical analysis in Section 5, especially for variance reduction in the intermediate phase (Section 6.1). Next, we show that the STF objective improves various diffusion models on image generation tasks in terms of image quality (Section 6.2). In particular, STF achieves state-of-the-art performance on top of EDM. In addition, we demonstrate that STF accelerates the training of diffusion models (Section 6.3), and improves the convergence speed and final performance with an increasing reference batch size (Section 6.3). ### Variance reduction in the intermediate phase The proposed Algorithm 1 utilizes a large reference batch to calculate the stable target field instead of the individual target. In addition to the theoretical analysis in Section 5, we provide further empirical study to characterize the intermediate phase and verify the variance reduction effects by STF. Apart from \(V(t)\), we also quantify the average divergence between the posterior \(p_{0|t}(\cdot|\mathbf{x}(t))\) and the data distribution \(p_{0}\) at time \(t\) (introduced in Theorem 2): \(D(t)=\mathbb{E}_{p_{t}(\mathbf{x}(t))}\left[D_{f}\left(p_{0|t}(\mathbf{x}| \mathbf{x}(t))\parallel p_{0}(\mathbf{x})\right)\right]\). Intuitively, the number of high-density modes in \(p_{0|t}(\cdot|\mathbf{x}(t))\) grows as \(D(t)\) decreases. To investigate their behaviors, we construct two synthetic datasets: (1) a 64-dimensional mixture of two Gaussian components (**Two Gaussians**), and (2) a subset of 1024 images of CIFAR-10 (**CIFAR-10-4096**). Figure 3(a) and Figure 3(b) show the behaviors of \(V_{\text{DSM}}(t)\) and \(D(t)\) on Two Gaussian and CIFAR-10-4096. In both settings, \(V_{\text{DSM}}(t)\) reaches its peak in the intermediate phase (Phase 2), while \(D(t)\) gradually decreases over time. These results agree with our theoretical understanding from Section 3. In Phase 2 and 3, several modes of the data distribution have noticeable influences on the scores, but only in Phase 2 are the influences much more distinct, leading to high variations of the individual target \(\nabla_{\mathbf{x}(t)}\log p_{t|0}(\mathbf{x}(t)|\mathbf{x}),\mathbf{x}\sim p _{0|t}(\cdot|\mathbf{x}(t))\). Figure 3: **(a, b)**: \(V_{\text{DSM}}(t)\) and \(D(t)\) versus \(t\). We normalize the maximum values to \(1\) for illustration purposes. **(c, d)**: \(V_{\text{STF}}(t)\) with a varying reference batch size \(n\). Figure 3(c) and Figure 3(d) further show the relationship between \(V_{\text{STF}}(t)\) and the reference batch size \(n\). Recall that when \(n=1\), STF degenerates to individual target and \(V_{\text{STF}}(t)=V_{\text{DSM}}(t)\). We observe that \(V_{\text{STF}}(t)\) decreases when enlarging \(n\). In particular, the predicted relation \(V_{\text{STF}}(t)\lessapprox V_{\text{DSM}}(t)/(n-1)\) in Theorem 2 holds for the two Gaussian datasets where \(D_{f}\) is small. 
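This prediction is easy to probe numerically when \(p_{0}\) is a small finite set, since the posterior over sources and the true score are then exact. The NumPy sketch below estimates the conditional trace-of-covariance and the bias of the STF target as functions of the reference batch size \(n\); the point cloud, the schedule and the chosen noise level are illustrative, so the printed numbers only indicate the qualitative trend of Figure 3(c, d) and Theorems 1-2.

```python
import numpy as np

rng = np.random.default_rng(0)
data = 10 * rng.standard_normal((64, 2))                     # toy empirical p0
sigma = lambda t, lo=0.01, hi=50.0: lo * (hi / lo) ** t      # assumed VE-style schedule

def weighted_target(xt, batch, s):
    """Self-normalised STF target (Eq. 5) for one noisy point and one reference batch."""
    d2 = ((xt - batch) ** 2).sum(-1)
    w = np.exp(-(d2 - d2.min()) / (2 * s**2)); w /= w.sum()
    return (w[:, None] * (batch - xt) / s**2).sum(0)

def trace_cov_and_bias(t, n, n_xt=100, m=200):
    s, tr, bias = sigma(t), 0.0, 0.0
    for _ in range(n_xt):
        xt = data[rng.integers(len(data))] + s * rng.normal(size=2)   # x(t) ~ p_t
        d2 = ((xt - data) ** 2).sum(-1)                               # exact posterior over the dataset
        post = np.exp(-(d2 - d2.min()) / (2 * s**2)); post /= post.sum()
        true_score = (post[:, None] * (data - xt) / s**2).sum(0)
        vs = np.stack([weighted_target(xt,
                                       np.vstack([data[rng.choice(len(data), p=post)][None],
                                                  data[rng.integers(len(data), size=n - 1)]]), s)
                       for _ in range(m)])
        tr += vs.var(axis=0).sum()                                    # trace of covariance of the targets
        bias += np.linalg.norm(vs.mean(axis=0) - true_score)
    return tr / n_xt, bias / n_xt

t = 0.75                                                     # a noise level in the intermediate regime
v_dsm, _ = trace_cov_and_bias(t, n=1)                        # n = 1 recovers the DSM target
for n in (2, 8, 32, 128):
    v_stf, b = trace_cov_and_bias(t, n)
    print(n, v_dsm / v_stf, b)                               # variance ratio grows with n, bias shrinks
```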
On the high dimensional dataset CIFAR-10-4096, the stable target field can still greatly reduce the training target variance with large reference batch sizes \(n\). ### Image generation We demonstrate the effectiveness of the new objective on image generation tasks. We consider CIFAR-10 (Krizhevsky et al., 2009) and CelebA \(64\times 64\)(Yang et al., 2015) datasets. We set the reference batch size \(n\) to \(4096\) (CIFAR-10) and \(1024\) (CelebA \(64^{2}\)). We choose the current state-of-the-art score-based method EDM (Karras et al., 2022) as the baseline, and replace the DSM objective with our STF objective during training. We also apply STF to two other popular diffusion models, VE/VP SDEs (Song et al., 2021). For a fair comparison, we directly adopt the architectures and the hyper-parameters in Karras et al. (2018) and Song et al. (2021) for EDM and VE/VP respectively. In particular, we use the improved NCSN++/DDPM++ models (Karras et al., 2022) in the EDM scheme. To highlight the stability issue, we train three models with different seeds for VE on CIFAR-10. We provide more experimental details in Appendix D.1. **Numerical Solver.** The reverse-time ODE and SDE in scored-based models are compatible with any general-purpose solvers. We use the adaptive solver RK45 method (Dormand and Prince, 1980; Song et al., 2021) (RK45) for VE/VP and the popular DDIMI solver (Song et al., 2021) for VP. We adopt Heun's 2nd order method (Heun) and the time discretization proposed by Karras et al. (2022) for EDM. For SDEs, we apply the predictor-corrector (PC) sampler used in (Song et al., 2021). We denote the methods in a objective-sampler format, _i.e._, **A-B**, where \(\textbf{A}\in\{\text{DSM, STF}\}\) and \(\textbf{B}\in\{\text{RK45, PC, DDIM, Heun}\}\). We defer more details to Appendix D.2. **Results.** For quantitative evaluation of the generated samples, we report the FID scores (Heusel et al., 2017) (lower is better) and Inception (Salimans et al., 2016) (higher is better). We measure the sampling speed by the average NFE (number of function evaluations). We also include the results of several popular generative models (Karras et al., 2020; Ho et al., 2020; Song and Ermon, 2019; Xu et al., 2022) for reference. Table 1 and Table 2 report the sample quality and the sampling speed on unconditional generation of CIFAR-10 and CelebA \(64^{2}\). 
Our main findings are: **(1) STF achieves new state-of-the-art FID scores for unconditional generation on CIFAR-10 benchmark.** As shown in Ta \begin{table} \begin{tabular}{l c c c} \hline \hline Methods & Inception \(\uparrow\) & FID \(\downarrow\) & NFE \(\downarrow\) \\ \hline StyleGAN2-ADA (Karras et al., 2020) & 9.83 & \(2.92\) & \(1\) \\ DDPM (Ho et al., 2020) & 9.46 & \(3.17\) & \(1000\) \\ NCSNv2 (Song and Ermon, 2020) & 8.40 & \(10.87\) & \(1161\) \\ PFGM (Xu et al., 2022) & 9.68 & \(2.48\) & \(104\) \\ \hline \hline **VE** (Song et al., 2021) & & & \\ \hline DSM - RK45 & 9.27 & 8.90 & \(264\) \\ STF (**ours**) - RK45 & 9.52 \(\uparrow\) & \(5.51\) \(\downarrow\) & \(200\) \\ DSM - PC & 9.68 & \(2.75\) & \(2000\) \\ STF (**ours**) - PC & 9.86 \(\uparrow\) & \(2.66\) \(\downarrow\) & \(2000\) \\ \hline \hline **VP** (Song et al., 2021) & & & \\ \hline DSM - DDIM & 9.20 & 5.16 & \(100\) \\ STF (**ours**) - DDIM & 9.28 \(\uparrow\) & \(5.06\) \(\downarrow\) & \(100\) \\ DSM - RK45 & 9.46 & \(2.90\) & \(140\) \\ STF (**ours**) - RK45 & 9.43 \(\downarrow\) & \(2.99\) \(\uparrow\) & \(140\) \\ \hline \hline **EDM** (Karras et al., 2022) & & & \\ \hline DSM - Heun, NCSN++ & 9.82 & \(1.98\) & \(35\) \\ STF (**ours**) - Heun, NCSN++ & **9.93 \(\uparrow\)** & **1.90 \(\downarrow\)** & \(35\) \\ DSM - Heun, DDPM++ & 9.78 & \(1.97\) & \(35\) \\ STF (**ours**) - Heun, DDPM++ & \(9.79\) \(\uparrow\) & \(1.92\) \(\downarrow\) & \(35\) \\ \hline \hline \end{tabular} \end{table} Table 1: CIFAR-10 sample quality (FID, Inception) and number of function evaluation (NFE). ble 1, The STF objective obtains a FID of \(1.90\) when incorporated with the EDM scheme. To the best of our knowledge, this is the lowest FID score on the unconditional CIFAR-10 generation task. In addition, the STF objective consistently improves the EDM across the two architectures. (**2) The STF objective improves the performance of different diffusion models.** We observe that the STF objective improves the FID/Inception scores of VE/VP/EDM on CIFAR-10, for most ODE and SDE samplers. STF consistently provides performance gains for VE across datasets. Remarkably, our objective achieves much better sample quality using ODE samplers for VE, with an FID score gain of \(3.39\) on CIFAR-10, and \(2.22\) on Celeba \(64^{2}\). For VP, STF provides better results on the popular DDIM sampler, while suffering from a slight performance drop when using the RK45 sampler. **(3) The STF objective stabilizes the converged VE model with the RK45 sampler.** In Appendix E.1, we report the standard deviations of performance metrics for converged models with different seeds on CIFAR-10 with VE. We observe that models trained with the STF objective give more consistent results, with a smaller standard deviation of used metrics. We further provide generated samples in Appendix F. One interesting observation is that when using the RK45 sampler for VE on CIFAR-10, the generated samples from the STF objective do not contain noisy images, unlike the vanilla DSM objective. ### Accelerating training of diffusion models The variance-reduction techniques in neural network training can help to find better optima and achieve faster convergence rate (Wang et al., 2013; Defazio et al., 2014; Johnson and Zhang, 2013). In Figure 4, we demonstrate the FID scores every 50k iterations during the course of training. 
Since our goal is to investigate relative performance during the training process, and because the FID scores computed on 1k samples are strongly correlated with the full FID scores on 50k sample (Song and Ermon, 2020), we report FID scores on 1k samples for faster evaluations. We apply ODE samplers for FID evaluation, and measure the training time on two NVIDIA A100 GPUs. For a fair comparison, we report the average FID scores of models trained by the DSM and STF objective on VE versus the wall-clock training time (h). The STF objective achieves better FID scores with the same training time, although the calculation of the target field by the reference batch introduces slight overhead (Algorithm 1). In Figure 4(a), we show that the STF objective drastically accelerates the training of diffusion models on CIFAR-10. The STF objective achieves comparable FID scores with \(3.6\times\) less training time (25h versus 90h). For Celeba \(64^{2}\) datasets, the training time improvement is less significant than on CIFAR-10. Our hypothesis is that the STF objective is more effective when there are multiple well-separated modes in data distribution, _e.g._, the ten classes in CIFAR-10, where the DSM objective suffer from relatively larger variations in the intermediate phase. In addition, the converged models have better final performance when pairing with the STF on both datasets. \begin{table} \begin{tabular}{c c c} \hline \hline Method/NFEs & FID \(\downarrow\) & NFE \(\downarrow\) \\ \hline _Celeba_\(64^{2}\)**-_RK5_** & \\ \hline VE (DSM) & \(7.56\) & \(260\) \\ VE (STF) & \(\mathbf{5.34}\) & \(266\) \\ \hline _Celeba_\(64^{2}\)**-_PC_** & \\ \hline VE (DSM) & \(9.13\) & \(2000\) \\ VE (STF) & \(\mathbf{8.28}\) & \(2000\) \\ \hline \hline \end{tabular} \end{table} Table 2: FID and NFE on Celeba \(64^{2}\) Figure 4: FID and generated samples throughout training on (a) CIFAR-10 and (b) Celeba \(64^{2}\). Figure 5: FID scores in the training course with varying reference batch size. ### Effects of the reference batch size According to our theory (Theorem 2), the upper bound of the trace-of-covariance of the STF target decreases proportionally to the reference batch size. Here we study the effects of the reference batch size \((n)\) on model performances during training. The FID scores are evaluated on \(1k\) samples using the RK45 sampler. As shown in Figure 5, models converge faster and produce better samples when increasing \(n\). It suggests that smaller variations of the training targets can indeed speed up training and improve the final performances of diffusion models. ## 7 Related work **Different phases of diffusion models.** The idea of diffusion models having different phases has been explored in prior works though the motivations and definitions vary (Karras et al., 2022; Choi et al., 2022). Karras et al. (2022) argues that the training targets are difficult and unnecessary to learn in the very near field (small \(t\) in our Phase 1), whereas the training targets are always dissimilar to the true targets in the intermediate and far field (our Phase 2 and Phase 3). As a result, their solution is sampling \(t\) with a log-normal distribution to emphasize the relevant region (relatively large \(t\) in our Phase 1). In contrast, we focus on reducing large training target variance in the intermediate and far field, and propose STF to better estimate the true target (cf. Karras et al. (2022)). Choi et al. 
(2022) identifies a key region where the model learns perceptually rich contents, and determines the training weights \(\lambda(t)\) based on the signal-to-noise ratio (SNR) at different \(t\). As SNR is monotonically decreasing over time, the resulting up-weighted region does not match our Phase 2 characterization. In general, our proposed STF method reduces the training target variance in the intermediate field and is complementary to previous improvements of diffusion models. **Importance sampling.** The technique of importance sampling has been widely adopted in machine learning community, such as debiasing generative models (Grover et al., 2019), counterfactual learning (Swaminathan & Joachims, 2015) and reinforcement learning (Metelli et al., 2018). Prior works using importance sampling to improve generative model training include reweighted wake-sleep (RWS) (Bornschein & Bengio, 2014) and importance weighted autoencoders (IWAE) (Burda et al., 2015). RWS views the original wake-sleep algorithm (Hinton et al., 1995) as importance sampling with one latent variable, and proposes to sample multiple latents to obtain gradient estimates with lower bias and variance. IWAE utilizes importance sampling with multiple latents to achieve greater flexibility of encoder training and tighter log-likelihood lower bound compared to the standard variational autoencoder (Kingma & Welling, 2013; Rezende et al., 2014). **Variance reduction for Fisher divergence.** One popular approach to score-matching is to minimize the Fisher divergence between true and predicted scores (Hyvarinen & Dayan, 2005). Wang et al. (2020) links the Fisher divergence to denoising score-matching (Vincent, 2011) and studies the large variance problem (in \(O(1/\sigma_{t}^{4})\)) of the Fisher divergence when \(t\to 0\). They utilize a control variate to reduce the variance. However, this is typically not a concern for current diffusion models as the time-dependent objective can be viewed as multiplying the Fisher divergence by \(\lambda(t)=\sigma_{t}^{2}\), resulting in a finite-variance objective even when \(t\to 0\). ## 8 Conclusion We identify large target variance as a significant training issue affecting diffusion models. We define three phases with distinct behaviors, and show that the high-variance targets appear in the intermediate phase. As a remedy, we present a generalized score-matching objective, _Stable Target Field_ (STF), whose formulation is analogous to the self-normalized importance sampling via a large reference batch. Albeit no longer an unbiased estimator, our proposed objective is asymptotically unbiased and reduces the trace-of-covariance of the training targets, which we demonstrate theoretically and empirically. We show the effectiveness of our method on image generation tasks, and show that STF improves the performance, stability, and training speed over various state-of-the-art diffusion models. Future directions include a principled study on the effect of different reference batch sampling procedures. Our presented approach is uniformly sampling from the whole dataset \(\{\mathbf{x}_{i}\}_{i=2}^{n}\sim p_{0}^{n-1}\), so we expect that training diffusion models with a reference batch of more samples in the neighborhood of \(\mathbf{x}_{1}\) (the sample from which \(\mathbf{x}(t)\) is perturbed) would lead to an even better estimation of the score field. 
Moreover, the three-phase analysis can effectively capture the behaviors of other physics-inspired generative models, such as PFGM (Xu et al., 2022b) or the more advanced PFGM++ (Xu et al., 2023). Therefore, we anticipate that STF can enhance the performance and stability of these models further. ### Acknowledgements We are grateful to Benson Chen for reviewing an early draft of this paper. We would like to thank Hao He and the anonymous reviewers for their valuable feedback. YX and TJ acknowledge support from MIT-DSTA Singapore collaboration, from NSF Expeditions grant (award 1918839) "Understanding the World Through Code", and from MIT-IBM Grand Challenge project. ST and TJ also acknowledge support from the ML for Pharmaceutical Discovery and Synthesis Consortium (MLPDS).
2304.04798
Quantum communication networks with optical vortices
Quantum communications bring a paradigm change in internet security by using quantum resources to establish secure keys between parties. Present-day quantum communications networks are mainly point-to-point and use trusted nodes and key management systems to relay the keys. Future quantum networks, including the quantum internet, will have complex topologies in which groups of users are connected and communicate with each-other. Here we investigate several architectures for quantum communication networks. We show that photonic orbital angular momentum (OAM) can be used to route quantum information between different nodes. Starting from a simple, point-to-point network, we will gradually develop more complex architectures: point-to-multipoint, fully-connected and entanglement-distribution networks. As a particularly important result, we show that an $n$-node, fully-connected network can be constructed with a single OAM sorter and $n-1$ OAM values. Our results pave the way to construct complex quantum communication networks with minimal resources.
S. Suciu, G. A. Bulzan, T. A. Isdraila, A. M. Palici, S. Ataman, C. Kusko, R. Ionicioiu
2023-04-10T18:08:01Z
http://arxiv.org/abs/2304.04798v2
# Quantum communication networks with optical vortices ###### Abstract Quantum communications bring a paradigm change in internet security by using quantum resources to establish secure keys between parties. Present-day quantum communications networks are mainly point-to-point and use trusted nodes and key management systems to relay the keys. Future quantum networks, including the quantum internet, will have complex topologies in which groups of users are connected and communicate with each-other. Here we investigate several architectures for quantum communication networks. We show that photonic orbital angular momentum (OAM) can be used to route quantum information between different nodes. Starting from a simple, point-to-point network, we will gradually develop more complex architectures: point-to-multipoint, fully-connected and entanglement-distribution networks. As a particularly important result, we show that an \(n\)-node, fully-connected network can be constructed with a single OAM sorter and \(n-1\) OAM values. Our results pave the way to construct complex quantum communication networks with minimal resources. quantum networks, OAM, QKD ## I Introduction Quantum computers pose a threat to present-day internet security, due to their ability to efficiently break public-key cryptography. One way to mitigate this _quantum apocalypse_ is to deploy large-scale quantum communication networks. Current quantum communications are usually point-to-point, using trusted nodes and key management systems to establish secret keys between remote nodes. Future quantum networks, including the quantum internet, will require to handle complex network topologies. Such networks will need to connect users situated in different locations and/or domains. Consequently, in such networks it will be important to route a quantum state \(|\psi\rangle_{q}\) between different locations. Most of the information we exchange everyday is encoded in photons and carried by optical fibres. The data capacity of a single optical fiber depends on the spectral bandwidth over which low-loss signal transmission can be achieved, on the one hand, and on our ability to use this bandwidth through suitable coding and decoding schemes, on the other. Due to the constant increase of worldwide data traffic, nonlinear effects [1] impose limits on the capacity of optical fibers. To address this capacity-crunch, space division multiplexing (SDM) using multi-core [2; 3] and multimode [4] fibers have been developed. In the quest for larger data capacity, another solution is to use an extra degree of freedom, different from wavelength [5; 6]. A good candidate for the extra degree of freedom is the orbital angular momentum (OAM) of the photon [7; 8; 9]. The phase front of an OAM beam is helical, with quantized angular momentum \(l\hbar\), \(l\in\mathbb{Z}\). Photons carrying OAM have been used for different applications, such as object identification [10], enhanced phase sensitivity [11], imaging [12; 13] and metrology [14; 15]. Classical and quantum communication with OAM states have both been demonstrated in fiber [16; 17; 6]. Long-distance, high-dimensional QKD using OAM in both optical fibers [6] and free-space [18] have recently enjoyed a renewed interest. This is due to several benefits brought by high-dimensional systems: reduced overall complexity of a quantum circuit via \(d\)-level gates [19; 20], increased raw-key rates [21; 22], robustness to noise [23; 24; 25] and hacking attacks [26]. 
Hybrid states of OAM and polarization have also been used in QKD protocols, in both fiber and free-space [27; 28; 29]. OAM multiplexing can offer an alternative to the development of wireless communications [30], because unlike wavelength-division multiplexing (WDM), it can generate orthogonal channels [30] in a line-of-sight channel environment. Due to this increased interest in both classical and quantum applications of OAM, dedicated optical fibers [31] and multiplexing and demultiplexing techniques [32; 33; 34; 35] have been maturing recently. Thus new methods to route information encoded in OAM are needed. In contrast to wavelength, it is relatively easy to change OAM using passive optical elements like spiral phase-plates (SPPs) [36]. This makes OAM an attractive degree-of-freedom for network routing. In this paper we discuss several architectures for quantum communication networks which use OAM for routing quantum states \(|\psi\rangle_{q}\) around the network. The paper is structured as follows: in Section II we describe the quantum sorter [34], which is the main element in OAM multiplexing/demultiplexing (MUX/DEMUX). In Section III we show OAM implementations of several topologies for quantum communication networks: point-to-point, point-to-multipoint, fully connected, and fully-connected entanglement-distribution networks with a central network provider. Finally, we conclude the article in Section IV.

## II Quantum Sorter

A central element of all the networks discussed here is the \(d\)-dimensional quantum sorter \(U_{d}\), and its inverse \(U_{d}^{\dagger}\) [34]; \(U_{d}\) (\(U_{d}^{\dagger}\)) is a unitary operator which acts as a demultiplexer (multiplexer). In quantum information parlance, the sorter \(U_{d}\) is a controlled-\(X_{d}\) gate between the observable to be sorted and the path degree of freedom. As shown in Ref. [34], the sorter is universal in the sense that it can (de)multiplex any degree of freedom: wavelength, spin, radial angular momentum, OAM, etc. Moreover, it has a theoretical efficiency of 100%. Standard telecom networks use wavelength as the extra DoF for multiplexing/demultiplexing. In this article we focus on the OAM degree-of-freedom as a tool for MUX/DEMUX. The action of the sorter \(U_{d}\) and its inverse \(U_{d}^{\dagger}\) is: \[U_{d}\;(DEMUX): |l\rangle_{\text{oam}}|k\rangle_{\text{path}}\xrightarrow{U_{d}}|l\rangle_{\text{oam}}|k\oplus l\rangle_{\text{path}}\] \[U_{d}^{\dagger}\;(MUX): |l\rangle_{\text{oam}}|k\rangle_{\text{path}}\xrightarrow{U_{d}^{\dagger}}|l\rangle_{\text{oam}}|k\ominus l\rangle_{\text{path}} \tag{1}\] with \(\oplus/\ominus\) addition/subtraction \(\bmod d\). Here both OAM and path DoFs are qudits, i.e., \(d\)-dimensional quantum systems. Thus, if photons with different OAM \(l\) are incident on port 0 of the \(U_{d}\) gate (DEMUX), they will exit on output \(l\), i.e., they will be sorted on different outputs according to their OAM value. The \(U_{d}^{\dagger}\) gate (MUX) works in reverse. The addition and subtraction \(\bmod d\) result in a cyclic property which can be better understood for \(l=\pm d\). In this case \(|l\rangle_{\text{oam}}\) will be sorted on path \(|0\rangle_{\text{path}}\), like \(l=0\). This cyclic property will play a crucial role in the design of various network architectures for routing quantum states using OAM. The cyclic property is also used in the construction of the \(X_{d}\) gate [37]. The \(X_{d}\) gate is a basic building block for qudit tomography [38; 39] and for general qudit protocols. 
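Because the sorter acts as a controlled-\(X_{d}\) gate on the OAM and path labels, the routing rule in Equation 1 is just modular arithmetic on basis labels. The short Python sketch below (labels only; amplitudes, losses and the physical implementation are ignored) illustrates the DEMUX/MUX action and the cyclic property.

```python
def demux(oam, path, d):
    """U_d: |l>_oam |k>_path -> |l>_oam |(k + l) mod d>_path (Eq. 1)."""
    return oam, (path + oam) % d

def mux(oam, path, d):
    """U_d^dagger: |l>_oam |k>_path -> |l>_oam |(k - l) mod d>_path."""
    return oam, (path - oam) % d

d = 4
for l in (-4, -1, 0, 1, 3, 4):
    print(f"OAM {l:+d} entering DEMUX port 0 exits on port {demux(l, 0, d)[1]}")
# l and l +/- d exit on the same port (the cyclic property), e.g. l = 0 and l = 4 both exit on port 0.

# A MUX followed by a DEMUX restores the original port, which is what the networks below exploit.
assert demux(*mux(2, 3, d), d) == (2, 3)
```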
Another application of the quantum sorter is in the generation of high-dimensional entangled states between an observable and the path DoF. Hybrid quantum gates are a hot topic under active development [40; 41; 42]. Accessing a larger alphabet allows us to encode more information, resulting in higher channel capacity and better robustness to noise.

## III OAM-assisted quantum communication networks

In this section we start with a simple architecture and then gradually build more complex networks. All networks discussed here can be used for QKD, either in prepare-and-measure (BB84) or in entanglement-based protocols (E91, BBM92). The only difference is in the equipment available to users. The networks can also be used to route quantum information as part of a larger protocol. What we denote as "senders" and "receivers" can represent anything from sources and detectors to other networks or protocols. The scale can also vary from waveguides in computer chips to optical fibers between cities or ground-to-satellite links. For simplicity, in the following we use only positive OAM values. One can substitute the OAM \(|l_{\text{max}}-n\rangle_{\text{oam}}\mapsto|-n-1\rangle_{\text{oam}}\), where \(l_{\text{max}}\) is the largest OAM used in the protocol and \(n\in\{0,\,1,\,\ldots,\,\lfloor\frac{l_{\text{max}}}{2}\rfloor\}\), thus halving the maximum OAM values required.

### Point-to-point architecture

Point-to-point networks are a simple case in which pairs of users are connected by their own quantum channel. In practice this results in a messy and convoluted network of cables. To reduce the number of cables needed, especially for long-distance communication, individual signals are in practice multiplexed into the same channel. For example, different labs from two cities can share the same channel for intercity transmission. In Figure 1 two pairs Alice-Charlie and Bob-Diana share a single long-range quantum channel (instead of dedicated channels for each pair). Each pair is assigned a unique OAM value.

Figure 1: Two point-to-point pairs using a common channel. Signals from each pair are multiplexed into the common (long-range) channel and demultiplexed at the destination. The information can be recovered and separated because each sender-receiver pair has been allocated a unique OAM value.

The pairs are indexed by consecutive numbers, which represent their assigned OAM value, their input port at the multiplexer and their output port at the demultiplexer. For example, if Bob wants to send a quantum state \(|\psi\rangle_{q}\) to Diana, he uses input port \(|1\rangle_{\text{path}}\) with OAM \(|1\rangle_{\text{oam}}\). This state is input into the multiplexer which redirects it to port \(|0\rangle_{\text{path}}\) of the long-range channel: \[|\psi\rangle_{q}|1\rangle_{\text{oam}}|1\rangle_{\text{path}}\xrightarrow{MUX}|\psi\rangle_{q}|1\rangle_{\text{oam}}|0\rangle_{\text{path}}\] Diana recovers \(|\psi\rangle_{q}\) on output port \(|1\rangle_{\text{path}}\) of the demultiplexer at her end: \[|\psi\rangle_{q}|1\rangle_{\text{oam}}|0\rangle_{\text{path}}\xrightarrow{DEMUX}|\psi\rangle_{q}|1\rangle_{\text{oam}}|1\rangle_{\text{path}}\] The quantum state \(|\psi\rangle_{q}\) can be encoded in any internal degree of freedom (other than OAM). Usually we use polarization to encode the quantum state \(|\psi\rangle_{q}=|\psi\rangle_{\text{pol}}=\alpha|H\rangle+\beta|V\rangle\). In Appendix A we discuss an example of an OAM-assisted BB84 protocol in polarization. 
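As a toy bookkeeping check of the arrangement in Figure 1 (again labels only, with illustrative party names taken from the figure), the sketch below funnels each pair's signal into the shared long-range channel and recovers it on the matching output port.

```python
def mux(oam, path, d):   return oam, (path - oam) % d
def demux(oam, path, d): return oam, (path + oam) % d

d = 2                                            # two pairs share the channel, as in Figure 1
pairs = {0: ("Alice", "Charlie"), 1: ("Bob", "Diana")}
for p, (sender, receiver) in pairs.items():
    oam, common = mux(p, p, d)                   # sender p enters MUX port p carrying OAM p
    assert common == 0                           # every signal is routed into the single long-range channel
    oam, out = demux(oam, common, d)             # the DEMUX at the far end separates the pairs again
    assert out == p
    print(f"{sender} (OAM {p}) -> shared channel -> port {out} ({receiver})")
```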
Since the multiplexer and demultiplexer are modelled by a unitary operation, the protocol also works in reverse. We can reverse the direction in Fig. 1 such that Charlie and Diana are now the senders and everything works similarly. This is true for all communication protocols discussed here. ### Point-to-multipoint architecture A point-to-multipoint architecture is a natural extension from the point-to-point one. Instead of linking pairs of users, a point-to-multipoint network links a group of users with one or more other groups. However, members of the same group cannot communicate with each-other. The logical network is a bipartite graph. The simple setup with one multiplexer and one demultiplexer works in this case, but only if the numbers of senders and receivers are coprime (see Appendix B for a proof). Expanding on the previous example, different labs from two cities can now share not only a transmission line, but also choose to which lab from the other city to send data. Figure 2 shows an example for two senders and three receivers. Alice (Bob) can communicate with any receiver (Charlie, Diana, Eve) using an even (odd) OAM value. For the general case, suppose we have a set of \(d_{s}\) senders and \(d_{r}\) receivers, with \(d_{s}\), \(d_{r}\) relatively coprime. In this case any sender-receiver pair has associated a unique OAM, thus the receiver can distinguish between different senders. This value is determined by solving a system of congruence relations: \[l_{sr}\equiv s\;(\bmod d_{s})\] \[l_{sr}\equiv r\;(\bmod d_{r})\] where \(d_{s}\) and \(d_{r}\) are, respectively, the dimension of the multiplexer (sender) and demultiplexer (receiver). In order for a sender \(s\in\{0,\ldots,d_{s}-1\}\) to communicate with a receiver \(r\in\{0,\ldots,d_{r}-1\}\), they use the OAM value \(l_{sr}\) given by \[l_{sr}=pd_{s}+s=qd_{r}+r \tag{2}\] see Appendix B. The total number of OAM values is \(d_{s}d_{r}\). Although in practice we can always choose the dimensions of the multiplexer and demultiplexer to be coprime (e.g., by embedding them into a larger set), this can be an issue for more complex networks. The coprimality constraint can be eliminated by modifying the demultiplexer as in Fig. 3. We call this a _group demultiplexer_ since it splits an input channel into \(d_{s}\cdot d_{r}\) outputs and then groups them back together into \(d_{r}\) channels. In Fig. 4 we use the group demultiplexer to create a more general network. Now any sender \(s\) can transmit to any receiver \(r\) by an appropriate OAM value \(l_{sr}\): \[l_{sr}=s+rd_{s} \tag{3}\] A step-by-step analysis of this protocol is given in Appendix C; here we only give the main result: \[|\psi\rangle_{q}|l_{sr}\rangle_{\text{oam}}|s\rangle_{\text{path}}\xrightarrow{ \text{network}}|\psi\rangle_{q}|l_{sr}\rangle_{\text{oam}}|r\rangle_{\text{ path}}\] This ensures that quantum information, encoded in the state Figure 2: Two groups share a single, long-range channel to communicate with members of the other group. To ensure that each pair has assigned a unique OAM value, the dimensions of multiplexer and demultiplexer must be coprime; here \(d_{s}=2\) and \(d_{r}=3\). \(|\psi\rangle_{q}\), is routed along the nework from sender \(s\) to receiver \(r\). In the following schemes a group demultiplexer can be replaced by a simple demultiplexer, provided that: (i) the number of senders and receivers are co-prime, and (ii) the OAM values satisfy the congruence relations discussed above. 
Also, any network can work in reverse, i.e., receivers become senders, multiplexers become demultiplexers (and vice-versa) and group demultiplexers become group multiplexers. In point-to-multipoint networks, a group communicates with multiple other groups. A useful scenario is a network connecting multiple cities: labs in one city communicate with labs in multiple cities. However, one network connects just one city with the others. This creates a physical star-network topology, while the logical network topology remains the same: each sender forms a logical star network with all receivers, yet as a group the point-to-multipoint logic is unchanged. In Fig. 5 we split the group demultiplexer at the receivers' end into two. In large networks it will be useful to put a group multiplexer at the senders' side and have simple demultiplexers at the receivers' side. This helps to reduce the costs, since the size of a group multiplexer scales as \(d_{s}\cdot d_{r}\). Compared to the previous protocol, we now use other outputs of the multiplexer to communicate with different groups. Everything remains the same, except for an offset of the OAM value, which depends on the receiver group: \[l_{sgr}=s+rd_{s}\ominus g \tag{4}\] where \(g\) is the group number (i.e., the output port of the multiplexer). Consider the example in Fig. 5, where Bob intends to communicate with Eve; we have sender \(s=1\) transmitting to receiver \(r=0\) from group \(g=1\), with a multiplexer of size \(d_{s}=2\). Thus their OAM value is \(0\). Notice that this OAM value is no longer unique, since Alice uses the same OAM to communicate with Charlie. This architecture helps to reduce the OAM bandwidth, i.e., the number of OAM values required. Both schemes, Fig. 4 and Fig. 5, have \(d_{s}=2\) senders and \(d_{r}=4\) receivers. However, in Fig. 4 we need \(d_{s}d_{r}=8\) OAM values, whereas in Fig. 5 we need only \(d_{r}=4\) values. In both cases any sender can communicate with any receiver. A variation of this architecture is to use a group multiplexer at the senders' side and only demultiplexers at the receivers' side. In this case the OAM value is \[l_{sgr}=r+sd_{r}\ominus gd_{s} \tag{5}\] Figure 4: General point-to-multipoint protocol for an arbitrary number of senders and receivers. On the receiver side, the demultiplexer has been replaced by a group demultiplexer (dashed outline). With this change we eliminate the coprimality condition and the OAM assignment is simplified. Figure 3: Group demultiplexer. Top: the action of a normal demultiplexer (left) compared to the group demultiplexer (right). Both are \(n\)-in, \(n\)-out devices, but the group demultiplexer distributes the output OAM values differently. Bottom: The group demultiplexer (dashed contours) is equivalent to a network of normal demultiplexers. ### Fully-connected networks Finally, we generalize the previous schemes to a fully-connected network, in which any two users can communicate with each other. In this case all nodes are both senders and receivers. The previous point-to-multipoint protocol will only work for a reasonable number of users, as the size of the group demultiplexer scales as \(O(n^{2})\). Surprisingly, however, a single MUX/DEMUX is enough to create a fully-connected network with \(n\) nodes, see Fig. 6. Here the senders can view the OAM value as an indexing list: 0 for themselves, 1 for the next user (mod \(d\)), 2 for the second-next user (mod \(d\)) and so on. Now each node is both a sender and a receiver.
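The indexing-list rule can be written compactly as \(l=(r-s)\bmod n\), which is an assumption consistent with the circular-shift pattern of Table 1 below, and with the cyclic sorter action assumed earlier: a photon entering the central sorter at port \(s\) with OAM \(l\) exits at port \((s+l)\bmod n=r\). A minimal sketch:

```python
# Sketch of the OAM assignment for the fully-connected network of Fig. 6.
# Assumption: the "indexing list" corresponds to l = (r - s) mod n, which
# reproduces the circular right-shift pattern of Table 1.

def fully_connected_oam(sender: int, receiver: int, n: int) -> int:
    """OAM value used by `sender` to reach `receiver` in an n-node network."""
    return (receiver - sender) % n

def assignment_table(n: int):
    """Rows = senders, columns = receivers (cf. Table 1 for n = 4)."""
    return [[fully_connected_oam(s, r, n) for r in range(n)] for s in range(n)]

if __name__ == "__main__":
    nodes = "ABCD"
    for s, row in enumerate(assignment_table(4)):
        print(nodes[s], row)
    # A [0, 1, 2, 3]
    # B [3, 0, 1, 2]
    # C [2, 3, 0, 1]
    # D [1, 2, 3, 0]
```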
Thus a fully-connected network with \(n\) users requires only a single \(n\)-dimensional quantum sorter (acting as a MUX/DEMUX) and \(n\) OAM values. In fact, since the OAM value \(\ell=0\) is used to connect a node to itself, we need only \(n-1\) OAM values. At first sight it looks like each node needs two different channels to connect to the network, one for sending and one for receiving. However, using two circulators, one at the user side and the other at the MUX/DEMUX side, a node can use a single channel for both sending and receiving, see Fig. 7. This design is the most general possible, connecting all network nodes. It can be used as a fully-connected network in prepare-and-measure QKD. Figure 5: General point-to-multipoint for multiple groups. Since the multiplexer is a unitary transformation, it has the same number of input and output ports. By using the other available outputs, one group can communicate with several other groups (situated at different locations). Figure 6: A fully-connected, \(n\)-node quantum network; each node is both a sender and a receiver. A single \(n\)-dimensional sorter (MUX/DEMUX) routes information between all the nodes. \begin{table} \begin{tabular}{c|c c c c} \hline \hline sender\(\backslash\)receiver & A & B & C & D \\ \hline A & 0 & 1 & 2 & 3 \\ B & 3 & 0 & 1 & 2 \\ C & 2 & 3 & 0 & 1 \\ D & 1 & 2 & 3 & 0 \\ \hline \hline \end{tabular} \end{table} Table 1: OAM assignments for the fully-connected network in Fig. 6. Each row is obtained from the one above by a circular right-shift. ### Entanglement-distribution networks So far we have discussed networks for prepare-and-measure QKD protocols, such as BB84. Another important class of QKD protocols is that of entanglement-based protocols, e.g., E91 or BBM92. Although entanglement-based protocols are more secure than prepare-and-measure ones, they are also more difficult to implement, as they require entanglement to be distributed between nodes. Multi-user entanglement-distribution networks have been experimentally demonstrated for wavelength multiplexing [43, 44]. Similar passive-switching networks with a central node can be designed for OAM. Entanglement-distribution networks can be actively or passively switched. In an active network, the central node (the source) generates pairs of polarization-entangled photons. An active switch then assigns the correct OAM values \(r\) and \(s\) to the two photons, which are then distributed to the corresponding nodes \(r\) and \(s\). For example, the actively-switched entanglement distribution scheme in Ref. [43] can be translated directly into the OAM domain with a general point-to-multipoint network, as in Fig. 8. Here "Alice" and "Bob" are two entangled photons that are distributed based on their assigned OAM value. Notice that even though the network is not fully connected in the prepare-and-measure regime, it becomes fully connected for entanglement distribution, i.e., any two users can share an entangled pair. A passive-switching network for entanglement distribution has been experimentally demonstrated in Ref. [44]. A passively-switched network works in a similar way to the choice of measurement basis in BB84, where a beam-splitter randomly chooses the basis. In the case of a passive OAM network, two Fourier gates \(F_{d}\) (which generalize the beam-splitter for \(d>2\)) put the signal and idler photons into two distinct multi-path interferometers, one for signal and one for idler, see Fig. 9.
For the signal photon we have: \[F_{d}|0\rangle_{\text{path}}=\frac{1}{\sqrt{d}}\sum_{i=0}^{d-1}|i\rangle_{\text{path}}\] Each path \(i\) of the interferometer has an \(i\)-th order spiral phase plate (SPP) which changes the OAM \(|0\rangle_{\text{oam}}\rightarrow|i\rangle_{\text{oam}}\): \[\text{SPP}^{i}|0\rangle_{\text{oam}}|i\rangle_{\text{path}}=|i\rangle_{\text{oam}}|i\rangle_{\text{path}}\] The paths (channels) are then multiplexed into a common exit path. Subsequently, the two photons are input into path 0 (the signal) and path 1 (the idler), respectively, of a final OAM demultiplexer, which distributes the two photons to the final users, Fig. 9. The final quantum state of the two photons is (for simplicity we omit the polarization part): \[\frac{1}{d}\sum_{i=0}^{d-1}\sum_{j=0}^{d-1}|i,j\odot 1\rangle_{\text{oam}}|i,j\rangle_{\text{path}} \tag{6}\] In this case entanglement distribution between nodes is done randomly, according to the OAM values of signal and idler (via post-selection). Similar to other passively-switched networks [44], the pairs of nodes (randomly) receiving the entangled pair are identified by coincidences in their detectors. Figure 8: A fully-connected network for entanglement distribution using an active central provider. If, in the general point-to-multipoint network, one group sends entangled pairs (or multipartite entangled states) to the other, the receiving group becomes a fully-connected network for entanglement-based QKD. Figure 7: Circulators on the node side (a), and DEMUX side (b), allow each node to be both a transmitter Tx and a receiver Rx. Each node is connected to the (central) DEMUX by a single fiber, and thus to the rest of the network. ## IV Discussion and Conclusions The development of quantum communication networks and the advent of the future quantum internet are contingent on the ability to route quantum information in networks with complex topologies. In the scenario investigated here, a set of users send quantum states \(|\psi\rangle_{q}\), entangled or not, to another set of receivers. Similar to the classical case, the scarcity of certain resources, like long-distance optical cables, means that signals between different nodes use a common communication channel. This implies that we need to multiplex/demultiplex quantum signals from/to different users (nodes). In order to achieve this, here we use OAM to route the quantum state \(|\psi\rangle_{q}\) between different users. In this article we have discussed several network architectures for pairwise communication between multiple parties. Starting from a simple, one-to-one network, we then developed one-to-many and fully-connected networks for distributing quantum states. We have shown that a fully-connected network with \(n\) nodes can be achieved with minimal resources: a single quantum sorter acting as a MUX/DEMUX, connected to all \(n\) nodes. Moreover, this fully-connected network requires only \(n-1\) OAM values. Finally, we have developed a novel entanglement-distribution protocol which has several advantages compared to the current wavelength-based networks. The central element of all the networks discussed here is the quantum sorter which acts as a MUX/DEMUX. The quantum sorter has a cyclic property which was used extensively in building a multitude of network architectures. These networks can be used to distribute quantum states between nodes, both in prepare-and-measure protocols (BB84) and in entanglement-based ones (E91, BBM92).
The protocols described here can be implemented either in optical fibers or in free space. As future applications, we envisage our protocols being used in a wide range of communication tasks, such as terrestrial networks (intra- and inter-city), satellite-to-satellite or satellite-to-ground links. ###### Acknowledgements. The authors acknowledge support from a grant of the Romanian Ministry of Research and Innovation, PCCDI-UEFISCDI, project number PN-III-P1-1.2-PCCDI-2017-0338/79PCCDI/2018, within PNCDI III. R.I. acknowledges support from PN 19060101/2019-2022. S.A. acknowledges support by the Extreme Light Infrastructure Nuclear Physics (ELI-NP) Phase II, a project co-financed by the Romanian Government and the European Union through the European Regional Development Fund and the Competitiveness Operational Programme (1/07.07.2016, COP, ID 1334).
2303.17025
Deep convolutional neural networks to restore single-shot electron microscopy images
State-of-the-art electron microscopes such as scanning electron microscopes (SEM), scanning transmission electron microscopes (STEM) and transmission electron microscopes (TEM) have become increasingly sophisticated. However, the quality of experimental images is often hampered by stochastic and deterministic distortions arising from the instrument or its environment. These distortions can arise during any stage of the imaging process, including image acquisition, transmission, or visualization. In this paper, we will discuss the main sources of distortion in TEM and S(T)EM images, develop models to describe them and propose a method to correct these distortions using a convolutional neural network. We demonstrate the effectiveness of our approach on a variety of experimental images and show that it can significantly improve the signal-to-noise ratio resulting in an increase in the amount of quantitative structural information that can be extracted from the image. Overall, our findings provide a powerful framework for improving the quality of electron microscopy images and advancing the field of structural analysis and quantification in materials science and biology.
I. Lobato, T. Friedrich, S. Van Aert
2023-03-29T21:09:05Z
http://arxiv.org/abs/2303.17025v1
# Deep convolutional neural networks to restore single-shot electron microscopy images ###### Abstract State-of-the-art electron microscopes such as scanning electron microscopes (SEM), scanning transmission electron microscopes (STEM) and transmission electron microscopes (TEM) have become increasingly sophisticated. However, the quality of experimental images is often hampered by stochastic and deterministic distortions arising from the instrument or its environment. These distortions can arise during any stage of the imaging process, including image acquisition, transmission, or visualization. In this paper, we will discuss the main sources of distortion in TEM and S(T)EM images, develop models to describe them and propose a method to correct these distortions using a convolutional neural network. We demonstrate the effectiveness of our approach on a variety of experimental images and show that it can significantly improve the signal-to-noise ratio resulting in an increase in the amount of quantitative structural information that can be extracted from the image. Overall, our findings provide a powerful framework for improving the quality of electron microscopy images and advancing the field of structural analysis and quantification in materials science and biology. ## Introduction The quality of modern electron microscopes, such as scanning electron microscopes (SEM), scanning transmission electron microscopes (STEM), and transmission electron microscopes (TEM), has greatly improved. However, the quality of the experimental images produced by these instruments is often compromised by stochastic and deterministic distortions arising from the instrument or its environment [1, 2, 3]. These distortions can occur during the acquisition, transmission, or reproduction of the image. Despite technical improvements in the design of high-performance electron microscopes [1, 2, 3, 4], the presence of these distortions in the recorded images may hinder the extraction of quantitative information from the samples under study [5]. In TEM, images are acquired in a single shot using parallel acquisition. Here, the main sources of distortions are the detector noise, which is a combination of counting noise associated with the uncertainty of photon/electron detection, dark current noise resulting from statistical variation in the number of thermally generated electrons within the detector, and readout noise resulting from the electronics that amplifies and digitizes the charge signal. Other sources of distortions for TEM include X-ray noise, which is produced by X-rays that saturate one or more nearby pixels as they pass through the detector [6, 7], and dead pixel noise, which is caused by permanently damaged pixels on the sensor and often appears as black spots in the recorded images. In S(T)EM, images are formed pixel by pixel by scanning a convergent electron beam across the sample and detecting the scattered, back-scattered or secondary electrons at each point. The main sources of distortions are the detector noise, which is a combination of shot noise hitting the scintillator, Gaussian noise resulting from the photomultiplier tube (PMT) [8], and readout noise from the electronics that amplifies and digitizes the electron signals. Unlike TEM imaging, the serial nature of SEM and STEM can introduce additional distortions into the resulting images due to time delays between measurements. 
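As a toy numerical illustration of the detector-noise terms just listed (Poisson counting noise plus Gaussian dark-current and readout contributions), the following sketch can be used; the parameter values are arbitrary placeholders, not those used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_detector_noise(clean: np.ndarray, dose: float,
                       dark_std: float = 0.5, readout_std: float = 2.0) -> np.ndarray:
    """Toy single-shot detector model: Poisson counting noise on the expected
    counts (clean image scaled by dose) plus Gaussian dark-current and readout
    noise.  All parameter values are illustrative only."""
    expected_counts = np.clip(clean, 0, None) * dose
    counted = rng.poisson(expected_counts).astype(float)
    counted += rng.normal(0.0, dark_std, clean.shape)     # dark-current noise
    counted += rng.normal(0.0, readout_std, clean.shape)  # readout electronics
    return counted / dose                                  # back to image units

if __name__ == "__main__":
    clean = np.full((64, 64), 0.2)          # flat toy "specimen"
    noisy = toy_detector_noise(clean, dose=50.0)
    print("relative std:", noisy.std() / clean.mean())
```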
At high doses, the main source of nonlinear distortion is the probe's fly-back time, where data collection pauses until scanning on the next line resumes. This produces a net two-dimensional random displacement of the pixel row known as horizontal and vertical scan distortion. These nonlinear distortions can often be corrected using iterative algorithms that require a series of images [9, 10] or a single image with a high-resolution periodic structure [11, 12]. Moreover, S(T)EM images obtained through high-speed scans (dwell time \(<1\mu s\)[13]) may display a non-uniform scan speed along individual scan lines resulting in a smearing effect that produces another type of nonlinear distortion. While these distortions can be partly compensated for periodic structures [13], they cannot be fully compensated for arbitrary specimens. Other types of distortion include row-line noise, which is caused by the detector's non-response over a few pixels, and X-ray noise, which is produced by X-rays hitting the detector. These distortions can reduce the signal-to-noise ratio (SNR) and limit the amount of retrievable information about the electron-specimen interaction. Moreover, they can cause translation, shear, rotation, expansion, or contraction of the entire image. Although increasing the beam current or acquisition time can improve the SNR, it can also increase other types of distortion, such as shear or rotation. Moreover, it is unsuitable for beam-sensitive materials or for dynamic imaging requiring a short exposure time for each frame. Lowering the electron dose can also decrease the quality of the recorded images and limit the reliability of structural information extracted from them. Various algorithms have been developed to improve the SNR of electron microscopy (EM) images, including spatial filters such as median filters, Gaussian filters, Bragg filters, and Wiener filters [14, 15, 16]. More complex methods for denoising EM images include non-linear iterative Wiener filtering algorithms [17] and block matching [18, 19] although they can be computationally intensive. Another option for improving the SNR is to average a series of registered frames, using either rigid [20] or non-rigid [9, 10] registration methods. However, these methods require a high overall electron dose and repeated recordings of the material. In addition, EM images often exhibit a combination of different types of distortions due to several factors including the instrument environment, scan instabilities, scan speed, and dose. Therefore, there is a need for image restoration algorithms specifically designed for single-shot EM images. In recent years, machine learning methods based on artificial neural networks, particularly convolutional neural networks (CNNs), have become the state-of-the-art approach for various tasks such as image classification [21], image segmentation [22], image denoising [23], image restoration [24], image deconvolution [25], and image super-resolution [26]. These methods, which involve adjusting the weight connections between neurons during training, have been made possible by the development of techniques such as the Rectified Linear Unit (ReLU) [27], dropout regularization [28], batch normalization [29], and improvements in GPU technology. 
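A rough feel for the scan-line distortions described above can be obtained with the toy model below, in which every scanned row receives an independent random sub-pixel displacement; this is only an illustration of the effect, not the distortion model developed later in this paper.

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

rng = np.random.default_rng(1)

def toy_scanline_jitter(image: np.ndarray, sigma_px: float = 0.5) -> np.ndarray:
    """Toy model of horizontal/vertical scan distortion: each scanned row is
    displaced by an independent random sub-pixel offset, mimicking fly-back
    timing errors.  Shifting the whole frame per row is wasteful but keeps the
    illustration short."""
    distorted = np.empty_like(image, dtype=float)
    for row in range(image.shape[0]):
        dx, dy = rng.normal(0.0, sigma_px, size=2)
        distorted[row] = subpixel_shift(image.astype(float), (dy, dx), order=1)[row]
    return distorted

if __name__ == "__main__":
    y, x = np.mgrid[0:128, 0:128]
    lattice = np.sin(2 * np.pi * x / 16) * np.sin(2 * np.pi * y / 16)  # toy lattice
    print(toy_scanline_jitter(lattice).shape)
```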
While CNN-based approaches have achieved strong performance in denoising specific types of EM images, they are limited by their reliance on small simulated or experimental datasets and incomplete modelling of the various types of noise present in experimental EM data [30, 31, 32, 33, 34]. To the best of our knowledge, there is currently no algorithm that can effectively compensate for all types of distortion in a single-shot EM image without requiring retraining and regardless of the sample being studied. In this study, we use a machine learning approach to restore EM images using a Concatenated Grouped Residual Dense Network (CGRDN) and a combination of loss functions and a generative adversarial network (GAN) [35]. This approach not only learns an end-to-end mapping between distorted and undistorted EM images, but also a loss function to train this mapping. Since we only have access to distorted data experimentally, we generate undistorted and distorted EM images by applying all distortions that can be corrected on single-shot EM images. By training the neural network to produce an undistorted output regardless of the level and combination of distortions in the input, it implicitly learns to detect and repair the distortions. This approach demonstrates impressive results for restoring both periodic and non-periodic specimens with different combinations of severe distortions. Importantly, the results show that both peak positions and intensities in atomic resolution images can be reliably determined. In addition, the restoration time is only of the order of seconds for a 2kx2k image. ## Results and Discussion Electron microscopy techniques, namely SEM, STEM, and TEM, exhibit distinct sources of noise and variations in their features at both low and high resolution. Hence, we have trained our network architecture on six diverse datasets comprising low-resolution (LR) and high-resolution (HR) images for each microscopy modality. Our findings indicate that the best performance is achieved by training separate networks for LR and HR features, particularly at low doses, where the network can utilize the specific feature distribution acquired during the training phase. Detailed implementation and training information is provided in the supplementary material. Our study mainly focuses on HR-STEM, a widely used technique for the analysis and quantification of atomic structures. ### Ablation study and comparison to state-of-the-art algorithms To improve the performance of a neural network, it is important to choose the right values for the hyperparameters. These values can affect the network's ability to minimize errors, run quickly, and fit within certain hardware constraints. In our case, we want the network to be able to process images of size \(1024\times 1024\) in less than one second, and we want to be able to run it on an Nvidia Volta GPU card with 12GB of memory. To find the best hyperparameters for our needs, we perform an ablation study. This involves varying the network architecture and some of its hyperparameters and measuring their effect on the \(\mathcal{L}_{1}\) error (see "Loss function" section). Since our hardware constraints limit the maximum number of residual dense blocks (RDB), grouped residual dense blocks (GRDB), and batch size to 4, we will keep these values constant at their maximum value. All other parameters of our generator are defined in the "Network architecture" section and will be kept constant unless otherwise specified. 
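As a schematic illustration of the training strategy described above, the sketch below builds supervised (distorted, undistorted) pairs by applying randomly drawn distortions to a clean image; the simple distortion pipeline and the sampling ranges shown here are placeholders, not the generators used for the actual training data.

```python
import numpy as np

rng = np.random.default_rng(2)

def apply_random_distortions(clean: np.ndarray) -> np.ndarray:
    """Toy stand-in for the full distortion pipeline: a random dose (Poisson
    counting noise) plus integer scan-line displacements.  Ranges are
    illustrative placeholders only."""
    dose = 10 ** rng.uniform(1.0, 4.0)
    noisy = rng.poisson(np.clip(clean, 0, None) * dose) / dose
    max_shift = rng.integers(0, 3)
    for row in range(noisy.shape[0]):          # crude horizontal scan-line jitter
        noisy[row] = np.roll(noisy[row], rng.integers(-max_shift, max_shift + 1))
    return noisy

def training_pair(clean: np.ndarray):
    """(network input, target) pair; the network never sees the distortion
    parameters, so it must learn to detect and undo them."""
    return apply_random_distortions(clean).astype(np.float32), clean.astype(np.float32)

if __name__ == "__main__":
    clean = rng.random((256, 256))             # placeholder "undistorted" image
    x, y = training_pair(clean)
    print(x.shape, y.shape)
```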
A grid search is used to find the optimal values for the learning rate and loss weighting parameters. In the first part of this ablation study, we focus on the performance of the network when the number of convolutional layers \(n_{lay}\) within the RDB increases. Figure 1 shows the reduction of the \(\mathcal{L}_{1}\) error when the number of layers and network parameters increases. This is expected since a deeper network can improve the performance of the model by increasing the number of parameters and allowing the model to learn more complex features. We would like to highlight that our hardware constraints only allow us to use a maximum of 9 layers for \(n_{lay}\). Nonetheless, we observed that the \(\mathcal{L}_{1}\) error starts to reach a plateau for \(n_{lay}=9\), indicating that increasing the number of layers may not lead to substantial performance improvements. Furthermore, we compared the performance of three different image denoising architectures: the Grouped Residual Dense Network (GRDN) [23], the Multi-resolution U-Net (MR-UNET) [31], and our proposed architecture, CGRDN. We assessed the performance of these architectures using the well-known peak signal-to-noise ratio (PSNR), which is defined as: \[PSNR=10\log_{10}\left(\frac{MAX^{2}}{MSE}\right), \tag{1}\] where \(MAX\) denotes the maximum possible pixel value of the images, and \(MSE\) represents the mean squared error between the distorted and undistorted images. However, it is important to note that PSNR only measures the pixel-wise differences between the original and reconstructed images and does not account for other crucial factors such as visual perception and structural similarity. The GRDN architecture was previously ranked first in terms of PSNR and structure similarity index in the NTIRE2019 Image Denoising Challenge. The MR-UNET extends the functionality of the decoder in a U-Net [36] by adding additional convolutional layers to the hidden layers in order to produce coarse outputs that match low-frequency components. The results of our comparison are summarized in Table 1, which shows the number of parameters and the resulting PSNR for each architecture and show that the GRDN and CGRDN are more efficient architectures because they require approximately 7 times fewer parameters than the MR-UNET, while still achieving a higher PSNR. It is interesting to note that our CGRDN architecture achieved a higher PSNR than the GRDN, while only requiring an additional 20,000 parameters. We also compared the performance of our image restoration network to the Block-matching and 3D filtering (BM3D) [18] algorithm in terms of PSNR. BM3D is a widely used technique for removing noise from images through a process called denoising. It segments the image into overlapping blocks and identifies similar patterns among them to estimate the original image and reduce noise. BM3D has demonstrated effectiveness in denoising images with high levels of noise and serves as a benchmark for image denoising algorithms in image processing. The average PSNR of BM3D and our network on the validation dataset was 30.45 dB and 36.96 dB, respectively. These results demonstrate that our network outperforms BM3D by \begin{table} \begin{tabular}{l l l} \hline Method & \# parameters & PSNR \\ \hline MR-UNET [31] & 51.7M & 36.70dB \\ GRDN [23] & 7.02M & 36.90dB \\ CGRDN this work & 7.04M & 36.96dB \\ \hline \end{tabular} \end{table} Table 1: PSNR denoising performance comparison of different network architectures. 
Figure 1: Ablation study of the CGRDN architecture based on \(\mathcal{L}_{1}\) metric as a function of the size of the model. The number of layers \(n_{lay}\) is indicated next to each data point. a significant margin of 6.51 dB. Figure 2 illustrates the performance of our network and BM3D on two randomly generated, high-resolution STEM images with standard experimental noise values. These images were simulated using the procedure outlined in the "Data generation" section. The figure displays the original distorted images (a)&(e) and undistorted images (d)&(h), as well as the denoised output from BM3D (b)&(f) and the restored output from our network (c)&(g). These results demonstrate that our image restoration network significantly enhances image quality, as measured by PSNR. However, it is noteworthy that PSNR is not always a reliable indicator of image quality since it merely measures pixel-wise differences between original and reconstructed images and overlooks other critical factors such as visual perception and structural similarity. Hence, it is crucial to employ various image quality metrics, along with PSNR, to obtain a more comprehensive evaluation of the performance of image restoration techniques. ### Atomic structure quantification While the CNN was trained to restore images of a wide variety of imaging modes, STEM is of particular interest since it is routinely used for the quantification of atomic structures [37, 38, 39] in terms of atomic column positions and their corresponding scattering cross sections (SCS), which allows us to study the impact of the proposed image restoration method quantitatively. The probe position integrated scattering cross section, short SCS, in atomic resolution STEM images is defined as the integrated intensity of an atomic column, which is typically modelled as a 2D gaussian function. Since the SCS scales with the atomic number \(\approx Z^{1.7}\)[40, 41] and mostly increases monotonically with thickness for large collection angles, it is routinely used for atom counting. The evaluation of the effect of image restoration on the quantitative assessments of STEM images is done in three complementary approaches, using MULTEM [42, 43] to create multislice simulations and the StatSTEM software for all model fittings [39]. All evaluations are based on 100 distortion/noise realisations for each dose setting. 1. We demonstrate the effect of image denoising with an idealised setup in analogy to the study conducted in reference [39], where the precision of the determination of the location and SCS of an atomic column was determined over a wide range of signal-to-noise-ratios (SNRs) using pure Poisson noise. This setting allows the comparison to the theoretical limits of variance for unbiased estimators, the so-called Cramer-Rao-Lower Bounds(CRLBs). The simulated STEM dataset is a bulk Pt crystal in [001] orientation and contains STEM images over 75 depth sections with unit cell spacing in z-direction. 2. A more practical example, that includes crystal irregularities, is chosen to determine the impact of a combination of noise, scan-line-distortions and fast-scan distortion. In this case, we evaluate the mean absolute error (MAE) for atomic Figure 2: CNN restoration results compared with BM3D in terms of PSNR for two random simulated STEM specimens using standard experimental noise values. column positions and the mean absolute percentage error (MPE) for the SCSs of atomic columns, as well as the variance of these measurements. 
This serves to show in particular the independence of the approach on the structural periodicity for atomic-resolution STEM images. 3. For a simulated Pt-nanoparticle it is demonstrated that distortion correction yields not only a more accurate localisation of atomic columns but also enables more reliable atom counting. The simulation settings for all samples are tabulated in the supplementary information. The results of the first study are shown in figure 3. Examples of the underlying STEM images are given for the extremes of SNRs (i.e. smallest thickness and lowest dose and largest thickness and highest dose) for raw and restored images in panels (e), (f), (g) and (h). Comparing figure 3(e) and (f) it can be seen visually that even at a very low dose, the CNN can recover the underlying structure faithfully. This effect is measurable both in terms of the precision with which atomic columns can be located, as well as in SCS measurement precision, and is particularly pronounced in the low dose range as illustrated in figure 3(a) and (b). As the dose increases the precision of the structural measurements of raw and restored data converge eventually (figure 3(c-d)). An interesting observation is that the theoretical precision limit given by the CRLB, can be overcome employing image restoration. This makes a strong point for using image restoration for quantitative studies, like atom counting or strain measurements in general. The restoration results in the first example arguably benefit from the underlying perfect crystal symmetry, which is why we test the CNN also for imperfect structures. The Pt-bulk model depicted in figure 4(a) is in \([112]\) zone axis orientation, six unit cells thick and contains a unit edge dislocation of Burgers vector \(b=1/2[110]\) in the \((111)\) glide plane; a dislocation commonly observed in fcc metals [44]. The structure was created using the Atoms software, which determines atom positions Figure 3: Precision of atomic column position and SCS-measurements over a series of Pt-bulk samples with a thickness varying from 2-75 atoms together with their 95% confidence intervals. (a) Precision of the atomic column locations for a dose of 5e2 \(e/\AA^{2}\). (b) Precision of SCS measurements for a dose of 5e2 \(e/\AA^{2}\). (c) Precision of atomic column locations for a dose of 5e4 \(e/\AA^{2}\). (d) Precision of SCS measurements for a dose of 5e4 \(e/\AA^{2}\). (e) Example of a raw STEM image at z=2 and dose=5e2 \(e/\AA^{2}\). (f) Example of a restored STEM image at z=2 and dose=5e2 \(e/\AA^{2}\). (g) Example of a raw STEM image at z=75 and dose=5e4 \(e/\AA^{2}\). (h) Example of a restored STEM image at z=75 and dose=5e4\(e/\AA^{2}\). corresponding to the displacement fields predicted by the elastic theory of dislocations [45]. The simulated HAADF STEM images were subjected to varying noise levels from \(5e2~{}e/\AA^{2}\) to \(5e4~{}e/\AA^{2}\), and further corrupted by scan-line distortions as outlined in the "S(T)EM noise model" section. Example reconstructions for raw images at doses of \(5e2~{}e/\AA^{2}\) and \(5e4~{}e/\AA^{2}\) (figure 4(b) and (c)) are shown in figure 4(d) and (e), respectively. In the low-dose raw image individual atomic columns are hardly recognisable. Without the prior knowledge of the atomic column positions, any attempt of model fitting would have to overcome the challenge of performing reliable peak finding first, which is a factor not considered here. The reconstruction of this image (figure 4(d)) on the other hand shows very clear peaks. 
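For reference, a minimal sketch of the column fitting underlying these SCS measurements is given below: a single atomic column is modelled as a 2D Gaussian and the SCS is taken as the volume under the fitted peak. In this work the fits are performed with StatSTEM; the snippet is only an illustration using scipy.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian2d(coords, amplitude, x0, y0, sigma_x, sigma_y, offset):
    """Elliptical 2D Gaussian used to model a single atomic column."""
    x, y = coords
    g = amplitude * np.exp(-((x - x0) ** 2 / (2 * sigma_x ** 2)
                             + (y - y0) ** 2 / (2 * sigma_y ** 2))) + offset
    return g.ravel()

def fit_column(patch: np.ndarray, pixel_size: float):
    """Fit one column in a small image patch and return (x0, y0, SCS).  The SCS
    is the volume under the fitted Gaussian, 2*pi*A*sigma_x*sigma_y, converted
    to physical units with the pixel size."""
    ny, nx = patch.shape
    y, x = np.mgrid[0:ny, 0:nx]
    p0 = (patch.max() - patch.min(), nx / 2, ny / 2, 2.0, 2.0, patch.min())
    popt, _ = curve_fit(gaussian2d, (x, y), patch.ravel(), p0=p0)
    A, x0, y0, sx, sy, _ = popt
    scs = 2 * np.pi * A * abs(sx) * abs(sy) * pixel_size ** 2
    return x0 * pixel_size, y0 * pixel_size, scs

if __name__ == "__main__":
    ny = nx = 21                                    # synthetic test column
    y, x = np.mgrid[0:ny, 0:nx]
    patch = gaussian2d((x, y), 1.0, 10.3, 9.7, 2.0, 2.5, 0.05).reshape(ny, nx)
    print(fit_column(patch, pixel_size=0.1))
```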
A burgers circuit is superimposed on the image to highlight that despite the poor separation of columns in the raw image, the dislocation with its correct burgers vector \(b\) is maintained, which means that the structure as a whole is retrieved correctly, albeit the individual column positions may not be fully accurate as can be seen in the mean absolute position error of the columns around the center of the dislocation (columns within the red circle in figure 4(a)) for low doses shown in figure 4(f). However, the error drops rapidly with increasing dose and shows a clear improvement against raw images. The position accuracy is therefore not only a result of denoising but also the result of the accurate correction of scan-line and fast-scan distortions. The comparatively high accuracy for the raw image fitting at low doses can be attributed to the fact that correct initial column positions are given for the fitting procedure. Since the column can hardly be located in the noisy images, the fitting algorithm on average does not move the position much away from this initial position. The CNN on the other hand reconstructs a clearly visible atomic column, but the available information in the underlying image is insufficient for accurate positioning. However, the proper retrieval of the dislocated atomic column at higher doses shows that the CNN is not by default just picking up on periodicity, but faithfully recovers the atomic structure also in the presence of non-periodic features in atomic resolution STEM images. Also the SCS measurements improve in accuracy by the restoration, which would translate directly into improvements for atom counting studies. An example of such an atom counting scenario is presented in figure 5. These results were obtained from a simulated spherical Pt nanoparticle with 11 unit cells in diameter in [100] zone axis orientation under the same distortion and noise parameters as given in the previous example. Atom counts were obtained by matching retrieved SCS values against simulated library values[46]. The improvement in column position measurements over all dose settings again indicates the proper correction of scan-line and fast-scan distortions. The improvement of SCS measurement accuracies, especially at low-dose conditions greatly decreases the chance of miscounting atoms in the structure, which in turn may be very beneficial e.g. for the reconstruction of 3D information from atom-counts [47, 48]. ### Experimental image restorations One of the main advantages of our image restoration method is that the training data is generated using realistic physical models of the noise found in various microscopy modalities, as well as for an appropriate range of values for the noise model parameters, as detailed in the "Methods" section. This methodology allows for the direct application of our network to experimental data, without requiring additional training for a particular specimen or microscope settings. Figure 6 illustrates the effectiveness of our approach on diverse types of random experimental microscopy images. The top row of this figure Figure 4: (a) Schematic of the Pt structure in [112] zone axis with a unit edge dislocation of Burgers vector \(b=1/2[110]\) in the \((111)\) glide plane. (b) Corrupted raw HAADF STEM image with a dose of \(5e2e/\AA^{2}\). (c) Corrupted raw image with a dose of \(5e5e/\AA^{2}\). (d) Restored image with a dose of \(5e2e/\AA^{2}\). (e) Restored image with a dose of \(5e5e/\AA^{2}\). 
(f) Quantification results for the atomic column positions and scattering cross sections of the atomic columns around the center of the edge dislocation (marked with red circles in panel (a)). shows raw experimental images for HR-STEM, LR-STEM, HR-TEM, LR-TEM, LR-TEM, HR-SEM, and LR-SEM. The bottom row shows the corresponding restored versions of these images. These results show that the trained networks have excellent performance on experimental data and can effectively handle a wide range of microscopy images with varying resolution and noise levels. It is important to note that in this study, "high resolution" refers to images with round and symmetrical features, while "low resolution" refers to images with a variety of different features. Additional examples of restored experimental images for each microscopy modality can be found in the github repository [https://github.com/Ivanlh20/r_em](https://github.com/Ivanlh20/r_em). The importance of using realistic physical models of the noise to generate distorted data, along with selecting the correct range of values for the noise model parameters, is demonstrated in Figure 7. This figure illustrates how these factors can impact the accuracy of the restored image. Figures 7 (a) and (b) show two experimental STEM images that were acquired using a Fei Titan\({}^{3TM}\) S/TEM microscope. The images were obtained using fast scanning with dwell times of \(0.2\mu s\) and \(0.05\mu s\), respectively. The importance of accurately modelling fast scan distortion is evident from figures 7 (f) and (g). In these figures, our network architecture was trained using a model, which was not sufficient to completely compensate for the spread of pixel intensities along the scanning direction (see Equation 48 in the "S(T)EM noise model" section). If the dwell time decreases, these image artifacts become more pronounced, as shown in figure 7 (g). While the manufacturer recommends using dwell times larger than \(0.5\mu s\) to avoid image artifacts, correctly modelling fast scan distortion allows us to fully compensate for these artifacts, as shown in figures 7 (k) and (l). The study of beam-sensitive materials and dynamic imaging will greatly benefit from the compensation of this distortion. Figure 7 (c) shows a registered STEM image that contains interpolation noise. The interpolation process changes the dominant noise distribution, which can impact the restoration process, especially at Figure 5: Quantification results for a spherical Pt nanoparticle with a diameter of 11 unit cells in [100] orientation. The values are based on all 333 atomic columns for 100 noise realisations. (a) The mean absolute error of the estimated atomic column positions. (b) The mean absolute percentage error of the fitted scattering cross sections, which are being used to estimate atom counts in each column. (c) The fraction of atomic columns with correctly estimated atom counts. Figure 6: Experimental image restoration for various microscopy modalities. The top row illustrates the raw experimental images, while the bottom row displays the restored versions. Images (a), (b), (c), and (d) were obtained from reference [49], and images (e) and (f) were sourced from reference [50]. low doses, as shown in Figure 7 (h) where some atomic columns appear blurred. However, this issue can be addressed by including this type of noise in the training dataset, as explained in the "Methods" section. 
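The fast-scan smearing discussed above (the paper's Eqs. (47) and (48) are not reproduced in this excerpt) can be mimicked qualitatively by a causal first-order low-pass response along each scan line, as in the toy sketch below; this filter and its time constant are assumptions made purely for illustration, not the model used for training.

```python
import numpy as np

def toy_fast_scan_smearing(image: np.ndarray, dwell_time_us: float,
                           tau_us: float = 0.3) -> np.ndarray:
    """Rough stand-in for fast-scan smearing: a causal first-order low-pass
    (detector/electronics lag with time constant tau_us) applied along each
    scan line.  Shorter dwell times give stronger smearing."""
    alpha = 1.0 - np.exp(-dwell_time_us / tau_us)   # per-pixel update weight
    out = np.empty_like(image, dtype=float)
    for row in range(image.shape[0]):
        acc = image[row, 0]
        for col in range(image.shape[1]):
            acc = alpha * image[row, col] + (1.0 - alpha) * acc
            out[row, col] = acc
    return out

if __name__ == "__main__":
    img = np.zeros((4, 16)); img[:, 8] = 1.0         # a single bright column
    print(np.round(toy_fast_scan_smearing(img, dwell_time_us=0.05)[0], 2))  # smeared
    print(np.round(toy_fast_scan_smearing(img, dwell_time_us=2.0)[0], 2))   # ~clean
```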
The effect of including this noise in the training dataset on the restored image can be seen in figure 7 m), where all atomic columns become clearly visible. Figure 7 (d) exhibits a STEM image with strong Y-jitter distortion. The impact of an incorrect range of values for this distortion during data generation on the restored image can be seen in figure 7 (i), where some atomic columns appear split. After retraining the data with newly generated data containing the proper range of Y-jitter distortion, the neural network can correctly compensate for this image artifact, as shown in figure 7 (n). In Figure 7 (e), an experimental STEM image of a nanoparticle taken using a gas cell holder is shown [51]. The dominant sources of noise in this image are detector noise and fast scan noise. Figure 7 (j) shows a restored STEM image produced by our network architecture that was trained using a dataset generated with Poisson noise as the only source of STEM detector noise (as described by Equation 45 in the "S(T)EM noise model" section). However, this restored image exhibits strong artifacts despite using an accurate model for fast scan noise (as described by Equation 47 in the "S(T)EM noise model" section). After retraining our network architecture with a new dataset that includes the correct STEM detector noise (as described by Equation 46 in the "S(T)EM noise model" section), the restored image in Figure 7 (o) shows a significant reduction in artifacts. Nonetheless, it is worth mentioning that some of the remaining artifacts in the image could be attributed to other sources of distortion not accounted for in our data modelling, such as the gas holder effect, charging artifacts, and residual electronic noise. Another example that highlights the importance of properly modeling noise and distortion sources can be seen in Figure 8. In this figure, we compare the reconstruction performance of our CNN, AtomSegNet [33], and Noise2Void-NN (N2V) [53], which was retrained on the presented experimental image itself. The sample is a \(BaHfO_{3}\) nanoparticle (figure 8-3) embedded in a superconducting \(REBa_{2}Cu_{3}O_{7-\delta}\) (REBCO) matrix[54, 55] (figure 8-3), which was grown on a \(SrTiO_{3}\) substrate (figure 8-3). While all three networks successfully remove the noise from the image, there are notable differences in the reconstruction results. In region 1, the N2V reconstruction recovers all the weaker intensities of the \(Ti+O\) columns to some degree, which Figure 7: Raw STEM images alongside the results of a restoration process employing inaccurate and accurate models of the noise. The top row shows the original STEM images, while the second and third rows show the restored versions of the images trained with distorted data based on inaccurate and accurate noise models, respectively. Images (a)-(c) were obtained from our experimental datasets, whereas (d) and (e) were obtained from references [52] and [51], respectively. is not the case for the AtomSegNet reconstruction. There, some of the columns blur or even disappear. Our CNN reliably recovers all atomic columns with superior contrast to the other two methods. Similar improvements are evident also in region 2 but most notably in region 3. This region at the top of the image is also degraded, presumably by either FIB damage or carbon contamination. 
In both N2V and AtomSegNet reconstructions, features tend to blur into diagonal streaks, while our CNN recovers clearly distinguishable atomic columns and, given that the \(BaHfO_{3}\) nanoparticle grew epitaxially on the \(SrTiO_{3}\) substrate, that is indeed what would be expected [56]. Considering the N2V network is a generic denoising network, the results are quite remarkable, albeit the additional training step is somewhat inconvenient from a user perspective. However, this example illustrates that the CNN presented in this work does not only benefit from the latest advances in deep learning, but also from the development of accurate, physically meaningful models of all distortions specific to HAADF-STEM. This CNN is shown to be accurate, not only in perceived contrast enhancement, but also in a quantitative way which boosts the accuracy and precision of atomic structure determination in ADF-STEM studies. Figure 8: Comparison of different CNN-restoration approaches on an experimental HAADF-STEM dataset of a \(BaHfO_{3}\) nanoparticle (4) embedded in a superconducting \(REBa_{2}Cu_{3}O_{7-\delta}\) (REBCO) matrix (2), which was epitaxially grown on a \(SrTiO_{3}\) substrate(1). Images were acquired on a non-probe-corrected Titan microscope with 300 keV at KIT Karlsruhe. The data is descibed in detail in references [54] and [55] ## Methods In single-shot EM image restoration, the goal is to estimate an undistorted image \(y\) from a distorted image \(x\). To achieve this, we train a generator \(G\) using a deep neural network approach, which learns to estimate the corresponding undistorted image \(y\) for a given input \(x\). During the training procedure, a loss function is minimised to evaluate the quality of the results. Traditionally, pixel-wise losses such as \(\mathcal{L}_{1}\) or \(\mathcal{L}_{2}\) have been used to obtain quantitative results for the image restoration problem [57]. However, these losses often lead to blurred images that do not look realistic. To address this, we propose a conditional generative adversarial network (cGAN) that trains both a generator and a discriminator. The generator \(G\) maps the distorted image \(x\) to the undistorted image \(y_{g}=G(x)\), and the discriminator is trained to differentiate between real and generated images [58]. We use pixel-wise losses to ensure quantitative results while restricting the GAN discriminator to model high-frequency details, resulting in sharper and more realistic restored images. Our training is supervised, which requires input pairs of distorted and undistorted EM images. However, in practice, we only have access to distorted EM data. To overcome this, we can partially address the problem by collecting time-series EM images and using an average procedure based on rigid and non-rigid registration to generate an undistorted image. However, the combination of high-speed scans, jitter, and low-dose leads to highly correlated distortions [13]. Furthermore, long exposure to the electron beam can result in charging, beam damage, atom hopping and rotation of the specimen under study, which can further hamper the average procedure. Therefore, the only solution is to train the GAN using synthetic pairs of undistorted/distorted EM images. ### Network architecture A GAN [59] is a powerful framework that encourages predictions to be realistic and thus to be close to the undistorted data distribution. A GAN consists of a generator (G) and discriminator (D) playing an adversarial game. 
A generator learns to produce output that looks realistic to the discriminator, while a discriminator learns to distinguish between real and generated data. The models are trained together in an adversarial manner such that improvements in the discriminator come at the cost of a reduced capability of the generator and vice versa. The GAN involves the generation of conditional data, which is fed to the generator and/or the discriminator [35]. The generator and discriminator architectures proposed here are adapted from those described in [60] and [58], respectively. The details of these architectures are discussed in the following sections. **Generator architecture** Our generator architecture, called Concatenated Grouped Residual Dense Network (CGRDN), is shown in Fig. 9. This network architecture is an extension of the GRDN for image denoising [23], which was ranked first for real image denoising in terms of the PSNR and the structure similarity index measure in the NTIRE2019 Image Denoising Challenge [61]. The GRDB architecture is shown in Fig. 9(a). The building module of this architecture is the residual dense block (RDB) [60], which is shown in Fig. 9(b). The original GRDN architecture can be conceptually divided into three parts. The first part consists of a convolutional layer followed by a downsampling layer based on a convolutional stride, the middle part is built by cascading GRDBs and the last part consists of an upsampling layer based on transposed convolution followed by a convolutional block attention module (CBAM) [62] and a convolutional layer. The GRDN also includes the global residual connection between the input and the last convolutional layer. In the original version of the GRDN [23], residual connections are applied in three different levels (global residual connection, semi-global residual connection in GRDB, and local residual connection in each RDB). However, in the version submitted for the NTIRE2019 Image Denoising Challenge [61], residual connections for every 2 GRDBs were included. Although it has been demonstrated that one architecture developed for a certain image restoration task also performs well for other restoration tasks [60, 63, 58, 64], an architecture for a given task will be data dependent. When applied to EM data, we found out that 2 modifications of GRDN are necessary in order to best handle the nature of our data, which involves different types and levels of distortions with high correlation between pixels: 1. The cascading of the GRDN is replaced by feature concatenation, feature fusion, and a semiglobal residual connection. This allows us to exploit hierarchical features in a global way, which is important for highly correlated pixels that extend over a large area of the image. 2. The CBAM, which is included in [60] is removed from our network. The reason for this is the use of large image sizes (256x256) for training, which reduces its gain [23]. ### Discriminator architecture The purpose of the discriminator network is to judge the quality of the output data resulting from the generator network. For our discriminator, we use the 70x70 convolutional patch discriminator described in [58] with some minor modifications. The zero-padding layers were removed and batch normalization layers [29] were replaced by instance normalization layers (IN) [65]. Figure 10 shows the structure of the discriminator network. The result of the network is the non-transformed output \(C(y)\) or \(C(y_{g})\) of dimensions \(32x32\). 
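A minimal PyTorch sketch of a patch discriminator of this kind (strided convolutions, instance normalization, and a final \(32\times 32\) map of non-transformed patch scores for a \(256\times 256\) input) is given below; the channel widths, kernel sizes and number of layers are assumptions and do not reproduce the exact configuration of Fig. 10.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Minimal patch discriminator in the spirit of Fig. 10: strided
    convolutions with instance normalization and leaky ReLUs, ending in a
    1-channel map of patch scores C(y).  Layer sizes are illustrative."""
    def __init__(self, in_channels: int = 1, base: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, base * 4, 4, stride=2, padding=1),
            nn.InstanceNorm2d(base * 4),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 4, 1, 3, stride=1, padding=1),  # non-transformed scores
        )

    def forward(self, y: torch.Tensor) -> torch.Tensor:
        return self.net(y)

if __name__ == "__main__":
    d = PatchDiscriminator()
    scores = d(torch.randn(1, 1, 256, 256))
    print(scores.shape)   # torch.Size([1, 1, 32, 32]) patch scores
```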
Some benefits of the discriminator architecture shown in Fig. 10 include that it is fully convolutional and it only penalizes structure at the scale of image patches. Furthermore, we enhance our discriminator based on the relativistic GAN, which has been shown to improve the data quality and stability of GANs at no computational cost [66]. Different from the standard discriminator, which estimates the probability that input data is real, a relativistic discriminator predicts the probability that real data \(y\) is relatively more realistic than generated data \(y_{g}=G(x)\). If we denote our relativistic average patch discriminator as \(D_{Rap}(x)\), then the output of the discriminator can be written as: \[D_{Rap}\left(y,y_{g}\right)= \sigma\left(C(y)-\mathbb{E}_{y_{g}}\left\{C(y_{g})\right\}\right) \tag{2}\] \[D_{Rap}\left(y_{g},y\right)= \sigma\left(C(y_{g})-\mathbb{E}_{y}\left\{C(y)\right\}\right) \tag{3}\] where \(\sigma\) is the sigmoid function and \(\mathbb{E}_{x_{1},...,x_{n}}\left\{.\right\}\) is an operator representing the expectation value computed on the variables \(x_{1},...x_{n}\). In the next section, these functions will be used in the definition of the loss functions. ### Loss function The loss function is the effective driver of the network's learning. Its goal is to map a set of parameter values of the network onto a scalar value, which allows candidate solutions to be ranked and compared. In our case, the discriminator and adversarial losses are based on the relativistic average GAN loss defined in [66]. We design our generator loss function as a sum of different contributions in such a manner that it keeps the quantitative information of the image at the pixel level and produces perceptually correct and realistic images. The different contributions of these loss functions are described in the following sections. \(\mathcal{L}_{1}\) **loss** Pixel-wise losses are advantageous to keep quantitative information of the ground truth image. In this work, we used the Figure 10: Patch discriminator architecture. Figure 9: Concatenated Grouped Residual Dense Network (CGRDN) architecture for EM image restoration. (a) Overall architecture, (b) GRDB architecture used in (a), (c) RDB architecture used in (b). loss, which as compared to the \(\mathcal{L}_{2}\) loss yields less blurred results [57]. The \(\mathcal{L}_{1}\) loss can be written as: \[\mathcal{L}_{1} = \mathbb{E}_{y,y_{g}}\left\{w_{y}\left\|y-y_{g}\right\|\right\}, \tag{4}\] \[w_{y} = 1/\max\left(\sigma_{\min},\text{Std}_{y}\left\{y\right\}\right) \tag{5}\] where \(w_{y}\) is a weighting factor that gives equal importance to each example regardless of its contrast, \(\sigma_{\min}\) is a small value to limit the maximum scaling factor, and \(\text{Std}_{x_{1},...x_{n}}\left\{.\right\}\) is an operator that represents the standard deviation calculated on the variables \(x_{1},...x_{n}\). \(\mathcal{L}_{2}\) **loss** Due to the design of our architecture, which is learning the residual difference between the distorted and undistorted image and based on the fact that distorted images can have few outliers in the distribution of pixel intensities (i.e. X-rays hitting the EM detector, saturation of the detector, low dose and dead-pixels), the output of the generator will show a strong correlation at those pixel positions. 
For this reason, we also used the \(\mathcal{L}_{2}\) loss, which strongly penalizes outliers: \[\mathcal{L}_{2}=\mathbb{E}_{y,y_{g}}\left\{w_{y}\left\|y-y_{g}\right\|^{2}\right\} \tag{6}\] **Multi-local whitening transform loss** Local contrast normalisation (LCN) is a method that normalises the image on local patches on a pixel basis [67]. A special case of this method is the whitening transform, which is obtained by subtracting the mean of a neighbourhood around a particular pixel and dividing by its standard deviation: \[y_{ij}^{S}=\left(y_{ij}-\mathbb{E}_{\hat{S}}\left\{y_{i,j}\right\}\right)/\max\left(\sigma_{\min},\text{Std}_{\hat{S}}\left\{y_{i,j}\right\}\right), \tag{7}\] where \(\hat{S}\) is a local neighbourhood around the pixel \(i,j\) of window size \(S\). The whitening transform makes the image patches less correlated with each other and can highlight image features that were hidden in the raw image due to its low local contrast. This effect can be seen in Fig. 11(a), which shows a simulated ADF-STEM image of a random nanoparticle on a carbon support. The edge of the nanoparticle shows low contrast due to its reduced thickness, resulting in lower intensity values. Based on this observation, we introduce a multi-local whitening transform (MLWT) loss which pays more attention to fine details independently of the intensity value. Specifically, the generated and the ground truth images are whitening transformed locally using different window sizes of \(2x2\), \(4x4\), \(8x8\), and \(16x16\) pixels. Using different window sizes for the calculation of the whitening transform, we ensure that the relevant features present in the image are highlighted independently of their pixel size. Figs. 11(b)-(e) show an enhancement of the edge of the nanoparticle as well as the carbon support after applying the whitening transform to Fig. 11(a) by using different window sizes. Then, we calculate the average \(\mathcal{L}_{1}\) loss for these 4 images: \[\mathcal{L}_{mlwt}=\frac{1}{4}\sum_{S=2,4,8,16}\mathbb{E}_{y^{S},y^{S}_{g}}\left\{\left\|y^{S}-y^{S}_{g}\right\|\right\}. \tag{8}\] Figure 11: (a) Undistorted ADF STEM image of a nanoparticle on a carbon support. Images (b)-(e) are generated by applying the whitening transform to (a) using different window sizes of (b) 2, (c) 4, (d) 8 and (e) 16. **Fourier space loss** In electron microscopy, Fourier space contains crucial information about the sample and any distortions that may be difficult to discern in real space. To address this issue, we introduce the \(\mathcal{L}_{fs-\gamma}\) loss, based on the 2D Fourier transform of the difference between the generated data \(y_{g}\) and the ground truth image \(y\). Nevertheless, it is noted that high-frequency information typically possesses smaller values than low-frequency information. Consequently, to accentuate the high-frequency information, we apply a power transform to the aforementioned difference and define the loss function as follows: \[\mathcal{L}_{fs-\gamma}=\mathbb{E}_{y,y_{g}}\left\{\left|\mathcal{F}(y-y_{g})\right|^{\gamma}\right\}, \tag{9}\] Here, \(\mathcal{F}\) symbolises the 2D Fourier transform, and \(\gamma\) is a parameter in the range \((0.0,1.0]\). In our investigation, we utilise \(\gamma=0.125\). **Constraint losses** Some important parameters for EM quantification are the total intensity and the standard deviation of the images.
The reason for this is that they carry information about physical quantities of the sample or microscope, such as the number of atoms, defocus and spatial and temporal incoherence [68, 69]. Therefore, we encourage that the restored images have to minimize the above quantities, resulting in the following two loss functions: \[\mathcal{L}_{mean} = \left\|\mathbb{E}_{y}\left\{y\right\}-\mathbb{E}_{y_{g}}\left\{ y_{g}\right\}\right\|, \tag{10}\] \[\mathcal{L}_{std} = \left\|\text{Std}_{y}\left\{y\right\}-\text{Std}_{y_{g}}\left\{ y_{g}\right\}\right\|. \tag{11}\] **Adversarial loss** The job of the relativistic adversarial loss is to fool the discriminator which can be expressed as: \[\mathcal{L}_{Adv}=-\mathbb{E}_{x,y}\left\{\log\left(1-D_{Rap}(y,y_{g})\right) \right\}-\mathbb{E}_{\gamma_{g}}\left\{\log\left(D_{Rap}(y_{g},y)\right) \right\}, \tag{12}\] with \(D_{Rap}(y,y_{g})\) and \(D_{Rap}(y_{g},y)\) defined in equations 2 and 3, respectively. This definition is based on the binary cross entropy between the ground truth and the generated images. Different from the conventional adversarial loss, in which \(y\) is not used, our generator benefits from \(y\) and \(y_{g}\) in the adversarial training. **Generator loss** Our total generator loss function can be written as: \[\mathcal{L}_{G} = \mathcal{L}_{pixel-wise}+\lambda_{Adv}\mathcal{L}_{Adv}, \tag{13}\] \[\mathcal{L}_{pixel-wise} = \lambda_{1}\mathcal{L}_{1}+\lambda_{2}\mathcal{L}_{2}+\lambda_{ mult}\mathcal{L}_{mult}+\lambda_{fs-\gamma}\mathcal{L}_{fs-\gamma}+\lambda_{ mean}\mathcal{L}_{mean}+\lambda_{std}\mathcal{L}_{std}, \tag{14}\] where \(\mathcal{L}_{pixel-wise}\) is our pixel-wise loss function, \(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{mult}\), \(\lambda_{fs-\gamma}\), \(\lambda_{mean}\), \(\lambda_{std}\) and \(\lambda_{Adv}\) are the weighting parameters to balance the different loss terms. **Discriminator loss** Symmetrically to the relativistic adversarial loss, the relativistic discriminator is trying to predict the probability that real data is relatively more realistic than generated data, and it can be expressed as: \[\mathcal{L}_{D}=-\mathbb{E}_{x,y}\left\{\log\left(D_{CRap}(x,y,y_{g})\right) \right\}-\mathbb{E}_{x,y_{g}}\left\{\log\left(1-D_{CRap}(x,y_{g},y)\right) \right\}. \tag{15}\] ### Data generation While it is possible to fully describe the electron-specimen interaction and image formation in an electron microscope, generating realistic EM image simulations for specimens on a support with sizes of a few nanometers is too time-consuming even with the most powerful GPU implementations of the multislice method [42, 43]. However, our goal is to train a neural network to correct EM distortions without the need to know the specific specimen or microscope settings. Therefore, we only need to generate undistorted images that closely mimic the appearance of real EM data, while the EM distortions must be accurately modelled. The generated undistorted images should also include physical parameters of the specimen and microscope settings, such as atomic sizes, atomic distances, atomic vibrations, lattice parameters, and relative intensities of atomic species, as well as acceleration voltage, aberrations, magnification, detector sensitivity, detector angles, and the transfer function of the detection system. #### Specim generation In order to optimise the simulation process, we generate a specimen that fully covers the extended simulated box size \(\hat{I}^{e}_{xyz}\). 
This is an expanded version of the required simulation box size \(\hat{I}_{xyz}\). The calculation of \(\hat{I}_{xyz}\) starts by randomly selecting a pixel size \(\text{d}r\) within the range \([0.025,0.90]\text{A}\). By using the required image size \((n_{x},n_{y})\), \(n_{z}=\max(n_{x},n_{y})\) and \(\text{d}r\), the required simulation box size can be expressed as \(\hat{I}_{xyz}=\{n_{x}\text{d}r,n_{y}\text{d}r,n_{z}\text{d}r\}\). From these values, an extended number of pixels \(n^{e}_{i}=n_{i}+round(d_{ext}/dr)\) and an extended simulation box size \(\hat{I}^{e}_{xyz}=\{n^{e}_{i}\text{d}r,n^{e}_{i}\text{d}r,n^{e}_{i}\text{d}r\}\) are obtained, where \(d_{ext}\) is the maximum correlation distance for a given value of scanning distortions. The specimen generation is divided in 3 steps. The first step of specimen generation involves randomly selecting a specimen type from the following options: crystalline specimen, amorphous specimen, or individual points. If the selected specimen is crystalline, the generation process starts by randomly choosing up to 16 unique atomic types with atomic number \(Z\) in the range \([1,103]\). The crystallographic space group is randomly chosen from a range \([1,230]\). The lattice parameters and the angles of the chosen space group are selected randomly from a range \([3.1,25.0]\)A and \([45^{\circ},120^{\circ}]\), respectively. Atomic positions of the asymmetric unit cells are generated randomly within the volume that is allowed by their space-group symmetry. This specimen generation process is subject to a physical constraint: after applying the space group symmetry to the atomic positions on the asymmetric unit cells, the minimum distance between the atoms in the unit cell must be within the range \([0.95,7.0]\)A. If this requirement is not met, the generation process is restarted. The generation of amorphous specimens is based on randomly choosing only one atomic number \(Z\) from the range \([1,103]\). The atomic positions of amorphous specimens are generated by randomly placing atoms within the extended simulation box, subject to the requirement that the minimum distance between atoms is within the range \([0.95,1.6]\)A. This process continues until the desired density within the range \([2.0,7.0]g/cm^{3}\) is achieved. In contrast, the generation of individual points starts by randomly choosing a number of points within a given range of positive integers. The 3D positions of the particles are then generated randomly within the extended simulation box, subject to the requirement that the minimum distance between particles is within the range \([1,20]dr\). This option is also used to generate low-resolution images. The second step begins by randomly choosing between a specimen orientation along the zone axis or a random orientation. The probability of choosing a zone axis orientation is 0.75. If the specimen is crystalline, the zone axis orientation is randomly chosen from the first eight main zone axes, and a small random mistilt angle is generated for the chosen orientation using a normally distributed random number with a standard deviation of \(5^{\circ}\). For non-crystalline specimens, a random 3D orientation is generated. To prevent alignment of crystalline specimens along the \(xy\) directions, an additional random rotation is applied along the \(z\) axis. For a given generated orientation, the specimen is oriented and cropped in the \(xy\) plane so that it fits within the extended simulated box. 
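To make this orientation step explicit, a compressed sketch is given below; the set standing in for the "first eight main zone axes" and the angle conventions are placeholders of ours, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng()

def random_orientation(crystalline):
    """Sketch of the second specimen-generation step: orientation choice."""
    # placeholder list standing in for the "first eight main zone axes"
    zone_axes = [(0, 0, 1), (0, 1, 1), (1, 1, 1), (0, 1, 2),
                 (1, 1, 2), (0, 1, 3), (1, 2, 2), (1, 1, 3)]
    if crystalline and rng.random() < 0.75:          # zone-axis orientation
        axis = np.array(zone_axes[rng.integers(len(zone_axes))], float)
        mistilt_deg = rng.normal(0.0, 5.0)           # small random mistilt
    else:                                            # random 3D orientation
        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)
        mistilt_deg = 0.0
    z_rotation_deg = rng.uniform(0.0, 360.0)         # avoid xy alignment
    return axis, mistilt_deg, z_rotation_deg
```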
This is followed by a random generation of a wedge on the specimen with a probability of 0.75. The wedge can be generated on the top, bottom, or both surfaces of the specimen, each with a probability of occurrence of 0.33. The wedge orientation is generated randomly in the \(xy\) plane, and its angle is chosen randomly from the range \([5^{\circ},45^{\circ}]\). Shapes can be applied to the specimen with a probability of 0.5. To avoid any preference for the three different types of shapes, the probability of occurrence for each type is set to 0.33. The first type of shape is a polygon rod, for which the number of cross-section vertices sliced along its length is randomly chosen from the range \([3,15]\). The rod is also placed and oriented randomly. The radius of the polygon is chosen randomly from the range \([0.01,0.5]\max(\hat{l}_{xyz})\). The second shape is a convex polyhedron, for which the radius and the number of vertices are chosen randomly from the ranges \([0.01,0.5]\max(\hat{l}_{xyz})\) and \([4,20]\), respectively. The third shape is a hard shape, in which all atoms on one side of a randomly generated \(3d\) plane parallel to the \(z\) orientation are removed. The application of a chosen shape can be used to either remove or keep the atoms of the specimen, with a probability of keeping the atoms of 0.5. Defects are generated randomly with a probability of 0.8. The process starts by randomly selecting a number of atoms, \(n_{sel}\), within the specimen. This number is chosen randomly from the range \([0,n_{max}]\), where \(n_{max}\) is equal to the number of atoms in the specimen multiplied by 0.25 and rounded to the nearest whole number. The positions of the selected atoms are randomly changed with a probability of 0.5. This is done by adding a normally distributed random number with a standard deviation equal to the atomic radius to the position of each selected atom. The final step of specimen generation adds a support layer with a probability of 0.95. The support layer can be either crystalline or amorphous, each with a probability of 0.5. The thickness of the support layer is chosen randomly from the range \([1,30]\)nm. The process described above for crystalline and amorphous specimen generation is used for the support layer, with the exception of shape generation. Finally, the generated atoms are added to the specimen. _Undistorted data generation_ **High/medium resolution electron microscopy data** can be synthesized as a linear superposition of the projected signal of each atom in the specimen at a given orientation. Moreover, each projected atomic signal can be modelled as a two-dimensional radial symmetric function, \(f_{Z}^{i}(r)\), where the index \(i\) refers to an atom with atomic number \(Z\) in the specimen. Under this assumption, \(y\) can be expressed as: \[y=\sum_{Z}\sum_{i}f_{Z}^{i}(|\mathbf{r}-\mathbf{r}_{i}|), \tag{16}\] where \(\mathbf{r}\) is a two-dimensional vector. Additionally, we model \(f_{Z}(r)\) for each atom with atomic number \(Z\) as a weighted sum of Gaussian, Exponential, and Butterworth functions: \[f_{Z}(r)=h_{1}e^{-\frac{r^{2}}{2(r)}^{2}}+h_{2}e^{-\frac{r}{Z}}+\frac{h_{3}}{ 1+(r/r_{Z}^{m})^{2n}}, \tag{17}\] where \(h_{1}\), \(h_{2}\), \(h_{3}\), \(n\) and \(r_{m}\) are the parameters of our model which are restricted to positive values. This parameterization has 3 benefits. First, it accurately models almost any simulated/experimental incoherent EM image. Second, it allows for an easy inclusion of physical constraints. 
Third, it only requires 5 parameters. To allow realistic tails of \(f_{Z}(r)\), we constrain \(n\) to be a uniform random variable between \([4.0,16.0]\). We would also like to emphasize that all numerical ranges for the data generation were fine-tuned based on analyzing around 2000 real simulations of (S)TEM images for different specimens and microscope settings. In order to encode physical information into this model, \(r_{Z}^{m}\) is chosen proportionally to the transformed two-dimensional mean square radius, \(\hat{r}_{Z}\), of the projected atomic potential, \(V_{Z}^{p}(r)\)[70]: \[r_{Z}^{m}=a\times\left(\hat{r}_{Z}\right)^{\alpha}+b \tag{18}\] where \[a = \mathrm{Std}_{Z}\left\{\hat{r}_{Z}\right\}/\mathrm{Std}_{Z}\left\{ \left(\hat{r}_{Z}\right)^{\alpha}\right\}, \tag{19}\] \[b = \mathbb{E}_{Z}\left\{\hat{r}_{Z}\right\}-a\times\mathbb{E}_{Z} \left\{\left(\hat{r}_{Z}\right)^{\alpha}\right\},\] (20) \[\hat{r}_{Z} = \left[\frac{\int_{0}^{\infty}r^{2}V_{Z}^{p}(r)r\mathrm{d}r}{\int _{0}^{\infty}V_{Z}^{p}(r)r\mathrm{d}r}\right]^{1/2} \tag{21}\] and \(\alpha\) is a uniform random variable between \([0.75,1.25]\). On the other hand, the linear coefficients \(h_{1}\), \(h_{2}\) and \(h_{3}\) are randomly chosen within the range \([0.5,1.0]\) with the following constraint: \[\int f_{Z_{i}}(r)dr>\int f_{Z_{j}}(r)dr,\text{ if }Z_{i}>Z_{j} \tag{22}\] where \(Z_{i}\) and \(Z_{j}\) are the atomic numbers of two elements of the specimen. This constraint arises from the fact that the integrated intensity of quasi-incoherently scattered electrons of a given atomic number is proportional to \(Z^{\prime}\), in which \(\gamma\) is a real number between \(1.0\) and \(2.0\) depending on the microscope settings [71]. The process of **generating low-resolution images** begins by randomly choosing a set of low-resolution image types from the following options: soft particles, sharp particles, grains, bands, boxes, and cracks. This stage uses the specimen type "individual points" to generate random positions where different objects will be placed. Finally, the low-resolution image is obtained by linearly superimposing these individual objects. The generation of soft particles starts by randomly choosing a number of particles in the range \([15,85]\). Each soft particle image is generated by randomly rotating the asymmetric version of Eq. 17, where \(r_{Z}^{m}=(r_{Z}^{m_{x}},r_{Z}^{m_{y}})\) and \(r_{Z}^{m_{y}}=\alpha r_{Z}^{m_{x}}\), with \(\alpha\) a random variable in the range \([0.8,1.2]\). In the case of sharp particles, there is a sharp transition between the border and background of the particle, and the particle can be either polygonal or elliptical with equal probabilities of occurrence. The process starts by randomly choosing a number of particles in the range \([15,40]\). For the polygon option, the number of vertices is randomly chosen in the range \([3,5]\). Each sharp particle image is generated by masking a 3D random positive plane intensity with its randomly rotated shape. This masking creates an intensity gradient over the \(x-y\) plane such that the object does not appear flat. Grain generation in \(2D\) is performed using the Voronoi tessellation method [72], which is one of the available techniques for producing random polygonal grains within a domain. This process starts by randomly selecting a number of points within the range \([15,175]\). Each grain image is created by masking a 3D random positive plane with its corresponding Voronoi cell. 
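Because assigning each pixel to its nearest seed point is exactly a Voronoi tessellation, these grain images reduce to a short NumPy sketch; the image size, seed handling and plane parameterisation below are our own choices.

```python
import numpy as np

rng = np.random.default_rng()

def grain_image(n_x=128, n_y=128):
    """Sketch of the Voronoi-grain images: each cell is filled with its own
    random positive intensity plane so the grains do not appear flat."""
    n_grains = rng.integers(15, 176)                       # range [15, 175]
    seeds = rng.uniform(0.0, [n_x, n_y], size=(n_grains, 2))
    yy, xx = np.meshgrid(np.arange(n_y), np.arange(n_x), indexing="ij")
    pix = np.stack([xx.ravel(), yy.ravel()], axis=1)
    d2 = ((pix[:, None, :] - seeds[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(axis=1).reshape(n_y, n_x)           # nearest seed = Voronoi cell
    img = np.zeros((n_y, n_x))
    for k in range(n_grains):
        a, b, c = rng.uniform(0.0, 1.0, 3)                 # random positive plane
        plane = (a * xx + b * yy) / (n_x + n_y) + c
        img[labels == k] = plane[labels == k]
    return img
```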
Additionally, the grain borderline is included with a probability of occurrence of 0.5, where its intensity value is randomly assigned within the range \([0.5,1.5]\times\mathrm{mean}(\mathrm{grain\ intensity})\). EM images may exhibit contrast inversion related to the projected specimen, which can be easily simulated by inverting the image: \[y\leftarrow\mathrm{max}(y)-\mathrm{y}. \tag{23}\] The probability of this mechanism occurring was set to 0.5. To introduce non-linear dependence between the generated image intensity and the projected specimen's structure, \(y\) is non-linearly transformed with a probability of occurrence of 0.5: \[y\leftarrow|y|^{\beta} \tag{24}\] where \(\beta\) is a uniform random number selected from the range \([0.5,1.5]\). To further break this linearity, a random background was added to \(y\). The background is randomly chosen between a 3D plane and a Gaussian, with an occurrence probability of 0.5 for each. In the first case, a randomly orientated positive 3D plane is generated with a random height between \([0,\mathrm{max}(y)/2]\). In the second case, the Gaussian centre and its standard deviation are randomly chosen within the range of the \(xy\) simulation box size and \([0.2,0.6]\times\min(n_{x},n_{y})\), respectively. From the analysis of the experimental and simulated data, we found that the ratio \(r_{std/mean}=\text{Std}\left\{y\right\}/\mathbb{E}\left\{y\right\}\) is between \([0.01,0.35]\). Therefore, if the EM image does not fulfill the latter constraint, then it is linearly transformed as: \[y\gets cy+d \tag{25}\] where \(c\) and \(d\) are chosen to bring \(r_{std/mean}\) within the range of the constraint. Finally, the EM image is normalized through dividing by its maximum value. \[y\leftarrow\frac{y}{\max(y)} \tag{26}\] Note that the correct parameterization of the model and the randomness of its parameters are subject to physical constraints allowing to encode information in the generated high/medium resolution EM image of the atomic size, atomic vibration, relative intensities between atomic species, detector angle, acceleration voltage, aberrations and/or detector sensitivity. #### TEM noise model The TEM noise model is based on the fact that TEM images are recorded using parallel illumination, and that most signal acquisitions for electrons are set up so that the detector output is directly proportional to the time-averaged flux of electrons reaching the detector. In case of TEM, the electrons are detected indirectly using a charge coupled device (CCD) sensor [73] or a complementary metal oxide semiconductor (CMOS) sensor [74], or directly using a direct electron detector [75]. For indirect detection, primary electrons are converted to photons in a scintillator, which are then directed to the CCD/CMOS sensor through a lens or fiber optic coupling. In contrast, for direct electron detectors, the CMOS sensor is directly exposed to the electron beam. **TEM camera modulation-transfer function** Scattering of incident electrons over the detector leads to the detection of electrons in multiple pixels, which can be quantitatively described using the modulation-transfer function (MTF). 
Because the effect of the MTF is to produce an isotropic smear out of features on the recorded TEM image, which in general cannot be distinguished from an undistorted TEM image recorded with other microscope settings, we embedded this effect into the undistorted TEM image by convolving it with the point-spread function (PSF), which is the Fourier transform of the MTF: \[y\gets y\otimes\text{PSF}. \tag{27}\] The MTF itself can be separated into a rotationally symmetric part, \(\text{MTF}_{r}\), describing the spread of electrons in the detector, and a part describing the convolution over the quadratic area of a single pixel. This yields the following equation: \[\text{MTF}=\text{MTF}_{r}\operatorname{sinc}(\pi u/2)\operatorname{sinc}(\pi v /2), \tag{28}\] where the Fourier space coordinates \((u,v)\) are defined in units of the Nyquist frequency [76]. Furthermore, we found that the general shape of \(\text{MTF}_{r}\) can be expressed parametrically as: \[\text{MTF}_{r}=ae^{-\frac{x^{2}}{2b^{2}}}+(1-a)e^{-\frac{x^{2}}{2c^{2}}}, \tag{29}\] where \(a\), \(b\) and \(c\) are positive real numbers. These numbers were randomly generated until they fulfill the constraint that on a numerical grid of 1000 points with a length of 10 units of the Nyquist frequency, the \(\text{MTF}_{r}\) is a positive and monotonically decreasing function. **TEM detector noise** TEM detectors are subject to three main sources of noise: shot noise, dark current noise, and readout noise. These noise sources can be classified into two types: temporal and spatial noise. Temporal noise can be reduced by frame averaging, whereas spatial noise cannot. However, some spatial noise can be mitigated by using techniques such as frame subtraction or gain/offset correction. Examples of temporal noise discussed in this document include shot noise, reset noise, output amplifier noise, and dark current shot noise. Spatial noise sources include photoresponse non-uniformity and dark current non-uniformity. Each of these noise sources can lower the SNR of a sensor imaging device. **Photon shot noise** After the initial conversion of the incident electron to its photon counterpart, the generated photons will hit the photosensor pixel area, liberating photo-electrons proportional to the light intensity. Due to the quantum nature of light, there is an intrinsic uncertainty arising from random fluctuations when photons are collected by the photosensor. This uncertainty is described by the Poisson process \(\mathbb{P}\) with mean \(\alpha x\), where \(\alpha\) is a dose scale factor. The distribution of \(\alpha\) is exponential, with a scale parameter of \(0.5\) and a range \([0.5,750]/\mathbb{E}\{y\}\). The use of the exponential distribution yields higher probabilities for the generation of images at lower doses which is the focus of our research. The division by \(\alpha\) in the equation below brings \(x\) back to its original range: \[x\leftarrow\frac{P(\alpha x)}{\alpha} \tag{30}\] **Fixed-pattern noise** Fixed-pattern noise (FPN) is a pixel gain mismatch caused by spatial variations in the thickness of the scintillator, fiber-optic coupling, substrate material, CCD bias pattern, and other artifacts that produce variations in the pixel-to-pixel sensitivity and/or distortions in the optical path to the CCD or in the CCD chip itself [77]. Since FPN is a property of the sensor, it cannot be fully eliminated. However, it can be suppressed using a flat-field correction procedure. 
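Gathering the camera-response and shot-noise steps above into code, a rough NumPy sketch of Eqs. (27)-(30) is given below; the frequency grid, the parameter ranges for \(a\), \(b\), \(c\) and the rejection loop reflect our reading of the text rather than the released implementation.

```python
import numpy as np

rng = np.random.default_rng()

def mtf_and_shot_noise(y):
    """Eqs. (27)-(30): blur by the camera MTF, then apply Poisson shot noise."""
    ny, nx = y.shape
    u = np.fft.fftfreq(nx)[None, :] * 2.0         # Fourier coords in Nyquist units
    v = np.fft.fftfreq(ny)[:, None] * 2.0
    g = np.hypot(u, v)
    # draw MTF_r parameters until it is positive and monotonically decreasing
    r = np.linspace(0.0, 10.0, 1000)
    while True:
        a = rng.uniform(0.05, 2.0)
        b, c = rng.uniform(0.05, 1.0, 2)
        mtf_r = a * np.exp(-r**2 / (2 * b**2)) + (1 - a) * np.exp(-r**2 / (2 * c**2))
        if np.all(mtf_r > 0.0) and np.all(np.diff(mtf_r) <= 0.0):
            break
    # Eq. (28): sinc(pi*u/2) in the text equals np.sinc(u/2)
    mtf = (a * np.exp(-g**2 / (2 * b**2)) + (1 - a) * np.exp(-g**2 / (2 * c**2))) \
          * np.sinc(u / 2.0) * np.sinc(v / 2.0)
    y = np.fft.ifft2(np.fft.fft2(y) * mtf).real   # convolution with the PSF, Eq. (27)
    # Eq. (30): exponential dose factor, clipped to [0.5, 750]/E{y}
    alpha = np.clip(rng.exponential(0.5), 0.5, 750.0) / max(float(y.mean()), 1e-8)
    return rng.poisson(alpha * np.clip(y, 0.0, None)) / alpha
```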
We model the remaining distortion as a normal distribution \(\mathbb{N}\) with zero mean and standard deviation \(\sigma_{fpn}\). \[x\gets x+x\mathbb{N}(0,\sigma_{fpn}) \tag{31}\] **Dark-current noise** Dark current is the result of imperfections or impurities in the depleted bulk Si or at the \(SiO_{2}/Si\) interface. These sites introduce electronic states in the forbidden gap which allows the valence electrons to jump into the conduction band and be collected in the sensor wells. This noise is independent of electron/photon-induced signal, but highly dependent on device temperature due to its thermal activation process [78]. **Dark-current nonuniformity** Dark-current nonuniformity (DCNU) arises from the fact that pixels in a hardware photosensor cannot be manufactured exactly the same and there will always be variations in the photo detector area that are spatially uncorrelated, surface defects at the \(SiO_{2}/Si\) interface, and discrete randomly-distributed charge generation centers [79]. This means that different pixels produce different amounts of dark current. This manifests itself as a fixed-pattern exposure-dependent noise and can be modelled by superimposing two distributions. The Log-Normal distribution (\(ln\mathbb{N}\)) is used for the main body and the uniform (\(\mathbb{U}\)) distribution is used for the "hot pixels" or "outliers" [80]. \[\text{DCNU}\gets ln\mathbb{N}(\mu,\sigma)+\mathbb{U}(a,b) \tag{32}\] with \(\mu\) the mean value, \(\sigma\) the standard deviation, \(a=\mu+5\sigma\), and \(b=\mu+8\sigma\). **Dark-current shot noise** Additional noise arises from the random arrival of electrons generated as part of the dark signal, which is governed by the Poisson process. To simulate a single frame, it is necessary to apply shot noise to the DCNU array. \[x\gets x+\mathbb{P}(\text{DCNU}) \tag{33}\] **Readout noise** Readout noise is temporal noise and is generally defined as the combination of the remainder circuitry noise sources between the photoreceptor and the ADC circuitry. This includes thermal noise, flicker noise and reset noise [81]. **Thermal noise** Thermal noise arises from equilibrium fluctuations of an electric current inside an electrical conductor due to the random thermal motion of the charge carriers. It is independent of illumination and occurs regardless of any applied voltage. The noise is commonly referred to as Johnson noise, Johnson-Nyquist noise, or simply white noise. It can be modelled by the normal distribution with zero mean and an appropriate standard deviation \(\sigma\)[81]. \[x\gets x+\mathbb{N}(0,\sigma) \tag{34}\] **Flicker noise** Flicker noise, also known as \(1/f\) noise or pink noise, is often caused by imperfect contacts between different materials at a junction, including metal-to-metal, metal-to-semiconductor, and semiconductor-to-semiconductor. MOSFETs are used in the construction of CMOS image sensors, which tend to exhibit higher levels of \(1/f\) noise than CCD sensors [79]. The amount of flicker noise in a CCD sensor depends on the pixel sampling rate. The equation below describes the effect of flicker noise on a signal \(x\): \[x\gets x+\mathcal{F}(\mathbb{N}(0,\sigma)/f) \tag{35}\] Here, \(\mathcal{F}\) is the two-dimensional Fourier transform, \(\sigma\) is the appropriate standard deviation, and \(f\) is the reciprocal distance. #### 3.2.2 Reset noise Before a measurement of the charge packet of each pixel is taken, the sense node capacitor of a specific row is reset to a reference voltage level. 
This causes all pixels in that row to be exposed to noise coming in through the reset line, transfer gate, or read transistor. As a result, images may have horizontal lines due to the fixed and temporal components of the noise. This type of noise, known as reset noise (RN), follows a normal distribution with mean zero and a standard deviation \(\sigma\). It can be simulated by adding a random intensity value, generated for each row, to the intensity values of all pixels in that row [80]: \[x\gets x+\mathcal{N}(0,\sigma) \tag{36}\] #### 3.2.3 Black pixel noise Black pixels are dots or small clusters of pixels on the sensor that have significantly lower response than their neighbors, resulting in black spots on the image. Some black pixels may be created during the production process of the CCD camera, while others may appear during its lifetime. Black pixels are time-invariant and will always appear at the same locations on the image. They can be modelled by generating a sensitivity mask (\(S_{\text{Black}}\)) with a spatially uniform distribution of a specified number of black points. Regions can be generated by applying a random walk process for a given number of random steps to the black point positions. The equation below describes the effect of black pixels on a signal \(x\): \[x\gets xS_{\text{Black}} \tag{37}\] #### 3.2.4 Zinger noise Zingers are spurious white dots or regions that can appear randomly in CCD images [82]. Electron-generated X-rays, cosmic rays, and muons can produce a burst of photons in the scintillator, resulting in white spots or streaks in the image. Radioactive elements (such as thorium) present in fiber-optic tapers can also cause zingers [77]. They can be modelled by generating a sensitivity mask (\(S_{\text{Zinger}}\)) with a spatially uniform distribution of a specified number of zinger points. Similar to the black pixel noise, regions can be generated by applying a random walk process for a given number of steps to the zinger point positions: \[x\gets xS_{\text{Zinger}} \tag{38}\] #### 3.2.5 Upper-clip noise Upper clip noise, also known as saturation noise, is a type of noise that occurs when the intensity value of a pixel exceeds the maximum value that the CCD sensor can detect. This causes the pixel to be "clipped" at the maximum value, resulting in an overly bright image with lost details. This type of noise can be modelled by setting a threshold value for the maximum intensity and clipping any pixel values above that threshold \(T_{u}\): \[x\leftarrow\min(x,T_{u}) \tag{39}\] #### 3.2.6 Quantisation noise To generate a digital image, the analog voltage signal read out during the last stage is quantized into discrete values using analog-to-digital conversion (ADC). This process introduces quantization noise, which can be modelled with respect to the ADC gain \(\alpha\): \[x\leftarrow\text{round}(\alpha x) \tag{40}\] Figure 12 shows simulated TEM images with different types of noise. These distortions have been randomly added to the images to mimic real TEM conditions and make it easier to identify the different types of noise. #### 3.2.7 S(T)EM noise model S(T)EM images are formed one pixel at a time by scanning a convergent electron beam along scan lines across the sample with constant stationary probing, which is known as dwell time. The dimension of each square-shaped pixel in the physical space is determined by the magnification. The scanning direction is called the fast/row scan direction. 
For conventional scan patterns, the scanning begins at the top left corner and after scanning one row of \(n\) pixels, the electron probe moves to the next row's first pixel. The time required to move the beam to the beginning of the scan line is commonly known as fly-back-time. Inaccuracies in beam positions during the scanning process give rise to characteristic scan-line/jitter distortions. Despite all technical improvements in the design of high-performance S(T)EM [3], the presence of these distortions on the recorded images still hampers the extraction of quantitative information from the sample under study [5]. #### 3.2.8 Scanning jitter distortion Scanning jitter is caused by beam instabilities while scanning a raster pattern across the sample during the image acquisition process. There are two distinguishable jitter effects: X-jitter causes random pixel shifts along the fast-scan direction, while Y-jitter causes stretching or squishing of scan lines or line interchanges along the slow-scan direction [11]. Although these displacements are not completely random due to serial acquisition, they depend on the previous scan position. Realistic modelling of scanning jitter distortion can be achieved using the Yule-Walker correlation scheme on time series [83, 84]. Furthermore, the fast and slow scanning directions can be modelled independently due to their different time scales. Here, we focus on displacement series in discrete pixels, in which each term of the series depends on the previous one. Mathematically, these displacement series can be described as: \[\begin{split}\Delta_{t}^{k}&=\frac{a_{t}^{k}}{ \sqrt{(1-\phi_{t}^{2}}}\quad\text{if $k=1$}\\ \Delta_{t}^{k}&=\phi\Delta_{t}^{k-1}+a_{t}^{k}\quad \text{if $k>1$}\end{split} \tag{41}\] where \(t=x,y\) and \(k\) is the pixel index along a given \(t\) direction. \(\phi_{t}\) is the correlation coefficient which describes the coupling between two consecutive values of the series within the range \([0,1]\). \(a_{t}^{i}\) is a normally distributed random number with zero mean and standard deviation \(\sigma_{t}\). The distorted image is created by using bicubic interpolation and evaluating on the non-regular grid, which is built by adding the positions of the regular grid and the generated displacements. \[x\leftarrow\text{SJ}(y) \tag{42}\] The described effects of individual jitter distortions for \(\sigma_{x}=\sigma_{y}=0.75\) and \(\phi_{x}=\phi_{y}=0.6\) along the fast and slow scan directions can be seen in Fig. 13(a) and Fig. 13(b), respectively. Fig. 13(c) shows the undistorted ADF STEM random generated image. Based on our analysis of experimental data, we set the occurrence probability of jitter distortion to 0.9. In addition, we assign the occurrence probability of the X-jitter, Y-jitter and the XY-jitter to 0.25, 0.25 and 0.50, respectively. The values of \(\sigma_{t}\) and \(\phi_{t}\) are randomly chosen within the range \([0.0025,0.8]\)A and \([0.0,0.7]\), respectively. **S(T)EM detector noise** Electrons are detected by a scintillator coupled to a photomultiplier tube (PMT) via a mirror or reflective tube. Impact of the incident electrons on the scintillator cause photons to be emitted, which are directed to the PMT through a light pipe. The PMT consists of a photocathode that emits photoelectrons when illuminated by these photons, followed by a series of stages amplifying the signal. The resulting current at the anode can be measured using conventional ADC electronics [8]. 
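Returning to Eq. (41), the correlated (Yule-Walker) displacement series along one scan direction amounts to a first-order autoregressive process; a minimal NumPy sketch is shown below, with the bicubic resampling of Eq. (42) left to a standard interpolation routine.

```python
import numpy as np

rng = np.random.default_rng()

def jitter_displacements(n_pixels, sigma, phi):
    """Eq. (41): AR(1) displacement series (in pixels) along one scan direction.
    sigma is the noise standard deviation, phi the coupling between
    consecutive scan positions."""
    a = rng.normal(0.0, sigma, n_pixels)
    delta = np.empty(n_pixels)
    delta[0] = a[0] / np.sqrt(1.0 - phi**2)       # stationary initialisation
    for k in range(1, n_pixels):
        delta[k] = phi * delta[k - 1] + a[k]
    return delta
```

The fast and slow scan directions are treated independently with their own \((\sigma_{t},\phi_{t})\), and the distorted image is obtained by resampling the undistorted image on the grid shifted by these displacements.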
The statistics of the electron multiplication as a series of Poisson events with full width at half maximum (FWHM) of the pulse at the anode Figure 12: Simulated TEM images with random distortions showing the various types of noise. per single incident electron is given by [85]: \[\text{FWHM}=2\sqrt{2\log 2m_{c}}\eta G\sqrt{\frac{1-\eta+\frac{1}{\delta-1}}{m_{c }\eta}+\frac{\delta_{c}^{2}}{m_{c}^{2}}} \tag{43}\] This equation assumes that the secondary gain \(\delta\) at each stage inside the PMT is the same. In this equation, \(G\) represents the PMT gain, \(\eta\) is the detective quantum efficiency, \(m_{c}\) is the number of photons collected per incident electron, and \(\delta_{c}^{2}\) is the variance of that number [85]. A good approximation for the noise spectrum of a photomultiplier is the Poisson distribution, which can be approximated by a Gaussian distribution for large means. Since for each electron reaching the scintillator, around 100 photons reach the cathode of the photomultiplier, a Gaussian approximation can be used with standard deviation \[\sigma=m_{c}\eta G\sqrt{\frac{1-\eta+\frac{1}{\delta-1}}{m_{c}\eta}+\frac{ \delta_{c}^{2}}{m_{c}^{2}}} \tag{44}\] In addition, the number of electrons hitting the scintillator is described by the Poisson process (\(\mathbb{P}\)) [86]. The signal can therefore be constructed in two steps: \[x\leftarrow\mathbb{P}(\alpha x) \tag{45}\] \[x\leftarrow(x+\mathbb{N}(0,\sigma))/\alpha \tag{46}\] where \(\alpha\) is a dose scale factor. Dividing by \(\alpha\) in the latter equation brings \(x\) back to approximately its original range. **Fast scan noise** Fast scan noise arises due to the use of short dwell times during data acquisition and appears as horizontal blur in the recorded images. This effect can also be seen in the Fourier domain as a damping effect on the high frequencies in the horizontal direction. This blurring is caused by the finite decay time of the detection system, which consists of a scintillator, a photomultiplier, and additional readout electronics [86, 87]. In addition to blurring in the horizontal direction, fast scans may introduce other artifacts due to the limited response time of the scan coils. In particular, strong distortions may appear on the left-hand side of the images due to the discontinuity in the scan pattern between consecutive lines. This can be avoided by using a small delay (flyback time) between scanning lines. The optimal value of this delay is hardware-specific, but results in additional dose to the sample, which will be localized on the left-hand side of each image [88]. In general, the effect of fast scan distortion can be modelled by convolution in one dimension along the fast-scan direction between \(x\) and the point spread function (PSF) of the system. After careful analysis of the experimental data, we find that the PSF of the system can be decomposed into contributions from the detector and the readout system. \[\text{Im}_{fad}(x,y)=\text{Im}\,\,\text{\char[-]}\,\,\text{psf}_{detector}\, \,\text{psf}_{readout} \tag{47}\] Figure 13: Image (a) and (b) are distorted jitter images along the fast and slow scan direction, respectively. (c) Undistorted ADF STEM image of a random sample. with \[\text{psf}_{detector}=\left\{\begin{array}{cc}\frac{\alpha}{4\pi^{2}x^{2}+ \alpha^{2}}&:x<=0\\ 0&:x>0\end{array}\right. \tag{48}\] \[\text{psf}_{readout}=\left\{\begin{array}{cc}ae^{-x/\beta}\sin(2\pi x/\gamma+ \theta)&:x<=0\\ 0&:x>0\end{array}\right. 
\tag{49}\] where \[a=\frac{\beta\gamma(\gamma\sin(\theta)+4\pi\beta cos(\theta))}{\gamma^{2}+16 \pi^{2}\beta^{2}} \tag{50}\] is the normalization factor which ensures that the total integral of the \(\text{psf}_{readout}\) is equal to 1, \(k\) is the pixel value in real space, and \(\alpha\) is the parameter of the Lorentzian function that describes the PSF of the detector. The parameters \(\beta\), \(\gamma\), and \(\theta\) are the parameters of the damped harmonic oscillator which is used to describe the PSF of the readout system. The model parameters were obtained by fitting to experimental images and by applying random variation to the fitting parameters. **Row-line noise** Row-line (RL) noise arises due to the non-response of the detector over some pixels during the scanning process along the fast-scan direction. This noise can be modelled by generating a random number of row lines with random length. The pixel intensities of the lines in the image are replaced by their average intensity multiplied by a random factor within the range \([0.5,1.5]\). This can be represented as: \[x\leftarrow\mathbb{RL}(x) \tag{51}\] **Black pixel noise** Black pixels are randomly occurring pixels that have significantly lower values than their neighbouring pixels, causing black spots to appear in the image. These black pixels may result from information loss during data transmission, cosmic rays, or the detector's non-response. As black pixels are time-dependent, they can be modelled by generating a sensitivity mask (\(S_{\text{Black noise}}\)) with a spatially uniform distribution of a specified number of black points. This can be represented mathematically as: \[x\gets xS_{\text{Black noise}} \tag{52}\] However, in the case of SEM images, black spots in the images may be attributed to pores present in the sample, and hence, this type of distortion is not generated. **Zinger noise** Zingers are random white dots that appear in an image. They are caused by bursts of photons produced by electron-generated X-rays, cosmic rays, and muons in the scintillator [77]. Zinger noise can be simulated by creating a sensitivity mask (\(S_{\text{Zinger noise}}\)) with a spatially uniform distribution of a specified number of Zinger points. \[x\gets xS_{\text{Zinger noise}} \tag{53}\] **Upper-clip noise** Upper clip noise, also known as saturation noise, occurs when the intensity value of a pixel exceeds the maximum value that the analog-to-digital converter can detect. This causes the pixel to be "clipped" at the maximum value, resulting in an overly bright image with lost details. This type of noise can be modelled by setting a threshold value for the maximum intensity and clipping any pixel values above that threshold \(T_{u}\). \[x\leftarrow\min(x,T_{u}) \tag{54}\] **Quantisation noise** To generate an image in digital form, the analog voltage signal read out during the last stage is quantized into discrete values using an ADC with a gain \(\alpha\). This process introduces quantisation noise. \[x\gets round(\alpha x) \tag{55}\] Figure 14 shows simulated STEM images of the different types of noise that can be found in STEM images. These distortions were randomly added to the images to simulate real STEM conditions and make it easier to identify the different types of noise. #### Post-processing distortions Post-processing distortions are typically added after the image is recorded. These distortions, such as interpolation and blurring, can affect the noise in the image in a non-linear way. 
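As a concrete illustration of the first two of these post-processing distortions (both are described in more detail below), a random linear resampling of the training pair followed by a mild Gaussian blur of the distorted image can be written with SciPy; the perturbation range of the transformation matrix is our own choice.

```python
import numpy as np
from scipy.ndimage import affine_transform, gaussian_filter

rng = np.random.default_rng()

def post_process(distorted, ground_truth):
    """Sketch: identical random resampling of the pair, then blur the
    distorted image with a random sigma between 0 and 1 pixel."""
    m = np.eye(2) + rng.uniform(-0.05, 0.05, size=(2, 2))   # random linear map
    distorted = affine_transform(distorted, m, order=3)
    ground_truth = affine_transform(ground_truth, m, order=3)
    distorted = gaussian_filter(distorted, sigma=rng.uniform(0.0, 1.0))
    return distorted, ground_truth
```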
Post-processing distortions can also include annotations and cropping, which replace part of the original image. Ideally, these distortions should be preserved by the restoration process. **Interpolation distortions** may happen when a user applies a transformation function to the image before it is restored. This might be done to make the image suitable for further post-processing or to better visualise an area of interest. Interpolation distortion can be modelled by applying a random transformation, such as a random linear transformation matrix, to the training image pair. **Gaussian blurring** is a way of distorting an image to reduce noise and improve the SNR. This is done by applying a 2D Gaussian function to the image with a given standard deviation \(\sigma\). Although this type of blurring can improve the quality of an image, it can also alter the distribution of noise in the image. Therefore, when restoring an image, the blurring must be removed along with the distortion. In our training set, we only applied random \(\sigma\) values between 0 and 1 pixel to the distorted images. **Annotations** are added to an image to provide additional information or to highlight specific areas of the image. These can include text, shapes, and arrows, and may be added by the software or by the user. When creating training image pairs, we model the annotations by adding the same random annotations at the same pixel location in both the ground-truth and distorted images. **Cropping** is a type of post-processing distortion that involves removing one or more areas of an image. This can be done manually by the user or automatically in a processing workflow, such as after the image has been shifted, rotated or aligned. The removed areas are usually filled in with a constant value or the median of the image's value range. When creating training image pairs, we model this process by randomly replacing the intensity value in a randomly selected area in both images. The selected area is typically outside a central square or rectangle, such as 50% of the total image area, to mimic the fact that cropping is typically not applied to the central region, which may already be adjusted to show the main feature of interest. ## Code and Data Availability All of the trained models, alongside example scripts for using them, are available on the github repository [https://github.com/Ivanlh20/r_em](https://github.com/Ivanlh20/r_em). Additional material may be provided by the authors upon reasonable request. Figure 14: Random distorted simulated STEM images showing the various types of noise. ## Acknowledgements This work was supported by the European Research Council (Grant 770887 375 PICOMETRICS to S.V.A. and Grant 823717 ESTEEM3). The authors acknowledge financial support from the Research Foundation Flanders (FWO, Belgium) through project fundings (G.0346.21N and EOS 40007495). S.V.A. acknowledges funding from the University of Antwerp Research fund (BOF). The authors thank Lukas Grunewald for data acquisition and support for figure 8. ## Author Contributions I.L. and S.V.A. designed the study. I.L. created the mathematical models for the undistorted and distorted EM images, implemented, trained, and evaluated the NN models. T.F. conducted quantitative analysis of STEM images for the models. All authors contributed to the planning and execution of the research, discussed the results, and helped write the manuscript. ## Competing Interests The authors declare no competing interests. 
## Additional Information **Supplementary information** is available for this article. **Correspondence** and requests for materials should be addressed to I.L. ([email protected]) or S.V.A. ([email protected]).
2310.19130
Women Wearing Lipstick: Measuring the Bias Between an Object and Its Related Gender
In this paper, we investigate the impact of objects on gender bias in image captioning systems. Our results show that only gender-specific objects have a strong gender bias (e.g., women-lipstick). In addition, we propose a visual semantic-based gender score that measures the degree of bias and can be used as a plug-in for any image captioning system. Our experiments demonstrate the utility of the gender score, since we observe that our score can measure the bias relation between a caption and its related gender; therefore, our score can be used as an additional metric to the existing Object Gender Co-Occ approach. Code and data are publicly available at [https://github.com/ahmedssabir/GenderScore](https://github.com/ahmedssabir/GenderScore).
Ahmed Sabir, Lluís Padró
2023-10-29T19:39:03Z
http://arxiv.org/abs/2310.19130v2
# Women Wearing Lipstick: Measuring the Bias ###### Abstract In this paper, we investigate the impact of objects on gender bias in image captioning systems. Our results show that only gender-specific objects have a strong gender bias (_e.g. women-lipstick_). In addition, we propose a visual semantic-based gender score that measures the degree of bias and can be used as a plug-in for any image captioning system. Our experiments demonstrate the utility of the gender score, since we observe that our score can measure the bias relation between a caption and its related gender; therefore, our score can be used as an additional metric to the existing Object Gender Co-Occ approach. Code and data are publicly available at [https://github.com/ahmedssabir/GenderScore](https://github.com/ahmedssabir/GenderScore). ## 1 Introduction Visual understanding of image captioning is an important and rapidly evolving research topic Karpathy and Fei-Fei (2015); Anderson et al. (2018). Recent approaches predominantly rely on Transformer Huang et al. (2019); Cho et al. (2022) and the BERT based pre-trained paradigm Devlin et al. (2019) to learn cross-modal representation Li et al. (2020); Zhang et al. (2021); Li et al. (2022, 2023). While image captioning models achieved notable benchmark performance in utilizing the correlation between visual and co-occurring labels to generate an accurate image description, this often results in a gender bias that relates to a specific gender, such as confidently identifying a woman when there is a kitchen in the image. The work of Zhao et al. (2017) tackles the problem of gender bias in visual semantic role labeling by balancing the distribution. For image captioning, Hendricks et al. (2018) consider ameliorating gender bias via a balance classifier. More recently, Hirota et al. (2022) measured racial and gender bias amplification in image captioning via a trainable classifier on an additional human-annotated existing caption dataset for the gender bias task Zhao et al. (2021). To close the full picture of gender bias in image captioning, in this work, unlike other works, we examine the problem from a visual semantic relation perspective. We therefore propose a Gender Score via human-inspired judgment named Belief Revision Blok et al. (2003) which can be used to (1) discover bias and (2) predict gender bias without _training_ or _unbalancing_ the dataset. We conclude our contributions are as follows: (1) we investigate the gender object relation for image captioning at the word level (_i.e_. object-gender) and the sentence level with captions (_i.e_. gender-caption); (2) we propose a Gender Score that uses gender in relation to the visual information in the image to predict the gender bias. Our Gender Score can be used as a plug-in for any out-of-the-box image captioning system. Figure 1 shows an overview of the proposed gender bias measure for image captioning. ## 2 Visual Context Information In this work, we investigate the relation between gender bias and the objects that are mainly used in image captioning systems, and more precisely, the widely used manually annotated image caption datasets: Flickr30K Young et al. (2014) and COCO dataset Lin et al. (2014). However, we focus mainly on the COCO dataset as the most used dataset in gender bias evaluation. COCO Captions is an unbalanced gender bias dataset with a 1:3 ratio bias towards men Zhao et al. (2017); Hendricks et al. (2018); Tang et al. (2021). 
The dataset contains around 120K images, and each image is annotated with five different human-written captions. To obtain the _visual context_\(o\) from each image \(I\), we use out-of-the-box classifiers to extract the image context information \(o(I)\). Specifically, following Sabir et al. (2023), the objects extracted from all pre-trained models are obtained by extracting the top-3 object class/category (excluding _person_ category) from each classifier after filtering out instances with (1) the cosine distance between objects (voting between classifiers of top-\(k\)) and (2) a low confidence score via a probability threshold \(<0.2\). We next describe each of these classifiers: **ResNet-152 (He et al., 2016):** A residual deep network that is designed for ImageNet classification tasks, which relies heavily on batch normalization. **CLIP (Radford et al., 2021):** (Contrastive Language-Image Pre-training) is a pre-trained model with contrastive loss where the pair of image-text needs to be distinguished from randomly selected sample pairs. CLIP uses Internet resources without human annotation on 400M pairs. **Inception-ResNet FRCNN (Huang et al., 2017):** is an improved variant of Faster R-CNN, with a trade-off of better accuracy and fast inference via high-level features extracted from Inception-ResNet. It is a pre-trained model that is trained on COCO categories, with 80 object categories. ## 3 Gender Object Distance The main component of our Gender Score (next section) is the semantic relation between the object and the gender. Therefore, in this section, we investigate the semantic correlation between the object and the gender in the general dataset (_e.g_. wiki). More specifically, we assume that all images contain a gender _man_, _woman_ or gender neutral _person_, and we aim to measure the Cosine Distance between the object \(o\), objects with context _i.e_. caption \(y\) from the image \(I\) and its related gender \(a^{\prime}\in\{\text{man},\text{woman},\text{person}\}\). In this context, we refer to the Cosine Distance as the closest distance (_i.e_. semantic relatedness score) between the gender-to-object via similarity measure _e.g_. cosine similarity. We apply our analysis to the most widely used human-annotated datasets in image captioning tasks: Flikr30K and COCO Captions datasets. We employ several commonly used pre-trained models: (1) word-level: GloVe (Pennington et al., 2014), fasttext (Bojanowski et al., 2017) and word2vec (Mikolov et al., 2013) out-of-the-box as baselines, then we utilize Gender Neutral GloVe (Zhao et al., 2018), which balances the gender bias; (2) sentence-level: Sentence-BERT (Reimers and Gurevych, 2019) tuned on Natural Language Inference (NLI) (Conneau et al., 2017), (2b) SimCSE (Gao et al., 2021) contrastive learning supervised by NLI, and an improved SimCSE version with additional **Inform**ation aggregation via masked language model objective InfoCSE (Wu et al., 2022). In particular, we measure the Cosine Distance from the gender \(a^{\prime}\) to the object \(o\) or caption \(y\) as shown in Figure 1 (a, b) (_e.g_. how close the gender vector of \(a^{\prime}\) is to the object \(o\)_umbrella_). Table 1 shows the top most object-gender frequent count with the standard and gender-balanced GloVe, respectively. 
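A concrete sketch of this measurement with a generic Sentence-BERT checkpoint is given below; the checkpoint name is only illustrative, and the embeddings actually evaluated in the paper are GloVe, GN-GloVe, word2vec, fasttext, SBERT, SimCSE and InfoCSE.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # any SBERT-style model

def gender_object_distance(text, genders=("person", "man", "woman")):
    """Cosine similarity between an object (or caption) and each gender word."""
    vecs = model.encode([text, *genders])
    v_obj, v_g = vecs[0], vecs[1:]
    sims = v_g @ v_obj / (np.linalg.norm(v_g, axis=1) * np.linalg.norm(v_obj))
    return dict(zip(genders, sims.tolist()))
```

For an object such as _lipstick_ or _skateboard_, the three returned similarities can be compared directly, or fed into the bias ratio defined next.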
**Gender Bias Ratio:** we follow the work of (Zhao et al., 2017) to calculate the gender bias ratio towards men as: \[\small\text{bias}_{\text{to-}m}=\frac{\text{s}(\text{obj},\,m)}{\text{s}( \text{obj},\,m)+\text{s}(\text{obj},\,w)} \tag{1}\] where \(m\) and \(w\) refer to the gender in the image, and the \(s\) is our proposed gender-to-object semantic relation bias score. In our case, we also use the score to compute the ratio to gender neutral _person_: \[\small\text{Ratio}_{\text{to-}m}=\frac{\text{s}(\text{obj},\,m/w)}{\text{s }(\text{obj},\,person)} \tag{2}\] \begin{table} \begin{tabular}{c c c c|c c} \hline \multicolumn{5}{c}{GloVe Gender Dist (Biased)} & \multicolumn{3}{c}{GloVe Gender Dist (Balanced)} \\ \hline + person & + man & + woman & + person & + man & + woman \\ \hline face & walking & lipstick & can & police & police \\ hand & cowboy & dishwaved & police & white & black \\ ballplayer & goilla & sleeping & go & walking & white \\ mailbox & teddy & horse & walking & black & car \\ book & ballplayer & bathtub & black & car & walking \\ \hline \end{tabular} \end{table} Table 1: Most frequency count of object + gender in the COCO Captions training dataset via Cosine Distance with GloVe and balanced Gender Neutral GN-GloVe. Figure 1: An overview of the proposed gender bias measure for image captioning. (a) word level with word embedding, (b) sentence level with semantic embedding, (c) our proposed gender score that relies on visual bias revision (with GPT-2 for initialization \(\text{P}\left(g^{\prime}_{y}\right)\) (common observation as initial bias without visual) and with BERT for similarity score), and (d) the proposed < MASK > gender estimation (prediction), also with visual bias revision. ## 4 Gender Score In this section, we describe the proposed **G**ender **S**core that estimates gender based on its semantic relation with the visual information extracted from the image. Sabir et al. (2022) proposed a caption re-ranking method that leverages visual semantic context. This approach utilized Belief Revision (Blok et al., 2003) to convert the similarity _i.e_. Cosine Distance (Section 3) into a probability measure. **Belief Revision.** To obtain likelihood bias revisions based on similarity scores (_i.e_. gender, object), we need three parameters: (1) **Hypothesis (g)**: caption \(y\) with the associated gender \(a\in\{\text{man, woman}\}\), (2) **Informativeness (c)**: image object information \(o\) confidence and (3) **Similarities**: the degree of relatedness between object and gender \(sim(y,o)\). \[\text{GS}_{a}(y)=\frac{1}{|\mathcal{D}|}\sum_{(y,o)\in\mathcal{D}}\text{P}\left( g_{y}\mid c_{o}\right)=\text{P}(g_{y})^{\alpha} \tag{3}\] where \(\text{P}(g_{y})\) is the _hypothesis_ probability of \(y\), \(\mathcal{D}\) is the predicted captions with the gender \(a\), and \(\text{P}(c_{o})\) is the probability of the evidence that causes hypothesis probability revision _i.e_. visual bias revision via object context \(o\) from the image \(I\), \(o(I)\): **Hypothesis**: \(\text{P}(g_{y})\) (caption \(y\) with the gender \(a\)) **Informativeness**: \(1-\text{P}(c_{o})\) (object context \(o\)) **Similarities**: \(\alpha=\left[\frac{1-\text{sim}(y,o)}{1+\text{sim}(y,o)}\right]^{1-\text{P}(c _{o})}\) (visual bias) The visual context \(\text{P}(c_{o})\) will revise the caption with the associated gender \(\text{P}(g_{y})\) (_i.e_. gender bias) if there is a semantic relation between them \(sim(y,o)\). 
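In code, the revision in Eq. (3) for a single caption-object pair reduces to a few lines; the parameter names are ours, and \(\text{P}(g_{y})\), \(\text{P}(c_{o})\) and \(sim(y,o)\) are obtained as explained below.

```python
def gender_score(p_caption, p_object, sim):
    """Eq. (3): revise the initial caption probability with the visual context.
    p_caption : P(g_y), initial probability of the caption with its gender
    p_object  : P(c_o), confidence of the detected visual context (object)
    sim       : sim(y, o), relatedness between gendered caption and object"""
    alpha = ((1.0 - sim) / (1.0 + sim)) ** (1.0 - p_object)
    return p_caption ** alpha
```

For instance, gender_score(0.1, 0.9, 0.8) ≈ 0.16: a confidently detected, strongly related object raises the initial probability, whereas sim = 0 leaves it unchanged.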
We discuss each component next: **Hypothesis initial bias**: In visual-based belief revision, one of the conditions is to start with an initial hypothesis and then revise it using visual context and a similarity score. Therefore, we initialize the caption hypothesis \(\text{P}(g_{y})\) with a common observation \(\text{P}(g^{\prime}_{y})\), such as a Language Model (LM) (we \begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline & \multicolumn{4}{c}{COCO Captions} & \multicolumn{4}{c}{Flickr30K} \\ \cline{2-11} & \multicolumn{3}{c}{Avg: Gender Object Distance} & \multicolumn{3}{c}{Ratio} & \multicolumn{3}{c}{Avg: Gender Object Distance} & \multicolumn{3}{c}{Ratio} \\ \cline{2-11} Model & + person & + man & + woman & to-m & to-w & + person & + man & + woman & to-m & to-w \\ \hline Word2Vec (Mikolov et al., 2013) & 0.101 & 0.116 & 0.124 & 0.48 & 0.51 & 0.116 & 0.142 & 0.154 & 0.47 & 0.52 \\ Gloe (Pennington et al., 2014) & 0.146 & 0.175 & 0.169 & 0.50 & 0.49 & 0.131 & 0.170 & 0.168 & 0.50 & 0.49 \\ Fastet (Bojanowski et al., 2017) & 0.180 & 0.200 & 0.191 & 0.51 & 0.48 & 0.146 & 0.196 & 0.191 & 0.50 & 0.49 \\ GN-GloVe (Zhao et al., 2018) & 0.032 & 0.055 & 0.054 & 0.50 & 0.49 & 0.024 & 0.085 & 0.088 & 0.49 & 0.50 \\ \hline SBERT (Reimers and Gurevych, 2019) & 0.124 & 0.155 & 0.128 & 0.54 & 0.45 & 0.121 & 0.167 & 0.129 & 0.56 & 0.43 \\ SimCSE-RoBERTa (Gao et al., 2021) & 0.194 & 0.137 & 0.093 & 0.59 & 0.40 & 0.189 & 0.140 & 0.107 & 0.56 & 0.43 \\ InfoCSE-RoBERTa (Wu et al., 2022) & 0.199 & 0.222 & 0.211 & 0.51 & 0.48 & 0.228 & 0.265 & 0.241 & 0.52 & 0.47 \\ \hline \hline \end{tabular} \end{table} Table 2: Result of Average Cosine Distance between gender and object in the COCO Captions and Flikr30K training datasets. The ratio is the gender bias rate **towards** men/women. The results show there is a slight bias toward men. GloVe and GN-GloVe (balanced) show identical results on COCO Captions, which indicate that not all objects have a strong bias toward a specific gender. In particular, regarding non-biased objects, both models exhibit a low/similar bias ratio _e.g_. _bicycle-gender_ (GloVe: m=0.31 | w=0.27, ratio=0.53) and (GN-GloVe: m=0.15 | w=0.13, ratio=0.53). \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Avg: Gender Score} & \multicolumn{3}{c}{Bias Ratio} & \multicolumn{3}{c}{Leakage} \\ \cline{2-11} Model & + person & + man & + woman & m & w & to-m & m & w \\ \hline Human & & & & & & & 930 & 291 \\ **Gender Object Distance** & & & & & & & & \\ Transformer (Vaswani et al., 2017) & 0.075 & 0.062 & 0.033 & 0.82 & 0.44 & 0.65 & 0.85 & 1.40 \\ A0ANet (Huang et al., 2019) & 0.079 & 0.061 & 0.047 & 0.77 & 0.59 & 0.56 & 0.82 & 1.29 \\ Vilbert (Lu et al., 2020) & 0.089 & 0.070 & 0.056 & 0.78 & 0.62 & 0.55 & 0.75 & 1.06 \\ OSCAR (Li et al., 2020) & 0.076 & 0.054 & 0.044 & 0.71 & 0.57 & 0.58 & 0.93 & 1.40 \\ BLIP (Li et al., 2022) & 0.068 & 0.050 & 0.040 & 0.73 & 0.58 & 0.55 & 0.83 & 1.32 \\ TACLIPS-Reward (Cho et al., 2022) & 0.069 & 0.053 & 0.040 & 0.76 & 0.57 & 0.56 & 0.82 & 1.30 \\ BLIP-2 (Li et al., 2023) & 0.073 & 0.049 & 0.043 & 0.67 & 0.58 & 0.53 & 0.73 & 1.22 \\ \hline \hline \end{tabular} \end{table} Table 3: Result of the Average Cosine Distance and Gender Score, between gender and object information, on the Karpathy test split. Our score balances the bias better than direct Cosine Distance, primarily because not all objects exhibit strong gender bias. 
The leakage is a comparison between human-annotated caption and the classifier output that uses the gender-related object to influence the final gender prediction, such as associating women with food. consider this as an initial bias without visual). We employ (GPT-2) (Radford et al., 2019) with mean token probability since it achieves better results. **Informativeness of bias information:** As the visual context probability \(\mathrm{P}(c_{o})\) approaches 1 and in consequence is less informative (very frequent objects have no discriminative power since they may co-occur with any gender) \(1-\mathrm{P}(c_{o})\) approaches zero, causing \(\alpha\) to get closer to 1, and thus, a smaller revision of \(\mathrm{P}(g^{\prime}_{y})\). Therefore, as we described in the Visual Context Information Section 2, we leverage a threshold and semantic filter visual context dataset from ResNet, Inception-ResNet v2 based Faster R-CNN object detector, and CLIP to extract top-\(k\) textual visual context information from the image. The extracted object is used to measure the gender-to-object bias direct relation. **Relatedness between hypothesis and bias information:** Likelihood revision occurs if there is a close correlation between the hypothesis and the new information. As the \(sim(y,o)\) (gender, object), gets closer to 1 (higher relatedness) \(\alpha\) gets closer to 0, and thus hypothesis probability is revised (_i.e_. gender bias) and raised closer to 1. Therefore, the initial hypothesis will be revised or backed off to 1 (no bias) based on the relatedness score. In our case, we employ Sentence-BERT to compute the Cosine Distance (Section 3) by using object \(o\) as context for the caption \(y\) with associated gender \(a\). ## 5 Experiments **Caption model.** We examine seven of the most recent Transformer state-of-the-art caption models: Transformer (Vaswani et al., 2017) with bottom-up top-down features (Anderson et al., 2018), AoANet Huang et al. (2019), Vilbert Lu et al. (2020), OSCAR Li et al. (2020), BLIP Li et al. (2022), Transformer with Reinforcement Learning (RL) CLIPS+CIDEr as image+text similarity Reward (TraCLIPS-Reward) (Cho et al., 2022) and Large LM\({}_{2.7\text{B}}\) based BLIP-2 (Li et al., 2023). Note that for a fair comparison with other pre-trained models, OSCAR uses a cross-entropy evaluation score rather than the RL-based CIDEr optimization score. **Data.** Our gender bias score is performed on the (82783 \(\times\) 5 human annotations) COCO Captions dataset. For baselines (testing), the score is used to evaluate gender-to-object bias on the standard 5K Karpathy test split images (Karpathy and Fei-Fei, 2015) (GT is the average of five human bias ratios). Our experiments apply visual semantics between the object or object with context _i.e_. caption and its related gender information to predict object-to-gender related bias. The proposed visual-gender scores are (1): Cosine Distance, which uses the similarity score to try to estimate the proper gender as shown in Table 2; (2) Gender Score, which carries out Belief Revision (visual bias likelihood revision) to revise the hypothesis initial bias via similarity _e.g_. Cosine Distance (gender, object). For our baseline, we adopt the Object Gender Co-Occ metric (Zhao et al., 2017) for the image captioning task. In this work, we hypothesize that every image has a gender \(\in\)\(\{\text{man},\text{woman}\}\) or gender neutral (_i.e_. 
_person_), and our rationale is that each model suffers from _Right for the Wrong Reasons_ (Hen \begin{table} \begin{tabular}{l c c c c c c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{3}{c}{Bias Ratio Toward Men} & \multicolumn{3}{c}{Bias Ratio Toward Women} \\ \cline{2-9} & skateboard & kitchen & motorcycle & baseball & skateboard & kitchen & motorcycle & baseball \\ \hline Object Gender Co-Occ (Zhao et al., 2017) & 0.96 & 0.50 & 0.33 & 0.75 & 0.05 & 0.50 & 0.16 & 0.25 \\ AANet Huang et al. (2019) & 0.97 & 0.51 & 0.85 & 0.81 & 0.02 & 0.48 & 0.14 & 0.18 \\ Vilbert Lu et al. (2020) & 0.96 & 0.47 & 0.84 & 0.66 & 0.03 & 0.52 & 0.15 & 0.33 \\ OSCAR Li et al. (2020) & 0.97 & 0.58 & 0.82 & 0.90 & 0.02 & 0.41 & 0.18 & 0.09 \\ BLIP Li et al. (2022) & 0.96 & 0.52 & 0.88 & 0.97 & 0.03 & 0.47 & 0.11 & 0.02 \\ TracCLIPS-Reward Cho et al. (2022) & 0.89 & 0.48 & 0.93 & 0.50 & 0.10 & 0.51 & 0.06 & 0.50 \\ BLIP-2 Li et al. (2023) & 0.94 & 0.57 & 0.88 & 0.90 & 0.05 & 0.42 & 0.11 & 0.10 \\ \hline Gender Score & & & & & & & & \\ Transformer Vaswani et al. (2017) & 0.96 & 0.51 & 0.33 & 0.61 & 0.03 & 0.48 & 0.16 & 0.38 \\ AANet Huang et al. (2019) & 0.97 & 0.46 & 0.84 & 0.82 & 0.02 & 0.53 & 0.15 & 0.17 \\ Vilbert Lu et al. (2020) & 0.96 & 0.53 & 0.84 & 0.65 & 0.03 & 0.46 & 0.15 & 0.34 \\ OSCAR Li et al. (2020) & 0.98 & 0.42 & 0.78 & 0.83 & 0.01 & 0.57 & 0.21 & 0.16 \\ BLIP Li et al. (2022) & 0.96 & 0.50 & 0.86 & 0.98 & 0.03 & 0.49 & 0.13 & 0.01 \\ TracCLIPS-Reward Cho et al. (2022) & 0.88 & 0.43 & 0.92 & 0.50 & 0.11 & 0.56 & 0.07 & 0.49 \\ BLIP-2 Li et al. (2023) & 0.93 & 0.56 & 0.82 & 0.89 & 0.06 & 0.43 & 0.17 & 0.10 \\ \hline \hline \end{tabular} \end{table} Table 4: Example of the most common gender bias objects in COCO Captions, Karpathy test split. The result shows that our score (bias ratio) aligns closely with the existing Object Gender Co-Occ approach when applied to the most gender-biased objects toward men. Note that TraCLIPS-Reward (CLIPS+CIDEr) inherits biases from RL-CLIPS, resulting in distinct gender predictions and generates caption w/o a specific gender _i.e_. _person_, _baseball player_, etc. dricks et al., 2018) or leakage (Wang et al., 2019) (see Table 3), such as associating all kitchens with women. Therefore, we want to explore all the cases and let the proposed distance/score decide which gender (_i.e_. bias) is in the image based on a visual bias. In particular, inspired by the cloze probability last word completion task (Gonzalez-Marquez, 2007), we generate two identical sentences but with a different gender, and then we compute the likelihood revisions between the sentence-gender and the caption using the object probability. Table 2 shows that GloVe and GN-GloVe (balanced) have identical results on COCO Captions dataset, which indicate that not all objects have a strong bias toward a specific gender. In addition, Table 3 shows that our score balances the bias of the Cosine Distance and demonstrates, on average, that not all objects have a strong gender bias. Also, our approach detects strong specific object-to-gender bias and has a similar result to the existing Object Gender Co-Occ method on the most biased object toward men, as shown in Table 4. TraCLIPS-Reward inherits biases from RL-CLIPS and thus generates caption w/o a specific gender (_e.g_._person_, _guy_, etc). Therefore, we adopt the combined CLIPS+CIDEr Rewards, which suffer less gender prediction error. 
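To make the score concrete, the following minimal sketch revises an initial gender hypothesis against one extracted visual object. It assumes Sentence-BERT embeddings from the `sentence-transformers` package (the `all-MiniLM-L6-v2` checkpoint is a stand-in for the SBERT model used), and a power-law revision \(\mathrm{P}(g_{y})=\mathrm{P}(g^{\prime}_{y})^{\alpha}\) with \(\alpha=1-sim(y,o)\,(1-\mathrm{P}(c_{o}))\), one simple choice that reproduces the limiting behaviour described above; the exact revision function, the GPT-2 prior, and the numbers below are illustrative assumptions, not the implementation behind the reported results.

```python
from sentence_transformers import SentenceTransformer, util

sbert = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in for the SBERT model

def gender_score(gendered_caption, visual_object, lm_prior, object_prob):
    """Toy belief-revision score for one (caption, object) pair.

    gendered_caption : the caption rewritten with a candidate gender word
    visual_object    : textual visual context extracted by the classifier
    lm_prior         : initial hypothesis P(g'_y), e.g. a GPT-2 mean token prob
    object_prob      : P(c_o); very frequent objects are less informative
    """
    emb = sbert.encode([gendered_caption, visual_object], convert_to_tensor=True)
    sim = max(float(util.cos_sim(emb[0], emb[1])), 0.0)   # relatedness sim(y, o)
    alpha = 1.0 - sim * (1.0 - object_prob)               # assumed revision weight
    return lm_prior ** alpha                               # revised hypothesis P(g_y)

# Compare the two gendered rewrites of the same caption against one object.
for caption in ("a man riding a skateboard", "a woman riding a skateboard"):
    score = gender_score(caption, "skateboard", lm_prior=0.4, object_prob=0.2)
    print(caption, "->", round(score, 3))
```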
**Gender Score Estimation.** In this experiment, we <MASK> the gender and use the object with context to predict the gender as shown in Figure 2. The idea is to measure the amplified gender-to-object bias in pre-trained models. Table 5 shows that the fill-in gender score has a more bias towards men results than object-gender pair counting. The rationale is that the model can estimate the strong gender object bias as shown in Figure 2, including the false positive and leakage cases by the classifier. **Discussion.** Our approach measures the amplified gender bias more accurately than the Object Gender Co-Occ (Zhao et al., 2017) in the following two scenarios: (1) where the gender is not obvious in the image and is misclassified by the caption baseline, and (2) when there is leakage by the classifier. In addition, unlike the Object Gender Co-Occ, our model balances the gender to object bias and only measures a strong object to gender bias relation as shown in Figure 2. For instance, the word "hitting" in a generated caption (as a stand-alone without context) is associated with a bias toward men more than women, and it will influence the final gender-to-caption bias score. However, our gender score balances the unwanted bias and only measures the pronounced gender to object bias relation. ## 6 Conclusion In this work, we investigate the relation between objects and gender bias in image captioning systems. Our results show that not all objects exhibit a strong gender bias and only in special cases does an object have a strong gender bias. We also propose a Gender Score that can be used as an additional metric to the existing Object-Gender Co-Occ method. Figure 2: Examples of Gender Score Estimation and Cosine Distance. The result shows that (Top) the score balances the bias (as men and women have a similar bias for the sport _tennis_), (Middle) a slight bias object _umbrella_ toward women, and (Bottom) men strong object bias relation (_paddle, surfboard_), the model adjusts the women bias while preserving the object gender score. \begin{table} \begin{tabular}{l c c c} \hline \hline & \multicolumn{2}{c}{Gender} & \multicolumn{2}{c}{Bias Ratio} \\ \cline{2-4} Model & man & woman & to-m & to-w \\ \hline \multicolumn{4}{l}{Object Gender Co-Occ (Zhao et al., 2017)} \\ Transformer & 792 & 408 & 0.66 & 0.34 \\ AoANet & 770 & 368 & 0.67 & 0.32 \\ Vilbert & 702 & 311 & 0.69 & 0.30 \\ OSCAR & 845 & 409 & 0.67 & 0.32 \\ BLIP & 775 & 385 & 0.66 & 0.33 \\ TraCLIPS-Reward & 769 & 381 & 0.66 & 0.33 \\ BLIP-2 & 695 & 356 & 0.66 & 0.33 \\ \hline \multicolumn{4}{l}{Gender Score (Gender Score Estimation)} \\ Transformer & 616 & 217 & 0.73 & 0.26 \\ AoANet & 527 & 213 & 0.71 & 0.28 \\ Vilbert & 526 & 161 & 0.76 & 0.23 \\ OSCAR & 630 & 237 & 0.72 & 0.27 \\ BLIP & 554 & 240 & 0.69 & 0.30 \\ TraCLIPS-Reward & 537 & 251 & 0.68 & 0.31 \\ BLIP-2 & 498 & 239 & 0.67 & 0.32 \\ \hline \hline \end{tabular} \end{table} Table 5: Comparison results between Object Gender Co-Occ (predicted gender-object results) and our gender score estimation on Karpathy test split. The proposed score measures gender bias more accurately, particularly when there is a strong object to gender bias relation. ### Limitations The accuracy of our Gender Score heavily relies on the semantic relation (_i_.\(e\). Cosine Distance) between gender and object, a high or low degree of similarity score between them can influence the final bias score negatively. 
Also, our model relies on large pre-trained models, which inherently encapsulate their own latent biases that might impact the visual bias revision behavior in gender-to-object bias scenarios, specifically in cases where multiple strong bias contexts (_i.e._ non-object contexts) with high similarity scores toward a particular gender are present within the caption. This can imbalance the final gender-to-object score, leading to errors in gender prediction and bias estimation. In addition, false positive object context information extracted by the visual classifier will result in inaccurate bias estimation. ## Ethical Considerations We rely upon a range of well-known, publicly available caption datasets crawled from the web and annotated by humans, which assume a binary conceptualization of gender. It is therefore important to acknowledge that, within the scope of this work, treating gender as strictly binary (_i.e._ man and woman) oversimplifies a complex and multifaceted aspect of human identity. Gender is better understood as a spectrum, with many variations beyond just two categories, and this should be addressed in future work. Since all models are trained on these datasets, we anticipate that all models contain other biases (racial, cultural, _etc._). For example, as observed in Table 1, when we remove gender bias we notice the emergence of another bias, such as racial bias: vectors representing a _Black_ person or woman are closer together than those representing other groups, such as _white_. Moreover, there is another form of bias that has received limited attention in the literature, namely the propagation of gender and racial bias via RL (_e.g._ RL-CLIPS). For instance, in Figure 1 the model associates gender with race, as in "asian woman".
2301.00320
Relevance Classification of Flood-related Twitter Posts via Multiple Transformers
In recent years, social media has been widely explored as a potential source of communication and information in disasters and emergency situations. Several interesting works and case studies of disaster analytics exploring different aspects of natural disasters have been already conducted. Along with the great potential, disaster analytics comes with several challenges mainly due to the nature of social media content. In this paper, we explore one such challenge and propose a text classification framework to deal with Twitter noisy data. More specifically, we employed several transformers both individually and in combination, so as to differentiate between relevant and non-relevant Twitter posts, achieving the highest F1-score of 0.87.
Wisal Mukhtiar, Waliiya Rizwan, Aneela Habib, Yasir Saleem Afridi, Laiq Hasan, Kashif Ahmad
2023-01-01T01:34:15Z
http://arxiv.org/abs/2301.00320v1
# Relevance Classification of Flood-related Twitter Posts via Multiple Transformers ###### Abstract In recent years, social media has been widely explored as a potential source of communication and information in disasters and emergency situations. Several interesting works and case studies of disaster analytics exploring different aspects of natural disasters have been already conducted. Along with the great potential, disaster analytics comes with several challenges mainly due to the nature of social media content. In this paper, we explore one such challenge and propose a text classification framework to deal with Twitter noisy data. More specifically, we employed several transformers both individually and in combination, so as to differentiate between relevant and non-relevant Twitter posts, achieving the highest F1-score of 0.87. ## 1 Introduction Natural disasters, which are hazardous events and occur frequently in different parts of the world, can have devastating effects on society. Depending on the severity of the disaster, it may result in significant damage to the infrastructure and human lives. Rapid response to natural disasters may help in mitigating their adverse impact on society. In disasters and emergency situations, access to relevant and timely information is key to a rapid and effective response. However, the literature reports several situations where access to relevant and timely information may not be possible due to several factors [1]. In recent years, social media outlets, such as Twitter, Facebook, and Instagram, have been explored as a source of communication and information dissemination in emergency situations [2]. The literature already reports the feasibility and effectiveness of social media for a diversified list of tasks in disaster analytics. For instance, Ahmad et al. [3] explored social media outlets as a source of information collection and dissemination during natural disasters by proposing a system that is able to collect and analyze disaster-related multimedia content from social media. Similarly, social media content has also been utilized for disaster severity and damage assessment [4, 5]. Despite being very effective in disaster analytics, social media data also come with several limitations. For instance, social media content contains a lot of noise and irrelevant information. This paper targets one of such challenges by proposing several solutions for the Relevance Classification of Twitter Posts (RCTP), sub-task introduced in DisasterMM challenge of MediaEval 2022 [6]. The task aims at automatically analyzing and classifying flood-related tweets into relevant and non-relevant tweets. ## 2 Related Work Disaster analysis in social media content has been one of the active topics of research in the domain over the last few years [2]. During this time, different aspects and applications of disaster analytics in social media content have been explored [7]. Some key applications include communication/information dissemination, damage assessment, response management, sentiment analysis, and identification of the needs of affected individuals. The literature already reports several interesting works on these applications. For instance, Nguyen et al. [8] utilized social media content for damage assessment by analyzing disaster-related visual media posts. Ahmad et al. [9] analyzed social media imagery for monitoring road conditions after floods. 
Moreover, a vast majority of the literature demonstrates how social media outlets can be used as means of communication in disasters and emergency situations [10, 1]. In the literature, different types of disasters including natural disasters, such as earthquakes, landslides, droughts, wildfires, and floods, as well as man-made disasters, such as accidents, have been explored [1, 11]. However, the majority of the works have targeted floods, being one of the most common natural disasters. The literature reports several interesting works on flood analysis in social media content for different tasks. For instance, Ahmad et al. [9] proposed a late fusion-based framework for the automatic detection of passable roads after a flood. For this purpose, several deep learning models are trained on flood-related images from social media. Alam et al. [4], on the other hand, employed social media imagery for post floods damage severity assessment. Flood detection and analysis in social content have also been a part of the MediaEval benchmark initiative as a shared task for several years. Each time a separate aspect of flood analysis has been explored. For instance, in MediaEval 2017 the task aimed at the retrieval of flood-related images from social media. The task mainly involved analyzing the water level in different areas to differentiate between floods and regular water reservoirs, such as lakes [12]. In MediaEval 2018, the task was slightly modified by asking the participants to propose multi-modal classification frameworks for flood-related multimedia content [13]. In MediaEval 2019 and 2020, the tasks aimed at analyzing flood severity and flood events recognition in social media posts. ## 3 Approach Figure 1 provides the block diagram of the proposed framework for the RCTP task. The framework is composed of three main components namely (i) Pre-processing, (ii) Training and Classification, and (iii) Fusion. In the first step, different pre-processing techniques are employed to clean the dataset. Three different transformers are then trained on the data to obtain classification scores. In the final step, the classification scores of the individual models are combined in a late fusion scheme. The details of these steps are provided below. ### Pre-processing In the pre-processing step, we employed different techniques for cleaning the dataset. More specifically, we removed unnecessary information, such as user names, URLs, emojis, punctuation marks, stop words, etc. Besides this, we also performed the necessary pre-possessing tasks that are required to transform the raw text into a form that is suitable for the transformers. To achieve this, we used the TF.text library1. Footnote 1: [https://www.tensorflow.org/text/guide/bert_preprocessing_guide#text_preprocessing_with_tftext#](https://www.tensorflow.org/text/guide/bert_preprocessing_guide#text_preprocessing_with_tftext#) ### Classification via Transformers After cleaning and pre-processing the data, we trained three different models, namely BERT [14], RoBERTa [15], and XLNet [16]. The selection of these models for the task is motivated by their proven performance on similar tasks [17]. A brief overview of these models is provided below. * **BERT**: Bidirectional Encoder Representations from Transformers (BERT) is one of the state-of-the-art NLP algorithms for text processing. The model is pre-trained on a large collection of unlabeled text and can be fine-tuned for different text-analysis applications. 
The key attributes of the model include its bidirectional nature and its pre-training with the Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) objectives. In the experiments with BERT, we used the Adam optimizer with a learning rate of 0.001 and a batch size of 8 for 3 epochs. * **RoBERTa**: Robustly Optimized BERT is a modified version of the BERT model with an improved training mechanism. More specifically, RoBERTa removes the NSP objective and introduces dynamic masking. In addition, a larger batch size and a larger amount of training data are used in the training process. In this work, we used a learning rate of 0.001, a batch size of 20, and 10 epochs during the fine-tuning of the model for the desired task. * **XLNet**: XLNet is another state-of-the-art NLP algorithm. Similar to BERT, XLNet is also a bidirectional transformer and uses an improved training approach. In contrast to BERT and traditional NLP algorithms, XLNet relies on Permutation Language Modeling (PLM), predicting all the tokens in random order. This allows XLNet to handle dependencies and bidirectional relationships in a better way. In this work, we used a learning rate of 0.002, a batch size of 32, and 4 epochs during the fine-tuning of the model for the desired task. Figure 1: Block diagram of the proposed approach. We obtained the results in the form of posterior probabilities from these models, which are then used in the fusion scheme to obtain the final predicted labels. The fusion method used in this work is described in the next section. ### Fusion Our fusion method is based on late fusion, where we combine the classification scores obtained with the individual models for the final classification decision, as shown in Equ. 1. In the equation, \(S_{final}\) represents the final classification score while \(S_{n}\) is the score obtained with the \(n\)-th model. We note that in the current implementation, we used a simple fusion method by treating all the models equally (i.e., simple aggregation of the individual scores). \[S_{final}=S_{1}+S_{2}+S_{3}+\cdots+S_{n} \tag{1}\] ## 4 Results and Analysis Table 1 provides the experimental results of the proposed solutions on the development set. As can be seen in the table, overall better results are obtained with the BERT model, and surprisingly, a lower F1-score is observed for RoBERTa. In the future, we will further investigate the potential causes of the lower performance of RoBERTa by exploring different implementations and hyperparameter settings for it. As far as the performance of the fusion methods is concerned, overall better results are obtained with the pair of XLNet and BERT. One of the potential reasons for the lower performance of the fusion of all the models is the less accurate prediction of RoBERTa, as is also evident from the performance of the individual models. Table 2 provides the official results of the proposed solutions on the test set. In total, three different runs were submitted. The first run is based on the fusion of all three models used in this work. The remaining two runs are based on the fusion of the models in pairs of two. In run 2, BERT and XLNet are combined, while in run 3, RoBERTa and XLNet are jointly used. As can be seen in the table, better results are obtained for the fusion of the models in pairs of two, where the best performing pair obtained an improvement of 20% over the fusion of all three models.
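To make the fusion step of Equ. 1 concrete, the sketch below aggregates per-model posterior probabilities into a final relevance decision with equal weights. It assumes three already fine-tuned binary classifiers exposed through Hugging Face `transformers` pipelines; the checkpoint names and the `LABEL_1` convention for the relevant class are placeholders for demonstration, not the exact artifacts trained in this work.

```python
import numpy as np
from transformers import pipeline

# Placeholder checkpoints; in practice these would be the BERT, RoBERTa and
# XLNet models fine-tuned on the DisasterMM training split.
MODEL_NAMES = ["bert-base-uncased", "roberta-base", "xlnet-base-cased"]

def posterior_scores(texts, model_name):
    """Return P(relevant) for each tweet from one fine-tuned classifier."""
    clf = pipeline("text-classification", model=model_name, top_k=None)
    scores = []
    for preds in clf(texts):
        by_label = {p["label"]: p["score"] for p in preds}
        scores.append(by_label.get("LABEL_1", 0.0))  # assumed "relevant" label
    return np.array(scores)

def late_fusion(texts, model_names=MODEL_NAMES):
    """Equal-weight late fusion (Equ. 1): sum the per-model posteriors."""
    fused = sum(posterior_scores(texts, name) for name in model_names)
    return fused / len(model_names) > 0.5  # relevant if mean posterior > 0.5

if __name__ == "__main__":
    tweets = ["Main road flooded near the bridge, avoid the area",
              "Feeling flooded with emotions today"]
    print(late_fusion(tweets))
```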
\begin{table} \begin{tabular}{c c} \hline \hline **Method** & **F1-Score** \\ \hline BERT & 0.94 \\ RoBERTa & 0.78 \\ XLNet & 0.93 \\ Fusion 1 (RoBERTa, BERT, XLNet) & 0.75 \\ Fusion 2 (BERT, XLNet) & 0.93 \\ Fusion 3 (RoBERTa, XLNet) & 0.92 \\ \hline \hline \end{tabular} \end{table} Table 1: Experimental results of the proposed solutions on the development set. ## 5 Conclusions In this paper, we presented our solutions for the RCTP subtask of DisasterMM challenge posted in MediaEval 2022. We proposed a late fusion framework incorporating several state-of-the-art transformers for the task. In the current implementation, all the models are treated equally by assigning them equal weights (i.e., 1). In the future, we aim to employ merit-based fusion methods to further improve the final classification score.
2304.05486
A Topology by Geometrization for Sub-Iterated Immediate Snapshot Message Adversaries and Applications to Set-Agreement
The Iterated Immediate Snapshot model (IIS) is a central model in the message adversary setting. We consider general message adversaries whose executions are arbitrary subsets of the executions of the IIS message adversary. We present a new topological approach for such general adversaries, based upon geometric simplicial complexes. We are able to define a topology directly on the considered sets of executions, which gives both simpler and more powerful ways of using topology for distributed computability. As application of this new framework, we present a complete characterization and lower bounds for solving set-agreement for general sub-IIS message adversaries.
Yannis Coutouly, Emmanuel Godard
2023-04-11T20:40:04Z
http://arxiv.org/abs/2304.05486v1
A Topology by Geometrization for Sub-Iterated Immediate Snapshot Message Adversaries and Applications to Set-Agreement ###### Abstract The Iterated Immediate Snapshot model (IIS) is a central model in the message adversary setting. We consider general message adversaries whose executions are arbitrary subsets of the executions of the IIS message adversary. We present a new topological approach for such general adversaries, based upon geometric simplicial complexes. We are able to define a topology directly on the considered sets of executions, which gives both simpler and more powerful ways of using topology for distributed computability. As application of this new framework, we present a complete characterization and lower bounds for solving set-agreement for general sub-IIS message adversaries. topological methods, geometric simplicial complex, set-agreement Digital Object Identifier 10.4230/LIPIcs... ## 1 Introduction Since the seminal works of Herlihy-Shavit, Borowsky-Gafni and Saks Zaharoglou [1, 2, 3], using topological methods has proved very fruitful for distributed computing and for distributed computability in particular. Since those first results, the topological framework has been refined to be presented in a more effective way. In particular, the Iterated Immediate Snapshot model (\(IIS_{n}\)) is a special message adversary that has been proposed as a central model to investigate distributed computability. In this note, we propose an enhanced presentation of the topological approach in order to investigate distributed computability and complexity for message adversaries more general than \(IIS_{n}\). The Iterated Write Snapshot is a shared memory model. Single-writers/multi-readers registers are accessible by processes. There is usually as many registers as processes and the registers are arranged in a one-shot array. It can be assumed, as in [1], that there is a writeSnapshot() primitive that enables processes to atomically write values to their register and read the content of the full array. Each concurrent access can read the values corresponding to the calling process and also the values previously written by other processes in the same round. In a given round, all possible interleaving of calls to writeSnapshot() are allowed. The main interest of this model is that it has a simple synchronous and regular structure and that it was proved in [1] that a bounded colourless task can be wait-free solved in the classical read-write model if and only if it is solvable in the Iterated Write Snapshot model. So this model has the same computing power as shared memory, but using topological tools is simpler in this model (see the tutorial in [12] and the thorough coverage of [12]). Later, the Iterated Immediate Snapshot model has been introduced which is the counterpart of Iterated Write Snapshot model to message passing. Process may never fail, however, it is possible that a "fast" correct process never sees another correct process. The communication structure that one gets with a standard shared memory model is usually edge-transitive : in a round, if a process \(p\) "sees" a process \(p^{\prime}\) that also "sees" a process \(q\), then \(p\) also "sees" \(q\). Considering message passing systems, this condition is actually not necessary, and in [1], where the terminology of "message adversaries" is introduced, this condition is dropped by considering various families of directed graphs where the instant graphs are not transitive. 
In [1], Afek and Gafni show that the same tasks can be solved as in standard asynchronous shared memory if the instant directed graphs are tournaments. So some message adversaries with specific non-complete underlying graph of communication can have the same computing power as classical shared memory. Subsequently, Raynal and Stainer have shown in [13] that it is possible to consider various message adversaries (they are not all iterated) where further restrictions on the set of possible scenarios, _i.e._ weaker adversaries, correspond to well known asynchronous shared-memory models enriched with failures detectors. It appears that the general message adversary model is a very rich, and at some time very convenient (some message adversaries have very simple protocol complex) model to describe distributed systems from the point of view of computability. It is therefore of great interest to investigate as general as possible message adversaries, since these can _model_ numerous type of failures while being actually fault-free by themselves. We consider the setting of Iterated Immediate Snapshot message adversary (\(IIS_{n}\)), a system of \(n+1\) processes whose messages exchanges satisfy the Immediacy and Containment properties (see Section 2.2). In this paper, we investigate arbitrary sub-IIS message adversaries, that is message adversaries whose set of executions are arbitrary subsets of the executions of the \(IIS_{n}\) message adversary. We present a new topological method that enables to define a topology directly on the set of executions of \(IIS_{n}\), for any \(n\). We use the precise description we give of this new topology to give a full characterization of sub-IIS message adversaries solving the classical set-agreement problem. ### Motivation and Contributions In order to correctly handle general message adversaries, contrary to the usual focus in topological methods for distributed computing, we consider simplicial complexes primarily as _geometric_ simplicial complexes. We show then that it is possible to associate, via a natural _geo_ mapping, any infinite execution of \(IIS_{n}\) to a point of the standard euclidean space \(\mathbb{R}^{N}\). The topology on the set of executions is then the topology induced from the standard topology by the mapping _geo_ : the open sets are pre-images \(geo^{-1}(\Omega)\) of the open sets \(\Omega\) of \(R^{N}\). The standard euclidean topology is simple and well understood, however, since _geo_ is not injective, it is necessary to describe so-called "non-separable sets" in order to fully understand the new topology. In topology, two distinct elements \(x,y\) are said to be non-separable if for any two neighbourhoods \(\Omega_{x}\) of \(x\) and \(\Omega_{y}\) of \(y\), we have \(\Omega_{x}\cap\Omega_{y}\neq\emptyset\). In our setting, two executions are non-separable when they have the same image via the mapping \(geo\). So in Section 4, we first investigate \(geo\) pre-image sets of given points in \(\mathbb{R}^{N}\). The standard chromatic subdivision is the combinatorial topology representation of one round of the Immediate Snapshot model. Its simple and regular structure makes topological reasoning attractive. In this paper, we introduce a new universal algorithm and show its simple relationship with the standard chromatic subdivision as exposed in the geometric simplicial complex setting. This new algorithm averages with specific weights vectors of \(\mathbb{R}^{N}\) at each node. 
We denote this averaging algorithm the Chromatic Average Algorithm. Running the Chromatic Average Algorithm in the \(IIS_{n}\) model gives a natural geometric counterpart in \(\mathbb{R}^{N}\) to any given execution of \(IIS_{n}\). The mapping associating executions of \(IIS_{n}\) to points in \(R^{N}\) is the geometrization \(geo\). We present in Theorem 29 a complete combinatorial description of the different kind of pre-image sets \(geo^{-1}(x)\) with \(x\in\mathbb{R}^{N}\), so it describes all non-separable sets in \(IIS_{n}\). Interestingly, we show that they are of only three possible size : 1, 2 and infinite size. Since we do have non-separable sets in our setting, it shows that the standard abstract simplicial complexes approach is actually not always directly usable, since abstract simplicial complexes are known to have separable topology. It means that, for the first time, we could have to explicitly use the geometric version of simplicial complexes to fully investigate general distributed computability, in particular for non-iterated message adversaries. We call the topology defined here the _geometrization topology_ to emphasize this change of paradigm. The \(k-\)set-agreement problem is a standard problem in distributed computing and it is known to be a good benchmark for topological approaches. The \(k-\)set-agreement problem is a distributed task where processes have to agree on no more than \(k\) different initial values. The set-agreement problem is the \(k-\)set agreement problem with \(k+1\) processes. In the shared memory model, the impossibility of wait-free \(k-\)set agreement for more than \(k+1\) processes is one of the crowning achievements of topological methods in distributed computing [11, 12, 13]. We apply our technique to derive a characterization and lower bounds for general message adversaries solving set-agreement. The characterization of Th. 30 states that set-agreement is solvable for \(\mathcal{M}\subset IIS_{n}\) if and only if the geometrization of \(\mathcal{M}\) has an "hole", _i.e._\(geo(\mathcal{M})\) does not cover the convex hull of \(S^{n}\), the simplex of dimension \(n\). This new topological approach enables to efficiently handle non-iterated message adversaries in a general way for the first time. As is seen in the application to set-agreement, it is possibly to investigate new phenomenon. Moreover, in difference to what is usually implied, it seems that handling general adversaries is not really a question of compacity but way more a question of separability. Here, the characterization we give is also appropriate and directly applicable to non-compact adversaries. ### Related Works We have shown in the previous section that the model considered here is very relevant in many ways. This model of synchronous communication has actually been introduced numerous times under different names. We mention briefly the mobile omissions model [10], the more recent "Heard-of" model [14], the iterated snapshot model [1] and its final evolution as a _message adversary_ model [1]. Some equivalences have been proved between these synchronous presentations and asynchronous models in the case of colourless tasks [1]. Note also that in the case of dynamic networks, whenever the communication primitive is a broadcast (to the current neighbours), this model can also be used. In [1], Gafni, Kuznetsov and Manolescu investigate models that are subsets of the Iterated Immediate Snapshot model. 
They introduced _ad hoc_ ancillary infinite simplicial complexes, that are called _terminating subdivisions_. We believe our tools can provide a simpler, and less error-prone, framework to investigate distributed computability of sub-IIS models. In a series of works, averaging algorithms to solve relaxed versions of the Consensus problem, including approximate Consensus, have been investigated. In [14], Charron-Bost, Fugger, and Nowak have used matrix oriented approaches to show the convergence of different averaging algorithms. We use a similar stochastic matrix technique here to prove the convergence of the Chromatic Average Algorithm. In [12], Fugger, Nowak and Schwarz have shown tight bounds for solving approximate and asymptotic Consensus in quite general message adversaries. In [15], Nowak, Schmid, and Winkler propose knowledge-based topologies for all message adversaries. It is then used to characterize message adversaries that can solve Consensus. The scope of [15] is larger than the scope of the paper, however, note that contrary to those topologies, that are _implicitly_ defined by indistinguishability of local knowledge, the geometrization topology here is explicitly defined and fully described by Th. 29. Moreover, set-agreement has not been investigated in this knowledge-based framework. The \(k-\)set agreement problem is a classical and important coordination problems. It is also a theoretical benchmark for distributed computability in numerous models. A review by Raynal can be found in [11]. Set-agreement is the weaker version of \(k-\)set agreement, _i.e._ with \(k+1\) processes only. In [1], the "two generals problem", that is the Consensus problem for two processes is investigated for arbitrary sub-IIS models by Godard and Perdereau. Given that Consensus for two processes is actually the set-agreement problem, the characterization of solvability of set-agreement presented here is a generalization to any number of processes of the results of [1]. ### Outline In Section 2, we present standard definitions and our notation for message adversaries. In Section 3, we start by restating the standard definitions of using combinatorial topology for distributed computing in the geometric context. We then introduce the geometrization topology of \(IIS_{n}\). In Section 4, we describe and prove precisely the properties of the geometrization topology. In Section 5, we apply our framework to derive new characterization of computability and lower bounds for set-agreement. We conclude in Section 6 with some open questions. ## 2 Models and Definitions ### Message Adversaries We introduce and present here our notations. Let \(n\in\mathbb{N}\), we consider systems with \(n+1\) processes. We denote \(\Pi_{n}=[0,..,n]\) the set of processes. Since sending a message is an asymmetric operation, we will work with directed graphs. We recall the main standard definitions in the following. We use standard directed graph (or digraph) notations : given \(G\), \(V(G)\) is the set of vertices, \(A(G)\subset V(G)\times V(G)\) is the set of arcs. We denote by \(\mathcal{G}_{n}\) the set of directed graphs with vertices in \(\Pi_{n}\). A _dynamic graph_\(\mathbf{G}\) is a sequence \(G_{1},G_{2},\cdots,G_{r},\cdots\) where \(G_{r}\) is a directed graph with vertices in \(\Pi_{n}\). We also denote by \(\mathbf{G}(r)\) the digraph \(G_{r}\). A _message adversary_ is a set of dynamic graphs. 
Since that \(n\) will be mostly fixed through the paper, we use \(\Pi\) for the set of processes and \(\mathcal{G}\) for the set of graphs with vertices \(\Pi\) when there is no ambiguity. Intuitively, the graph at position \(r\) of the sequence describes whether there will be, or not, transmission of some messages sent at round \(r\). A formal definition of an execution under a scenario will be given in Section 2.4. We will use the standard following notations in order to describe more easily our message adversaries [10]. A sequence is seen as a word over the alphabet \(\mathcal{G}\). The empty word is noted \(\varepsilon\). Given \(A\subset\mathcal{G}\), \(A^{*}\) is the set of all finite sequences of elements of \(A\), \(A^{\omega}\) is the set of all infinite ones and \(A^{\infty}=A^{*}\cup A^{\omega}\). Given \(\mathbf{G}\in\mathcal{G}^{\omega}\), if \(\mathbf{G}=\mathbf{H}\mathbf{K}\), with \(\mathbf{H}\in\mathcal{G}_{n}^{*},\mathbf{K}\in\mathcal{G}_{n}^{\omega}\), we say that \(\mathbf{H}\) is _a prefix_ of \(\mathbf{G}\), and \(\mathbf{K}\)_a suffix_. \(Pref(\mathbf{G})\) denotes the set of prefixes of \(\mathbf{G}\). An adversary of the form \(A^{\omega}\) is called an _oblivious adversary_ or an _iterated adversary_. A word in \(\mathcal{M}\subset\mathcal{G}^{\omega}\) is called a _communication scenario_ (or _scenario_ for short) of message adversary \(\mathcal{M}\). Given a word \(\mathbf{H}\in\mathcal{G}^{*}\), it is called a _partial scenario_ and \(len(\mathbf{H})\) is the length of this word. The prefix of \(\mathbf{G}\) of length \(r\) is denoted \(\mathbf{G}_{|r}\) (not to be confused with \(\mathbf{G}(r)\) which is the \(r\)-th letter of \(\mathbf{G}\), it the digraph at time \(r\)). The following definitions provide a notion of causality when considering infinite word over digraphs. [[10]] Let \(\mathbf{G}\) a sequence \(G_{1},G_{2},\cdots,G_{r},\cdots\). Let \(p,q\in\Pi\). There is a _joinerly_ in \(\mathbf{G}\) at time \(r\) from \(p\) to \(q\), if there exists a sequence \(p_{0},p_{1},\ldots,p_{t}\in\Pi\), and a sequence \(r\leq i_{0}<i_{1}<\cdots<i_{t}\in\mathbb{N}\) where we have \(\models\)\(p_{0}=p,p_{t}=q\), \(\models\)for each \(0<j\leq t\), \((p_{j-1},p_{j})\in A(G_{i_{j}})\) This is denoted \(p\stackrel{{ r}}{{\underset{\mathbf{G}}{\mathbf{G}}{\mathbf{G}}{ \mathbf{G}}}}q\). We also say that \(p\) is causally influencing \(q\) from round \(r\) in \(\mathbf{G}\). ### Iterated Immediate Snapshot Message Adversary We say that a graph \(G\) has the _Immediacy Property_ if for all \(a,b,c\in V(G)\), \((a,b),(b,c)\in A(G)\) implies that \((a,c)\in A(G)\). A graph \(G\) has the _containment Property_ if for all \(a,b\in V(G)\), \((a,b)\in A(G)\) or \((b,a)\in A(G)\). [[1, 1]] We set \(IS_{n}=\{G\in\mathcal{G}_{n}\mid G\) has the Immediacy and Containment properties\(\}\). The Iterated Immediate Snapshot message adversary for \(n+1\) processes is the message adversary \(IIS_{n}=IS_{n}^{\omega}\). The Iterated Immediate Snapshot model was first introduced as a (shared) memory model and then has been shown to be equivalent to the above message adversary first as tournaments and iterated tournaments [1, 1], then as this message adversary [1, 1]. See also [15] for a survey of the reductions involved in these layered models. ### Examples We do not restrict our study to regular languages, however all message adversaries we are aware of are regular, as can be seen in the following examples, where the regular expressions prove to be very convenient. 
We show how standard fault environments are conveniently described in our framework. Consider a message passing system with \(n\) processes where, at each round, all messages can be lost. The associated message adversary is \(\mathcal{G}_{n-1}^{\omega}\). Consider a system with two processes \(\{\circ,\bullet\}\) where, at each round, only one message can be lost. The associated message adversary is \(\{\circ+\bullet,\circ+\bullet,\circ\rightarrow\bullet\}^{\omega}\). This is \(IIS_{1}\) ### Execution of a Distributed Algorithm Given a message adversary \(\mathcal{M}\) and a set of initial configurations \(\mathcal{I}\), we define what is an execution of a given algorithm \(\mathcal{A}\) subject to \(\mathcal{M}\) with initialization \(\mathcal{I}\). An execution is constituted of an initialization step, and a (possibly infinite) sequence of rounds of messages exchanges and corresponding local state updates. When the initialization is clear from the context, we will use _scenario_ and _execution_ interchangeably. An execution of an algorithm \(\mathcal{A}\) under scenario \(w\in\mathcal{M}\) and initialization \(\iota\in\mathcal{I}\) is the following. This execution is denoted \(\iota.w\). First, \(\iota\) affects an initial state to all processes of \(\Pi\). A round is decomposed in 3 steps : sending, receiving, updating the local state. At round \(r\in\mathbb{N}\), messages are sent by the processes using the SendAll() primitive. The fact that the corresponding receive actions, using the Receive() primitive, will be successful depends on \(G=w(r)\), \(G\) is called the _instant graph_ at round \(r\). Let \(p,q\in\Pi\). The message sent by \(p\) to all its neighbours is received by \(q\) on the condition that the arc \((p,q)\in A(G)\). Then, all processes update their state according to the received values and \(\mathcal{A}\). Note that, it is usually assumed that \(p\) always receives its own value, that is \((p,p)\in A(G)\) for all \(p\) and \(G\). Let \(w\in\mathcal{M},\iota\in\mathcal{I}\). Given \(u\in Pref(w)\), we denote by \(\mathbf{s}_{p}(\iota.u)\) the state of process \(p\) at the \(len(u)\)-th round of the algorithm \(\mathcal{A}\) under scenario \(w\) with initialization \(\iota\). This means in particular that \(\mathbf{s}_{p}(\iota.\varepsilon)\) represents the initial state of \(p\) in \(\iota\), where \(\varepsilon\) denotes the empty word. A task is given by a set \(\mathcal{I}\) of initial configurations, a set of output values \(Out\) and a relation \(\Delta\), the specification, between initial configurations and output configuration1. We say that a process _decides_ when it outputs a value in \(\O\). Finally and classically, Footnote 1: Note that the standard definition in the topological setting involves carrier map that we do not consider here for we will consider only one specific task, the Set Agreement problem An algorithm \(\mathcal{A}\) solves a Task \((\mathcal{I},Out,\Delta)\) for the message adversary \(\mathcal{M}\) if for any \(\iota\in\mathcal{I}\), any scenario \(w\in\mathcal{M}\), there exist \(u\) a prefix of \(w\) such that the states of the processes \(out=(\mathbf{s}_{0}(\iota.u),\ldots,\mathbf{s}_{n}(\iota.u))\) satisfy the specification of the task, ie \(\iota\Delta out\). ## 3 A Topology by Geometrization In this paper we present a new topological approach for investigating distributed computability. 
It extends the known simplicial complexes-based known method for finite executions to infinite executions without considering infinite additional complexes like in [1]. This enables to define directly a topology on the set of executions of the standard Iterated Immediate Snapshot model \(IIS_{n}\). ### Combinatorial Topology Definitions #### Geometric Simplicial Complexes Before giving the definition of the geometrization topology in Sect. 3.2.2, we state the definition of simplicial complexes, but not first as abstract complex, as is usually done in distributed computing, but primarily as geometrical objects in \(\mathbb{R}^{N}\). This is the reason we call this definition the geometrization topology. Intuitively we will associatea point in \(\mathbb{R}^{N}\) to any execution via a geometrization mapping \(geo\). The geometrization topology is the topology induced by \(geo^{-1}\) from the standard topology in \(\mathbb{R}^{N}\). This also makes \(geo\) continuous by definition. In the standard approach, geometric simplices are also used but they are introduced as geometric realizations of the abstract simplicial complexes. As will be seen later, when dealing with infinite complexes, the standard topology of these simplices does not enable to handle the computability of distributed tasks since we will need to define an other topology. We show that the topology on infinite complexes, as defined in standard topology textbook, is different from the one we show here to be relevant for distributed computability. See Section 2. Note that to be correctly interpreted, the topology we construct is on the set of infinite executions, not on the complexes corresponding the finite executions. The following definitions are standard definitions from algebraic topology [10]. We fix an integer \(N\in\mathbb{N}\) for this part. We denote \(||x||\) the euclidean norm in \(\mathbb{R}^{N}\). For a bounded subset \(X\subset\mathbb{R}^{n}\), we denote \(diam(X)\) its diameter. Let \(n\in\mathbb{N}\). A finite set \(\sigma=\{x_{0},\ldots,x_{n}\}\subset\mathbb{R}^{N}\) is called a simplex of dimension \(n\) if the vectors \(\{x_{1}-x_{0},\ldots,x_{n}-x_{0}\}\) are linearly independent. We denote by \(|\sigma|\) the convex hull of \(\sigma\). A simplicial complex is a collection \(C\) of simplices such that : 1. If \(\sigma\in C\) and \(\sigma^{\prime}\subseteq\sigma\), then \(\sigma^{\prime}\in C\), 2. If \(\sigma,\tau\in C\) and \(|\sigma|\cap|\tau|\neq\emptyset\) then there exists \(\sigma^{\prime}\in C\) such that \(|\sigma|\cap|\tau|=|\sigma^{\prime}|\), \(\sigma^{\prime}\subset\sigma,\sigma^{\prime}\subset\tau\). We denote \(\wr C\wr C=\bigcup\limits_{S\in C}|S|\), this is the _geometrization of \(C\)_. Note that these definitions do not require complexes to be a finite collection of simplicies. The simplices of dimension \(0\) (singleton) of \(C\) are called vertices, we denote \(V(C)\) the set of vertices of \(C\). A complex is pure of dimension \(n\) if all maximal simplices are of dimension \(n\). In this case, a simplex of dimension \(n-1\) is called a facet. The _boundary_ of a simplex \(\sigma=\{x_{0},\ldots,x_{n}\}\) is the pure complex \(\bigcup_{i\in[0,n]}\{x_{j}\mid j\in[0,n],i\neq j\}\) of dimension \(n-1\). It is denoted \(\delta(\sigma)\), it is the union of the facets of \(\sigma\). Let \(A\) and \(B\) be simplicial complexes. 
A map \(f\colon V(A)\to V(B)\) defines a _simplicial map_ if it preserves the simplices, _i.e._ for each simplex \(\sigma\) of \(A\), the image \(f(\sigma)\) is a simplex of \(B\). By linear combination of the barycentric coordinates, \(f\) extends to the linear simplicial map \(f\colon\wr A\wr\to\wr B\wr\). Note that it is not necessary to assume \(V(S)\subset V(C)\) here, since the vertices of the simplex \(S\) being extremal points, they are necessarily in \(V(C)\). We start by defining some geometric transformations of simplices (here seen as sets of points). The choice of the coefficients will be justified later. Consider a set \(V=(y_{0},\ldots,y_{d})\) of size \(d+1\) in \(\mathbb{R}^{N}\). We define the function \(\zeta_{V}:V\longrightarrow\mathbb{R}^{N}\) by, for all \(j\in[0,d]\), \[\zeta_{V}(y_{j})=\frac{1-\frac{d}{2d+1}}{d+1}y_{j}+\sum_{i\neq j}\frac{1+\frac{1}{2d+1}}{d+1}y_{i}\] We now define in a geometric way the _standard chromatic subdivision_ of a colored simplex \((S,\mathcal{P})\), where \(S=\{x_{0},x_{1},\ldots,x_{n}\}\) and \(\mathcal{P}(x_{i})=i\). The chromatic subdivision \(\mathrm{C}hr(S)\) for the colored simplex \(S=\{x_{0},\ldots,x_{n}\}\) is a simplicial complex defined by the set of vertices \(V(\mathrm{C}hr(S))=\{\zeta_{V}(x_{i})\mid i\in[0,n],V\subset V(S),x_{i}\in V\}\). For each pair \((i,V)\), \(i\in[0,n]\) and \(V\subset V(S)\), there is an associated vertex \(y\) of \(\mathrm{C}hr(S)\), and conversely each vertex has an associated pair. The _color_ of \((i,V)\) is \(i\). The set \(V\) is called the _view_. We define \(\Phi\) the following _presentation_ of a vertex \(y\), \(\Phi(y)=(\mathcal{P}(y),V_{y})\) where \(\mathcal{P}(y)=i\) and \(V_{y}=V\). The simplices of \(\mathrm{C}hr(S)\) are the sets of \(d+1\) points \(\{\zeta_{V_{0}}(x_{i_{0}}),\cdots,\zeta_{V_{d}}(x_{i_{d}})\}\) where * there exists a permutation \(\pi\) on \([0,d]\) such that \(V_{\pi(0)}\subseteq\cdots\subseteq V_{\pi(d)}\), * if \(i_{j}\in\mathcal{P}(V_{\ell})\) then \(V_{j}\subset V_{\ell}\). In Fig. 1, we present the construction for \(\mathrm{C}hr(S^{2})\). For convenience, we associate \(\circ,*,\bullet\) to the processes \(0,1,2\) respectively. In Fig. 1 (left), we consider the triangle \(x_{\circ},x_{*},x_{\bullet}\) in \(\mathbb{R}^{2}\), with \(x_{\circ}=(0,0)\), \(x_{\bullet}=(1,0)\), \(x_{*}=(\frac{1}{2},\frac{\sqrt{3}}{2})\). We have that \(\zeta_{(x_{\circ},x_{\bullet})}(x_{\bullet})=(\frac{1}{3},0)\), \(\zeta_{(x_{\circ},x_{\bullet})}(x_{\circ})=(\frac{2}{3},0)\) and \(\zeta_{(x_{\circ},x_{*},x_{\bullet})}(x_{*})=(\frac{1}{2},\frac{\sqrt{3}}{10})\). The relation between the instant graph (top) and the simplex \(\left\{(\frac{2}{3},0),(1,0),(\frac{1}{2},\frac{\sqrt{3}}{10})\right\}\) (grey area) is detailed in the following section. In the following, we will be interested in iterations of \(Chr(S^{n},\mathcal{P})\).
Figure 1: Standard chromatic subdivision construction for dimension 2. On the left, the association between an instant graph of \(IS_{2}\) (top) and a simplex of \(\mathrm{C}hr(S^{2})\) (grey area) is illustrated. The last property of the definition of the chromatic subdivision means that we can drop the \(C\) index in the coloring of complex \(C\) and use \(\mathcal{P}\) to denote the coloring at all steps. From its special role, it is called the _process color_, and we drop \(\mathcal{P}\) in \(Chr(S,\mathcal{P})\), using in the following \(Chr(S)\) for all simplices \(S\) of iterations of \(Chr(S^{n})\). In [10], Kozlov showed how the standard chromatic subdivision complex relates to Schlegel diagrams (special projections of cross-polytopes), and used this relation to prove that the standard chromatic subdivision is actually a subdivision. In [11, section 3.6.3], a general embedding in \(\mathbb{R}^{n}\) parameterized by \(\epsilon\in\mathbb{R}\) is given for the standard chromatic subdivision. The geometrization here is done choosing \(\epsilon=\frac{d}{2d+1}\) in order to have "well balanced" drawings. ### Encoding Iterated Immediate Snapshots Configurations #### Algorithms in the Iterated Immediate Snapshots Model It is well known, see _e.g._ [11, Chap. 3&4, Def. 3.6.3], that each maximal simplex \(S=\{\zeta_{V_{0}}(x_{i_{0}}),\cdots,\zeta_{V_{n}}(x_{i_{n}})\}\) from the chromatic subdivision of \(S^{n}\) can be associated with a graph of \(IS_{n}\) denoted \(\Theta(S)\). We have \(V(\Theta(S))=\Pi_{n}=[0,n]\) and set \(\Theta(\zeta_{V_{j}}(x_{i_{j}}))=\mathcal{P}(x_{i_{j}})\). The arcs are defined using the representation \(\Phi\) of points, \(A(\Theta(S))=\{(i,j)\mid i\neq j,V_{i}\subseteq V_{j}\}\). The mapping \(\theta\) will denote \(\Theta^{-1}\). We can transpose this presentation to an averaging algorithm called the _Chromatic Average_ Algorithm presented in Algorithm 1.
```
1: \(x\gets x_{i}^{*}\);
2: Loop forever
3:   SendAll(\((i,x)\));
4:   \(V\leftarrow\) Receive()  // set of all received messages;
5:   \(d\gets sizeof(V)-1\)  // \(i\) received \(d+1\) messages, including its own;
6:   \(x\leftarrow\frac{1-\frac{d}{2d+1}}{d+1}\,x+\sum_{(j,x_{j})\in V,j\neq i}\frac{1+\frac{1}{2d+1}}{d+1}\,x_{j}\);
```
**Algorithm 1** The Chromatic Average Algorithm for process \(i\). Executing one round of the loop of the Chromatic Average for instant graph \(G\), the state of process \(i\) is \(x_{i}^{\prime}=\zeta_{V_{i}}(x_{i}^{*})\), where \(V_{i}\) is the view of \(i\) on this round, that is the set of \((j,x_{j})\) it has received, with \(\Theta(\{\zeta_{V_{0}}(x_{0}^{*}),\cdots,\zeta_{V_{n}}(x_{n}^{*})\})=G\). See _e.g._ Fig. 1 (left): the simplex of the grey area corresponds to the ordered sequence of views \(\{x_{\bullet}\}\subset\{x_{\bullet},x_{\circ}\}\subset\{x_{\bullet},x_{\circ},x_{*}\}\), associated to the directed graph depicted at the top right. Adjacency for a given \(i\) corresponds to the smallest subset containing \(x_{i}\). By iterating, the chromatic subdivisions \(\mathrm{C}hr^{r}(S^{n})\) are given by the global states under all possible \(r\) rounds of the Chromatic Average Algorithm. Finite rounds give the Iterated Chromatic Subdivision (hence the name). This is an algorithm that is not meant to terminate (like the full information protocol). The infinite runs are used below to define a topology on \(IIS_{n}\). The Chromatic Average algorithm is therefore the geometric counterpart to the Full Information Protocol that is associated with \(\mathrm{C}hr\) [11].
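For readers who want to experiment with the construction, the sketch below is a direct transcription of Algorithm 1 applied to the whole configuration at once. An instant graph of \(IS_{n}\) is encoded by its views (process \(i\) receives exactly from the processes in `views[i]`, always including itself); this encoding and the NumPy representation of the points are conventions chosen for the illustration only.

```python
import numpy as np

def chromatic_average_round(x, views):
    """One round of Algorithm 1 on the global state.

    x     : (n+1, N) array; x[i] is the current point of process i in R^N
    views : views[i] is the set of processes whose message i receives
            (it must contain i and come from an IS_n instant graph)
    """
    new_x = np.empty_like(x)
    for i, view in enumerate(views):
        d = len(view) - 1                          # i received d+1 messages
        own = (1 - d / (2 * d + 1)) / (d + 1)      # weight of i's own value
        other = (1 + 1 / (2 * d + 1)) / (d + 1)    # weight of each other value
        new_x[i] = own * x[i] + other * sum(x[j] for j in view if j != i)
    return new_x

# n = 2: process 2 is scheduled alone, then process 0 sees {0, 2},
# then process 1 sees everybody (one maximal simplex of Chr(S^2)).
S2 = np.array([[0.0, 0.0],                 # x_o
               [0.5, np.sqrt(3) / 2],      # x_*
               [1.0, 0.0]])                # x_.
print(chromatic_average_round(S2, [{0, 2}, {0, 1, 2}, {2}]))
```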
In particular, any algorithm can be presented as the Chromatic Average together with a terminating condition and an output function of \(x\). This one round transformation for the canonical \(S^{n}\) can actually be done for any simplex \(S\) of dimension \(n\) of \(\mathbb{R}^{N}\). For \(G\in IS_{n}\), we denote \(\mu_{G}(S)\) the geometric simplex that is the image of \(S\) by one round of the Chromatic average algorithm under instant graph \(G\). The definitions of the previous section can be considered as mostly textbook (as in [11]), or folklore. To the best of our knowledge, the Chromatic Average Algorithm, as such, is new, and there is no previous complete proof of the link between the Chromatic Average Algorithm and iterated standard chromatic subdivisions. However, one shall remark that people are, usually, actually _drawing_ standard chromatic subdivisions using the Chromatic Average Algorithm. #### A Topology for \(Iis_{n}\) Let \(w\in IIS_{n}\), \(w=G_{1}G_{2}\cdots\). For the prefix of \(w\) of size \(r\), \(S\) a simplex of dimension \(n\), we define \(geo(w_{|r})(S)=\mu_{G_{r}}\circ\mu_{G_{r-1}}\circ\cdots\circ\mu_{G_{1}}(S)\). Finally, we set \(geo(w)=\lim_{r\to\infty}geo(w_{|r})(S^{n})\). We prove in the following section that this actually converges. We define the _geometrization topology_ on the space \(IIS_{n}\) by considering as open sets the sets \(geo^{-1}(\Omega)\) where \(\Omega\) is an open set of \(\mathbb{R}^{N}\). A collection of sets can define a topology when any union of sets of the collection is in the collection, and when any finite intersection of sets of the collection is in the collection. This is straightforward for a collection of inverse images of a collection that satisfies these properties. A neighbourhood for point \(x\) is an open set containing \(x\). In topological spaces, a pair of distinct points \(x,y\) is called _non-separable_ if there does not exist two disjoint neighbourhoods of these points. The pre-images \(geo^{-1}(x)\) that are not singletons are _non-separable sets_. We will see that we always have non-separable sets and that they play an important role for task solvability. Subset of \(IIS_{n}\) will get the subset topology, that is, for \(\mathcal{M}\subseteq IIS_{n}\), open sets are the sets \(geo^{-1}(\Omega)\cap\mathcal{M}\) where \(\Omega\) is an open set of \(\mathbb{R}^{N}\). We set \(\mathcal{U}\mathcal{M}=geo(\mathcal{M})\) the geometrization of \(\mathcal{M}\). Note that the geometrization should not be confused with the standard _geometric realization_. They are the same at the set level but not at the topological level. This is quite well known, see e.g. [10]. At times, in order to emphasize this difference, for a simplex \(S\subset\mathbb{R}^{N}\), we will also use \(\wr S\wr\) instead of \(|S|\). The _geometrization_ of \(C\), denoted \(\wr C\wr\), that is the union of the convex hulls \(|\sigma|\) of the simplices \(\sigma\) of \(C\), is endowed with the standard topology from \(\mathbb{R}^{N}\). We also note this topological space as \(\wr C\wr\). **Remark 13**.: When considering the input complex embedded in \(\mathbb{R}^{N}\), the geometrization topology could be applied on all simplices, in effect providing a new topological framework for so called protocol complex. This could be done by applying the Chromatic Average algorithm. This is not detailed here as we will not need it to investigate set-agreement. 
Moreover, note that this construction works also for any model that corresponds to a mesh-shrinking subdivision. #### Convexity and Metric Results Before investigating the geometrization topology, we present some metric results relating vertices of the chromatic subdivision. In particular we prove that the sequences \(geo(w_{|r})(S)\) converge to a point. The following lemma comes from the convexity of the \(\mu_{G}\) transforms. **Lemma 14**.: _Let \(w\) a run, let \(r,r^{\prime}\in\mathbb{N},r<r^{\prime}\) then \(|geo(w_{|r})(S^{n})|\subset|geo(w_{|r})(S^{n})|\)._ Proof.: Consider only one step. We have that \(\frac{1-\frac{d}{2d+1}}{d+1}+d\times\frac{1+\frac{1}{2d+1}}{d+1}=\frac{1- \frac{d}{2d+1}+d+\frac{d}{2d+1}}{d+1}=1\). So one step of the Chromatic Average gives, on each process, a linear combination with non-negative coefficients that sums to \(1\), it is therefore a barycentric combination on the points of the simplex at the beginning of the round. It is therefore a convex mapping of this simplex. Since composing convex mapping is also convex, and that \(S^{n}\) is a convex set, we get the result by recurrence. There exists reals \(0<K^{\prime}<K<1\), such that for all \(G\) of \(IS_{n}\), all \(p,q\in V(S)\), \(p^{\prime},q^{\prime}\in V(\mu_{G}(S))\), such that \(\mathcal{P}(p)=\mathcal{P}(p^{\prime})\) and \(\mathcal{P}(q)=\mathcal{P}(q^{\prime})\), we have \[K^{\prime}||p-q||\leq||p^{\prime}-q^{\prime}||\leq K||p-q||.\] Proof.: This is a consequence of \(\mu_{G}\) transforms being convex when \(G\in IS_{n}\). It corresponds to a stochastic matrix (non-negative coefficients and all lines coefficient sums to one) that is scrambling (there is a line without null coefficients) hence contractive. See _e.g._[10, Chap. 1] for definitions and a proof for any given \(G\) of \(IS_{n}\). Then \(K\) (resp. \(K^{\prime}\)) is the largest (resp. smallest) such bounds over all \(G\in IS_{n}\). While iterating the chromatic subdivision, we remark that the diameter of the corresponding simplices is contracting. From Lemma 4.1, we have Let \(S\) a simplex of \(\mathbb{R}^{N}\), then \(diam(\mu_{G_{r}}\circ\mu_{G_{r-1}}\circ\cdots\circ\mu_{G_{1}}(S))\leq K^{r} diam(S)\), where \(K\) is the constant from the previous lemma. Since the simplices are contracted by the \(\mu_{G}\) functions, the sequence of isobarycenters of \((geo(w_{|r}(S))_{r\in\mathbb{N}^{*}}\) has the Cauchy property and this sequence is therefore convergent to some point \(x\in\mathbb{R}^{N}\). Since the diameter of the simplices converges to \(0\), it makes senses to say that the limit of the simplices is the point \(x\). Note that it would also be possible to formally define a metric on the convex subsets of \(\mathbb{R}^{N}\) and consider the convergence of the simplices in this space. ## 4 Geometrization Equivalence As will be be shown later, the geometrization has a crucial role in order to understand the relationship between sets of possible executions and solvability of distributed tasks. In this section, we describe more precisely the pre-images sets, that is subsets of \(IIS_{n}\) of the form \(geo^{-1}(x)\) for \(x\in|S^{n}|\). In particular, we will get a description of the non-separable sets of execution. ### Definitions We say that two executions \(w,w^{\prime}\in IIS_{n}\) are _geo-equivalent_ if \(geo(w)=geo(w^{\prime})\). The set of all \(w^{\prime}\) such that \(geo(w)=geo(w^{\prime})\) is called the equivalence class of \(w\). 
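Before describing these equivalence classes, note that the metric results above are easy to observe numerically. The sketch below iterates the chromatic averaging along a randomly drawn scenario of \(IIS_{2}\) and prints the diameter of \(geo(w_{|r})(S^{2})\), which shrinks at least geometrically as stated in the corollary; the one-round update repeats the sketch given after Algorithm 1, and the sampling of instant graphs by random ordered partitions is an assumption made only for this illustration.

```python
import random
import numpy as np

def chr_avg_round(x, views):
    """One round of the Chromatic Average on the global state (cf. Algorithm 1)."""
    out = np.empty_like(x)
    for i, view in enumerate(views):
        d = len(view) - 1
        own = (1 - d / (2 * d + 1)) / (d + 1)
        other = (1 + 1 / (2 * d + 1)) / (d + 1)
        out[i] = own * x[i] + other * sum(x[j] for j in view if j != i)
    return out

def random_is_views(n):
    """Views of one random IS_n instant graph, drawn as an ordered partition."""
    procs = list(range(n + 1))
    random.shuffle(procs)
    cuts = sorted(random.sample(range(1, n + 1), random.randint(0, n)))
    blocks = [procs[i:j] for i, j in zip([0] + cuts, cuts + [n + 1])]
    views, seen = [None] * (n + 1), set()
    for block in blocks:          # a process sees its block and all earlier ones
        seen |= set(block)
        for p in block:
            views[p] = set(seen)
    return views

def diam(points):
    return max(np.linalg.norm(p - q) for p in points for q in points)

x = np.array([[0.0, 0.0], [0.5, np.sqrt(3) / 2], [1.0, 0.0]])   # the simplex S^2
for r in range(1, 11):
    x = chr_avg_round(x, random_is_views(2))
    print(r, round(diam(x), 6))    # diam(geo(w_|r)(S^2)) <= K^r diam(S^2)
```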
Since the topology we are interested in for \(\wr IIS_{n}\wr\) is the one induced by the standard separable space \(\mathbb{R}^{N}\) via the \(geo^{-1}\) mapping, it is straightforward to see that non-separable sets are exactly the geo-equivalence classes that are not singletons. In this section, we describe all equivalence classes and show that there is a finite number of possible sizes for these sets. We define subsets of \(IS_{n}\), the sets \(Solo(P)\), that correspond to instant graphs where the processes in \(P\subset\Pi\) have no incoming message from processes outside of \(P\). We have \(Solo(\Pi)=IS_{n}\). In the complex \(\mathrm{Ch}r(S^{n})\), with \(P\subset\Pi\), \(Solo(P)\) is the set of simplices \(T\in\mathrm{Ch}r(S^{n})\) such that \(\forall(p,q)\in A(\Theta(T)),q\in P\Rightarrow p\in P\). We denote by \(\mathcal{K}_{\Pi}\) the instant graph that is complete on \(\Pi\). An execution \(w\) is said to be _fair_ for \(P\), \(w\in Fair(P)\), if \(w\in Solo(P)^{\omega}\) and for all \(p,q\in P\), \(\forall r\in\mathbb{N}\), \(p\) causally influences \(q\) in \(w\) after round \(r\). Fairness for \(P\) means that processes in \(P\) are only influenced by processes in \(P\), and that any process of \(P\) influences the other processes of \(P\) infinitely many times. We have the following equivalent, constructive definition: **Proposition 18**.: _Let \(w\in Solo(P)^{\omega}\). An execution \(w\) is \(Fair\) for \(P\) if and only if \(w\) has no suffix in \(\bigcup_{Q\neq\emptyset,Q\subsetneq P}Solo(Q)^{\omega}\)._ Proof.: Assume we have a suffix \(s\) of \(w\) in \(Solo(Q)^{\omega}\) with \(Q\subsetneq P\). Let \(p\in P\setminus Q\) and \(r\) the starting index of the suffix. Then \(\forall q\in Q\), \(p\) must causally influence \(q\) after round \(r\), by definition of fairness for \(P\). Denote by \(q_{0}\) the first element of \(Q\) to be causally influenced by \(p\) at some time \(t\geq r\). So \(q_{0}\) receives a message from some \(p^{\prime}\in\Pi\), \(p^{\prime}\neq q_{0}\), at time \(t\). Since \(s\in Solo(Q)^{\omega}\), this means that \(q_{0}\) can only receive messages from processes in \(Q\). Hence \(p^{\prime}\in Q\) and \(p^{\prime}\) was influenced by \(p\) at time \(t-1\). A contradiction with the minimality of \(q_{0}\). So \(w\) is not in \(Fair(P)\). Conversely, assume that \(w\notin Fair(P)\). Then there exist \(p,q\in P\) and \(s\) such that for all \(r\geq s\), \(p\) does not causally influence \(q\) after round \(r\). We set \(Q\) as the set of processes that causally influence \(q\) for all \(r\geq s\). We have \(p\notin Q\), so \(Q\subsetneq P\). We denote by \(s_{0}\) a time from which no process of \(\Pi\setminus Q\) influences a process in \(Q\). By construction, the suffix at step \(s_{0}\) is in \(Solo(Q)^{\omega}\). ### First Results on Geometrization We start by presenting a series of results about geometrization. Lemma 14 gives the following immediate corollaries. **Corollary 19**.: _Let \(w\) be a run in \(IIS_{n}\); then \(\forall r\in\mathbb{N}\), \(geo(w)\in|geo(w_{|r})(S^{n})|\)._ **Proposition 20**.: _Let \(w,w^{\prime}\) be two geo-equivalent runs in \(IIS_{n}\); then \(\forall r\in\mathbb{N}\), \(geo(w_{|r})(S^{n})\cap geo(w^{\prime}_{|r})(S^{n})\neq\emptyset\)._ Proof.: The intersection of the geometrizations \(|geo(w_{|r})(S^{n})|\) and \(|geo(w^{\prime}_{|r})(S^{n})|\) contains at least \(geo(w)\) by the previous corollary. Since the simplices \(geo(w_{|r})(S^{n})\) and \(geo(w^{\prime}_{|r})(S^{n})\) belong to the complex \(\mathrm{C}hr^{r}(S^{n})\), they also intersect as simplices.
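The combinatorial conditions above (\(Solo(P)\) and causal influence) have a direct operational reading. The following sketch is ours, not from the paper: it encodes an instant graph as a set of arcs \((p,q)\), meaning that \(q\) receives the message of \(p\) in that round, which matches the condition \(\forall(p,q),q\in P\Rightarrow p\in P\), and it checks \(Solo(P)\) and causal influence over a finite window of rounds. The three-process example at the end is an arbitrary illustration.

```python
def is_solo(G, P):
    # Solo(P): no process of P has an incoming message from outside of P,
    # i.e. for every arc (p, q) with q in P, we also have p in P.
    return all(p in P for (p, q) in G if q in P)

def influences(rounds, p, q, start=0):
    # Does p causally influence q in the finite window rounds[start:]?
    # 'reached' holds the processes already carrying information from p.
    reached = {p}
    for G in rounds[start:]:
        reached |= {dst for (src, dst) in G if src in reached}
    return q in reached

# One round where 0 and 1 only hear each other and 2 hears nobody:
G = {(0, 1), (1, 0)}
print(is_solo(G, {0, 1}))        # True : {0, 1} hear nothing from outside
print(is_solo(G, {1, 2}))        # False: 1 hears 0, which is outside {1, 2}
print(influences([G, G], 0, 2))  # False: 0 never reaches 2 in this window
```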
**Proposition 21**.: _Let \(S\) be a maximal simplex of the chromatic subdivision \(\mathrm{C}hr(S^{n})\) that is not \(\theta(\mathcal{K}_{\Pi})\). Then there is \(P\subsetneq\Pi\) such that \(\Theta(S)\in Solo(P)\)._ Conversely, we can describe \(Solo(P)\) more precisely. We denote by \(\delta(S^{n},P)\) the subsimplex of \(S^{n}\) corresponding to \(P\subset\Pi\). This is the boundary relative to \(P\) in \(S^{n}\), and we have that \(\bigcup_{P\subset\Pi}\delta(S^{n},P)=\bigcup_{p\in\Pi}\delta(S^{n},\Pi\setminus p)=\delta(S^{n})\). **Proposition 22** (Boundaries of \(\mathrm{C}hr\) are _Solo_).: _Let \(P\) be a subset of \(\Pi\). Then \(Solo(P)=\{S\mid S\) a maximal simplex of \(\mathrm{C}hr(S^{n}),|S|\cap|\delta(S^{n},P)|\neq\emptyset\}\)._ Proof.: Denote \(q\) such that \(\Pi=P\cup\{q\}\). Then by construction, \(Solo(P)\) corresponds exactly to the simplices where the processes in \(P\) do not receive any message from \(q\), i.e., the simplices intersecting the boundary \(\delta(S^{n},P)\). For a given size \(s\) of \(P\), the \(Solo\) sets are disjoint; however, this does not form a partition of \(\mathrm{C}hr\). Finally, by iterating the previous proposition, the boundaries of \(S^{n}\) are described by \(Solo(P)^{\omega}\). **Proposition 23**.: _Let \(P\) be a subset of \(\Pi\). We have \(\wr Solo(P)^{\omega}\wr=|\delta(S^{n},P)|\)._ We can now state the main result that links geometrically fair executions and the corresponding simplices: in a fair execution, the corresponding simplices, which are included by convexity, have to eventually be strictly included in the interior. **Proposition 24** (Geometric interpretation of \(Fair\)).: _Let \(w\) be an execution that is \(Fair\) for \(\Pi\); then \(\forall s\in\mathbb{N}\), \(\exists r>s\), such that \(\delta(geo(w_{|s})(S))\cap geo(w_{|r})(S)=\emptyset\)._ Proof.: Let \(s\in\mathbb{N}\) and an execution \(w\). We denote \(T=geo(w_{|s})(S)\). Consider a process \(p\in\Pi\); for every process \(q\neq p\), \(p\) causally influences \(q\) in \(w\) after round \(s\). Since \(w\) is fair in \(\Pi\), we can consider \(r>s\), the time at which \(p\) has influenced all \(q\) from round \(s\). At this step, for all \(q\neq p\), the barycentric coordinate of the vertex of \(geo(w_{|r})(S)\) of colour \(q\) relative to the vertex of \(geo(w_{|r})(S)\) of colour \(p\) is strictly positive. This means that \(geo(w_{|r})(S)\) does not intersect \(\delta(T,\Pi\setminus p)\). Since \(w\) is fair in \(\Pi\), we can repeat this argument for any \(p\in\Pi\). We denote by \(r^{*}\) the maximal such \(r\), and since \(\bigcup_{p\in\Pi}\delta(T,\Pi\setminus p)=\delta(T)\), we have that \(\delta(geo(w_{|s})(S))\cap geo(w_{|r^{*}})(S)=\emptyset\). ### A Characterization of Geo-Equivalence We start with simple, but useful, sufficient conditions about the size of geo-classes. **Proposition 25** (\(Fair(\Pi)\) is separable).: _Let \(w\in IIS_{n}\) and denote \(\Sigma\) the geo-class of \(w\). If \(w\) is \(Fair\) on \(\Pi\), then \(\#\Sigma=1\)._ Proof.: Let \(w^{\prime}\in\Sigma\). We will show that \(w^{\prime}\) shares all prefixes of \(w\). Let \(r\in\mathbb{N}\). From Prop. 24 and Corollary 19, we get that \(geo(w)\) does not belong to the boundary of \(geo(w_{|r})(S)\), nor to the boundary of \(geo(w_{|r}^{\prime})(S)\). Assume that \(w\) and \(w^{\prime}\) do not have the same prefix of size \(r\), that is, \(geo(w_{|r})(S)\neq geo(w_{|r}^{\prime})(S)\). From Prop. 20, \(geo(w_{|r})(S)\) and \(geo(w_{|r}^{\prime})(S)\) have to intersect (as simplices), and since they are different, they can intersect only on their boundary.
This means that \(geo(w)\) would belong to the boundary, a contradiction. So they have the same prefixes and \(w^{\prime}=w\). **Proposition 26** (Infinite Cardinal).: _Let \(n\geq 2\). Let \(w\), \(w^{\prime}\) be two distinct executions such that \(geo(w)=geo(w^{\prime})\) and there exists \(s\in\mathbb{N}\) such that \(\forall r>s\), \(geo(w_{|r})(S)\cap geo(w_{|r}^{\prime})(S)=T\) with \(T\) a simplex of dimension \(k\leq n-2\). Then the geo-equivalence class of \(w\) is of infinite size._ Proof.: Let \(w\), \(w^{\prime}\) be two executions with \(geo(w)=geo(w^{\prime})\) and \(\forall r>s\), \(geo(w_{|r})(S)\cap geo(w_{|r}^{\prime})(S)=T\) with \(T\) of dimension \(k\leq n-2\). Denote by \(P\) the colors of \(T\). Since \(k\leq n-2\), we have at least two processes \(p_{1}\neq p_{2}\in\Pi\setminus P\). The suffix of \(w\) at length \(s\) is in \(Solo(P)^{\omega}\). Hence, for the processes in \(P\), when running in \(w\) or \(w^{\prime}\), it is not possible to distinguish these 3 cases for the subgraph induced by \(\{p_{1},p_{2}\}\) in the instant graphs: \(p_{1}\gets p_{2}\), \(p_{1}\leftrightarrow p_{2}\) and \(p_{1}\to p_{2}\). So \(\forall r>s\), we have 3 possible ways of completing what is happening on the subgraph induced by the processes outside \(P\) in \(G\in Solo(P)\). So we have infinitely many different executions, and the cardinality of the geo-class of \(w\) is infinite. Let us consider the remaining cases. In the following, let \(w\in IIS_{n}\) and denote \(\Sigma\) the geo-class of \(w\). **Proposition 27** (Boundaries of \(S^{n}\) are separable).: _If \(w\) is \(Fair\) on \(\Pi\setminus\{p\}\) for some \(p\in\Pi\), then \(\#\Sigma=1\)._ Proof.: We denote \(Q=\Pi\setminus\{p\}\). We apply Prop. 25 for \(n-1\) to \(w^{\prime}\), the restriction of \(w\) to the set of processes \(Q\) (this is possible by definition of \(Fair\): no process of \(Q\) receives messages from outside of \(Q\)). Since \(w^{\prime}\) satisfies the condition of Prop. 25 (by definition of Fair), the geo-class of \(w^{\prime}\) is of size \(1\). Since there is only one way of completing an execution restricted to \(Q\) into one in \(Solo(Q)\) (adding \((q,p),\forall q\in Q\)), we get that there is only \(w\) in the equivalence class. A suffix of a word \(w\) is _strict_ if it is not equal to \(w\). **Proposition 28**.: _If \(w\) has only a strict suffix that is \(Fair\) on \(\Pi\setminus\{p\}\) for some \(p\in\Pi\), then \(\#\Sigma=2\)._ Proof.: We denote \(Q=\Pi\setminus\{p\}\). We can write \(w=uav\) where \(u\in IS^{*}_{n}\), \(a\in IS_{n}\) and \(v\) is Fair on \(Q\) but \(av\) is not. We can choose \(u\) such that \(u\) has the shortest length. We consider \(w^{\prime}\) such that \(geo(w^{\prime})=geo(w)\). Let \(r\) be the length of \(ua\). We denote by \(T\) the facet of \(geo(ua)(S^{n})\) with colors \(Q\). Since \(v\) is Fair for \(Q\), we can apply Prop. 24 to \(v_{|Q}\), the restriction of \(v\) to \(Q\). So \(geo(w)\) is not on the boundaries of \(T\), which means, from Prop. 20, that either \(geo(w^{\prime}_{|r})(S^{n})=geo(w_{|r})(S^{n})\) or \(geo(w^{\prime}_{|r})(S^{n})\cap geo(w_{|r})(S^{n})=T\). In both cases, we can apply Prop. 25 to \(v^{\prime}\), the restriction of \(w\) to \(Q\), which means that there is only one restricted execution in \(Q\). Since there is only one way to complete to \(p\), there are as many elements in the class as there are simplices at round \(r\) that include \(T\). Since we have a subdivision, we have exactly two simplices sharing the facet \(T\). In the first case, this means that \(w^{\prime}_{|r}=ua\) and \(w=w^{\prime}\).
In the second case, we have that \(w^{\prime}_{|r}=ub\) for some \(b\neq a\). We remark that if \(w^{\prime}_{|r-1}\neq u\), this would contradict the minimality of \(u\). Indeed, if the prefixes of length \(r-1\) are different, this means that \(av\) is Fair for \(Q\). Using these previous propositions, and remarking that for any \(w\), there exists \(P\) such that \(w\) has a suffix in \(Fair(P)\), we can now present our main result regarding the complete classification of geo-equivalence classes. **Theorem 29**.: _Let \(n\in\mathbb{N}\) and \(\Sigma\) a geo-equivalence class on \(S^{n}\). Then there are exactly \(3\) possible cardinals for \(\Sigma\) (only \(2\) when \(n=1\), the case of [1]). Let \(w\in IIS_{n}\) and denote \(\Sigma\) the geo-class of \(w\):_ * _If \(w\) is \(Fair\) on \(\Pi\) or on \(\Pi\setminus\{p\}\) for some \(p\in\Pi\), then \(\#\Sigma=1\);_ * _if \(w\) has only a strict suffix that is \(Fair\) on \(\Pi\setminus\{p\}\) for some \(p\in\Pi\), then \(\#\Sigma=2\);_ * _otherwise \(\Sigma\) is infinite._ ## 5 The Set-Agreement Problem For all \(n\), the _set-agreement problem_ is defined by the following properties [11]. Given initial values \(init\) in \([0,n]\), each process outputs a value such that: the size of the set of output values is at most \(n\) (Agreement); the output values are initial values of some processes (Integrity); all processes terminate (Termination). We will consider in this part sub-IIS message adversaries \(\mathcal{M}\), that is, \(\mathcal{M}\subseteq IIS_{n}\). It is well known that set-agreement is impossible to solve on \(IIS_{n}\); we prove the following characterization. **Theorem 30**.: _Let \(\mathcal{M}\subset IIS_{n}\). It is possible to solve Set-Agreement on \(\mathcal{M}\) if and only if \(\wr\mathcal{M}\wr\neq|S^{n}|\)._ ### Impossibility Result On the impossibility side, we will prove a stronger version with non-silent algorithms. An algorithm is said to be _non-silent_ if it sends messages forever. Here, this means that a process could have decided a value while still participating in the algorithm. **Theorem 31**.: _Let \(\mathcal{M}\subset IIS_{n}\). If \(\wr\mathcal{M}\wr=\wr IIS_{n}\wr=|S^{n}|\), then it is not possible to solve Set-Agreement on \(\mathcal{M}\), even with a non-silent algorithm._ We will need the following definition from combinatorial topology. **Definition 32** (Sperner Labelling).: _Consider a simplicial complex \(C\) that is a subdivision of a chromatic simplex \((S,\chi)\). A labelling \(\lambda:V(C)\longrightarrow\Pi\) is a Sperner labelling if for all \(x\in V(C)\), for all \(\sigma\subset S\), we have that \(x\in|\sigma|\Rightarrow\lambda(x)\in\chi(\sigma)\)._ **Lemma 33** (Sperner Lemma [28]).: _Let \(C\) be a simplicial complex that is a subdivision of a chromatic simplex \((S,\chi)\) with Sperner labelling \(\lambda\). Then there exists \(\sigma\in C\) such that \(\lambda(\sigma)=\Pi\)._ A simplex \(\sigma\) whose labelling uses all the colors of \(\Pi\) is called _panchromatic_. Proof of Theorem 31.: By contradiction, we assume there is a non-silent algorithm \(\mathcal{A}\) (in full information protocol form) solving set-agreement on \(\mathcal{M}\). We run the algorithm on initial inputs \(init(i)=i\). We translate the full information protocol to the chromatic average, non-silent form: the initial value of \(i\) is \(x_{i}^{*}\); when the decision value is given, we still compute and send the chromatic average forever. We can also assume a "normalized" version of the algorithm: when a process receives a decision value from a neighbour, it will decide instantly on this value.
Such a normalization does not impact the correctness of the algorithm since set-agreement is a colorless task. The proof will use the Sperner Lemma with labels obtained from the eventual decision value of the algorithm. However, it is not possible to directly use the Sperner Lemma for the "full subdivision under \(\mathcal{M}\)" (which we won't define), since this subdivision could be infinite. The following proof will use König's Lemma to get an equivalent statement. Given \(t\geq 0\), we consider \(\mathrm{C}hr^{t}(S^{n})\) under our algorithm with initial values \(init(i)=i\). For any vertex, we define the following labelling \(\lambda_{t}\): if the process \(i\) has not terminated at time \(t\) with state \(x\in V(\mathrm{C}hr^{t}(S^{n}))\), then the Sperner label is \(\lambda_{t}(x)=i\); otherwise it is the decided value. Since the decided value depends only on the local state, the label of a vertex at time \(t\) is independent of the execution leading to it. The goal of the following is to show that there is an entire geo-equivalence class that does not belong to \(\mathcal{M}\). By the Integrity property, the values decided on a face of \(S^{n}\) of processes \(i_{1},\ldots,i_{n}\), i.e., for \(Solo(i_{1},\ldots,i_{n})^{\omega}\), are taken in \(\{i_{1},\ldots,i_{n}\}\). From Prop. 3, at any \(t\), this labelling therefore defines a Sperner labelling of a (chromatic) subdivision of \(S\). We consider the set \(\mathcal{S}\) of all simplices \(S\) of dimension \(n\) of \(\mathrm{C}hr^{t}(S^{n})\), for all \(t\). For a given \(t\), from the Sperner Lemma, there is at least one simplex of \(\mathrm{C}hr^{t}(S^{n})\) that is panchromatic. There is therefore an infinite number of simplices \(S\) in \(\mathcal{S}\) that are panchromatic. We consider now \(\mathcal{T}\subset\mathcal{S}\), the set of simplices \(T\in\mathcal{S}\) such that there is an infinite number of panchromatic simplices \(S\) such that \(|S|\subset|T|\). Note that \(T\) need not be panchromatic. Since the number of simplices of \(\mathrm{C}hr^{t}(S^{n})\) is finite for a given \(t\), there is at least one simplex of \(\mathrm{C}hr^{t}(S^{n})\) that is in \(\mathcal{T}\). Therefore the set \(\mathcal{T}\) is infinite. We build a rooted-tree structure over \(\mathcal{T}\): the root is \(S^{n}\) (indeed it is in \(\mathcal{T}\)), and the parent-child relationship between \(T\) and \(T^{\prime}\) is defined when \(T^{\prime}\in\mathrm{C}hr(T)\). We have an infinite tree with finite branching. By König's Lemma, we have an infinite simple path from the root. We denote by \(T_{t}\) the vertex at level \(t\) of this path. We have \(|T_{t+1}|\subset|T_{t}|\) and \((T_{t})_{t\in\mathbb{N}}\) converges (same argument as the end of Section 3.2.3) to some \(y\in|S^{n}|\). The increasing prefixes corresponding to \(T_{t}\) define an execution \(w\) of \(IIS_{n}\). We will now consider two different cases, not on whether or not \(w\in\mathcal{M}\), but on the result of \(\mathcal{A}\) on execution \(w\). For the first case, assume that algorithm \(\mathcal{A}\) has eventually decided on all processes on run \(w\) at some time \(t_{0}\). Since it could be that \(w\notin\mathcal{M}\), we cannot conclude yet. But since all processes have decided, they do not change their label in subsequent steps. By definition, \(T_{t_{0}}=geo(w_{|t_{0}})(S^{n})\) contains an infinite number of panchromatic simplices, in particular at least one. So the simplex \(geo(w_{|t_{0}})(S^{n})\) is panchromatic.
Hence any run with prefix \(w_{|t_{0}}\) cannot be in \(\mathcal{M}\), since \(\mathcal{A}\) solves set-agreement on \(\mathcal{M}\). Therefore \(w^{\prime}=w_{|t_{0}}\mathcal{K}_{\mathrm{II}}^{\omega}\) ( where \(\mathcal{K}_{\Pi}\) is the complete graph), is a fair execution that does not belong to \(\mathcal{M}\). Its entire geo-equivalence class, which is a singleton, is not in \(\mathcal{M}\). The second case is when algorithm \(\mathcal{A}\) does not eventually decide on all processes on run \(w\). Therefore \(w\notin\mathcal{M}\). Now we show that all elements \(w^{\prime}\) of the geo-class of \(w\) are also not in \(\mathcal{M}\). Assume otherwise, then \(\mathcal{A}\) halts on \(w^{\prime}\). By Prop. 20, at any \(t\), the simplex corresponding to \(w^{\prime}_{|t}\) intersects \(T_{t}\) on a simplex of smaller dimension whose geometrization contains \(y\). Consider \(t_{0}\) such that the execution has decided at this round for \(w^{\prime}\). Consider now \(T_{t_{0}+1}\), it intersects the decided simplex of \(\mathrm{Ch}r^{t_{0}}(S^{n})\) corresponding to \(w^{\prime}\), which means that the processes corresponding to the intersection were solo in \(w^{\prime}(t_{0}+1)\) and in \(w(t_{0}+1)\). When a process does not belong to a set of solo processes of the round, it receives all their values. So by normalization property of algorithm \(\mathcal{A}\), this means that in \(T_{t_{0}+1}\), all processe have decided. A contradiction with the fact that \(\mathcal{A}\) does not decide on all processes on run \(w\). This impossibility result means that there are many strict subsets \(\mathcal{M}\) of \(IIS_{n}\) where it is impossible to solve set-agreement, including cases where \(IIS_{n}\setminus\mathcal{M}\) is of infinite size. ### Algorithms for Set-Agreement In this section, we consider message adversaries \(\mathcal{M}\) that are of the form \(IIS_{n}\setminus geo^{-1}(y)\) for a given \(y\in|S^{n}|\). We note \(w\in IIS_{n}\), such that \(geo(w)=y\). We have \(w\notin\mathcal{M}\). In other words, \(\mathcal{M}=IIS_{n}\setminus\mathcal{C}\), where \(\mathcal{C}=geo^{-1}(geo(w))\) is the equivalence class of \(w\). We also denote \(\sigma_{y}(r)\) the simplex \(geo(w_{|r})(S^{n})\). #### 5.2.1 From Sperner Lemma to Set-Agreement Algorithm Remark that the protocol complex at time \(r\) is exactly \(\mathrm{Ch}r^{r}(S^{n})\), there is no hole "appearing" in finite time for such \(\mathcal{M}\). From Sperner Lemma, any Sperner labelling of a subdivision of \(S^{n}\) admits at least one simplex that is panchromatic. In order to solve set-agreement, the idea of Algorithm 2 is to try to confine the panchromatic, problematic but unavoidable, simplex of \(\mathrm{Ch}r^{t}(S^{n})\) to \(\sigma_{y}(r)\). Since the geo-class of \(w\) is not in \(\mathcal{M}\), any execution will eventually diverge from \(\sigma_{y}(r)\) and end in a non panchromatic simplex. We now define a special case of Sperner labelling of the Standard Chromatic Subdivision that admits exactly one given simplex that is panchromatic. We consider the generic colored simplex \((S,\chi)\) where \(S=(x_{0},\ldots,x_{n})\) and coloring function \(\chi\), that could be different from \(\mathcal{P}\). We consider labellings of the complex \(Chr(S)\) **Definition 34**.: _Let \(\tau\in Chr(S,\chi)\). 
\(f:V(Chr(S,\chi))\longrightarrow\Pi\) _is a Sperner \(\tau\)-panlabelling if: \(f\) is a Sperner labelling of \((S,\chi)\); for every simplex \(\sigma\in\mathrm{C}hr(S)\), \(f(\sigma)=\Pi\) if and only if \(\sigma=\tau\)._ **Proposition 35**.: _Let \(\tau\) be a maximal simplex of \(Chr(S,\chi)\); then there exists a \(\tau\)-panlabelling \(\lambda\) of \(Chr(S,\chi)\)._ This key proposition is proved in the next section. Denote by \(\lambda_{\tau}(S,\chi)\) such a Sperner \(\tau\)-panlabelling of \(\mathrm{C}hr(S,\chi)\). Before stating the algorithm, we show how to construct a sequence of panlabellings for \(\mathrm{C}hr^{r}(S^{n})\). For \(r\in\mathbb{N}\), we denote by \(\Psi_{w}(r)\) the labelling defined by the following recurrence. Intuitively, it is the following. In \(\mathrm{C}hr^{r}(S^{n})\), we have that \(\sigma_{y}(r)\) is panchromatic, all other simplices using at most \(n\) colors. In \(\mathrm{C}hr^{r+1}(S^{n})\), we label vertices that do not belong to the subdivision of \(\sigma_{y}(r)\) by the labels used at step \(r\). On vertices from \(\mathrm{C}hr(\sigma_{y}(r))\), we use \(\lambda_{\theta(w(r+1))}\), the Sperner \(\tau\)-panlabelling associated with \(\theta(w(r+1))\), to complete the labelling, which uses at most \(n\) colors on a given simplex, except at \(\sigma_{y}(r)\). In order to simplify notation, we also write \(\lambda_{G}\) for the labelling \(\lambda_{\theta(G)}\). Of course, we apply \(\lambda_{w(r+1)}\) using as input (corner) colors the colors from \(\Psi_{w}(r)\). This way, on the neighbours of \(\sigma_{y}(r)\) the labelling is compatible. We denote by \(\gamma_{r}(x)\) the precursor of level \(r\) of \(x\in V(\mathrm{C}hr^{r+1}(S^{n}))\), that is, the vertex of \(V(\mathrm{C}hr^{r}(S^{n}))\) from which \(x\) is originating. We set \(\Psi_{w}(1)(x)=\lambda_{w(1)}(S^{n},\mathcal{P})(x)\) for all \(x\in V(\mathrm{C}hr^{1}(S^{n}))\), and for \(r\in\mathbb{N}^{*}\) \[\Psi_{w}(r+1)(x)=\begin{cases}\Psi_{w}(r)(\gamma_{r}(x))&\text{if }x\notin|geo(w_{|r})(S^{n})|\\ \lambda_{w(r+1)}(\Psi_{w}(r)(\sigma_{y}(r)))(x)&\text{if }x\in|geo(w_{|r})(S^{n})|\end{cases}\] **Proposition 36**.: _For all \(r\), \(\Psi_{w}(r)\) is a Sperner \(\sigma_{y}(r)\)-panlabelling of \(\mathrm{C}hr^{r}(S^{n})\)._ Proof.: The proof is done by recurrence. The case \(r=1\) is Prop. 35. Assume that \(\Psi_{w}(r)\) is a Sperner \(\sigma_{y}(r)\)-panlabelling of \(\mathrm{C}hr^{r}(S^{n})\). Consider now \(\Psi_{w}(r+1)\) for \(\mathrm{C}hr^{r+1}(S^{n})\). By construction and the recurrence assumption, panchromatic simplices can only lie in \(|\sigma_{y}(r)|\). Since \(\lambda_{w(r+1)}\) is a Sperner panlabelling and the corner colors for \(\sigma_{y}\) are taken from \(\Psi_{w}(r)\), we have that \(\sigma_{y}(r+1)\) is the only panchromatic simplex of \(\mathrm{C}hr^{r+1}(S^{n})\). We now prove the correctness of \(\mathcal{A}_{w}\) presented in Algorithm 2. Consider an execution \(v\in\mathcal{M}\). For Termination: since elements of the geo-class of \(w\) are not in \(\mathcal{M}\), there exists a round \(r\) at which \(v_{|r}\neq w^{\prime}_{|r}\) for all \(w^{\prime}\in geo^{-1}(geo(w))\), i.e., the conditional at line 2 is false for all processes and the algorithm is terminating. For Agreement: when terminating at round \(r\), \(i\) is not in \(\sigma_{y}(r)\), by the loop conditional, so since \(\Psi_{w}(r)\) is only panchromatic on \(\sigma_{y}(r)\), the number of decided values is at most \(n\). Integrity comes from the fact that \(\Psi_{w}(r)\) is a Sperner labelling.
#### 5.2.2 Sperner Panlabellings of the Standard Chromatic Subdivision In this section, \(n\) is fixed. We show how to construct a Sperner panlabelling of the standard chromatic subdivision. We consider the generic colored simplex \((S,\chi)\) where \(S=(x_{0},\ldots,x_{n})\) and coloring function \(\chi\), that could be different from \(\mathcal{P}\). We consider labellings of the colored complex \(Chr(S,\chi)\). We show the following combinatorial result about Sperner labellings. Let \(\tau\) be a maximal simplex of \(Chr(S,\chi)\), then there exists a \(\tau-\)panlabelling \(\lambda\) of \(Chr(S,\chi)\). We start by some definitions related to proving the above theorem. It is possible to associate to any simplex \(\sigma\) of \(Chr(S)\) a pre-order \(\succ\) on \(\Pi\) that corresponds to the associate graph \(\Theta(\sigma)\) : \(i\succ j\) when \((i,j)\in A(\Theta(\sigma))\). We call equivalence classes for \(\Theta(\sigma)\), the classes of the equivalence relation defined by \(i\succ j\wedge j\succ i\). It corresponds actually to the strongly connected components of the directed graph \(\Theta(\sigma)\). We define the _process view_ of a point. This is the color of points in the view \(V\) of vertex \((i,V)\) of the standard chromatic subdivision. [Process View] The _process view_ of point \(x=(\chi(x),V)\in V(\mathrm{Chr}(S,\chi))\) is defined by : \(V_{x}=\{\chi(y)|y\in V\}\). For \(\tau\in\mathrm{Chr}(S,\chi)\), we also define the _process view relative to \(\tau\)_ of a process \(p\), denoted \(V_{p}^{\tau}\). It is the process view of the point of \(\tau\) whose color is \(p\). It is linked to pre-order \(\succ\) : we have \(V_{p}^{\tau}=\{q\mid q\succ p\}\). Let \(\tau\) be a fixed maximal simplex of \(Chr(S)\). We show how to construct a \(\tau-\)panlabelling. We choose a permutation \(\varphi\) on \(\Pi\) such that it defines circular permutations on the equivalent classes of \(\Theta(\tau)\). Let \(p\in\Pi\), given \(W\subset V_{p}^{\tau}\) such that \(p\in W\), we denote by \(min^{*}(p,W)=\min\{i\in\mathbb{N}^{*}\mid\varphi^{i}(p)\in W\}\). Note that since \(\varphi\) is a permutation, there exists \(j>0\) such that \(\varphi^{j}(p)=p\), and since \(p\in W\), the minimum is taken over a non-empty set. Finally we set \(\varphi^{*}(p,W)=\varphi^{min^{*}(p,W)}(p).\) This is the first point of \(W\) that is in the orbit of \(p\) in \(\varphi\). We define \(\lambda_{\tau}:V(Chr(S))\to V(S)\), for \(x\in V(Chr(S))\), we set \[\lambda_{\tau}(x)=\begin{cases}q&\text{if $\exists q\in V_{x}$ and $q\notin V_{\chi(x)}^{\tau}$}\\ \varphi^{*}(\chi(x),V_{x}\cap V_{\chi(x)}^{\tau})&\text{otherwise.}\end{cases}\] Intuitively, for a given vertex of \(Chr(S)\) with view \(V\), if the process sees an other process \(q\) than in \(\tau\), then it is labelled by this \(q\), otherwise it will choose the first process in the circular orbit of \(\varphi\) that is in its view. The labelling \(\lambda_{\tau}\) is a \(\tau-\)panlabelling. First we show that it is indeed a Sperner labelling. In both cases of the definition, \(\lambda_{\tau}(x)\) belongs to \(V_{x}\). For \(x\in V(Chr(S))\), for \(\sigma\subset S\), \(x\in|\sigma|\), with \(\sigma\) minimum for this property, means that the presentation of \(x\) is \(\Phi(x)=(i,\sigma)\) for some \(x_{i}\) such that \(x_{i}\in V(\sigma)\) and \(\chi(x_{i})=i\). Now we show that the only panchromatic simplex is \(\tau\). 
By construction, with \(x\in V(\tau)\), \(\varphi^{*}(\chi(x),V_{x})=\varphi(\chi(x))\) since in this case \(V_{x}=V_{\chi(x)}^{\tau}\). So \(\tau\) is panchromatic through \(\lambda_{\tau}\). Now we consider \(\sigma\neq\tau\). We have two possible cases : 1. \(\exists x\in V(\sigma),q\in V_{x},q\notin V_{\chi(x)}^{\tau}\), 2. \(\forall x\in V(\sigma),V_{x}\subseteq V_{\chi(x)}^{\tau}\). We start with the first case, we denote by \(C\) the highest, for \(\succ\) in \(\sigma\), class such there is \(x\) in \(C\) satisfying the clause (1). We show that \(\#\lambda_{\tau}(C)\cap C<\#C\), where \(\#\) denotes the cardinal of a set. By definition of \(C\), \(\lambda_{\tau}^{-1}(C)\subseteq C\). Since \(\lambda_{\tau}(x)\notin C\), this means that \(\#\lambda_{\tau}(C)\cap C\neq\#C\). By assumption all classes \(C^{\prime}\) that are higher than \(C\) choose colors in \(C^{\prime}\), so \(\sigma\) is not panchromatic under \(\lambda_{\tau}\). Now, we assume we do not have case (1), this means that \(\forall x\in V(\sigma),\lambda_{\tau}(x)=\varphi^{*}(\chi(x),V_{x})\). Since \(\sigma\neq\tau\), there exists \(x\in V(\sigma),V_{x}\subsetneq V_{\chi(x)}^{\tau}\). We choose the lowest such \(x\) for \(\succ\) in \(\tau\). We consider \(C_{x}\) the class of \(x\) in \(\sigma\). We show that \(\#\lambda_{\tau}(C_{x})<\#C_{x}\). We denote \(C_{x}^{\tau}\) the class of color \(\chi(x)\) in \(\tau\). First we show that \(C_{x}\subseteq C_{x}^{\tau}\). Indeed, assume there is \(y\in C_{x}\) such that \(y\notin C_{x}^{\tau}\). Since the view of elements of the same class are the same, this means that \(\chi(x)\in V_{y}\) and \(y\) would satisfy property 1. A contradiction to the case we are considering. And this is true for all \(y\in C_{x}\). Now we show \(C_{x}\subsetneq C_{x}^{\tau}\). We have \(V_{x}\subsetneq V_{\chi(x)}^{\tau}\). Let \(y\in V_{\chi(x)}^{\tau}\setminus V_{x}\). If \(y\notin C_{x}^{\tau}\), by the same previous argument, we get a contradiction. Hence \(y\in C_{x}^{\tau}\) and therefore \(C_{x}\subsetneq C_{x}^{\tau}\). We denote \(p=\varphi^{*}(\chi(x),C_{x}^{\tau}\setminus C_{x})\). We note \(p^{\prime}=\varphi^{-1}(p)\). We have by definition of \(\varphi^{*}(.,C_{x}^{\tau}\setminus C_{x})\), that \(p^{\prime}\in C_{x}\), therefore \(C_{\chi_{|\sigma}^{-1}(p^{\prime})}=C_{x}\). Now we set \(p^{\prime\prime}=\varphi^{*}(p^{\prime},C_{x})\). The color \(p^{\prime\prime}\) has at least two predecessors in the labelling : \(p^{\prime}\) by construction (since \(x\) was choosen the lowest for \(\succ\) then \(V_{\chi_{|\sigma}^{-1}(p^{\prime})}=V_{p^{\prime}}^{\tau}\)) and \(p^{\prime\prime\prime}=\varphi^{-1}(p^{\prime\prime})\) which is not \(p^{\prime}\) since \(\varphi(p^{\prime})=p\notin C_{x}\). So \(\#\lambda_{\tau}(\chi_{|\sigma}^{-1}(V_{x}^{\tau}))<\#V_{x}^{\tau}\), and \(\lambda_{\tau}(\sigma)\neq\Pi\). #### 5.2.3 Lower Bounds It is possible to use the impossibility result to prove the following lower bound. Algorithm 2 is therefore optimal for fair \(w\). Let \(\mathcal{A}\) be an algorithm that solves set-agreement on \(\mathcal{M}=IIS_{n}\setminus geo^{-1}(geo(w))\) with \(w\in IIS_{n}\). Then, for any execution \(v\), \(t\in\mathbb{N}\), such that \(v_{|t}=w_{|t}^{\prime}\) for some \(w^{\prime}\in geo^{-1}(geo(w))\), \(\mathcal{A}\) has not terminated at \(t\). Proof.: Suppose \(\mathcal{A}\) has decided on all process at \(t\), with \(v_{|t}=w_{|t}^{\prime}\) for some \(w^{\prime}\in geo^{-1}(geo(w))\). So \(\mathcal{A}\) solve set-agreement on \(w^{\prime}\). 
A contradiction with Th. 31, since \(\wr\mathcal{M}\cup\{w^{\prime}\}\wr=|S^{n}|\). ## 6 Conclusion and Open Questions In this note, we have presented how to construct a topology directly on the set of executions of \(IIS_{n}\), the Iterated Immediate Snapshot message adversary. Though this is not a simple textbook topology, since there are non-separable points, the characterization we presented enables us to fully understand it. As a first application, we were able to characterize for the first time general subsets of \(IIS_{n}\) where set-agreement is solvable. We also believe this new approach could be successfully applied to other distributed tasks and distributed models. Another interesting open question would be to compare the geometrization topology to the knowledge-based ones defined in [20].
2301.10133
Read the Signs: Towards Invariance to Gradient Descent's Hyperparameter Initialization
We propose ActiveLR, an optimization meta algorithm that localizes the learning rate, $\alpha$, and adapts them at each epoch according to whether the gradient at each epoch changes sign or not. This sign-conscious algorithm is aware of whether from the previous step to the current one the update of each parameter has been too large or too small and adjusts the $\alpha$ accordingly. We implement the Active version (ours) of widely used and recently published gradient descent optimizers, namely SGD with momentum, AdamW, RAdam, and AdaBelief. Our experiments on ImageNet, CIFAR-10, WikiText-103, WikiText-2, and PASCAL VOC using different model architectures, such as ResNet and Transformers, show an increase in generalizability and training set fit, and decrease in training time for the Active variants of the tested optimizers. The results also show robustness of the Active variant of these optimizers to different values of the initial learning rate. Furthermore, the detrimental effects of using large mini-batch sizes are mitigated. ActiveLR, thus, alleviates the need for hyper-parameter search for two of the most commonly tuned hyper-parameters that require heavy time and computational costs to pick. We encourage AI researchers and practitioners to use the Active variant of their optimizer of choice for faster training, better generalizability, and reducing carbon footprint of training deep neural networks.
Davood Wadi, Marc Fredette, Sylvain Senecal
2023-01-24T16:57:00Z
http://arxiv.org/abs/2301.10133v1
# Read the Signs ###### Abstract We propose ActiveLR, an optimization meta algorithm that localizes the learning rate, \(\alpha\), and adapts them at each epoch according to whether the gradient at each epoch changes sign or not. This sign-conscious algorithm is aware of whether from the previous step to the current one the update of each parameter has been too large or too small and adjusts the \(\alpha\) accordingly. We implement the Active version (ours) of widely used and recently published gradient descent optimizers, namely SGD with momentum, AdamW, RAdam, and AdaBelief. Our experiments on ImageNet, CIFAR-10, WikiText-103, WikiText-2, and PASCAL VOC using different model architectures, such as ResNet and Transformers, show an increase in generalizability and training set fit, and decrease in training time for the Active variants of the tested optimizers. The results also show robustness of the Active variant of these optimizers to different values of the initial learning rate. Furthermore, the detrimental effects of using large mini-batch sizes are mitigated. ActiveLR, thus, alleviates the need for hyper-parameter search for two of the most commonly tuned hyper-parameters that require heavy time and computational costs to pick. We encourage AI researchers and practitioners to use the Active variant of their optimizer of choice for faster training, better generalizability, and reducing carbon footprint of training deep neural networks. Machine Learning, ICML, ICML ## 1 Introduction The current state-of-the-art optimization algorithms are mechanical in their nature, in the sense that they passively average the gradients, based on a predefined number of past steps, instead of actively adjusting the step size at each epoch according to the change in the sign of the gradients. The moving-average-based optimizers, such as SGD with momentum (Sutskever et al., 2013), Adam (Kingma and Ba, 2014), RAdam (Liu et al., 2019), and RMSprop (Tieleman and Hinton, 2012), base their optimization calculations on a fixed number of previous gradient values, regardless of what their sign is. If the sign of the gradients changes after epoch \(e\), we want the optimizer to recognize this jump over the optimum immediately and decrease the step size accordingly. Furthermore, current optimizers update all the weights by a global scalar learning rate, \(\alpha\) (referred to by most deep learning frameworks as LR or learning rate). We introduce ActiveLR, an optimization meta algorithm that can be implemented on top of current optimizers to adjust the hyper-parameter \(\alpha\) at each epoch for each model parameter. We mathematically prove that ActiveLR variant of a given optimizer has a lower objective function cost compared to the vanilla implementation of that optimizer. One of the main objectives of ActiveLR is to obviate the need for the tuning of the initial learning rate, \(\alpha\), and mini-batch size. For different datasets, architectures, and tasks, the values of \(\alpha^{*}\), the optimal initial learning rate, and the optimal mini-batch size (Iiduka, 2021), which lead to optimal convergence, are different. 
Although for well-known benchmark tasks, such as ImageNet with ResNet18 for object recognition, the optimal values for different hyper-parameters have been immensely studied and made publicly available, for new tasks, researchers and practitioners must start anew and perform costly hyper-parameters search to find the optimal values for the initial learning rate, mini-batch size, what learning rate scheduler to use, and at which epochs they should decay the learning rate by what amount. Moreover, learning rate values higher than \(\alpha^{*}\) lead to divergence of the optimization, while values lower than \(\alpha^{*}\) lead to significantly slower rate of convergence and also increase the possibility of being stuck in a local minimum. Also, large mini-batch sizes are necessary for fast training of large-scale datasets and utilizing the full computational power of multiple GPUs (You et al., 2017). Our tests show that vanilla implementations of SGD with momentum, AdamW, RAdam, and Adabelief get stuck in local minima for smaller learning rates and fail to achieve high performance on the test set. Moreover, their performance has a negative correlation with mini-batch size. As we increase the mini-batch size, the training becomes unstable and generalizability suffers significantly. On the other hand, ActiveLR implementations of these optimizers (i.e., ActiveSGD, ActiveAdamW, ActiveRAdam, and ActiveBelief) outperform their original implementations, and are also robust to the values of initial learning rate and mini-batch size. This has major implication for research and practice. Not only is tuning the learning rate and mini-batch size time consuming and costly, it causes severe environmental impacts. For instance, the greenhouse gas emissions for training a state-of-the-art transformers model is equivalent to 10 years of emissions of a person in the U.S. (Lacoste et al., 2019; Strubell et al., 2019). Furthermore, while researchers at large institutions have access to high compute power and afford to perform intense hyper-parameter search on hundreds of GPUs and TPUs, most AI researchers and practitioners have significantly more limited access to computation power. Therefore, ActiveLR helps in democratizing AI, saving time and cost to AI researchers, and also reducing greenhouse emissions. ## 2 Problematization SGD and adaptive optimizers have two shortcomings. First, when the optimizer approaches the optimum, the accumulated momentum will cause it to oscillate over the optimum. It is because when the gradient changes its sign, if the exponential moving average has the opposite sign, the weight of the parameter will be updated up the slope of the error curve, deviating the model from the optimum. Therefore, it takes longer for such optimizers to stabilize around the optimum compared to an optimizer that is aware of the gradients' sign at each step and adjusts the step size, \(\alpha\), according to whether the gradients have changed their sign or not. Second, using a global learning rate, \(\alpha\), to update all the parameters does not account for the specific position of each neuron compared with its optimum value. At each training epoch, some neurons are farther away from their optimum--requiring a larger \(\alpha\)--while others are closer to their optimum--requiring a smaller \(\alpha\). In addition, for a deep neural network, the gradient for middle layers are much smaller compared to the initial and final layers (see 5.1), requiring a larger \(\alpha\). 
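To make this sign-based adjustment concrete before introducing the full algorithm, here is a toy illustration in Python. It is ours, not an experiment from the paper: it applies a per-step version of the sign rule (ActiveLR itself adjusts per epoch from cumulative gradients, as described in Section 4) to a single parameter minimizing \(f(x)=(x-4)^{2}\), with an arbitrary starting point, step budget, and growth/shrink constants.

```python
def grad(x):                      # f(x) = (x - 4)^2, so f'(x) = 2 (x - 4)
    return 2.0 * (x - 4.0)

def run(sign_aware, steps=100, lr=1e-3, alpha_high=0.1, alpha_low=0.9):
    x, prev_g = -10.0, None
    for _ in range(steps):
        g = grad(x)
        x -= lr * g               # plain gradient step
        if sign_aware and prev_g is not None:
            # sign unchanged -> grow the step size; sign flipped -> shrink it
            lr = lr + alpha_high if g * prev_g > 0 else lr * alpha_low
        prev_g = g
    return x

print("fixed small lr:", run(sign_aware=False))  # still far from the optimum x = 4
print("sign-aware lr :", run(sign_aware=True))   # reaches the neighbourhood of x = 4
```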
Moreover, when we simultaneously update the weights of a previous layer (e.g., \(layer_{i-1}\)) to correct the same error function, this simultaneous change in the incoming weights to the next layer (\(layer_{i}\)) causes an overshoot effect on the weights of \(layer_{i}\), determined by the term "fan-in". The size of the fan-in determines the amount of input a layer receives and varies from layer to layer (Tieleman & Hinton, 2012a). Therefore, we need local \(\alpha\)'s for each neuron to account for these disparities between different neurons. ## 3 Related work Jacobs (1988) was the first to suggest that every weight of a neural network be given its own \(\alpha\), and each \(\alpha\) be allowed to vary over time. He states that when the sign of the derivative for a parameter is constant for consecutive epochs, the \(\alpha\) for that parameter should be increased. On the other hand, when the sign of the derivative for a parameter changes for consecutive epochs, the \(\alpha\) for that parameter should be decreased (Figure 1). Efforts have been made to reduce the sensitivity of gradient descent optimization to learning rate initialization (Baydin et al., 2017; Zhang et al., 2019). However, previous work on reducing sensitivity to the initial learning rate has introduced additional hyper-parameters that need to be tuned, which seems to defeat the purpose: removing one hyper-parameter while introducing others to be tuned. To the best of our knowledge, there has not been an automatic algorithm that reduces sensitivity to both the initial learning rate and the mini-batch size. The latest mention of a sign-conscious optimization algorithm was in (Tieleman & Hinton, 2012a). However, since previous implementations worked only on full-batch training and researchers almost never want to train their neural networks with full batches, the development of sign-conscious optimizers was not pursued further. Figure 1: Optimization of a parameter of a model using a sign-conscious optimizer. When the sign of the gradient does not change (_left_), the Active optimizer increases the step size \(\alpha\), nudging the model weight to the true value of the weight (2.0). When the sign of the gradient changes (_right_), the weight has jumped over the optimum. Therefore, the Active optimizer decreases the step size \(\alpha\) for the next iteration. This not only decreases convergence time, but also avoids constantly jumping over the optimum value. ## 4 ActiveLR We introduce a meta algorithm that utilizes a vanilla optimization algorithm, such as Adam or SGD, in the inner-loop part, and in the outer-loop part of the optimization, at every epoch, actively adjusts the local learning rate for each parameter based on the change in the sign of the cumulative gradients. Unlike previous implementations, ActiveLR works with mini-batch and stochastic gradient descent. Almost always, mini-batch gradient descent is the optimal way to train neural networks. Firstly, modern datasets (e.g., ImageNet) do not fit into GPU memory, so full-batch gradient descent on such datasets is not possible. Furthermore, researchers prefer to use mini-batches instead of full batches, as mini-batches provide faster convergence and better generalizability (Qian & Klabjan, 2020; Wilson & Martinez, 2003). ### The ActiveLR meta algorithm Assuming we have \(k\) mini-batches of data in our dataset, for each iteration \(t\) over the dataset, given that each mini-batch of training data has a specific gradient w.r.t.
the parameter \(\theta_{i}\), \(\nabla f^{t}(\theta_{i}^{t-1})\), that is calculated for that mini-batch, we define the cumulative gradient of the model as the arithmetic summation of all mini-batch gradients. For epoch \(e\), the cumulative gradient, \(\nabla f^{e}_{cu}\), with respect to the inner-loop-updated parameter, \(\theta_{i}\), can be derived from: \[\nabla f^{e}_{cu}(\theta_{i})=\sum_{t=1}^{k}\nabla f^{t}(\theta_{i}^{t-1})\] The cumulative gradient \(\nabla f^{e}_{cu}(\theta_{i})\) is the gradient of the objective function \(f\) with respect to the parameter \(\theta_{i}\) at epoch \(e\), as if the model has experienced the whole dataset (full-batch training). Since after seeing each mini-batch the optimizer updates the parameters, the calculated gradient for each mini-batch is different from the gradient of the mini-batch if there was no update (this is what we are trying to approximate). Consequently, we wish to prove that the sign of the gradient for the whole dataset with respect to parameter \(\theta_{i}\) when the parameter does not change inside the loop is the same as that of the updated parameter within the loop. Thus, we prove Theorem 4.1, which enables us to use gradient-descent-updated gradients instead of no-update gradients. **Theorem 4.1**.: _In the local convex regime of a non-convex objective function, the cumulative gradient of parameter \(\theta_{i}\) at epoch \(e\) with no inner-loop updates, \(\nabla f^{e*}_{cu}(\theta_{i})\), has the same sign as the cumulative gradient of parameter \(\theta_{i}\) at epoch \(e\) with inner-loop updates, \(\nabla f^{e}_{cu}(\theta_{i})\), if the learning rate, \(\alpha\), is smaller than the \(\alpha^{*}\) that causes the inner loop to diverge:_ \[\nabla f^{e*}_{cu}(\theta_{i})\cdot\nabla f^{e}_{cu}(\theta_{i})\geq 0\] ActiveLR can now be implemented using mini-batches by comparing the sign of the cumulative gradient at epoch \(e-1\), \(\nabla f^{e-1}_{cu}(\theta_{i})\), with the sign of the cumulative gradient at epoch \(e\), \(\nabla f^{e}_{cu}(\theta_{i})\) (Algorithm 1). It allows the optimizer to update the parameters at each step of the mini-batch (Wilson & Martinez, 2003; Qian & Klabjan, 2020), while adjusting the \(\alpha\) at the end of each epoch (Tieleman & Hinton, 2012a), enabling inner-loop learning through the backend optimizer (e.g., Adam, SGD) and active learning rate adjustment together. ``` Inputs:\(\alpha^{0}\) {initial learning rate}, \(\theta_{i}^{0}\) {initial parameter \(i\)}, \(f(\theta)\) {stochastic objective function}, \(\alpha_{high}\) {\(\alpha\) growth constant}, \(\alpha_{low}\) {\(\alpha\) shrink constant} Output:\(\theta_{i}^{T}\) {the updated parameter \(i\)} \(e\gets 0\), \(g_{i,cu}^{0}\gets 0\) {initialization} repeat \(e\gets e+1\) {next epoch} \(t\gets 0\) \(g_{i,cu}^{e}\gets 0\) {set/reset the cumulative gradient to zero at each epoch} for mini-batches in dataset do \(t\gets t+1\) \(g_{i}^{t}\leftarrow\nabla_{\theta_{i}}f^{t}(\theta_{i}^{t-1})\) {calculate the gradient of the objective w.r.t.
the parameter at timestep \(t\)} \(g_{i,cu}^{e}\gets g_{i,cu}^{e}+g_{i}^{t}\) {add mini-batch gradient to cumulative gradient} \(\theta_{i}^{t}\leftarrow\theta_{i}^{t-1}-\alpha_{i}^{t}g_{i}^{t}\) {SGD update} endfor \(\alpha_{i}^{e}\leftarrow\begin{cases}\alpha_{i}^{e-1}+\alpha_{high},&sign(g_{i,cu}^{e}\times g_{i,cu}^{e-1})>0\\ \alpha_{i}^{e-1}\times\alpha_{low},&sign(g_{i,cu}^{e}\times g_{i,cu}^{e-1})\leq 0\end{cases}\) {adjust learning rate} until \(\theta_{i}^{t}\) converged Return:\(\theta_{i}^{T}\) ``` **Algorithm 1** ActiveLR for SGD (ActiveSGD) ### Orthogonality of ActiveLR to other optimizers A major advantage of ActiveLR is the orthogonality of the hyper-parameter that it adjusts, \(\alpha\), to the hyper-parameters that other well-known algorithms adjust. To be more specific, we take a look at the generic parameter update that is the backbone of SGD, Adam, RAdam, etc. \[\theta_{i}^{t}\leftarrow\theta_{i}^{t-1}-\alpha F(g_{i}^{t}) \tag{1}\] The difference among these algorithms is the way they manipulate the \(F(g_{i}^{t})\) term through the \(F\) function, while keeping \(\alpha\) as is in the original SGD algorithm. For instance, in the case of Adam, \(F(g_{i}^{t})=\frac{\left(\beta_{1}m_{i}^{t-1}+(1-\beta_{1})g_{i}^{t}\right)/\left(1-\beta_{1}^{t}\right)}{\sqrt{\left(\beta_{2}v_{i}^{t-1}+(1-\beta_{2})(g_{i}^{t})^{2}\right)/\left(1-\beta_{2}^{t}\right)}+\epsilon}\). However, since ActiveLR modifies the \(\alpha\) term in Equation 1, it can be combined with other optimization algorithms. In fact, in our experiments we report results on ActiveLR combined with SGD with momentum (ActiveSGD), AdamW (ActiveAdamW), RAdam (ActiveRAdam), and AdaBelief (ActiveBelief), and compare the Active results with their original, vanilla variants. ## 5 Convergence analysis At any given epoch, \(e\), we are faced with the choice between the ActiveLR algorithm and its corresponding vanilla backbone. We prove that the value of the objective function for ActiveSGD (ActiveLR combined with SGD) at epoch \(e+1\) is as good as or better than that of the vanilla SGD (backbone) algorithm at epoch \(e+1\), and show the conditions under which each inequality holds. The proof is applicable to any first-order backbone algorithm. For other backbones, such as Adam, RMSProp, etc., their respective ActiveLR implementation should have the same lines of proof, with some modifications peculiar to the optimizer of interest. **Theorem 5.1**.: _Let us call the cost of vanilla SGD at epoch \(e\), \(f(\theta_{e}^{S})\), and the cost of its corresponding ActiveLR implementation, \(f(\theta_{e}^{A})\). We define ActiveSGD's gradient, \(g_{e+1}^{A}\), as \(\frac{\partial f(\theta_{e}^{A})}{\partial\theta_{e}^{A}}\), and vanilla SGD's gradient, \(g_{e+1}^{S}\), as \(\frac{\partial f(\theta_{e}^{S})}{\partial\theta_{e}^{S}}\). \(\alpha\) is the initial learning rate for ActiveSGD and also the constant learning rate for SGD. \(\alpha_{High}\) is the higher learning rate (\(\alpha_{High}>\alpha\)) that ActiveLR uses when \(g_{e}^{A}g_{e+1}^{A}>0\) and \(\alpha_{Low}\) the lower learning rate (\(\alpha_{Low}<\alpha\)) that ActiveLR uses when \(g_{e}^{A}g_{e+1}^{A}<0\).
In the local convex regime of \(f\), at any arbitrary epoch, \(e\), the difference between the cost of using vanilla SGD and ActiveSGD at the next epoch, \(e+1\), is_ \[\begin{cases}f(\theta_{e+1}^{S})-f(\theta_{e+1}^{A})\geq g_{e+2}^{A}g_{e+1}^{A}(\alpha_{High}-\alpha),&\text{if }g_{e}^{A}g_{e+1}^{A}>0,\\ f(\theta_{e+1}^{S})-f(\theta_{e+1}^{A})\geq g_{e+2}^{A}g_{e+1}^{A}(\alpha_{Low}-\alpha),&\text{if }g_{e}^{A}g_{e+1}^{A}<0,\end{cases} \tag{2}\] _where the right hand side for both cases is non-negative._ In other words, ActiveSGD is at least as good as vanilla SGD when gradients are zero (equality in 2), and strictly better than vanilla SGD when gradients are not zero (inequality in 2). The lower bound of the advantage is \(g_{e+2}^{A}g_{e+1}^{A}(\alpha_{High}-\alpha)\) when \(g_{e}^{A}g_{e+1}^{A}>0\), and \(g_{e+2}^{A}g_{e+1}^{A}(\alpha_{Low}-\alpha)\) when \(g_{e}^{A}g_{e+1}^{A}<0\). **The hyper-parameters of ActiveLR.** To set the operations for \(\alpha_{low}\) and \(\alpha_{high}\), we follow (Tieleman and Hinton, 2012): multiply by \(\alpha_{low}\) and add \(\alpha_{high}\). We carry out an ablation analysis of the choice of operations (e.g., addition or multiplication). For the results, please refer to A.3. To obtain the default values of \(\alpha_{low}\) and \(\alpha_{high}\), we carry out a preliminary experiment on CIFAR-10 and find that any combination of \(\alpha_{low}\) and \(\alpha_{high}\) that satisfies the constraint \(\alpha_{low}+\alpha_{high}=1\) reduces sensitivity to initial learning rates and mini-batch size, while increasing the overall accuracy. For all the experiments that follow, we have kept \(\alpha_{low}=0.9\) and \(\alpha_{high}=0.1\) constant. The results show the robustness of the default values of \(\alpha_{low}\) and \(\alpha_{high}\) across datasets and tasks, which suggests that ActiveLR's hyper-parameters do not need to be tuned. ### Distribution of gradient norms across layers We examine the L1-norm of the gradients across layers for the optimization of the CIFAR-10 dataset on the ResNet-18 architecture, which has \(21\) 2D-convolution layers and \(1\) final fully connected layer (Figure 2). The gradient norm of the final dense layer (layer \(\#=22\)) directly correlates with the loss. In contrast, to achieve minimal loss, the gradient norm of the convolutional layers (layer \(\#\in[1,21]\)) increases to a maximum value (the learning phase) and then decreases (the stabilizing phase around the optimum). For vanilla Adam, when the \(\alpha\) is too small (\(\alpha=10^{-5}\)), the gradient norm for every convolutional layer increases monotonically with the number of iterations and the loss reaches a plateau, indicating getting stuck in a local minimum (a). In other words, when the learning rate is lower than an optimum value, the learning phase never ends. When the learning rate is optimal (\(\alpha=10^{-3}\)), Adam increases the gradient norms to an optimal level where it achieves the minimal loss (the end of the learning phase). Afterwards, in the stabilizing phase, the gradients are decreased at a relatively slow rate, which causes the loss to oscillate (\(iteration>60\)) around the minimum value (b). ActiveAdam, with the same low learning rate that gets Adam stuck (\(\alpha=10^{-5}\)), is able to achieve what Adam with the optimal learning rate does in the training phase (i.e., increase the gradient norms to a maximum value).
Additionally, after the minimal loss is achieved, ActiveAdam sharply reduces the gradient norms, preventing fluctuations around the optimum (\(iteration>60\)) (c). Figure 2: **Gradient norms across layers for CIFAR-10 on ResNet-18**. Vanilla Adam when the \(\alpha\) is too small (\(\alpha=10^{-5}\)) gets stuck in a local minimum (a). Vanilla Adam when the learning rate is optimal (\(\alpha=10^{-3}\)) oscillates around the optimum (b). ActiveAdam with the same low learning rate that gets vanilla Adam stuck (\(\alpha=10^{-5}\)) achieves minimal loss and remains in the optimum (c). ### ActiveLR and non-convex functions The Active implementation of a given algorithm such as Adam (ActiveAdam) provides numerous advantages over the vanilla implementation in troublesome non-convex optimization scenarios. **ActiveLR and local minima.** Local minima are a major plague in deep neural network optimization (Ding et al., 2019; Yun et al., 2018). They are one of the main reasons that optimizers do not generalize well and fail to fit the training data properly. This is especially the case when the learning rate is smaller than an optimum value. In Figure 3a, vanilla Adam, starting at \(x=5\), quickly goes down the objective function, but as soon as it reaches the local minimum, \(x=3\), the gradient becomes significantly small, causing vanilla Adam to oscillate around the local minimum no matter how long we keep training. ActiveAdam, on the other hand, quickly increases the learning rate as soon as it realizes that the gradient signs are not changing. This enables ActiveAdam to pass through the local minimum and move towards the global minimum, \(x\rightarrow-\infty\). In Figure 3b, the vanilla optimizer faces two challenges. This bivariate function, \(f(x,y)=-x^{3}-x^{2}y+y^{2}+4y+1680\), has three local minima and a global minimum of \((+\infty,+\infty)\). The first local minimum, \((-4,6)\), the starting point, has a higher function value than the other two, \((0,-2)\) and \((1,-\frac{3}{2})\), which have the same lower value. The first challenge is starting near a local minimum, where the gradients are too small. Adam struggles to escape the starting point because of the small gradients. After a large number of iterations, Adam eventually escapes the starting local optimum but faces the second challenge. It gets trapped in another local minimum, \((0,-2)\), and is unable to escape. ActiveAdam, on the other hand, quickly escapes the initial local minimum since it increases its learning rate when the gradients do not change. With the higher learning rate it has accumulated, it is able to quickly escape the other two local minima and converge to the global minimum, \((+\infty,+\infty)\). **ActiveLR and saddle points.** Saddle points are another major pitfall in training neural networks (Kawaguchi, 2016). For deep neural networks, large sets of strict and non-strict saddle points have been shown to exist (Achour et al., 2021). Being close to a saddle point equates to very small gradient values in all directions. As a result, non-Active optimizers will require significantly more training iterations to escape saddle points. In Figure 3c, the saddle point lies at \((0,0)\). After \(12\) iterations, Adam is roughly where it was when the training started. ActiveAdam, however, quickly increases its learning rate, following iterations of no sign change, and reaches the global minimum at \((0,1)\) in \(12\) iterations.
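For completeness, here is a minimal NumPy sketch of Algorithm 1 (ActiveSGD without momentum). It is our illustration, not the authors' released implementation: the toy objective, the noisy targets, and the epoch count are placeholder choices made only so the snippet runs end to end.

```python
import numpy as np

def active_sgd(grad_fn, batches, theta, lr0=1e-3, alpha_high=0.1, alpha_low=0.9, epochs=5):
    # Per-parameter learning rates; plain SGD steps inside each epoch; the rates
    # are adjusted once per epoch from the sign of the cumulative gradient.
    lr = np.full_like(theta, lr0)
    prev_cum = np.zeros_like(theta)
    for _ in range(epochs):
        cum = np.zeros_like(theta)
        for batch in batches:
            g = grad_fn(theta, batch)
            cum += g                      # cumulative (full-epoch) gradient
            theta = theta - lr * g        # inner-loop SGD update
        # sign kept -> grow additively; sign flipped (or zero) -> shrink multiplicatively
        lr = np.where(cum * prev_cum > 0, lr + alpha_high, lr * alpha_low)
        prev_cum = cum
    return theta

# Placeholder objective: each "mini-batch" is a noisy copy of a target vector and
# the per-batch loss is ||theta - target||^2, so its gradient is 2 * (theta - target).
rng = np.random.default_rng(0)
target = np.array([1.0, -2.0, 3.0, 0.5])
batches = [target + 0.01 * rng.normal(size=4) for _ in range(8)]
grad_fn = lambda th, t: 2.0 * (th - t)
print(active_sgd(grad_fn, batches, np.zeros(4)))  # ends near the target despite lr0 = 1e-3
```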
## 6 Experiments

We perform our experiments on ImageNet (Deng et al., 2009) with ResNet-18 and CIFAR-10 (Krizhevsky et al., 2009) with ResNet-34 for image classification, WikiText-2 and WikiText-103 (Merity et al., 2016) with GPT-2 (Radford et al., 2019) for language modelling, and the PASCAL VOC (Everingham et al., 2010) dataset on Faster-RCNN+FPN for object detection. Based on the literature, non-adaptive optimizers (e.g., SGD) work with higher initial learning rates than adaptive optimizers. Therefore, for all the experiments, the initial learning rates tested for SGD with momentum and ActiveSGD are \(50\) times larger than those tested for the adaptive optimizers. For each vanilla optimizer and its Active variant, we test the performance across a range of initial learning rates and, for the other hyperparameters, we use the values suggested on the optimizer's official GitHub page for each task. We use a heterogeneous cluster with each node comprising 4 x NVIDIA P100 Pascal (12G or 16G HBM2 memory) or 4 x NVIDIA V100 Volta (32G HBM2 memory) GPUs. To simulate common practitioners' limited access to GPUs, we train the WikiText-2, WikiText-103, PASCAL VOC, and CIFAR-10 experiments on 1 GPU and the ImageNet experiments on 4 GPUs.

Figure 4: Reduced sensitivity to mini-batch size on CIFAR-10 test accuracy (\([\mu\pm\sigma]\)).

Figure 3: (a) univariate unimodal, (b) bivariate multimodal, and (c) saddle function optimization using vanilla Adam and ActiveAdam.

### Sensitivity to mini-batch size

ActiveLR alleviates the problem of large mini-batch sizes through its automatic learning-rate adaptation. In Figure 4, we can see that for the CIFAR-10 dataset, increasing the mini-batch size from the optimal value of \(128\) to larger values significantly decreases the test-set accuracy of vanilla SGD with momentum. Larger mini-batch sizes also destabilize training, as can be seen from the larger fluctuations in accuracy for vanilla SGD with momentum. ActiveSGD, on the other hand, achieves a relatively high accuracy regardless of the mini-batch size and also remains stable (\(\alpha=10^{-4}\), ResNet-18).

### Sensitivity to learning rate

**CIFAR-10.** We train SGD with momentum, AdaBelief, RAdam, and AdamW along with their Active variants (i.e., ActiveSGD, ActiveBelief, ActiveRAdam, ActiveAdamW) with 9 different initial learning rates \(5\times 10^{n}\), \(n\in[-7,-3]\), each run \(3\) times. We test different values of weight decay in the range \([10^{-8},10^{3}]\) and find that the optimal weight decay, \(wd^{*}\), depends on the learning rate, \(\alpha\). For CIFAR-10, \(wd^{*}\times\alpha=10^{-4}\). As seen in Figure 5, the Active variant of each optimizer achieves a higher average accuracy than the vanilla variant. More importantly, the area between the accuracy achieved by the worst-performing learning rate and the best-performing learning rate is significantly smaller for the Active variants than for the vanilla variants. This indicates the reduced sensitivity to the choice of initial learning rate that ActiveLR achieves.
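The sweep just described boils down to a simple protocol: train each optimizer over a grid of initial learning rates, several seeds per value, and report the mean and spread of the final accuracy. The sketch below illustrates that protocol only; the model, data, and grid are tiny synthetic stand-ins (assumptions), not the CIFAR-10/ResNet setup used in the experiments.

```python
# Sketch of the learning-rate sweep protocol: grid of initial LRs x 3 seeds,
# report mean +/- std accuracy and the spread between best and worst LR.
import statistics
import torch
import torch.nn as nn

def train_and_eval(lr, seed, steps=200):
    torch.manual_seed(seed)
    X = torch.randn(512, 20)
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).long()          # synthetic 2-class task
    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    return (model(X).argmax(1) == y).float().mean().item()

learning_rates = [5 * 10 ** n for n in range(-7, -2)]  # small illustrative grid
results = {}
for lr in learning_rates:
    accs = [train_and_eval(lr, seed) for seed in range(3)]
    results[lr] = (statistics.mean(accs), statistics.stdev(accs))
    print(f"lr={lr:.0e}: acc = {results[lr][0]:.3f} +/- {results[lr][1]:.3f}")

means = [m for m, _ in results.values()]
print("sensitivity to initial LR (best - worst mean acc):",
      round(max(means) - min(means), 3))
```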
**WikiText-2.** We fine-tune GPT-2 with 124,439,808 parameters on WikiText-2 (vocabulary size \(50257\), maximum sequence length \(1024\), dimensionality of the embeddings and hidden states \(768\), number of hidden layers in the Transformer encoder \(12\), number of attention heads for each attention layer \(12\), activation function \(gelu\), dropout probability \(0.1\)). Initial learning rates in the range of \([10^{-8},10^{-4}]\) for adaptive optimizers and \([5\times 10^{-7},5\times 10^{-3}]\) for non-adaptive optimizers are tested. For each initial learning rate, we report the test-set PPL for the epoch that gives the best validation-set PPL. In Figure 6, we can see that ActiveLR gives better test-set PPL than the vanilla optimizers on average, while producing lower train-set errors. Furthermore, vanilla optimizers show a significant degree of variation in their test-set and train-set PPLs, indicating their sensitivity to the initial learning rate, while ActiveLR shows relative insensitivity to it. A surprising finding is the generalizability of ActiveSGD. Although the literature has shown the performance of SGD with momentum on transformer models to be inferior to adaptive methods (and our results confirm this), the ActiveLR variant of SGD with momentum shows better generalizability than the adaptive methods. In fact, ActiveSGD gives the best test-set PPL of the optimizers tested (while the train-set PPLs are nearly _identical_). It also has the lowest amount of variance among the optimizers, showing reduced sensitivity to the initial learning rate.

Figure 5: Test accuracy (\([\mu\pm\sigma]\)) of SGD with momentum, AdaBelief, RAdam, and AdamW compared with their Active variant on CIFAR-10. The red area in-between lines indicates the sensitivity of the optimizer w.r.t. the value of the initial learning rate. The horizontal lines indicate the highest mean accuracy of each optimizer across learning rates.

Figure 6: WikiText-2 on GPT-2 with various initial learning rates.

**WikiText-103 and PASCAL VOC.** Please refer to Appendix A.2.

## 7 Social impact

ActiveLR contributes to several social causes. First, given the growth in the size of datasets and deep learning models, it has become increasingly difficult for average AI practitioners and researchers to train state-of-the-art models. By reducing sensitivity to hyperparameter initialization, ActiveLR helps democratize AI. Second, carbon emissions from training deep neural networks have reached alarming levels. Consider training on a new dataset the size of ImageNet with a relatively small model, such as ResNet-18, with the goal of reaching a top-1 accuracy of 55% or higher. Since we do not know a priori what the optimal initial learning rate is, we need to test various initial learning rates. Testing 9 initial learning rates from \(10^{-4}\) up to \(10^{0}\), we find that, with SGD with momentum, 7 out of 9 initial learning rates achieve the desired accuracy in \(100\) epochs, while, with ActiveSGD, 8 out of 9 values reach it (Figure 8). In the worst case, where the suboptimal learning rates are selected first, SGD with momentum requires 13,816,000 TFLOPs of extra computation compared with ActiveSGD (2200 seconds per epoch, 61.11 hours in total). With an average carbon efficiency of 0.432 kgCO\({}_{2}\)eq/kWh, using 4 x Tesla V100-SXM2-32GB (TDP of 300W) GPUs (the same as in our experiments), the excess \(CO_{2}\) emissions are estimated to be 7.92 kgCO\({}_{2}\)eq, equivalent to 3.96 kg of extra coal burned. ActiveSGD, on the other hand, not only prevents these extra carbon emissions but also gives higher average accuracy and fit (estimates were made using the Machine Learning Impact calculator presented in Lacoste et al. (2019)).
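The back-of-the-envelope arithmetic behind these figures is worth spelling out; the sketch below reproduces the quoted numbers. One assumption is made explicit in the code: the energy figure corresponds to a single GPU drawing its 300 W TDP for the extra wall-clock time, which is what reproduces the quoted 7.92 kgCO\({}_{2}\)eq.

```python
# Sketch: reproduce the carbon estimate quoted above (illustrative arithmetic).
seconds_per_epoch = 2200
epochs = 100
extra_hours = seconds_per_epoch * epochs / 3600        # ~61.11 h of extra training

gpu_power_kw = 0.300                                   # one GPU at 300 W TDP (assumption)
carbon_intensity = 0.432                               # kgCO2eq per kWh

energy_kwh = gpu_power_kw * extra_hours                # ~18.3 kWh
co2_kg = carbon_intensity * energy_kwh                 # ~7.92 kgCO2eq
coal_kg = co2_kg / 2                                   # ratio implied by the quoted numbers

print(f"extra time : {extra_hours:.2f} h")
print(f"extra CO2  : {co2_kg:.2f} kgCO2eq")
print(f"extra coal : {coal_kg:.2f} kg")
```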
As a result, we believe that ActiveLR is a step towards eliminating the need for hyperparameter search, which helps democratize AI and reduce carbon emissions.

## 8 Limitations

While we have tried to include experiments on as many tasks as possible to show the robustness of ActiveLR's results, due to time and computational constraints we could not test all benchmark datasets. We are working on releasing results for the COCO dataset (Lin et al., 2014) for object segmentation and for IWSLT14 DE-EN neural machine translation on transformers, among others. With ActiveLR, we demonstrated the capability to reduce sensitivity to the initial learning rate and mini-batch size. We encourage future research to identify and tackle optimization sensitivity to _other_ hyper-parameters, such as weight decay, in an attempt to streamline neural network optimization and reduce carbon emissions and training time.

## 9 Conclusion

In this paper, we show that for all the learning rates tested, the Active variants of SGD with momentum, AdamW, AdaBelief, and RAdam achieve the best fit and highest accuracy. Our tests show robust results across datasets and model architectures. The Active variants also reach these results in fewer epochs, which translates to faster training. Moreover, we show that ActiveLR remains stable and does not suffer a loss in generalizability when trained with large mini-batch sizes; as a result, multi-GPU training of large datasets can be sped up with ActiveLR using large mini-batch sizes. The orthogonality of ActiveLR to other optimizers allows it to be implemented on top of them. We also show the relative insensitivity of ActiveLR to the values of the initial learning rate and mini-batch size. We encourage practitioners to use ActiveLR for their model training to achieve better training/test-set performance in a shorter amount of time, remove the need for tuning the initial learning rate and mini-batch size, and reduce their carbon footprint.

## Acknowledgements

The authors would like to thank Compute Canada for the computational resources that made these experiments possible.
2303.06128
New Constraints on Sodium Production in Globular Clusters From the $^{23}$Na$(^3$He$, \textbf{d})^{24}$Mg Reaction
The star-to-star anticorrelation of sodium and oxygen is a defining feature of globular clusters, but, to date, the astrophysical site responsible for this unique chemical signature remains unknown. Sodium enrichment within these clusters depends sensitively on the reaction rates of the sodium-destroying reactions $^{23}$Na$(p, \gamma)$ and $^{23}$Na$(p, \alpha)$. In this paper, we report the results of a $^{23}$Na$(^3\text{He}, d)^{24}$Mg transfer reaction carried out at Triangle Universities Nuclear Laboratory using a $21$ MeV $^3$He beam. Astrophysically relevant states in $^{24}$Mg between $11 < E_x < 12$ MeV were studied using high resolution magnetic spectroscopy, thereby allowing the extraction of excitation energies and spectroscopic factors. Bayesian methods are combined with the distorted wave Born approximation to assign statistically meaningful uncertainties to the extracted spectroscopic factors. For the first time, these uncertainties are propagated through to the estimation of proton partial widths. Our experimental data are used to calculate the reaction rate. The impact of the new rates is investigated using asymptotic giant branch star models. It is found that while the astrophysical conditions still dominate the total uncertainty, intra-model variations in sodium production from the $^{23}$Na$(p, \gamma)$ and $^{23}$Na$(p, \alpha)$ reaction channels are a lingering source of uncertainty.
C. Marshall, K. Setoodehnia, G. C. Cinquegrana, J. H. Kelly, F. Portillo Chaves, A. Karakas, R. Longland
2023-03-10T18:48:11Z
http://arxiv.org/abs/2303.06128v1
New Constraints on Sodium Production in Globular Clusters From the \({}^{23}\)Na(\({}^{3}\)He, d)\({}^{24}\)Mg Reaction ###### Abstract The star to star anticorrelation of sodium and oxygen is a defining feature of globular clusters, but, to date, the astrophysical site responsible for this unique chemical signature remains unknown. Sodium enrichment within these clusters depends sensitively on reaction rate of the sodium destroying reactions \({}^{23}\)Na(\(p,\gamma\)) and \({}^{23}\)Na(\(p,\alpha\)). In this paper, we report the results of a \({}^{23}\)Na(\({}^{3}\)He, \(d\))\({}^{24}\)Mg transfer reaction carried out at Triangle Universities Nuclear Laboratory using a \(21\) MeV \({}^{3}\)He beam. Astrophysically relevant states in \({}^{24}\)Mg between \(11<E_{x}<12\) MeV were studied using high resolution magnetic spectroscopy, thereby allowing the extraction of excitation energies and spectroscopic factors. Bayesian methods are combined with the distorted wave Born approximation to assign statistically meaningful uncertainties to the extracted spectroscopic factors. For the first time, these uncertainties are propagated through to the estimation of proton partial widths. Our experimental data are used to calculate the reaction rate. The impact of the new rates are investigated using asymptotic giant branch star models. It is found that while the astrophysical conditions still dominate the total uncertainty, intra-model variations on sodium production from the \({}^{23}\)Na(\(p,\gamma\)) and \({}^{23}\)Na(\(p,\alpha\)) reaction channels are a lingering source of uncertainty. ## I Introduction Globular clusters are among the oldest objects in the Milky Way. Comprised of tens to hundreds of thousands of stars that are gravitationally bound, they offer a unique probe of galactic and stellar evolution [1; 2]. Despite decades of intense study, we have an incomplete understanding of their unique chemical evolution [3]. In particular, high resolution photometry has unambiguously determined the presence of multiple stellar populations within these clusters [4], with the youngest of these populations displaying a star-to-star variation in light elements. The anti-correlation between sodium and oxygen is the most ubiquitous chemical signature, and as such can be considered a defining feature of globular clusters [3]. The Na-O anti-correlation indicates that some amount of cluster material has undergone hydrogen burning at elevated temperatures [5; 6; 7]. However, at this time the source of this enriched material is still unknown, with models of massive asymptotic branch stars, fast rotating massive stars, interacting massive binaries, and very massive stars all failing to reproduce the observed chemical signatures [8]. Sodium is synthesized from a series of proton capture reactions that occur during hydrogen burning at \(50\)-\(100\) MK. Known as the NeNa cycle, this group of proton induced reactions and \(\beta\)-decays around \(A=20\)-\(24\) are of critical importance to understanding the creation of sodium in globular clusters. Within the NeNa cycle, sodium may be destroyed via the \({}^{23}\)Na(\(p,\gamma\)) or \({}^{23}\)Na(\(p,\alpha\)) reactions, both of which proceed through the compound nucleus \({}^{24}\)Mg. For decades direct measurements have aimed to constrain these astrophysical reaction rates for the \((p,\gamma)\) and \((p,\alpha)\) channels [9; 10]. The study of Gorres _et al._ (Ref. 
[10]) is of particular note, as it was one of the first to directly search for a resonance around \(138\)-keV. Corresponding to the \(E_{x}\approx 11830\)-keV state in \({}^{24}\)Mg, this state was first observed in the indirect measurements of Refs. [11; 12] and is thought to dominate the \({}^{23}\)Na(\(p,\gamma\)) rate at the temperatures important to globular cluster nucleosynthesis. Since the study of Gorres _et al._, several direct searches have been performed, all with the intent of measuring the \(138\)-keV resonance strength. The authors of Ref. [13] reported an upper limit of \(\omega\gamma_{(p,\gamma)}\leq 1.5\times 10^{-7}\) eV. Subsequently, the authors of Ref. [14] used a high intensity proton beam of \(\approx 1\) mA to give a further reduced upper limit of \(\omega\gamma_{(p,\gamma)}\leq 5.17\times 10^{-9}\) eV, in the process ruling out its importance for the \((p,\alpha)\) channel. Recently, nearly thirty years after the first direct search was carried out, detection of the \(138\)-keV resonance with a statistical significance above \(2\sigma\) came in Ref. [15] reporting \(\omega\gamma_{(p,\gamma)}=1.46^{+0.58}_{-0.53}\times 10^{-9}\) eV. These efforts have solidified the important role of the \(138\)-keV resonance in globular cluster nucleosynthesis. At the present time, direct measurements of the \(138\)-keV resonance strength have greatly reduced the uncertainty of the \({}^{23}\)Na(\(p,\gamma\)) reaction rate at the temperatures of relevance to globular clusters to approximately \(30\%\). However, much of the rate is still dependent on the results and evaluation presented in Ref. [16]. In that study, a (\({}^{3}\)He, \(d\)) transfer reaction was performed, and a state at \(E_{x}=11831.7(18)\) keV was observed. The \({}^{23}\)Na(\({}^{3}\)He, \(d\)) measurement we present in this paper was carried out to further reduce the reaction rate uncertainty. Earlier results from our experiment have been published in Ref. [17], and provided evidence that the \(138\)-keV resonance lies at a lower energy of \(133\) keV, resulting in a factor of \(2\) increase in the \({}^{23}\)Na(\(p,\gamma\)) reaction rate. This paper uses the same data set as as Ref. [17] but expands upon the analysis of that paper by providing a more complete set of updated excitation energies, and reports spectroscopic factors for levels of astrophysical interest. Bayesian analysis methods are applied to extract excitation energies, spectroscopic factors, and \(\ell\) values. Our analysis is the first of its kind, where every quantity extracted from the transfer measurement is assigned uncertainties based on Bayesian statistical arguments, allowing these quantities and their uncertainties to be incorporated into thermonuclear reaction rate libraries. Our paper is organized as follows: Sec. II provides an overview of the experimental techniques, Sec. III gives an in depth discussion of the necessary corrections to the current nuclear data in order to extract accurate excitation energies for the current experiment, Sec. IV reports our energy values and gives updated recommended values, Sec. V presents the analysis of the transfer angular distributions using a Bayesian method for the distorted-wave Born approximation (DWBA), and Sec. VI reports our values for the proton partial widths derived from this experiment. Sec. 
VII presents our updated astrophysical reaction rate and its incorporation into an asymptotic giant branch (AGB) model, one of the possible sites for the Na-O abundance anomaly in globular clusters. Sec. VIII provides additional outlook and discussion. ## II Experiment details The \({}^{23}\)Na\((^{3}\)He, \(d)^{24}\)Mg experiment was carried out at Triangle Universities Nuclear Laboratory (TUNL) using the Split-pole spectrograph (SPS) [18]. A beam of \({}^{3}\)He\({}^{+2}\) was accelerated to \(21\) MeV using the 10 MV TUNL FN tandem accelerator, and the beam energy was set using a set of high resolution \(90\)-\(90\) dipole magnets. While the amount of beam that made it to the target varied throughout the experiment, typical beam currents were \(100-200\) eAn of \({}^{3}\)He\({}^{+2}\). For the experiment reported here, NaBr was selected as the target material based on the observations of Ref. [16]. The authors of that study noted that NaBr targets were stable to beam bombardment, reasonably resistant to oxygen contamination, and found no evidence of contaminant states arising from reactions on \({}^{79,81}\)Br in the region of interest. Our targets were fabricated by using thermal evaporation to deposit a layer of NaBr on \(22\)\(\mu\)g/cm\({}^{2}\) thick \({}^{\rm nat}\)C foils. The carbon foils were purchased from Arizona Carbon Foil Co., Inc. [19], and floated onto target frames to create the backing for the NaBr layer. A quartz crystal thickness monitor measured the rate of deposition and total thickness of the targets. A total of six targets were placed into the evaporator, and evaporation was halted once they reached a thickness of \(70\)\(\mu\)g/cm\({}^{2}\). After the evaporation was complete, the targets were brought up to atmosphere and then immediately placed into a container for transfer to the target chamber of the SPS. This container was brought down to rough vacuum to reduce exposure to air during transport. Three of the targets were mounted onto the SPS target ladder. In addition to the NaBr targets, the ladder was also mounted with a \(1\) mm diameter collimator for beam tuning, a \({}^{\rm nat}\)C target identical to the backing of the NaBr targets for background runs, and thermally evaporated \({}^{27}\)Al on a \({}^{\rm nat}\)C backing to use for an external energy calibration. All three NaBr targets were used during the 120 hour beam time. No degradation for any of the targets was observed in the elastic scattering spectra (discussed below), nor was there any sign of significant oxidation. The \({}^{23}\)Na\((^{3}\)He, \(d)^{24}\)Mg reaction was measured at angles between \(3^{\circ}\)-\(21^{\circ}\) in steps of \(2^{\circ}\) with a field of \(1.14\)-\(1.15\) T. Additionally, the elastic scattering reaction, \({}^{23}\)Na\((^{3}\)He,\({}^{3}\) He), was measured at angles between \(15^{\circ}\)-\(55^{\circ}\) in \(5^{\circ}\) steps and \(59^{\circ}\) using fields of \(0.75\)-\(0.80\) T. The solid angle of the SPS was fixed throughout the experiment at \(\Omega_{\rm SPS}=1.00(4)\) msr. After the reaction products were momentum to charge analyzed by the spectrograph, they were detected at the focal plane of the SPS. The focal plane detector consists of two position sensitive avalanche counters, a \(\Delta E\) proportionality counter, and a residual \(E\) scintillator. Additional detail about this detector can be found in Ref. [20]. 
Due to potential for uncontrolled systematic effects from the charge integration of the SPS beamstop and target degradation, it was decided to determine the absolute scaling of the data relative to \({}^{23}\)Na\((^{3}\)He,\({}^{3}\) He). Elastic scattering was measured continuously during the course of the experiment by a silicon \(\Delta E/E\) telescope positioned at \(\theta_{lab}=45^{\circ}\). The telescope was double-collimated using a set of brass apertures to define the solid angle. A geometric solid angle of \(\Omega_{\rm Si}=4.23(4)\) msr was measured. ## III Updates to energy levels above \(11\) MeV Spectrograph measurements like the current experiment are dependent on previously reported excitation energies for energy calibration of the focal plane. In the astrophysical region of interest (\(11\lessapprox E_{x}\lessapprox 12\) MeV) the current ENSDF evaluation, Ref. [21], was found to be inadequate for an accurate calibration of our spectra. Discussion of the issues with the evaluation are available in Ref. [17], which first reported the astrophysically relevant results of the energy measurements of this work. In addition to the issues mentioned in the prior work, the current ENSDF evaluation recommends energies that include calibration points from the spectrograph measurements of Ref. [11] and Ref. [22]. The inclusion of these calibration points is an error because calibration points are not independent measurements and increase the weight of the values they are based on in the resulting average. Every compilation and evaluation since 1978 [23] includes this error. The measurements of Hale _et al._[16] have been excluded from our compiled values. Discussion of this decision can be found in Appendix A. Our compiled values are based on the most precise available literature, but are limited to a narrow excitation region selected for the purpose of accurately energy calibrating the current experiment and subsequently for calculating the astrophysical reaction rate. We made no attempt to update values outside of the region of interest. Our compiled energies are presented in Table. 1. Note that in the case of Ref. [24], resonant capture was used to excite \({}^{24}\)Mg, but the excitation energies were deduced from gamma ray energies making these values independent of the reaction \(Q\)-value. For the measurements that report the laboratory frame resonance energies, the excitation energies are deduced from: \[E_{x}=Q+E_{P}\frac{M_{T}}{M_{T}+M_{P}}, \tag{1}\] where \(E_{P}\) is the projectile energy measured in the laboratory frame, and \(M_{P}\) and \(M_{T}\) are the _nuclear_ masses for the projectile and target nuclei, respectively. We have used the _atomic_ masses from Ref. [25] assuming the difference is negligible compared to the statistical uncertainty in \(E_{P}\). \(Q\) is the \(Q\)-value for either the \((p,\gamma)\) or \((\alpha,\gamma)\) reaction. The \(2020\) mass evaluation [26; 27] was released after our compilation, but leads to a difference of \(1\) eV in the central \(Q\)-value, well below its associated uncertainty. The column in Table 1 from Ref. [28] shows energies deduced from a weighted average of several \((p,\gamma)\) measurements, and that paper should be referred to for additional details. For the present work, the suggested value of these weighted averages is treated as a single measurement that is updated according to Eq. (1). The weighted averages of all measurements are presented in the last column. 
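Equation (1) is simple enough to spell out numerically. In the sketch below, the \(Q\)-value is taken to be the \({}^{23}\)Na\((p,\gamma)\) proton threshold of \(11692.69\) keV quoted in the caption of Figure 1, atomic masses are used (the approximation discussed above), and the input laboratory proton energy is purely illustrative; the output is consistent with the \(\approx 133\) keV resonance energy discussed earlier.

```python
# Sketch of Eq. (1): lab-frame resonance energy -> excitation energy in 24Mg.
M_T = 22.98976928      # 23Na atomic mass (u)
M_P = 1.00782503       # 1H atomic mass (u)
Q_pg = 11692.69        # 23Na(p,g) Q-value, i.e. the proton threshold (keV)

def excitation_energy(E_p_lab_keV):
    """E_x = Q + E_p * M_T / (M_T + M_P), all energies in keV."""
    return Q_pg + E_p_lab_keV * M_T / (M_T + M_P)

# Illustrative input: a ~139 keV laboratory proton energy.
E_x = excitation_energy(139.0)
print(f"E_x = {E_x:.1f} keV,  E_r(c.m.) = {E_x - Q_pg:.1f} keV")
```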
In order to reduce the effects of potential outliers, when there are three or less measurements, the lowest measured uncertainty was used instead of the weighted average uncertainty. ## IV Energy calibration Our energy calibration is the same one as reported in Ref. [17], but is reiterated and expanded here for clarity and completeness. Excitation energies were extracted from the focal plane position spectrum using a third-order polynomial fit to parameterize the bending radius of the SPS in terms of the ADC channels, \(x\): \[\rho=Ax^{3}+Bx^{2}+Cx+D. \tag{2}\] An updated version of the Bayesian method presented in Ref. [20] was used to fit the polynomial. Briefly, the method accounts for uncertainties in both \(x\) and \(\rho\) while also estimating an additional uncertainty based on the quality of the fit. The update uses the python package emcee to more efficiently sample the posterior [32]. As a result, excitation energies can be directly calculated from posterior samples, ensuring the correlations between the fit parameters are correctly accounted for in our reported energies. Calibration states were methodically selected to span the majority of the focal plane. Care was taken to avoid introducing additional systematic errors that would come with misidentifying a state used for calibration. As such, some intensely populated peaks were excluded due to the possibility of misidentifying them with nearby levels that differed in energy by more than a few keV. The chosen calibration states at \(\theta_{lab}=11^{\circ}\) are shown in the top panel of Fig. 1. The validity of this internal calibration in the astrophysical region of interest between \(11\) and \(12\) MeV was checked at \(\theta_{lab}=11^{\circ}\) against a separate external calibration using the \({}^{27}\)Al(\({}^{3}\)He,\(d\))\({}^{28}\)Si reaction. The aluminum states were selected based on the spectrum shown in Ref. [33]. When applying the external aluminum calibration to the sodium states an energy offset of \(\approx 7\) keV compared to the internal calibration was observed. Using the stopping powers of SRIM [34], it was found that the energy offset could be ascribed to the difference in energy loss between the Al and NaBr targets. Taking the above as confirmation of its validity, the internal calibration was adopted for all angles. The energies of this work are presented in Table 2. They are the weighted average of the energies deduced at each angle. The bottom panel of Fig. 1 shows the location of the peaks in the astrophysical region of interest at \(11^{\circ}\). Only states that were seen at three or more angles are reported. Calibration states are given without uncertainties, italicized, and marked with an asterisk for clarity in Table 2. The additional uncertainty estimated by our Bayesian energy calibration also introduces a further complication into the weighted averaging between angles. Since this uncertainty is estimated directly from the data, it will be influenced by systematic effects. These systematic effects introduce correlations between the deduced energies and uncertainties at each angle, which can become significant because of the high number of angles measured in this experiment. A clear indication of a correlation was seen in the deduced energies of our calibration points. 
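For reference, the core of that calibration fit, Eq. (2) together with an inferred extra scatter term, can be set up with emcee in a few lines. The sketch below uses synthetic calibration points and, for brevity, folds the uncertainties in \(x\) into the single extra-scatter parameter; it illustrates the procedure, not the analysis code of Ref. [20]. The correlation issue just noted is then handled by the averaging prescription described next.

```python
# Sketch: Bayesian fit of rho = A*x^3 + B*x^2 + C*x + D with an extra scatter
# term, using emcee.  Calibration points here are synthetic placeholders.
import numpy as np
import emcee

rng = np.random.default_rng(0)
x = np.linspace(0.1, 1.0, 12)                       # scaled ADC channel
true = np.array([0.02, -0.05, 0.9, 60.0])           # A, B, C, D (illustrative)
rho_err = np.full_like(x, 0.02)
rho = np.polyval(true, x) + rng.normal(0, rho_err)

def log_prob(theta):
    A, B, C, D, log_s = theta
    if not -10 < log_s < 0:                          # prior bound on extra scatter
        return -np.inf
    model = np.polyval([A, B, C, D], x)
    var = rho_err**2 + np.exp(log_s) ** 2
    return -0.5 * np.sum((rho - model) ** 2 / var + np.log(2 * np.pi * var))

ndim, nwalkers = 5, 32
p0 = np.concatenate([true, [-4.0]]) + 1e-3 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 3000, progress=False)
samples = sampler.get_chain(discard=1000, flat=True)

# Excitation energies follow by pushing each posterior sample of (A, B, C, D)
# through rho(x) at the peak centroids, so parameter correlations propagate.
print("posterior medians:", np.median(samples, axis=0))
```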
The energy of the calibration points predicted by the fit tend to agree with their input values at each angle, but exhibit little statistical scatter from angle to angle producing some disagreement larger than \(2\sigma\) for final values if a simple weighted average is adopted. To account for possible correlations, the uncertainties on the weighted averages were estimated using the methods of Ref. [35]. This correction is done by calculating the \(\chi^{2}\) value of the data with respect to the weighted average, \(\bar{x}\), which is given by: \[\chi^{2}=\sum_{i}^{N}\frac{(x_{i}-\bar{x})^{2}}{\sigma_{i}^{2}}. \tag{3}\] Since the expected value of \(\chi^{2}\) is \(N-1\), the uncertainties from the weighted average, \(\bar{\sigma}\), are adjusted based on the deviation from \(N-1\). For the case of positive correlations, \(\chi^{2}<1\), and, therefore, \(\bar{\sigma}\) will need to be adjusted by: \[\sigma_{adj}=\sqrt{(N-\chi^{2})\bar{\sigma}^{2}}. \tag{4}\] A separate estimate can also be made if the scatter in the data is not well described by the weighted average. In this case, \(\chi^{2}>1\), which gives the adjustment: \[\sigma_{adj}=\sqrt{\frac{\chi^{2}}{N-1}\bar{\sigma}^{2}}. \tag{5}\] To be conservative, we adopt the larger of these two values. It can be seen from Table 2 that our energies are in good agreement with previous measurements. The sole exceptions are the pair of states at \(E_{x}=9838(7)\) keV and \(9977(6)\) keV, which lie \(10\) keV above the values reported in Ref. [21]. However, both of these states show clear bimodal behavior as a function of spectrograph angle, undergoing shifts of over \(10\) keV, and as a result skewing the average towards higher energies. Behavior of this nature is inconsistent with the kinematic shift seen from contaminants, but did not appear to impact the Figure 1: Full and partial focal plane position spectrum at \(\theta_{lab}=11^{\circ}\) after 6 hours (\(10^{15}\) particles) of data accumulation. The top panel (\(\circ\)) shows the entire focal plane spectrum with the calibration states (given in keV) highlighted in orange and the astrophysical region of interest is between the dashed black lines. The energy values for the states below the proton threshold (\(11692.69\) keV) are taken from Ref. [21], while the rest are from Table 1. All values in the top panel are rounded to the nearest integer. The bottom panel (\(\circ\)) is zoomed in on the astrophysical region of interest. Peaks from \({}^{24}\)Mg have been identified with their final weighted average energy value in keV. corresponding angular distributions, which were in agreement with the known \(1^{+}\) spin-parities. These states have no bearing on the astrophysical measurement, so while their unique behavior is puzzling, we have opted to report the average of all angles using the _expected value method_ of Ref. [36] to give a more conservative estimate of the uncertainties. ### Suggested Energies for Astrophysically Relevant States Our angle averaged excitation energies have been combined with our compilation of literature values (Sec. III), to produce the recommended resonance energies given in the second half of Table 2. The energies of Ref. [16] have been excluded from the averaging (see Appendix.A). Note that some states not directly measured in the present work are included since they play a role in the reaction rate. All values come from a weighted average of the separate measurements, except for the \(11694\)-keV state. 
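Stated compactly, the prescription of Eqs. (3)-(5) is: form the usual inverse-variance weighted mean, compute \(\chi^{2}\) against it, and inflate the uncertainty by whichever of the two adjustments is larger. A minimal sketch with made-up inputs is given below; the discrepant \(11694\)-keV case discussed next is instead handled with the more conservative expected-value method.

```python
# Sketch of Eqs. (3)-(5): weighted mean with the chi-square based adjustment,
# adopting the larger of the two corrections as described in the text.
import numpy as np

def adjusted_weighted_mean(values, sigmas):
    values = np.asarray(values, float)
    sigmas = np.asarray(sigmas, float)
    w = 1.0 / sigmas**2
    mean = np.sum(w * values) / np.sum(w)
    sigma_bar = np.sqrt(1.0 / np.sum(w))
    N = len(values)
    chi2 = np.sum((values - mean) ** 2 / sigmas**2)           # Eq. (3)
    candidates = [np.sqrt(chi2 / (N - 1)) * sigma_bar]        # Eq. (5)
    if chi2 < N:                                              # Eq. (4) applies
        candidates.append(np.sqrt(N - chi2) * sigma_bar)
    return mean, max(candidates)

# Illustrative only: three hypothetical determinations of one excitation energy.
mean, sigma = adjusted_weighted_mean([11825.9, 11826.3, 11826.1], [0.8, 1.0, 0.9])
print(f"{mean:.1f} +/- {sigma:.2f} keV")
```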
For this state an extreme tension of 10 keV exists between the two most precise measurements, which are this work and the value of Ref. [30]. In order for our recommended value to reflect this disagreement, we again adopt the _expected value method_ of Ref. [36] to combine our measurement and the measurement of Ref. [30] leading to a more realistic uncertainty given the discrepant data. ## V Bayesian DWBA analysis Proton partial widths necessary for the calculation of the reaction rate can be estimated from the spectroscopic factors extracted from single-particle transfer reactions. Uncertainties arising from the optical potential and bound state wave function will typically dominate the total uncertainties of the spectroscopic factors. Analysis of our data would be incomplete if we ignored these sources of uncertainty; therefore, we adopt the Bayesian distorted wave Born approximation (DWBA) methods of Ref. [39] to quantify these uncertainties for the present measurement. All DWBA calculations were carried out using FRESCO [40]. Ref. [39] should be consulted for a more complete discussion of the Bayesian DWBA method, but a brief overview is given here in the context of the present study. ### Overview of Bayesian DWBA Elastic scattering data are used to constrain the parameters of a Woods-Saxon potential given by: \[V(r)=-\frac{V}{1+\exp{(\frac{-r_{0}A_{0}^{1/3}}{a_{0}})}}, \tag{6}\] where \(V\) is the depth of the well in MeV, \(r_{0}\) is the radius in fm, and \(a_{0}\) is the diffuseness in fm. The optical model uses a linear combination of both real and imaginary Woods-Saxon potentials, and by adjusting the parameters of these potentials the observed elastic scattering data can be reproduced. Bayesian statistics treats parameters as probability distributions. By assigning each parameter a _prior_ probability distribution before considering the data, Bayesian statistics allows the data, \(\mathbf{D}\), and Bayes' theorem to update the prior distributions in light of our observations. Bayes' theorem is given by: \[P(\mathbf{\theta}|\mathbf{D})=\frac{P(\mathbf{D}|\mathbf{\theta})P(\mathbf{\theta})}{P( \mathbf{D})}, \tag{7}\] where \(P(\mathbf{\theta})\) are the prior probability distributions of the model parameters, \(P(\mathbf{D}|\mathbf{\theta})\) is the likelihood function, \(P(\mathbf{D})\) is the evidence, and \(P(\mathbf{\theta}|\mathbf{D})\) is the posterior [41]. Informally we can state: the priors are what we believe about the model parameters considering the new data, the likelihood is the probability of measuring the observed data given a set of model parameters, the evidence is the probability of the observed data, and the posterior is what we know about the model parameters after analyzing the new data. The goal of the present experiment is to extract spectroscopic factors and assign \(\ell\) values to states in \({}^{24}\)Mg in the astrophysical region of interest. Spectroscopic factors are extracted from experimental angular distributions, \(\frac{d\sigma}{d\Omega}_{\text{Exp}}\), according to: \[\frac{d\sigma}{d\Omega}_{\text{Exp}}=C^{2}S_{\text{proj}}C^{2}S_{\text{surg}} \frac{d\sigma}{d\Omega}_{\text{DWBA}}, \tag{8}\] where \(C\) and \(S\) denotes the isospin Clebsch-Gordan coefficient and spectroscopic factor, while the subscripts proj and tag refer to the projectile and target systems, respectively. For this work, we approximate \(C^{2}S\) for the projectile system as \(\frac{3}{2}\) according to Ref. [42]. 
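Equation (8) means that, once a DWBA angular distribution has been computed, the target spectroscopic factor acts purely as a multiplicative scale on it. As a point of orientation only, the sketch below extracts that scale as a simple weighted least-squares factor from placeholder numbers; the analysis described in this section instead treats \(C^{2}S\) as a Bayesian parameter with an explicit prior.

```python
# Illustration of Eq. (8): C2S enters only as a scale between the measured
# angular distribution and the DWBA prediction.  All numbers are placeholders.
import numpy as np

c2s_proj = 3.0 / 2.0                                    # projectile overlap, Ref. [42]
dsigma_exp = np.array([2.10, 1.65, 1.20, 0.85, 0.60])   # mb/sr (made up)
exp_err = 0.08 * dsigma_exp
dsigma_dwba = np.array([30.0, 24.0, 17.5, 12.0, 8.5])   # mb/sr (made up)

# Weighted least-squares scale factor between data and DWBA curve.
w = 1.0 / exp_err**2
scale = np.sum(w * dsigma_exp * dsigma_dwba) / np.sum(w * dsigma_dwba**2)
c2s_targ = scale / c2s_proj
print(f"C2S(target) ~ {c2s_targ:.3f}")
```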
Any further mention of \(C^{2}S\) should be understood to be in reference to \(C^{2}S_{\text{surg}}\). It is essential to recognize that Eq. (8) establishes \(C^{2}S\) as a parameter in the framework of DWBA. The only meaningful way to estimate its uncertainty in the presence of both the measured uncertainties of \(\frac{d\sigma}{d\Omega}_{\text{Exp}}\) and the optical model uncertainties that affect \(\frac{d\sigma}{d\Omega}_{\text{DWBA}}\) is to treat it as a parameter in the statistical analysis. Using Bayesian statistics this entails assigning a prior distribution. The excited states of interest to this work lie above \(11\) MeV, where it can be safely assumed that the majority of the single particle strength of the proton shells has been exhausted. Thus, \(C^{2}S\ll 1\) and we assign an informative prior: \[C^{2}S\sim\text{HalfNorm}(1.0^{2}), \tag{9}\] where \(\sim\) means "distributed according to". HalfNorm stands for the half normal distribution, which is strictly positive and has one free parameter the standard deviation, \(\sigma\). In the case of Eq. (9), \(\sigma=1.0\) is chosen to reflect our assumption that \(C^{2}S\) is more than likely to be less than one in the astrophysical region of interest. Assigning probabilities to \(\ell\) values requires a subcategory of Bayesian inference called model selection. In this context, the model is \(\ell_{l}\), which is shorthand for \(\ell=l\) (for example \(\ell=0\) is written \(\ell_{0}\)). Posterior distributions for \(\ell_{l}\) can be determined through a modified version of Bayes' theorem: \[P(\ell_{l}|\mathbf{D})=\frac{P(\mathbf{D}|\ell_{l})P(\ell_{l})}{\sum_{k}P( \mathbf{D}|M_{k})P(M_{k})}. \tag{10}\] Each \(\ell_{l}\) is implicitly dependent on a set of model parameters \(\mathbf{\theta}_{l}\) which have been marginalized. Expanding \(P(\mathbf{D}|\ell_{l})\) to show the explicit dependence gives: \[P(\mathbf{D}|\ell_{l})=\int P(\mathbf{D}|\ell_{l},\mathbf{\theta}_{l})P(\mathbf{\theta} _{l}|\ell_{l})d\mathbf{\theta}_{l}. \tag{11}\] This equation shows that \(P(\mathbf{D}|\ell_{l})\) is precisely equivalent to the evidence integral from Eq. (7) conditioned on \(\ell_{l}\). Thus, calculating the posteriors for each \(\ell\) demands evaluating the evidence for each DWBA cross section generated using a distinct \(\ell\) value. Denoting the evidence integral that corresponds to a model \(\ell_{l}\) as \(Z_{l}\), we can compare each value of \(\ell\). 
The Bayes Factor, \(B_{lk}\), can be calculated between two angular momentum \begin{table} \begin{tabular}{c c|c c c c} \hline \hline ENSDF ([21]) & This Work & Compilation (Table I) & This Work & Recommended \(E_{x}\) & Recommended \(E_{r}\) \\ \hline 7349.00(3) & 7364(14) & 11392.5(21) & 11387.7(14) & 11389.2(12) & \(-\)303.5(12) \\ 7555.04(15) & 7555(13) & 11452.9(4) & 11453.2(21) & 11452.9(4) & \(-\)239.8(4) \\ 7747.51(9) & 7752(10) & 11521.5(16) & 11520.3(23) & 11521.1(13) & \(-\)171.6(13) \\ 8357.98(13) & 8362(4) & 11698(2) & 11688.7(14) & 11694(4) & 1(4) \\ 8437.31(15) & 8441(4) & 11729.8(16) & & 11729.8(16) & 37.1(16) \\ 8439.36(4) & & 11828(3) & 11823(3) & 11826(3) & 133(3) \\ 8654.53(15) & 8654\({}^{\prime}\) & 11861.6(15) & 11857(3) & 11860.8(14) & 168.1(14) \\ 8864.29(9) & 8864\({}^{\prime}\) & 11933.05(19) & 11935(3) & 11933.06(19) & 240.37(19) \\ 9003.34(9) & 9002.9(24) & 11966.7(5) & & 11966.7(5) & 274.0(5) \\ 9145.99(15) & 9145.0(16) & 11988.45(6) & 11989.3(14) & 11988.45(6) & 295.76(6) \\ 9284.22(14) & 9292.6(12) & 12016.9(5) & 12014(3) & 12016.8(5) & 324.1(5) \\ 9299.77(24) & & 12051.3(4) & 12050(3) & 12051.3(4) & 358.6(4) \\ 9457.81(4) & 9460\({}^{\prime}\) & 12119(1) & 12121.5(17) & 12119.7(8) & 427.0(8) \\ 9516.28(4) & 9520(3) & 12183.3(1) & 12182.3(22) & 12183.3(1) & 490.6(1) \\ 9527.8(21) & & 12259.6(4) & _I2260\({}^{\prime}\)_ & & \\ 9828.11(11) & 9838(7)\({}^{\dagger}\) & 12341.0(4) & 12342(3) & 12341.0(4) & 648.3(4) \\ 9967.19(22) & 9977(6)\({}^{\dagger}\) & 12405.3(3) & 12406.0(22) & 12405.3(3) & 712.6(3) \\ 10027.97(9) & 10021(3) & 12528.4(6) & 12530.5(24) & 12528.5(6) & 835.8(6) \\ 10058.54(16) & 10055(3) & 12578(5) & 12576(3) & 12577(3) & 884(3) \\ 10161(3) & 10163.2(19) & 12669.9(4) & _I2670\({}^{\prime}\)_ & 12669.9(4) & 977.2(4) \\ 10333.29(13) & 10328.1(18) & 12738.9(7) & 12738(3) & 12738.8(7) & 1046.1(7) \\ 10360.51(13) & _10358\({}^{\ast}\)_ & 12817.77(19) & 12819(4) & 12817.77(19) & 1125.08(19) \\ 10576.02(7) & 10572.7(21) & 12852.1(5) & 12854(3) & 12852.2(5) & 1159.5(5) \\ 10659.58(13) & 10660.1(21) & 12921.6(4) & 12924(4) & 12921.6(4) & 1228.9(4) \\ 10660.03(4) & & 12963.9(5) & 12965(4) & 12963.9(5) & 1271.2(5) \\ 10711.74(17) & 10713.9(12) & & & & \\ 10730.79(11) & 10732.5(16) & & & & \\ 10820.7(4) & 10824.3(13) & & & & \\ 10916.96(17) & _10918\({}^{\ast}\)_ & & & & \\ 11010.5(14) & 11011(3) & & & & \\ 11015.8(7) & & & & & \\ 11208.4(16) & 11201(5) & & & & \\ 11314.4(15) & 11317(3) & & & & \\ \hline \hline \multicolumn{5}{l}{\({}^{\ast}\) State used for calibration.} \\ \multicolumn{5}{l}{\({}^{\dagger}\) These two states show bimodal behavior as a function of angle. The _expected value method_ of Ref. [36] was adopted to give a more conservative uncertainty since only one state is expected around each energy. See text for additional details.} \\ \end{tabular} \end{table} Table 2: \({}^{24}\)Mg excitation energies from this work compared to the values of Table I for states within the region of interest and those of Ref. [21] for all others. In some cases, due to the presence of a high number of states in certain regions, a unique identification of the observed state could not be made so all nearby states are listed. For states in the region of interest we compare to the values of Table. I. The recommended excitation and resonance energies are also given for these states. 
States used for this work’s energy calibration are reported in italics, marked with \(\ast\), and listed without uncertainties, but they do represent the mean value obtained after calibration. All energies are given in units of keV. transfers which are assumed to have equal prior probabilities: \[B_{lk}=\frac{Z_{l}}{Z_{k}}. \tag{12}\] Generally, if this ratio is greater than one, the data favor the transfer \(\ell=l\), while values less than \(1\) favor \(\ell=k\). While the significance of values for \(B_{lk}\) is open to interpretation, a useful heuristic given by Jeffreys [43] is often adopted. Assuming \(\ell=l\) is favored over \(\ell=k\), we have the following levels of evidence: \(1<B_{lk}<3\) is anecdotal, \(3<B_{lk}<10\) is substantial, \(10<B_{lk}<30\) is strong, \(30<B_{lk}<100\) is very strong, and \(B_{lk}>100\) is decisive. Normalized probabilities for each transfer are given by: \[P(\ell_{l}|\mathbf{D})=\frac{Z_{l}}{\sum_{k}Z_{k}}, \tag{13}\] where the index \(k\) runs over all allowed angular momentum values. However, practically \((^{3}\)He, \(d)\) reactions are highly selective, allowing us to restrict the sum to the most likely transfers with \(\ell=0\)-\(3\). By using a Bayesian model, Ref. [39] made it possible to incorporate optical potential uncertainties into the extraction of spectroscopic factors and assignment of \(\ell\) values. However, the current data set for \({}^{23}\)Na\((^{3}\)He, \(d)\) presents challenges that require significant extensions to those previously reported methods. ### Incorporating Relative Yields Extraction of \(C^{2}S\) for a state requires that the absolute scale of the differential cross section is known. Here we use a relative method to remove beam and target effects. Yields measured at the focal plane are normalized to the \({}^{23}\)Na+\({}^{3}\)He elastic scattering measured by the monitor detector positioned at \(45^{\circ}\). From these normalized yields, an absolute scale is established by inferring an overall normalization through comparison of the measured elastic scattering angular distribution collected in the focal plane to the optical model predictions. Our approach is similar in principle to those found in Refs. [44; 16; 45]. The present study has a set of ten elastic scattering data points, which we denote by \(\frac{dY}{d\Omega\text{ Elastic},j}\) for the data measured at angle \(j\). From these data a posterior distribution can be found for an overall normalization parameter, \(\eta\), which renormalizes the predictions of the optical model such that: \[\frac{dY}{d\Omega\text{ Optical},j}=\eta\times\frac{d\sigma}{d\Omega\text{ Optical},j}, \tag{14}\] where \(\frac{dY}{d\Omega\text{ Optical},j}\) is the relative yield predicted by the optical model at angle \(j\). As a parameter in our model, \(\eta\) needs a prior distribution. To assign equal probability on the logarithmic scale, we introduce a parameter, \(g\), such that: \[g\sim\text{Uniform}(-10,10), \tag{15}\] where Uniform is the uniform distribution. \(\eta\) is then defined via \(\eta=10^{g}\). Since \(\eta\) is estimated simultaneously with \(C^{2}S\), the uncertainty in our absolute normalization will automatically be included in the uncertainty of \(C^{2}S\). ### Global Potential Selection Global optical potentials are used to construct the prior distributions for the potential parameters in our Bayesian model. 
Elastic scattering data were only measured for the entrance channel, since it can be gathered with the same beam energy and target as the transfer reaction of interest. As a result, our priors differ for the entrance and exit channels. For the entrance channel, mildly informative priors are selected. The depths, \(V,W\), etc., are assigned Normal distributions centered around their global values with standard deviations equal to \(20\%\) of the central value: \[V\sim\mathcal{N}(\mu_{global},\{0.20\,\mu_{global}\}^{2}). \tag{16}\] The geometric parameters, \(r\) and \(a\), are given priors that attempt to cover their expected physical range while still allowing the posterior to be determined by the data. Taking this range to be \(r=1.0-1.5\) fm and \(a=0.52-0.78\) fm, we can again assign Normal distributions with central values \(r=1.25\) fm and \(a=0.65\) and standard deviations of \(20\%\) the central value. Collecting the parameters for all of the potentials, the priors for the entrance channel are written compactly as: \[\mathcal{U}_{\text{Entrance}}\sim\mathcal{N}(\mu_{\text{central},k},\{0.20\, \mu_{\text{central},k}\}^{2}), \tag{17}\] where the index \(k\) runs over each of the potential parameters. The subscript \(central\) refers to either the values taken from the selected global study for each depth or the central values \(r=1.25\) fm and \(a=0.65\) fm for the geometric parameters. The first attempts to fit the elastic scattering data used the optical model from the lab report of Beccehetti and Greenless [47]. The imaginary depth of this potential for a beam of \({}^{3}\)He on \({}^{23}\)Na at \(E_{{}^{3}\text{He}}=21\) MeV is \(36\) MeV. We note that this value is nearly twice as deep as the values reported in the more recent works of Trost _et al._[48], Pang _et al._[49] and Liang _et al._[50]. Although these works use a surface potential, the work of Vernote _et al._[45] is parameterized by a volume depth, and also favors depths around \(20\) MeV. While the starting parameters are of little consequence to standard minimization techniques, the overly deep well depth is an issue for our Bayesian analysis because it determines the prior distribution for our model. When using the deeper value of 36 MeV for inference, we observed that the data preferred a lower depth, thereby causing a bimodal posterior with one mode centered around the global depth and the other resulting from the influence of the data. Based on these observations, a decision was made to use the potential of Liang _et al._ (Ref. [50]) due to its applicability in the present mass and energy range and its shallower imaginary depth of 19.87 MeV. We have chosen to exclude the imaginary spin-orbit portion of the Liang potential because of the limited evidence presented for its inclusion in Ref. [50]. The exit channel optical potential parameters must also be assigned prior distributions. Our experiment does not have data to constrain these parameters directly, but fixing these parameters in our analysis would neglect a source of uncertainty. We chose informative priors that are determined by the selected global deuteron potential. These parameters are assigned Normal priors centered around the global values and given standard deviations of \(10\%\): \[\boldsymbol{\mathcal{U}}_{\text{Exit}}\sim\mathcal{N}(\mu_{\text{global},k},\{0.10\,\mu_{\text{global},k}\}^{2}). \tag{18}\] The selected deuteron potential is the non-relativistic potential from Ref. [46]. 
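The prior choices of Eqs. (16)-(18) translate directly into code. The condensed sketch below (scipy assumed available; only a few of the parameters are shown, with central values taken from Table 3 and from the \(r=1.25\) fm, \(a=0.65\) fm defaults quoted above) is an illustration of the construction, not the full prior used in the analysis.

```python
# Sketch: Normal priors centred on the global optical-model values,
# 20% wide for the 3He entrance channel and 10% wide for the deuteron exit channel.
from scipy import stats

entrance_central = {"V": 117.31, "r0": 1.25, "a0": 0.65, "Ws": 19.87}
exit_global = {"V": 88.1, "r0": 1.17, "a0": 0.74, "Ws": 12.30}

def make_priors(central, frac):
    return {k: stats.norm(loc=v, scale=frac * v) for k, v in central.items()}

priors = {**{f"ent_{k}": d for k, d in make_priors(entrance_central, 0.20).items()},
          **{f"exit_{k}": d for k, d in make_priors(exit_global, 0.10).items()}}

def log_prior(params):
    """Sum of log prior densities for a dict of parameter values."""
    return sum(priors[k].logpdf(v) for k, v in params.items())

print(log_prior({"ent_V": 117.31, "exit_V": 88.1}))
```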
Since the region of interest is \(11\)-\(12\) MeV, the outgoing deuterons will have an energy of \(E_{d}\approx E_{3\text{He}}+Q_{(3\text{He},d)}-E_{x}\approx 15.5\) MeV. All of the potentials used in the following analysis are listed in Table 3. The bound state spin-orbit term was set to roughly satisfy \(\lambda=25\) with \(\lambda\approx 180V_{so}/V\) for values of \(V\) in the above energy range. The bound state geometric parameters, all spin-orbit terms for the entrance and exit channels, and Coulomb radii were fixed in our calculations. ### Elastic Scattering As stated in Sec. II, elastic scattering yields were measured for \(15^{\circ}\)-\(55^{\circ}\) in \(5^{\circ}\) steps and finally at \(59^{\circ}\), for a total of \(10\) angles. The yields at each angle were normalized to those measured by the monitor detector. A further normalization to the Rutherford cross section was applied to the elastic scattering data to ease the comparison to the optical model calculations. Low angle elastic scattering cross sections in normal kinematics can be collected to almost arbitrary statistical precision, with the present data having statistical uncertainty of approximately \(2\)-\(7\%\). In this case, it is likely that the residuals between these data and the optical model predictions are dominated by theoretical and experimental systematic uncertainties. To account for this possibility, the Bayesian model is modified to consider an additional unobserved uncertainty in the elastic channel: \[\sigma^{\prime 2}_{\text{E elastic},i}=\sigma^{2}_{\text{E elastic},i}+\bigg{(}f_{ \text{E elastic}}\frac{d\sigma}{d\Omega\,_{\text{Optical},i}}\bigg{)}^{2}, \tag{19}\] where the experimentally measured uncertainties, \(\sigma_{\text{E elastic},i}\), at angle \(i\) have been added in quadrature with an additional uncertainty coming from the predicted optical model cross section. This prescription is precisely the same procedure that is used for the additional transfer cross section uncertainty from Ref. [39]. With only \(10\) data points, an informative prior on \(f_{\text{Elastic}}\) is necessary to preserve the predictive power of these data. We select the form: \[f_{\text{Elastic}}\sim\text{HalfNorm}(0.10^{2}). \tag{20}\] This quantifies the expectation that the data will have residuals with the theoretical prediction of about \(10\%\). We found the above prior to provide the best compromise between the experimental uncertainties, which lead to unphysical optical model parameters, and less informative priors that lead to solutions above \(f_{\text{Elastic}}=50\%\) where the data become non-predictive. Once the above parameter was included, the data could be reliably fit. However, it then became clear that the discrete ambiguity posed a serious issue for the analysis. It is know (see for example Ref. [51]) that nearly identical theoretical cross sections can be produced with drastically different potential depths due to the phase shift only differing by an additive multiple of \(\pi\). Previously, Ref. [39] found that the biasing of the entrance channel potential priors towards their expected physical values was sufficient to remove other modes from the posterior. For the present data, the potential priors did little to alleviate the problem, as might be expected since strongly absorbed projectiles like \({}^{3,4}\)He suffer much worse discrete ambiguities ([51]) compared to the deuteron scattering data in Ref. [39]. 
In order to explore potential solutions, the nested sampling algorithm in dynesty was used to draw appropriately weighted samples from both of the modes. Nested sampling can explore multi-modal distributions with ease [52], but is not necessarily suited towards precise posterior estimation. A run was carried out with \(1000\) live points, and required over \(5\times 10^{6}\) likelihood calls, which is nearly three times the number of samples required in other calculations. The pair correlation plot of these samples is shown in Fig. 2, and the impacts of the discrete ambiguity can clearly be seen. Two different approaches were explored to differentiate the modes. The first was a simple selection of the modes based on the continuous ambiguity, \(Vr_{0}^{n}=c\). Fig. 2 shows that the correlation between \(V\) and \(r_{0}\) can cleanly resolve the two modes, while the correlations in the other parameters have significant overlap between them. In this approach, the constant, \(c\), is calculated for each mode, while the exponent is kept fixed with a value of \(n=1.14\), taken from Ref. [45]. It was found that the correlation in the samples was well described by this relation as shown in Fig. 3. Our second approach utilized the volume integral of the real potential. Ref. [53] gives an approximate analytical form of the integral: \[J_{R}=\frac{1}{A_{P}A_{T}}\frac{4\pi}{3}R_{0}^{3}V\bigg{[}1+\bigg{(}\frac{\pi a _{0}}{R_{0}}\bigg{)}^{2}\bigg{]}, \tag{21}\] where \(R_{0}=r_{0}A_{T}^{1/3}\). Calculating \(J_{R}\) for the samples in each mode resulted in two clearly resolved peaks, as shown in Fig. 4. After comparing these two methods, it was decided to use the first one, and exclude the other modes via a uniform distribution based on the relationship between \(V\) and \(r_{0}\). This calculation had the advantages of being relatively simple and only involving two parameters. The method based on \(J_{R}\) has the advantage that the global values well predict the location of the peak, but its dependence on \(a_{0}\) makes its possible effect on the posterior less clear. Integrating the \(Vr_{0}^{n}\) relation into the Bayesian method requires a probability distribution be specified. A uniform distribution that covered \(\pm 30\%\) around \(c\) of the physical mode was chosen. We have intentionally avoided the word prior because this condition clearly does not represent a belief about the parameter \(c\) before inference. Rather, this is a _constraint_ enforced on the posterior to limit the inference to the physical mode [54]. It should be emphasized that the posterior distributions of be conditioned on \(c\), i.e., \(P(\theta|D,c)\). The constraint is written: \[c\sim\text{Uniform}(c_{0}(1-0.30),c_{0}(1+0.30)), \tag{22}\] where \(c_{0}\) is the value that is roughly centered around the lower mode. In this case \(c_{0}=132.9\). As long as the distribution in Eq. (22) covers all of the physical mode and excludes the unphysical ones, the value of \(c_{0}\) and the width of the distribution should be understood to be arbitrary. ### Transfer Considerations Transfer cross sections are calculated using the zero-range approximation with the code FRESCO. The zero-range approximation is necessary in the current context because of the number of function evaluations that are needed to compute the posterior distributions (for this work \(2\times 10^{6}\)). For the volume integral of the proton-deuteron interaction, \(D_{0}\), we use a value of \(D_{0}=-172.8\) MeV fm\({}^{3/2}\)[55]. 
\(D_{0}\) is calculated theoretically and has a dependence on the selected nucleon-nucleon interaction. We added a \(15\%\) uncertainty using a parameter \(\delta D_{0}\) to account for the spread observed between different theoretical models in Refs. [56]. A similar estimate for the \(D_{0}\) uncertainty was made in Ref. [57]. The residuals between the transfer cross section and DWBA calculations will be impacted not only by the experimental and optical model uncertainties, but by any deficiency in the reaction theory. If we do not acknowledge that the DWBA residuals could be greater than the uncertainties coming from counting statistics, then we would be assuming that the transfer data are a meaningful constraint on the optical model parameters. If this were the case, each state would have its own set of optical model parameters that have been incorrectly adjusted to best reproduce the observed angular distribution. To avoid this issue, we add an additional theoretical uncertainty in quadrature with the experimental uncertainties, similar to our procedure for the elastic scattering. Using the same functional form as Eq. (19), we define a fraction of the DWBA cross section, with the weakly informative prior: \[f\sim\text{HalfNorm}(1.0^{2}), \tag{23}\] meaning that our expectation for the fractional uncertainty on the DWBA cross section at each angle is \(f<1\). A majority of the states of astrophysical interest lie above the proton threshold, and are therefore unbound. For bound states, calculation of the overlap functions, which determine \(C^{2}S\), is done by using a single particle potential with its Woods-Saxon depth adjusted to reproduce the binding energy of the state. For unbound states, an analogous procedure would be to adjust the well depth to produce a resonance centered around \(E_{r}\). FRESCO does not currently support a search routine to vary \(V\) to create a resonance condition, meaning that \(V\) would have to be varied by hand until a phase shift of \(\pi/2\) is observed. Such a calculation is obviously time consuming and computationally infeasible in the current work. An alternative is the weak binding approximation. This approach assumes that the wave function of resonance scattering resembles the wave function of a loosely bound particle, typically with a binding energy on the order of \(E_{bind}=1\) keV. Studies have shown that this approximation performs well for states within \(\approx 500\) keV of the particle threshold, and reproduce the unbound calculations to within \(1\%\)[58; 59]. There are indications that the validity of this approximation depends on the \(\ell\) value. The reasoning is that states with higher \(\ell\) values more closely resemble bound states, due to the influence of the centrifugal barrier, and therefore are better described by the approximation [60]. For this work, DWBA calculations for states above the proton threshold were carried out with the weak binding approximation. The error arising from use of the approximation is considered negligible in the current context. Further complications arise from the non-zero ground state of \({}^{23}\)Na (\(J^{\pi}=3/2^{+}\)). In this case, angular distributions can be characterized by a mixture of \(\ell\) transitions. Although in principle every allowed \(\ell\) transition can contribute, practically speaking, it is difficult to unambiguously determine all but the lowest two \(\ell\) contributions because of the rapidly decreasing cross section with increasing \(\ell\)[61]. 
Ignoring the light particle spectroscopic factor, the relationship between the experimentally measured differential cross section and the DWBA prediction can be expressed as: \[\frac{d\sigma}{d\Omega}_{exp}=C^{2}S\bigg{[}\alpha\frac{d\sigma}{d\Omega}_{ \text{DWBA},\ell_{1}}+(1-\alpha)\frac{d\sigma}{d\Omega}_{\text{DWBA},\ell_{2} }\bigg{]}, \tag{24}\] where \(\alpha\) is defined such that \(C^{2}S_{\ell_{1}}=C^{2}S\alpha\) and \(C^{2}S_{\ell_{2}}=C^{2}S(1-\alpha)\)[62]. Note that the values for \(\ell\) must still obey parity conservation, meaning the most probable combinations for \((^{3}\text{He},d)\) are \(\ell=0\oplus 2\) and \(\ell=1\oplus 3\). Incorporating multiple \(\ell\) transfers into the Bayesian framework requires assigning a prior to \(\alpha\). The above definitions make it clear that \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline Interaction & \(V\) & \(r_{0}\) & \(a_{0}\) & \(W\) & \(W_{s}\) & \(r_{i}\) & \(a_{i}\) & \(r_{c}\) & \(V_{so}\) & \(r_{so}\) & \(a_{so}\) \\ & (MeV) & (fm) & (fm) & (MeV) & (MeV) & (fm) & (fm) & (fm) & (MeV) & (fm) & (fm) \\ \hline \hline \({}^{3}\text{He}+^{23}\text{Na}\) 1 & \(117.31\) & \(1.18\) & \(0.67\) & & \(19.87\) & \(1.20\) & \(0.65\) & \(1.29\) & \(2.08\) & \(0.74\) & \(0.78\) \\ \(d+^{24}\text{Mg}\)2 & \(88.1\) & \(1.17\) & \(0.74\) & \(0.30\) & \(12.30\) & \(1.32\) & \(0.73\) & \(1.30\) & \(6.88\) & \(1.07\) & \(0.66\) \\ \(p+^{23}\text{Na}\) & 3 & \(1.25\) & \(0.65\) & & & & & \(1.25\) & \(6.24\) & \(1.25\) & \(0.65\) \\ \hline \end{tabular} \end{table} Table 3: Optical potential parameters used in this work before inference. \(\alpha=[0,1]\); therefore, an obvious choice is: \[\alpha\sim\text{Uniform}(0,1). \tag{25}\] ### Bayesian Model for \({}^{23}\text{Na}(^{3}\text{He},d)^{24}\text{Mg}\) Before explicitly defining the Bayesian model for the DWBA analysis, the points made above are reiterated for clarity. 1. The measured elastic scattering uncertainties have been added in quadrature with an inferred theoretical uncertainty. 2. The \({}^{3}\)He optical model has a severe discrete ambiguity. A constraint based on the continuous ambiguity has been added to the model to select the physical mode. Figure 2: Pair correlation plot of the posterior samples for the nested sampling run. The discrete ambiguity is prominent in the \({}^{3}\)He \(+^{23}\)Na data, posing a significant challenge in estimating the optical model parameter posteriors. 3. Due to the non-zero spin of the ground state of \({}^{23}\)Na, the transfer cross section can have contributions from multiple \(\ell\) values. 4. Only the two lowest \(\ell\) values are considered for a mixed transition, with the relative contributions weighted according to a parameter \(\alpha\) that is uniformly distributed from \(0\) to \(1\). Folding these additional parameters and considerations into the Bayesian model of Ref. 
[39] gives: Parameters: \[n=1.14\] \[c_{0}=132.9\] Priors: \[\mathcal{U}_{\text{Entrance}}\sim\mathcal{N}(\mu_{\text{central},k},\{0.20\,\mu_{\text{central},k}\}^{2})\] \[\mathcal{U}_{\text{Exit}}\sim\mathcal{N}(\mu_{\text{global},k}, \{0.10\,\mu_{\text{global},k}\}^{2})\] \[f\sim\text{HalfNorm}(1)\] \[f_{\text{Elastic}}\sim\text{HalfNorm}(0.10^{2})\] \[\delta D_{0}^{2}\sim\mathcal{N}(1.0,0.15^{2})\] \[C^{2}S\sim\text{HalfNorm}(1.0^{2})\] \[g\sim\text{Uniform}(-10,10)\] Functions: \[\eta=10^{g}\] (26) \[c=\mathcal{U}_{\text{Entrance},(k=0)}\big{(}\mathcal{U}_{ \text{Entrance},(k=1)}\big{)}^{n}\] \[\frac{dY^{\,\prime}}{d\Omega\,_{\text{Optical},j}}=\eta\times \frac{d\sigma}{d\Omega\,_{\text{Optical},j}}\] \[\frac{dY^{\,\prime}}{d\Omega\,_{\text{DWBA},i}}=\eta\times \delta D_{0}^{2}\times C^{2}S\times\frac{d\sigma}{d\Omega\,_{\text{DWBA},i}}\] \[\sigma_{i}^{\prime 2}=\sigma_{\text{Transfer},i}^{2}+\bigg{(}f\frac{dY ^{\,\prime}}{d\Omega\,_{\text{DWBA},i}}\bigg{)}^{2}\] \[\sigma_{\text{Elastic},i}^{\prime 2}=\sigma_{\text{Elastic},i}^{2}+ \bigg{(}f_{\text{Elastic}}\frac{dY}{d\Omega\,_{\text{Optical},i}}\bigg{)}^{2}\] Likelihoods: \[\frac{dY}{d\Omega\,_{\text{Transfer},i}}\sim\mathcal{N}\bigg{(} \frac{dY^{\,\prime}}{d\Omega\,_{\text{DWBA},i}},{\sigma_{i}^{\prime}}^{2} \bigg{)},\] \[\frac{dY}{d\Omega\,_{\text{Elastic},j}}\sim\mathcal{N}\bigg{(} \frac{dY^{\,\prime}}{d\Omega\,_{\text{Optical},j}},{\sigma_{\text{Elastic},i}^{ \prime 2}}\bigg{)},\] Constraint: \[c\sim\text{Uniform}(c_{0}(1-0.30),c_{0}(1+0.30)),\] where the index \(k\) runs over the optical model potential parameters, \(i\) and \(j\) denote the elastic scattering and transfer cross section angles, respectively, and \(\mathcal{U}_{\text{Entrance},(k=0,1)}\) are the real potential depth and radius for the entrance channel. In the case of a mixed \(\ell\) transfer, the model has the additional terms: Prior: \[\alpha\sim\text{Uniform}(0,1)\] Function: (27) \[\frac{dY^{\,\prime}}{d\Omega\,_{\text{DWBA},i}}=\eta\times\delta D _{0}^{2}\times C^{2}S\] \[\times\bigg{[}\alpha\frac{d\sigma}{d\Omega\,_{\text{DWBA},\ell_{ 1}}}+(1-\alpha)\frac{d\sigma}{d\Omega\,_{\text{DWBA},\ell_{2}}}\bigg{]},\] where the definition for \(\frac{dY^{\,\prime}}{d\Omega\,_{\text{DWBA},i}}\) is understood to replace all other occurrences of that variable in Eq. (26). Note that the individual cross sections, \(\frac{d\sigma}{d\Omega\,_{\text{DWBA},\ell_{1}}}\) and \(\frac{d\sigma}{d\Omega\,_{\text{DWBA},\ell_{2}}}\), are calculated simultaneously using same sampled values for the optical potential. Figure 4: Values from the volume integral of the real potential as calculated using Eq. (21) and the samples from the nested sampling calculation. The discrete ambiguity causes two well separated peaks to appear. The black dashed line shows the value of \(J_{R}\) for the global potential. Figure 3: The discrete ambiguity as seen in the \(V\) versus \(r_{0}\) correlations between the histogrammed samples of the nested sampling calculation shown in black. The colored lines show the description of the correlation based on the analytic form \(Vr_{0}^{n}=c\). The value of \(c\) provides a way to distinguish these modes. ### Results The above Bayesian model was applied to the eleven states observed in the astrophysical region of interest. For each state, affine invariant MCMC [63], as implemented in the python package emcee[32] was run with \(400\) walkers taking \(8000\) steps, giving a total of \(3.2\times 10^{6}\) samples. 
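For concreteness, a minimal emcee sketch of such a run is given below; the toy `log_posterior` merely stands in for the priors, constraint, and likelihoods of Eq. (26), which are not reproduced here, and the burn-in and thinning choices are described next.

```python
import numpy as np
import emcee

ndim, nwalkers, nsteps = 12, 400, 8000   # ndim depends on the state being fit

def log_posterior(theta):
    # Placeholder: a standard normal stands in for the full model of Eq. (26).
    return -0.5 * np.sum(theta**2)

# Start the walkers in a small ball around an initial guess.
p0 = np.zeros(ndim) + 1e-3 * np.random.randn(nwalkers, ndim)

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, nsteps, progress=True)

# Discard burn-in, thin, and flatten (400 walkers x 2000 steps / 50 = 16000 samples).
samples = sampler.get_chain(discard=6000, thin=50, flat=True)
tau = sampler.get_autocorr_time(tol=0)   # integrated autocorrelation time
```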
Of these samples, the first \(6000\) steps were discarded as burn-in, and the last \(2000\) steps were thinned by \(50\) for \(16000\) final samples. The effective sample size was estimated to be greater than \(2000\) based on the calculated autocorrelation of \(\approx 400\) steps. These \(16000\) samples were used to estimate the posterior distributions for \(C^{2}S\), and to construct the differential cross sections shown in Fig. 6. An example of the simultaneous fit obtained for the elastic scattering data is shown in Fig. 5. All of the data have been plotted as a function of their relative value (Sec. V.2). Data points were only fit up to the first minimum in the cross section, the region where DWBA is expected to be most applicable [64]. The normalization \(\eta\) was found to be \(\eta=0.075^{+0.007}_{-0.006}\), which shows that the absolute scale of the data, despite the influence of the optical model parameters, can be established with a \(9\%\) uncertainty. Values obtained for \((2J_{f}+1)C^{2}S\) in this work are listed in Table 4, where the term \((2J_{f}+1)C^{2}S\) is constant for all possible values of \(J_{f}\) for the final state if it is populated by the same \(j=\ell\oplus s\) transfer. There is general agreement between our values and those of Ref. [16], which provides further evidence that the absolute scale of the data is well established. However, for the three \(2^{+}\) states that show a mixture of \(\ell=0\oplus 2\), the current values are consistently lower. In these cases, the Bayesian method demonstrates that considerable uncertainty is introduced when a mixed \(\ell\) transfer is present. The origin of this effect merits a deeper discussion, which we will now present. First, consider that the posterior distributions for \((2J_{f}+1)C^{2}S\) from states with unique \(\ell\) transfers were found to be well described by log-normal distributions. Estimations of these distributions can be made by deriving the log-normal parameters \(\mu\) and \(\sigma\) from the MCMC samples. These parameters are in turn related to the median value of the log-normal distribution by \(med.=\exp{(\mu)}\) and its factor uncertainty, \(f.u.=\exp{(\sigma)}\). The \(med.\) and \(f.u.\) quantities are listed in Table 4. It can be seen that states that have a unique \(\ell\) transfer show factor uncertainties of \(f.u.\approx 1.30\), or, equivalently, a \(30\%\) uncertainty. On the other hand, states that show a mixed \(\ell\) transition vary from \(f.u.=1.4\)-\(2.0\). It was found that the individual \(\ell\) components, which are the quantities relevant to the reaction rate, have a large factor uncertainty and deviate strongly from a log-normal distribution. However, their sum shares the same properties as the states with a single \(\ell\) transfer. In other words, the total spectroscopic factor still has a \(30\%\) uncertainty. Since the total spectroscopic factor is the quantity that determines the relationship between the theoretical calculation and the data, its uncertainty matches that of a single-\(\ell\) spectroscopic factor, \(30\%\). For the mixed \(\ell\) case, the individual components are terms in a sum that produces the theoretical prediction. The mean value of this sum grows linearly with each term, while the uncertainty grows roughly as the square root of the sum of the squares. It is this fact that requires, without appealing to the current Bayesian methods, the individual \(\ell\) components to have a greater percentage uncertainty than their sum.
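This scaling can be checked with a short numerical experiment; the fractional uncertainties below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two independent components with ~40% fractional uncertainty each (placeholders).
a = rng.normal(0.14, 0.056, size=100000)
b = rng.normal(0.05, 0.020, size=100000)

total = a + b
print(a.std() / a.mean(), b.std() / b.mean())   # ~0.40 each
print(total.std() / total.mean())               # ~0.31: the sum is better constrained
```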
Since previous studies, like those of Ref. [16], assume a constant uncertainty with the extraction of spectroscopic factors, each \(\ell\) component is assumed to have the same percentage uncertainty. The above discussion highlights that this assumption cannot be true, regardless of the statistical method. The influence of optical model parameters limits the precision of the total normalization of the cross section; thereby, giving an upper limit on the precision that can be expected from the components. These results indicate that applying a standard \(\chi^{2}\) fit to a mixed \(\ell\) transfer might not accurately extract the individual spectroscopic factors if optical model uncertainties are ignored. We will now discuss our results, and summarize the previously reported information for each of these states. #### v.6.1 The \(11389\)-keV State; \(-303\)-keV Resonance This state has been reported in several studies, and is known to have a spin parity of \(J^{\pi}=1^{-}\)[22]. Our measurements confirm an \(\ell=1\) nature to the angular distribution, making it a candidate for a subthreshold \(p\)-wave resonance. A higher lying state with unknown spin-parity has been reported in Ref. [12] at \(E_{x}=11394(4)\) keV. The current evaluation states that the \({}^{25}\)Mg\((^{3}\)He\(,^{4}\)He\()^{24}\)Mg measurement of Ref. [65] also observes this higher state at \(11397(10)\) keV, but their angular distribution gives an \(\ell=1\) character, indicating it would be compatible with the lower \(J^{\pi}=1^{-}\) state. Ref. [16] finds a similar peak in their spectrum, but considered it a doublet because of the ambiguous shape of the angular distribution, Figure 5: The credibility intervals obtained for the elastic scattering fit compared to the measured yields relative to Rutherford scattering (\(Y_{R}/\sigma_{R}\)). The dark and light purple bands show the \(68\%\) and \(95\%\) credibility intervals, respectively. The measured error bars are smaller than the points, while the adjusted uncertainty of Eq. (19) that is inferred from the data is not shown. which was caused primarily by the behavior of the data above \(20^{\circ}\). Due to our angular distribution not having these higher angles, and considering the excellent agreement between our data and an \(\ell=1\) transfer, only the state at \(11389.2(12)\) keV with \(J^{\pi}=1^{-}\) was considered to be populated. The present calculation assumes a \(2p_{3/2}\) transfer and is shown in Fig. 6a. #### iv.2.2 The \(11453\)-keV State; \(-240\)-keV Resonance Two states lie in the region around \(11.45\) MeV, with the lower assigned \(J^{\pi}=2^{+}\) and the upper \(J^{\pi}=0^{+}\). The only study that reports a definitive observation of the \(0^{+}\), \(11460(5)\) keV state is the \((\alpha,\alpha_{0})\) of Ref. [66]. The current study and that of Ref. [16] indicate that there is a state around \(E_{x}=11452\) keV that shows a mixed \(\ell=0+2\) angular distribution. Since the ground state of \({}^{23}\)Na is non-zero, this angular distribution can be the result of a single \(2^{+}\) state, and the \(\ell=2\) component cannot be unambiguously identified with the higher lying \(0^{+}\) state. The \((p,p^{\prime})\) measurement of Ref. [22] also notes a state at \(11452(7)\) keV with \(\ell=2\). The excellent agreement between our excitation energy and the gamma ray measurement of Ref. [24] leads us to assume the full strength of the observed peak comes from the \(2^{+}\) state. The calcula Figure 6: DWBA calculations for the states of \({}^{24}\)Mg. 
The \(68\%\) and \(95\%\) credibility intervals are shown in purple and light purple, respectively. Only the data points shown in black were considered in each calculation, with the triangles being excluded based on the cross section increasing after the minimum value was reached. For the \(11826\) keV state, the \(68\%\) bands are shown for all of the \(\ell\) transfers between \(0\)-\(3\). tion shown in Fig. 6b assumes transfers with quantum numbers \(2s_{1/2}\) and \(1d_{5/2}\). #### v.2.3 The \(11521\)-keV State; \(-172\)-keV Resonance Another sub-threshold \(2^{+}\) state lies at \(11521.1(13)\) keV. It should be noted that another state with unknown spin-parity was observed at \(11528(4)\) keV in Ref. [12], but has not been seen on other studies. Ref. [12] reports a measured \(\Gamma_{\gamma}/\Gamma\approx 1\) for this new state, making it a candidate for an unnatural parity \({}^{24}\)Mg state. The present angular distribution, Fig. 6c, is indicative of a mixed \(\ell=0+2\) assignment. Thus, the observation is associated with the \(2^{+}\) state at \(11521.1(13)\) keV, and transfers were calculated using \(2s_{1/2}\) and \(1d_{5/2}\). #### v.2.4 The \(11694\)-keV State; \(1\)-keV Resonance For our measurement this state was partially obscured by a contaminant peak from the ground state of \({}^{17}\)F coming from \({}^{16}\)O(\({}^{3}\)He, \(d\))\({}^{17}\)F for \(\theta_{Lab}<9^{\circ}\). Previous measurements have established a firm \(4^{+}\) assignment, and our angular distribution is consistent with an \(\ell=2\) transfer. The fit for a \(1d_{5/2}\) transfer is shown in Fig. 6d. #### v.2.5 The \(11826\)-keV State; \(133\)-keV Resonance This state is also obscured at several angles by the fifth excited state of \({}^{15}\)O. The previous constraints on its spin parity come from the comparison of the extracted spectroscopic factors for each \(\ell\) value in Ref. [16] and the upper limits established in Ref. [13] and subsequently Ref. [14]. This DWBA analysis finds an angular distribution consistent with Ref. [16], which it should be noted experienced similar problems with the nitrogen contamination, but with the Bayesian model comparison methods presented in Sec. V.1, constraints can be set based purely on the angular distribution. All of the considered \(\ell\) transfer are shown in Fig. 6e, and were calculated assuming \(2s_{1/2}\), \(2p_{3/2}\), \(1d_{5/2}\), and \(1f_{7/2}\) transfers, respectively. The results of the nested sampling calculations, which give the relative probabilities of each transfer, are presented in Table 5 and shown in Fig. 7. The adopted values were taken to be the mean of these distributions instead of the median as in Ref. [39]. Since the statistical errors of the nested sampling are normally distributed in \(\ln Z\), the resulting probabilities are distributed log-normally. The choice of the mean instead of the median then amounts to selecting the arithmetic mean instead of the geometric mean, which ensures \(\sum_{\ell}P(\ell)=1\). \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \(E_{x}\) (keV) & \(J^{\pi}\) & \(\ell\)\({}^{\rm a}\) & \((2J_{f}+1)C^{2}S\) & \(med.\) & \(f.u.\) & Ref. 
[16] \\ \hline \(11389\) & \(1^{-}\) & 1 & \(0.066^{+0.021}_{-0.015}\) & \(0.067\) & \(1.30\) & \(0.06\) \\ \(11453\) & \(2^{+}\) & \(0+2\) & \(0.14^{+0.05}_{-0.04}\) + \(0.05^{+0.03}_{-0.02}\) & \(0.14\) + \(0.048\) & \(1.39\) + \(2.00\) & \(0.24\) + \(0.16\)\({}^{\rm b}\) \\ \(11521\) & \(2^{+}\) & \(0+2\) & \(0.05^{+0.03}_{-0.02}\) + \(0.057^{+0.024}_{-0.018}\) & \(0.055\) + \(0.056\) & \(1.61\) + \(1.51\) & \(0.10\)\({}^{\rm c}\) \\ \(11694\) & \(4^{+}\) & 2 & \(0.085^{+0.02}_{-0.018}\) & \(0.086\) & \(1.29\) & \(0.11\) \\ \(11826\) & & 0 & \(0.023^{+0.012}_{-0.007}\) & \(0.024\) & \(1.52\) & \(0.039\) \\ & & 1 & \(0.010^{+0.004}_{-0.003}\) & \(0.010\) & \(1.40\) & \(0.009\) \\ & & 2 & \(0.014^{+0.005}_{-0.003}\) & \(0.014\) & \(1.36\) & \(0.015\) \\ & & 3 & \(0.025^{+0.009}_{-0.006}\) & \(0.025\) & \(1.36\) & \(0.024\) \\ \(11861\) & \(1^{-}\) & 1 & \(0.022^{+0.007}_{-0.005}\) & \(0.022\) & \(1.32\) & \(0.026\) \\ \(11933\) & \((2\)-\(4)^{+}\) & 2 & \(0.23^{+0.07}_{-0.05}\) & \(0.24\) & \(1.30\) & \(0.25\) \\ \(11988\) & \(2^{+}\) & \(0+2\) & \(0.26^{+0.10}_{-0.07}\) + \(0.24^{+0.10}_{-0.07}\) & \(0.26\) + \(0.24\) & \(1.40\) + \(1.45\) & \(0.42\) + \(0.33\) \\ \(12017\) & \(3^{-}\) & 1 & \(0.20^{+0.06}_{-0.04}\) & \(0.20\) & \(1.30\) & \(0.13\) \\ \(12051\) & \(4^{+}\) & 2 & \(0.13^{+0.04}_{-0.03}\) & \(0.14\) & \(1.30\) & \(0.13\) \\ \(12183\) & \((1,2^{+})\) & 2 & \(0.12^{+0.04}_{-0.03}\) & \(0.12\) & \(1.34\) & \(0.13\) \\ \hline \hline \end{tabular} \({}^{\rm a}\) + in the context of mixed \(\ell\) transfers is simply a delineation between each \(\ell\) component. \({}^{\rm b}\) Ref. [16] assumed a doublet. The \((2J_{f}+1)C^{2}S\) values were taken from these two states. \({}^{\rm c}\) Ref. [16] assumed a doublet, with a portion of the strength assigned to a negative parity state. \end{table} Table 4: The values of \((2J_{f}+1)C^{2}S\) that were derived in this work compared to those of Ref. [16]. All values for this work give the \(68\%\) credibility interval from the posterior estimation. Additionally, the parameters of the corresponding log-normal distribution are listed. All spin parity information, except that of the \(11825\)-keV state, is taken from Ref. [21], and are updated based on the current observations. #### iv.2.6 The \(11861\)-keV State; \(168\)-keV Resonance There are two states within a few keV of one another reported to be in this region. One is known to have \(J^{\pi}=1^{-}\)[22], and has been populated in nearly all of the experiments listed in Table 1. The other state is reported to decay to the \(6^{+},8114\)-keV state, with a \(\gamma\)-ray angular distribution that favors an assignment of \(8^{+}\)[67]. The later polarization measurements of Ref. [68] support the assignment of \(8^{+}\). For our experiment, the tentative \(8^{+}\) state is likely to have a negligible contribution to the observed peak, and the angular distribution in Fig. 6f is consistent with a pure \(\ell=1\) transfer. The calculation assumed \(2p_{5/2}\). #### iv.2.7 The \(11933\)-keV State; \(240\)-keV Resonance The \(11933\)-keV State does not have a suggested spin assignment in the current ENSDF evaluation [21]. However, the earlier compilation of Ref. [28] lists a tentative \((2-4)^{+}\). The compilation assignment is justified from two pieces of evidence. First, the \(\ell=2\) angular distribution observed in the (\({}^{4}\)He,\({}^{3}\)He) measurement of Ref. [65] suggests (0-\(4)^{+}\). 
Second, the \(0^{+}\) and \(1^{+}\) assignments are ruled out from the observed \(\gamma\)-decays to the \(J^{\pi}=2^{+}\), \(1368\)-keV and \(J^{\pi}=4^{+}\), \(4122\)-keV states observed in Ref. [69]. Our measurement indicates an \(\ell=2\) transfer. Based on these observations, and the satisfactory ability to describe the angular distribution with \(\ell=2\), a \(1d_{5/2}\) transfer was calculated, and is shown in Fig. 6g. It should also be noted that Schmalbrock _et. al_ suggested that this state could be the analogue to a \(T=1\) state with spin \(3^{+}\) in \({}^{24}\)Na [30]. #### iv.2.8 The \(11988\)-keV State; \(295\)-keV Resonance As can be seen in Table 1, the \(11988\)-keV State has been observed in multiple experiments, including the high precision \(\gamma\)-ray measurement of Ref. [24]. A spin parity of \(2^{+}\) has been assigned based on the inelastic measurement of Ref. [22]. The current fit is shown in Fig. 6h and assumes a mixed \(\ell=0+2\) transition with \(2s_{1/2}\) and \(1d_{5/2}\). #### iv.2.9 The \(12017\)-keV State; \(324\)-keV Resonance The \(12017\)-keV state is known to have \(J^{\pi}=3^{-}\), which was established from the angular distributions of Ref. [70; 71] and confirmed by the inelastic scattering of Ref. [22]. Our angular distribution is consistent with an \(\ell=1\) transfer, and was calculated assuming \(2p_{3/2}\). The fit is shown in Fig. 6i. #### iv.2.10 The \(12051\)-keV State; \(359\)-keV Resonance The angular distribution of \(\alpha\)-particles from \({}^{23}\)Na\((p,\alpha)\) measured in Ref. [71] established \(J^{\pi}=4^{+}\) for the \(12051\)-keV state, which was later confirmed by the inelastic scattering of Ref. [22]. The angular distribution of the present work is well described by a transfer of \(1d_{5/2}\), which is shown in Fig. 6j. #### iv.2.11 The \(12183\)-keV State; \(491\)-keV Resonance Ref. [72] observed that the \(12183\)-keV state \(\gamma\)-decays to \(0^{+}\), \(2^{+}\), and \(1^{+}\) states, which permits values of \((1,2^{+})\). The angular distribution of Ref. [16] permits either \(\ell=2\) or \(\ell=0+2\) transfers, which requires the parity of this state be positive. The current work finds an angular distribution consistent with a pure \(\ell=2\) transfer. The calculation of the \(1d_{5/2}\) transfer is shown in Fig. 6k. \begin{table} \begin{tabular}{c c c c} \hline \hline \(\ell\) & \(\ln Z_{\ell}\) & \(B_{3\ell}\) & \(P(\ell)\) \\ \hline \(0\) & \(44.226(294)\) & \(47.79\) & \(1\%\) \\ \(1\) & \(45.990(289)\) & \(8.20\) & \(7\%\) \\ \(2\) & \(47.762(323)\) & \(1.39\) & \(39\%\) \\ \(3\) & \(48.093(293)\) & \(1.00\) & \(53\%\) \\ \hline \hline \end{tabular} \end{table} Table 5: Results of the model comparison calculations for the \(11826\) keV state. For each \(\ell\) value, We list the \(\log Z\) value calculated with nested sampling, the median Bayes factor when compared to the most likely transfer \(\ell=3\), and the mean probability of each transfer. Figure 7: The distributions from the nested sampling algorithm for the most likely \(\ell\) values for the \(11826\)-keV state. Proton partial widths The spectroscopic factors extracted in Sec. V.6 are only an intermediate step in the calculation of the \({}^{23}\)Na\((p,\gamma)\) reaction rate. From the proton spectroscopic factors of this work, proton partial widths can be calculated using \[\Gamma_{p}=C^{2}S\Gamma_{\rm sp}, \tag{28}\] where \(\Gamma_{\rm sp}\) is the single-particle partial width. 
If there is a mixed \(\ell\) transfer, then the total proton width is calculated using: \[\Gamma_{p}=\sum_{\ell}\Gamma_{p,\ell}. \tag{29}\] However, for our case the \(\ell=2\) single particle widths, \(\Gamma_{sp}\), are typically two orders of magnitude lower than the \(\ell=0\) ones, making them negligible in the calculations presented below.
### Bound State Uncertainties
There are additional sources of uncertainty impacting the determination of \(\Gamma_{p}\). One of the largest is the bound state parameters used to define the overlap function. Since the overlap function is extremely sensitive to the choice of Woods-Saxon radius and diffuseness parameters, the extracted spectroscopic factor can vary considerably. This dependence has been discussed extensively in the literature; for a review, see Ref. [73]. Ref. [39] confirmed this strong dependence in a Bayesian framework. If the uncertainties of \(C^{2}S\) are independent of those of \(\Gamma_{sp}\), then single-particle transfer reaction experiments that determine spectroscopic factors will be unable to determine \(\Gamma_{p}\) with the precision needed for many astrophysics applications. Ref. [57] noted an important consideration for the calculation of \(\Gamma_{p}\) from \(C^{2}S\) and \(\Gamma_{sp}\). If these quantities are calculated using the _same_ bound state potential parameters, the variation in \(C^{2}S\) is anticorrelated with that of \(\Gamma_{sp}\). Thus, the product of these two quantities, i.e., \(\Gamma_{p}\), has a reduced dependence on the chosen bound state potentials. Using the same bound state parameters for both quantities, Refs. [44; 16] found variations in \(\Gamma_{p}\) of \(\approx 5\%\). With the Bayesian methods of this study, we investigate whether this anticorrelation still holds in the presence of optical model uncertainties. The code BIND calculates \(\Gamma_{\rm sp}\) for a resonance at energy \(E_{r}\) with a Woods-Saxon potential. For additional details on this code see Ref. [74]. Modifications were made to the code so that it could be run on a set of tens of thousands of bound state samples to produce a set of \(\Gamma_{sp}\) samples. Due to the numerical instability of the integration for low energy resonances, the potential impact of the weak binding approximation, and the difficulties for mixed \(\ell\) transitions, the state selected for this calculation needs to have a resonance energy of \(500\gtrapprox E_{r}\gtrapprox 100\) keV, \(\ell\geq 2\), and a known spin parity. The only such state is at \(E_{x}=12051\) keV (\(E_{r}=359\) keV). A new MCMC calculation was carried out using the same model as Eq. (26) with the additional parameters for the bound state \(r_{0}\) and \(a_{0}\). These were given priors: \[r_{0} \sim\mathcal{N}(1.25,0.125^{2}) \tag{30}\] \[a_{0} \sim\mathcal{N}(0.65,0.065^{2}).\] The sampler was again run with \(400\) walkers taking \(8000\) steps. The final \(2000\) steps were thinned by \(50\) giving \(16000\) posterior samples. These samples were then plugged into BIND to produce the \(16000\) samples of \(\Gamma_{sp}\). Since these samples all come directly from the MCMC calculation they naturally account for the variations in the optical model parameters as well as \(C^{2}S\). First, it is worth establishing the bound state parameters' influence on the uncertainty of \(C^{2}S\). These samples were again well described by a log-normal distribution, with a factor uncertainty of \(f.u.=1.50\), increased from \(f.u.=1.30\) in the case of fixed bound state parameters.
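A schematic of the sample-based propagation just described is shown below. The arrays are placeholders with an artificial anticorrelation built in to mimic Fig. 8; in the analysis they come from the MCMC run with the bound-state geometry varied and from BIND evaluated on each sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Jointly sample log(C2S) and log(Gamma_sp) with an assumed anticorrelation.
s1, s2, rho = np.log(1.5), np.log(1.2), -0.8
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
log_c2s, log_gsp = rng.multivariate_normal([np.log(0.015), np.log(0.61)], cov, 16000).T

gamma_p = np.exp(log_c2s) * np.exp(log_gsp)   # Eq. (28), sample by sample

# Log-normal summary of the result: median and factor uncertainty (as in Table 4).
mu, sigma = np.log(gamma_p).mean(), np.log(gamma_p).std()
median, factor_uncertainty = np.exp(mu), np.exp(sigma)
```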
The pair correlation plot for \((2J_{f}+1)C^{2}S\) versus \(\Gamma_{sp}\) is shown in Fig. 8. The resulting distribution gives \((2J_{f}+1)\Gamma_{p}=0.083^{+0.025}_{-0.018}\) eV, while the value calculated using fixed bound state parameters gives \((2J_{f}+1)\Gamma_{p}=0.082^{+0.025}_{-0.018}\) eV. The cancellation between the variation in \(\Gamma_{sp}\) and \(C^{2}S\) is nearly exact in this case, with the resulting uncertainty being \(30\%\) in both calculations. The quantum numbers of the bound state, \(n\) and \(j\), can also a have a dramatic effect on the extracted spectroscopic factor. Repeating the above calculation assuming a \(2d_{5/2}\) state instead of a \(1d_{5/2}\) causes \(C^{2}S\) to drop to a value \(50\%\) lower. Once again, taking the MCMC samples of the bound state geometric parameters and running BIND with these parameters as well as \(n=2\) gives \((2J_{f}+1)\Gamma_{p}=0.090^{+0.029}_{-0.020}\) eV. This relation still requires further study using Bayesian methods, particularly the influence of the bound state quantum numbers \(n\) and \(j\), which cannot be determined from the transfer data, but for the present work the potential influence of the bound state parameters on \(\Gamma_{p}\) is considered negligible compared to the those of the optical model. ### Subthreshold Resonances Three of the observed states lie close enough to the proton threshold to be astrophysically relevant. The penetrability, \(P_{\ell}\), is undefined for \(E_{r}<0\), and therefore \(\Gamma_{sp}\) cannot be calculated for subthreshold states. Instead these resonances will be integrated using \(\theta^{2}=C^{2}S\theta_{sp}^{2}\). \(\theta_{sp}^{2}\) can be calculated using the fits provided in either Ref. [74] or Ref. [75]. We have adopted the fit of Ref. [74]. It should be noted that the fit of Ref. [74] was derived using the bound state parameters \(r_{0}=1.26\) fm and \(a_{0}=0.69\) fm which differ from those used in this work. The impact of this difference was investigated by using higher lying states where values of \(\theta_{sp}^{2}\) could also be calculated using BIND. The maximum observed deviation was \(10\%\), which is in decent agreement with the expected accuracy of the fit as mentioned in Ref. [74]. The values of \(\theta^{2}\) for this work are shown in Table 6. ### Resonances Above Threshold Eight resonances were observed above the proton threshold and below \(500\) keV. Except for \(E_{r}=2\), all of the \(\Gamma_{sp}\) values were calculated using BIND. BIND calculations were carried out with the Woods-Saxon potential parameters \(r_{0}=1.25\) fm, \(a_{0}=0.65\) fm, \(r_{c}=1.25\) fm, \(V_{so}=6.24\), and channel radius of \(1.25\) fm. The low resonance energy of \(E_{r}=2\) presented numerical challenges for BIND, so it was calculated using the fit of Ref. [74]. Our results are shown in Table 7. ### Discussion The literature for \(\omega\gamma\) values is extensive. Ref. [16] compiled and corrected previous measurements for stopping powers and target stoichiometry. Using those compiled values as well as the recent measurement of Ref. [15], comparisons can be made between the results of the current work and previous measurements. We choose to compare \((2J_{f}+1)\Gamma_{p}\) values deduced from \(\omega\gamma\) measurements instead of transforming our \((2J_{f}+1)\Gamma_{p}\) values into their associated \(\omega\gamma\). Knowledge of \(\Gamma_{\gamma}/\Gamma\) is required in order to carry out a comparison, which limits us to a select few of the many measured resonances. 
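For reference, these comparisons follow from the standard definition of the resonance strength, \(\omega\gamma=\frac{2J+1}{(2j_{p}+1)(2j_{t}+1)}\frac{\Gamma_{p}\Gamma_{\gamma}}{\Gamma}\); for \(p+{}^{23}\)Na (\(j_{p}=1/2\), \(j_{t}=3/2\)) the statistical denominator is \(8\), so \((2J_{f}+1)\Gamma_{p}=8\,\omega\gamma/(\Gamma_{\gamma}/\Gamma)\). A short numerical check against the values quoted below:

```python
def two_jf_plus_one_gamma_p(omega_gamma, gamma_gamma_over_gamma, spin_denominator=8.0):
    """(2Jf+1)*Gamma_p from a resonance strength and a branching ratio Gamma_gamma/Gamma."""
    return spin_denominator * omega_gamma / gamma_gamma_over_gamma

# 133-keV resonance: LUNA strength with Gamma_gamma/Gamma = 0.95 from Ref. [12]
print(two_jf_plus_one_gamma_p(1.46e-9, 0.95))   # ~1.2e-8 eV

# 295-keV resonance: omega*gamma = 1.08e-1 eV with Gamma_gamma/Gamma = 0.70
print(two_jf_plus_one_gamma_p(1.08e-1, 0.70))   # ~1.2 eV
```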
#### iv.3.1 \(133\)-keV Resonance The \(133\)-keV resonance was measured directly at a significance greater than \(2\sigma\) for the first time at LUNA and is reported in Ref. [15]. The value from that work is \(\omega\gamma=1.46^{+0.58}_{-0.53}\times 10^{-9}\) eV. Using \(\Gamma_{\gamma}/\Gamma=0.95(4)\) from Ref. [12] implies \((2J_{f}+1)\Gamma_{p}=1.23^{+0.49}_{-0.45}\times 10^{-8}\) eV. The upper limit reported in Ref. [14] can also be used for comparison and yields \((2J_{f}+1)\Gamma_{p}\leq 4.35\times 10^{-8}\) eV. The closest value from this work is the \(\ell=2\) transfer which gives \((2J_{f}+1)\Gamma_{p}=6.0^{+2.1}_{-1.5}\times 10^{-8}\) eV. The disagreement between our value and that of LUNA is stark, and a significant amount of tension exists with the upper limit of Ref. [14]. #### iv.3.2 \(168\)-keV Resonance Ref. [16] derived a proton width of \((2J_{f}+1)\Gamma_{p}=1.8(4)\times 10^{-4}\) eV for the \(168\)-keV Resonance using \(\omega\gamma_{(\alpha,\gamma)}\), \(\omega\gamma_{(p,\alpha)}\), and \(\Gamma\). This value is in good agreement with the current work \((2J_{f}+1)\Gamma_{p}=1.3^{+0.4}_{-0.3}\times 10^{-4}\) eV. #### iv.3.3 \(240\)-keV Resonance Using the resonance strength measured in Ref. [15] of \(\omega\gamma=4.8(8)\times 10^{-4}\) eV and \(\Gamma_{\gamma}/\Gamma>0.7\) from Ref. [12], \((2J_{f}+1)\Gamma_{p}\) has a lower limit of \(3.8(6)\times 10^{-3}\) eV, which is in mild tension with the transfer value of \(2.5(7)\times 10^{-3}\) eV. #### iv.3.4 \(295\)-keV Resonance Ref. [15] measured \(\omega\gamma=1.08(19)\times 10^{-1}\) eV, while Ref. [12] gives \(\Gamma_{\gamma}/\Gamma=0.70(9)\). In this case, \((2J_{f}+1)\Gamma_{p}=1.2(2)\) eV. The current value is in significant disagreement with \((2J_{f}+1)\Gamma_{p}=4.0^{+1.5}_{-1.1}\) eV. \begin{table} \begin{tabular}{c c c c c} \hline \hline \(E_{x}\)(keV) & \(E_{r}\)(keV) & \(J^{\pi}\) & \(\theta_{sp}^{2}\) & \((2J_{f}+1)\theta^{2}\) \\ \hline \(11389.2(12)\) & \(-303.5(12)\) & \(1^{-}\) & \(0.738\) & \(0.049^{+0.016}_{-0.011}\) \\ \(11452.9(4)\) & \(-239.8(4)\) & \(2^{+}\) & \(0.654\) & \(0.09^{+0.03}_{-0.03}\) \\ \(11521.1(13)\) & \(-171.6(13)\) & \(2^{+}\) & \(0.639\) & \(0.035^{+0.018}_{-0.013}\) \\ \hline \hline \end{tabular} \end{table} Table 6: Reduced width calculations for the observed subthreshold resonances. All \(\theta_{sp}^{2}\) values were calculated using the fit of Ref. [74] and should be considered to have a \(10\%\) systematic uncertainty. The \(68\%\) credibility intervals of the samples are presented in the fifth column. Figure 8: Pair correlation plot for the MCMC posterior samples of \(\Gamma_{sp}\) and \((2J_{f}+1)C^{2}S\) for the \(12051\)-keV state. A strong anticorrelation exists when the same bound state parameters are used to calculate both quantities, resulting in \(\Gamma_{p}\) having less sensitivity to these parameters. #### iv.4.5 \(491\)-keV Resonance The \(490\)-keV Resonance is considered a standard resonance for the \({}^{23}\)Na(\(p,\gamma\)) reaction, and has a value of \(9.1(12)\times 10^{-2}\) eV [77]. Unfortunately, \(\Gamma_{\gamma}/\Gamma\) is not known. However, an upper limit for \(\omega\gamma_{(p,\alpha)}\) has been set at \(\leq 0.011\) eV [16]. The ratio of the two resonances strengths can set an upper limit for \(\Gamma_{\alpha}/\Gamma_{\gamma}\): \[\frac{\omega\gamma_{(p,\alpha)}}{\omega\gamma_{(p,\gamma)}}=\frac{\Gamma_{ \alpha}}{\Gamma_{\gamma}}. \tag{31}\] Plugging in the values gives \(\Gamma_{\alpha}/\Gamma_{\gamma}\leq 0.12\). 
Assuming \(\Gamma_{p}\ll\Gamma_{\gamma},\Gamma_{\gamma}/\Gamma\geq 0.89\). The current value for \((2J_{f}+1)\Gamma_{p}=1.1^{+0.4}_{-0.3}\) eV which can be compared to the upper limit of the standard resonance of \((2J_{f}+1)\Gamma_{p}=0.82(11)\) eV. If we assume the \(\alpha\) channel is completely negligible, \((2J_{f}+1)\Gamma_{p}=0.73(10)\) eV. The standard resonance value appears to be consistent with the current work. ### Final Remarks on Proton Partial Widths The above comparisons make it clear that the agreement between the current experiment and previous measurements is inconsistent. Of particular concern are the \(133\)-keV and \(295\)-keV resonances, in which the disagreement is at a high level of significance. However, the measurement of Ref. [15] at LUNA used the \(295\)-keV resonance as a reference during the data collection on the \(133\)-keV resonance, which could explain some correlation between those resonance strengths when compared to this work. On the other hand, LUNA's value of \(\omega\gamma=1.08(19)\times 10^{-1}\) eV is in excellent agreement with the value given in the compilation of Endt [28], \(\omega\gamma=1.05(19)\times 10^{-1}\) eV, which normalized the value of Ref. [78] to the standard resonance at \(491\)-keV. These comments are not meant to brush aside the serious issues that come with extracting proton partial widths from DWBA calculations, but to highlight that any comparison between direct and indirect measurements involves data from several sources, each of which have their own systematic uncertainties complicating the conclusions that can be drawn. There is a need for detailed, systematic studies to determine the reliability of \(\Gamma_{p}\) values extracted from transfer reactions at energies relevant to astrophysics. It is also worth reiterating the comment first made in Ref. [17], the updated resonance energy of \(133\) keV compared to the previously assumed \(138\) keV could impact the assumption of a thick target yield curve made in Ref. [15; 14]. The significantly lower energy has the potential to move the beam off of the plateau of the yield curve, further affecting the extracted resonance strength, but the magnitude of this effect is difficult to estimate. However, the measurement of Ref. [14] made at LUNA has an upper limit that is consistent with the LUNA value and is in tension with the current work. Importantly, their upper limit also assumed the \(138\)-keV resonance energy, but used a much thicker target (\(\approx 30\) keV) than the LUNA measurement (\(\approx 15\) keV) making it less sensitive to the resonance energy shift. Again it should be mentioned that all of this discussion presupposes that the proton state has \(\ell=2\) and that our observed angular distribution arises completely from a direct reaction mechanism. If the spin is one of the other possible values, the current results will differ by over an order of magnitude, which could indicate the observed yields have significant contributions from a compound reaction mechanism. 
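As a numerical cross-check of the \(491\)-keV standard-resonance argument above (assuming \(\Gamma\approx\Gamma_{\gamma}+\Gamma_{\alpha}\) when \(\Gamma_{p}\ll\Gamma_{\gamma}\), and the same spin factor of \(8\) used for the other resonances):

```python
omega_gamma_pg = 9.1e-2      # eV, standard (p,gamma) resonance strength [77]
omega_gamma_pa_max = 0.011   # eV, upper limit on the (p,alpha) strength [16]

# Eq. (31): the ratio of strengths bounds Gamma_alpha / Gamma_gamma.
ratio_max = omega_gamma_pa_max / omega_gamma_pg   # ~0.12

# With Gamma_p << Gamma_gamma, Gamma ~ Gamma_gamma + Gamma_alpha, so
gg_over_g_min = 1.0 / (1.0 + ratio_max)           # ~0.89

# Implied (2Jf+1)*Gamma_p from the standard strength:
print(8.0 * omega_gamma_pg / gg_over_g_min)       # ~0.82 eV (upper limit quoted in the text)
print(8.0 * omega_gamma_pg)                       # ~0.73 eV if the alpha channel is negligible
```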
\begin{table} \begin{tabular}{c c c c c c} \(E_{x}\)(keV) & \(E_{r}\)(keV) & \(J^{\pi}\) & \(\Gamma_{sp}\)(eV) & \((2J_{f}+1)\Gamma_{p}\)(eV) This Work & \((2J_{f}+1)\Gamma_{p}\)(eV) Previous Work \\ \hline \(11694(4)\) & \(1(4)\) & \(4^{+}\) & \(2.589\times 10^{-140}\) a & \(2.2^{+0.6}_{-0.5}\times 10^{-141}\) & \\ \(11826(3)\) & \(133(3)\) & \(\ell=0\) & \(1.092\times 10^{-03}\) & \(2.6^{+1.3}_{-0.8}\times 10^{-5}\) & \\ & & \(\ell=1\) & \(2.314\times 10^{-04}\) & \(2.2^{+0.9}_{-0.6}\times 10^{-6}\) & \\ & & \(\ell=2\) & \(4.949\times 10^{-06}\) & \(6.7^{+2.4}_{-1.7}\times 10^{-8}\) & \(1.23^{+0.49}_{-0.45}\times 10^{-8}\) b \\ & & \(\ell=3\) & \(6.157\times 10^{-08}\) & \(1.5^{+0.5}_{-0.4}\times 10^{-9}\) & \\ \(11860.8(14)\) & \(168.1(14)\) & \(1^{-}\) & \(5.894\times 10^{-3}\) & \(1.3^{+0.4}_{-0.3}\times 10^{-4}\) & \(1.8(4)\times 10^{-4}\) c \\ \(11933.06(19)\) & \(240.37(19)\) & \((2\)\)-\(4\)1 & \(1.034\times 10^{-2}\) & \(2.4^{+0.7}_{-0.5}\times 10^{-3}\) & \(1.2(2)\) b \\ \(11988.45(6)\) & \(295.76(6)\) & \(2^{+}\) & \(15.39\) & \(4.0^{+1.5}_{-1.1}\) & \\ \(12016.8(5)\) & \(324.1(5)\) & \(3^{-}\) & \(8.550\) & \(1.7^{+0.5}_{-0.4}\) & \\ \(12051.3(4)\) & \(358.6(4)\) & \(4^{+}\) & \(6.141\times 10^{-1}\) & \(8.2^{+2.5}_{-1.8}\times 10^{-2}\) & \\ \(12183.3(1)\) & \(490.6(1)\) & \((1,2)^{+}\) & \(9.318\) & \(1.1^{+0.4}_{-0.3}\) & \\ \end{tabular} \end{table} Table 7: Proton partial widths derived from this work. The values of \(\Gamma_{sp}\) from BIND are listed for reference. \((2J_{f}+1)\Gamma_{p}\) values are given in terms of their \(68\%\) credibility intervals. ## VII The \({}^{23}\)Na\((p,\gamma)\) and \({}^{23}\)Na\((p,\alpha)\) reaction rates There exists a formidable amount of data relevant to the \({}^{23}\)Na\((p,\gamma)\) and \({}^{23}\)Na\((p,\alpha)\) reaction rates. The values compiled in Ref. [16] make up the majority of the current STARLIB rates [79]. A detailed reanalysis of these rates is likely needed, but is well beyond the scope of the current work. As such, we focus our efforts on showing the astrophysical implications of the results presented above. To do this we construct two updated versions of the rates in Ref. [15], which are themselves updates of STARLIB Version v6.5 [79]. The first update (called New1) uses all of our recommended resonance energies presented in Sec. IV.1 and scales the STARLIB proton partial widths for consistency. The second update (called New2) is a more exploratory study that, in addition to the updated energies, replaces the resonance strength for the \(133\)-keV resonance reported in Ref. [15] with the proton partial widths measured in this work (Table 7) using the probabilities for \(\ell=0\)-\(3\) transfers from Table 5. New2 also makes corrections to subthreshold resonances involved in the \((p,\alpha)\) rate. All rates and their uncertainties were calculated using the Monte-Carlo reaction rate code RatesMC [76]. ### Energy Update The resonance energies presented in Table 2 were substituted into the RatesMC input files provided by STARLIB for the \((p,\gamma)\) and \((p,\alpha)\) rates. Particle partial widths were scaled as needed to reflect the new energies. Normalizing the rates to their own median produces the reaction rate ratios shown in Fig. 9. The blue contours centered around one show the \(68\%\) coverage of the rates of this work, while the gray contour is the ratio of the rate as determined by LUNA [15] to the updated rate. 
The influence of the new energy for the \(133\)-keV resonance on the \((p,\gamma)\) rate can be clearly seen. Recall that the resonance energy enters the rate exponentially, and in this case the \(5\)-keV shift in energy is responsible for the rate increasing by a factor of \(2\) for temperatures of \(70\)-\(80\) MK. The impact of the new energies on the \((p,\alpha)\) rate are more modest. A factor of \(1.25\) increase is observed as a result of the lower energy for the \(168\)-keV resonance resulting from the exclusion of Hale's measurement from the weighted average. The updated rate is still well within the uncertainty of the current STARLIB rate. ### Partial Widths Update The partial widths extracted in this study are consistent with those reported in Ref. [16]. However, it was found that \(\theta^{2}\) value for the \(-304\)-keV resonance was erroneously translated using the value of \((2J+1)C^{2}S\) instead of \(C^{2}S\) in Ref. [80], making this subthreshold \(p\)-wave resonance appear \(\times 3\) stronger. When corrected, the sub-threshold region is dominated primarily by the two \(s\)-wave resonances at \(-240\) keV and \(-172\) keV, and the rate at lower temperatures is increased. In the case of the \(133\)-keV resonance, we substitute our proton partial widths weighted by the probabilities given in Table. 5. Folding different \(\ell\) probabilities estimated directly from transfer reactions is only possible due to the Bayesian methods developed in Ref. [39] and the Monte-Carlo reaction rate developed in Ref. [76; 81]. The net effect is a dramatically more uncertain rate in the temperature ranges relevant to globular cluster nucleosynthesis, as can be seen in Fig. 10. New2 uses the same energy value updates as New1. ### AGB Models The impact of our updated sodium destruction rates was examined in the context of intermediate mass (M \(\gtrsim 4M_{\odot}\), depending on metallicity) AGB stellar environments. AGB models that are sufficiently massive to enter the thermally pulsing AGB (also dependent on initial metallicity, see Ref. [82]) can activate the NeNa cycle within the intermittent hydrogen burning shell for temperatures greater than \(~{}15\) MK. Hydrogen burning can also occur at the base of the convective envelope if temperatures exceed \(50\) MK, with the NeNa cycle operating for \(T>80-100\) MK. This process is known as hot bottom burning (HBB) and can lead to significant enhancement of hydrogen burning products in the envelope [83; 84; 85; 86; 87; 88; 89]. AGB stars that undergo HBB provide a possible explanation for the Na-O abundance anomaly of globular clusters, though they and other models cannot account for all observations [8]. The competition between \({}^{23}\)Na production via the NeNa cycle and \({}^{23}\)Na destruction via the proton capture channels in question is not limited exclusively to the thermally pulsing AGB. \({}^{23}\)Na produced during the main sequence can be mixed to the stellar surface during the first and second (for intermediate mass stars) dredge up events. For further details of AGB evolution and nucleosynthesis, we refer the reader to Refs. [82; 89; 90; 91; 92; 93; 94]. Specifically for \({}^{23}\)Na production in AGB stars, see Refs. [95; 96; 97; 98; 99; 100; 101; 102; 103; 104; 105]. The models discussed in this section all achieve temperatures sufficient for HBB, but of varying efficiencies. 
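Before turning to the stellar models, the factor-of-two increase quoted in Sec. VII.1 can be recovered from the exponential dependence of a narrow-resonance rate on \(E_{r}\); a rough check, ignoring the slowly varying prefactors:

```python
import numpy as np

k_B = 8.617e-5   # eV / K

def rate_ratio(delta_e_r_ev, temperature_k):
    """Change in a narrow-resonance contribution, ~ omega_gamma * exp(-E_r / kT),
    when E_r is lowered by delta_e_r_ev at fixed strength."""
    return np.exp(delta_e_r_ev / (k_B * temperature_k))

# 138 keV -> 133 keV shift evaluated at 70 and 80 MK:
print(rate_ratio(5.0e3, 7.0e7))   # ~2.3
print(rate_ratio(5.0e3, 8.0e7))   # ~2.1
```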
We choose a range of initial masses and metallicities (where \(Z\) denotes the initial mass fraction of all elements heavier than helium): 4 and \(7M_{\odot}\) at \(Z=0.001\), 4\(M_{\odot}\) at \(Z=0.0028\), 6 and 8\(M_{\odot}\) at \(Z=0.014\) (solar metallicity; Asplund _et al._ 106). The evolutionary properties of the \(Z=0.014\) models were previously published in Refs. [86; 107], the \(Z=0.0028\) model in Ref. [88] and the \(7M_{\odot}\), \(Z=0.001\) model in Ref. [108]. We note that in general, lower metallicities and higher initial masses will produce higher temperatures at the base of the envelope. The lowest metallicity at \(7M_{\odot}\) contains the highest temperatures, the lower mass model, 4\(M_{\odot}\), at the same metallicity contains the coolest. The evolutionary sequences were run with the Monash stellar evolution code ([109; 110; 111]; described most recently in Ref. [82; 107] and Ref. [112]) and post processing nucleosynthesis code (see Refs. [113; 114; 115]). Briefly, the evolution of the models is run from the zero-age main sequence to the tip of the thermally pulsing AGB. Mass loss on the red giant branch (RGB) is only included in the 4M, Z=0.001 model. The quantity of mass lost from intermediate mass stars is typically insignificant on the RGB owing to their short lifetimes in this phase. For the \(4M_{\odot}\), \(Z=0.001\) model, the approximation of Ref. [116] is used, with parameter \(\eta_{R}\)=0.477 based on Ref. [117]. For AGB mass-loss, we use the semi-empirical mass-loss rate in all the models [118], except for the \(4M_{\odot}\), \(Z=0.001\) model in which we use the method of Ref. [119] with \(\eta=0.02\) (see treatment of mass-loss in Ref. [88] for intermediate-mass AGB stars). We treat convection using the Mixing-Length Theory (MLT) of convection, with the MLT parameter, \(\alpha_{\rm MLT}\), set to 1.86. We assume instantaneous mixing in convective regions and use the method of _relaxation_[109] to determine the borders of convective regions. We use the AESOPUS low temperature opacity tables of Ref. [120] and the OPAL opacities [121] for high temperature regions. The evolution code follows six isotopes, \({}^{1}\)H, \({}^{3}\)He, \({}^{4}\)He, \({}^{12}\)C, \({}^{14}\)N and \({}^{16}\)O, adopting the rates of Refs. [122; 123; 124]. These sequences are then fed into a post processing nucleosynthesis code, which follows 77 isotope species from hydrogen to sulfur, with a few iron-peak nuclei [125]. We use solar scaled initial compositions based on the solar abundances of Ref. [106]. Excluding the \({}^{23}\)Na proton capture rates investigated in this paper, the \(Z=0.001\) and \(Z=0.014\) models use the nuclear reaction rates from the 2016 default JINA REACLIB database [126]. For the \(Z=0.0028\) model, the rates were updated to the 2021 default set from the same database. For all models, we ran three calculations with different sets of median rates: LUNA's latest experimental results (LUNA), Figure 10: New2 reaction rates normalized to their median. The gray contour is the recommend rate of Ref. [15] normalized to the New2 rate’s median (left) The \((p,\gamma)\) reaction rate ratio. Large variations are seen around 80 MK due to the uncertain spin parity assignment for the \(133\)-keV resonance. (right) The reaction rate ratio plot for the \((p,\alpha)\) rate. Figure 9: New1 reaction rates normalized to their median. (left) The \((p,\gamma)\) reaction rate taken from Ref.[17]. The blue contours show the relative uncertainty as a function of temperature. 
The gray contour is the recommend rate of Ref. [15] normalized to the updated rate’s median. Both contours show \(68\%\) coverage. (right) The reaction rate ratio plot for the \((p,\alpha)\) rate. the energy updates from this paper (New1, Sec. VII.1), and the partial width updates (New2, Sec. VII.2). LUNA and New1 differ only in the temperature range of 60-80 million K, as seen in Fig. 9. In this range, the New1 sodium destruction rates are faster due to the influence of the \(133\)-keV resonance in the \({}^{23}\)Na\((p,\gamma)\) reaction. For each model, we also calculate the stellar yields. The yields are the integrated mass expelled from the model over its lifetime, where a positive yield for an isotope indicates net production of that species, a negative indicates net destruction of that species. These are shown in Table 8. The largest impact of the New1 rates is found in the 8\(M_{\odot}\), solar metallicity model, which holds a 5\(\%\) variation in yields between New1 and LUNA. This is most likely due to this particular model experiencing the largest duration of HBB within the temperature range for which the difference between New1/New2 and LUNA is at a maximum. The next largest variation is seen by the 6\(M_{\odot}\), solar metallicity model which shows a 2\(\%\) difference between the yields. The variation for both 4\(M_{\odot}\) models and the 7\(M_{\odot}\), \(Z=0.001\) model are less than 1\(\%\). The \({}^{20}\)Ne abundances mirror those of \({}^{23}\)Na, where higher quantities of \({}^{20}\)Ne are found with the New1 rates. There is almost no difference in \({}^{24}\)Mg between the rates for any of the models. It would therefore appear that most of the variation in the \({}^{23}\)Na yields between the rates is coming from the small variations in the median \({}^{23}\)Na\((p,\alpha)\) rate. For one chosen model, 6\(M_{\odot}\) and \(Z=0.014\), we ran a further six calculations (two combinations for each set of rates), to estimate the potential impact of the rate uncertainties. For each of the three rates, a high and low rate were run. These correspond to the \(16^{\text{th}}\) and \(84^{\text{th}}\) percentile of both the \((p,\gamma)\) and \((p,\alpha)\) rates. Thus, the high rate has an increase in both destructive rates and the low rate has a corresponding decrease in both. We show these results in Figure. 11. Even with the conservative uncertainties of New2, there appears to be very little impact on \({}^{23}\)Na production. ### Discussion The \({}^{23}\)Na abundances and the initial mass and metallicity thresholds for which the updated rates show maximum variation should be considered qualitatively. There are various uncertainties in stellar modeling that directly impact the temperature at the base of the convective envelope. These uncertainties can skew the exact amount of \({}^{23}\)Na that is destroyed. For example, we use the MLT to treat convective regions in the Monash code. Other methods, such as the Full Spectrum of Turbulence [127; 128] used in the ATON code [129; 130], are known to produce higher temperatures at the base of the convective envelope. Consequently, HBB occurs at a lower initial stellar mass [131]. The choice of mass loss rate on the AGB will also impact HBB. The mass loss rate of Ref. [118] is slower than that of Ref. [119] when used in intermediate-mass stellar models (e.g., see discussion in Ref. [86]). The mass loss rate of Ref. 
[118] results in more thermal pulses, a longer AGB lifetime, and consequently the base of the envelope will spend longer at higher temperatures. Hence the 4\(M_{\odot}\) model of \(Z=0.0028\) from Ref. [88] achieves much higher temperatures at the base of the envelope compared to the model of the same mass and metallicity evolved with the mass loss rate of Ref. [119]. There is a general pattern of faster \({}^{23}\)Na destruction with the New1 rates as opposed to LUNA in AGB models that spend significant time during HBB in the key temperature range of \(60\)-\(80\) million K. However, the exact initial mass and metallicity thresholds for which HBB at this temperature range occurs are heavily dependent on the stellar evolution code utilized alongside the chosen input physics. Simple single zone calculations still indicate the importance of the \({}^{20}\)Ne\((p,\gamma)\), \({}^{23}\)Na\((p,\alpha)\), and \({}^{23}\)Na\((p,\gamma)\) reaction rates, but the above calculations emphasize that stellar modeling uncertainties dominate once a polluter candidate is chosen.
## VIII Conclusions and Outlook
Utilizing the high resolution capabilities of the TUNL SPS, astrophysically important excited states in \({}^{24}\)Mg were populated via the \((^{3}\text{He},d)\) transfer reaction. Careful calibration and compilation of previous results give a significantly lower resonance energy for the \(133\)-keV resonance. This resonance has the single largest contribution to the \((p,\gamma)\) reaction rate at temperatures important for globular cluster nucleosynthesis. Angular distributions were analyzed using the Bayesian DWBA methods of Ref. [39], and spectroscopic factors were extracted. Methods were developed to deal with the additional challenges presented by \({}^{23}\)Na\((^{3}\text{He},d)^{24}\)Mg: mixed \(\ell\) transfers, a severe discrete ambiguity, and data that needed absolute scaling established during the fitting. These advances mean that our analysis is the first of its kind, where Bayesian methods were used to accurately determine uncertainties at every step of the analysis of a transfer reaction. Figure 11: \({}^{23}\)Na surface abundance for median (solid line) and low/high (dashed lines) reaction rates for LUNA, New1, and New2 (see text for details). The high rates for the \({}^{23}\)Na\((p,\gamma)\) and \({}^{23}\)Na\((p,\alpha)\) lead to lower \({}^{23}\)Na surface abundances and vice versa for the low rates. Model number is a proxy for time. The impact of rate uncertainties shown here is small. As a result of the above effort, astrophysical reaction rates derived from such experiments will naturally reflect the underlying nuclear physics uncertainties. The astrophysical impact of these uncertainties was briefly investigated. Our work indicates that the unknown astrophysical conditions still dominate the total uncertainty. However, in a given environment significant variation still exists due to uncertainties in the NeNa cycle. The results of this experiment indicate that uncertainties are still present in both the \({}^{23}\)Na\((p,\gamma)\) and \({}^{23}\)Na\((p,\alpha)\) reaction rates. The direct capture component of the rate, which dominates at temperatures lower than 60 MK and is significant up to 70 MK, has recently been updated by Boeltzig _et al._[132]. Larger uncertainties on the direct capture component were found due to previously neglected interference effects and an assumption of larger uncertainties on \(C^{2}S\).
Updated spectroscopic factors for bound states could significantly alter the behavior of the low temperature portion of the reaction rate. We suggest that future work focus on the direct capture component of the \({}^{23}\)Na\((p,\gamma)\) rate, the precise energy determination of the \(133\)-keV resonance, and the sub-threshold region of \({}^{23}\)Na\((p,\alpha)\). At this time our knowledge of the Na-O anti-correlation in globular clusters is still limited by the nuclear physics. ###### Acknowledgements. The authors would like to thank Christian Iliadis for his valuable input and the TUNL technical staff for their assistance during the experiment. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under Award No. DE-SC0017799 and Contract No. DE-FG02-97ER41041. G.C.C and A.LK are supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. ## Appendix A The energies reported by Hale _et al._ As first reported in Ref. [17], a significant disagreement exists between the results of our measurement and those of Ref. [16]. Of particular concern is the state corresponding to the \(138\)-keV resonance, whose mean values falls \(\approx 9\) keV below what is reported in Ref. [16]. A disagreement of this magnitude is of particular concern since the previous measurement was also performed at TUNL using the SPS. Studying the information reported in Ref. [16], measured energies are reported for a single region of the focal plane that covers \(\approx 400\) keV. These energies are extracted from a \(3^{\rm rd}\) order polynomial calibration based on twelve states surrounding the mentioned region. Of these twelve states, the most interior, i.e, the states that begin and end the interpolated region, were states identified as \(11330\) keV and \(12184\) keV, respectively. Comparing the spectrum from this work and that shown in Fig. 3 of Ref. [16], the state labeled \(11330\) keV in their spectrum corresponds to the state identified as \(11317(3)\) keV in this work. Ref. [21] lists two states around this energy range, one with \(E_{x}=11314.4(15)\) keV and the other \(E_{x}=11330.2(10)\) keV. Neither of these states has an unambiguous spin parity assignment in the current evaluation, but the preceding compilation of Ref. [28] identified the lower energy state (\(11314\) keV) as \((3,4)^{+}\) and the higher (\(11330\) keV) as \((2^{+}\)-\(4^{+})\). These assignments seem to be in tension with the \((p,p^{\prime})\) angular distribution of Ref. [22], which assigns the lower lying state \(\ell=3\) giving \(J^{\pi}=3^{-}\). However, Ref. [37] reports \(\log ft=5.19(14)\) for \({}^{24}\)Al\((\beta^{+})\) ( ground state \(J^{\pi}=4^{+}\)), which based on the empirical rules derived in Ref. [38] requires an allowed decay giving \((3,4,5)^{+}\) for this state. In light of these discrepancies, it is hard to reach a firm conclusion about the identity of the state populated in this work and Ref. [16]. One method to investigate the disagreement is to recalibrate our data using the calibration states of the previous study. This cannot be considered a one-to-one comparison because of the Bayesian method used to calibrate the focal plane and the different focal plane detectors used in each study, but it should show the impact of misidentifying the states around \(11320\) keV. To be specific, we consider two sets of energies: 1. 
The adopted results of this work from Sec. IV (Set \(\#1\)). 2. The peak centroids of this work, energy calibrated using the calibration states of Hale _et al._ (Set \(\#2\)). Table 9 reports these two sets of energies and compares them to Ref. [16]. Using the same calibration for our data (Set \(\#2\)) produces results consistent with Ref. [16]. \begin{table} \begin{tabular}{c|c c c|c c c|c c c} & \multicolumn{3}{c|}{\({}^{23}\)Na} & \multicolumn{3}{c|}{\({}^{20}\)Ne} & \multicolumn{3}{c}{\({}^{24}\)Mg} \\ & LUNA & New1 & New2 & LUNA & New1 & New2 & LUNA & New1 & New2 \\ \hline m4z001 & \(1.65\times 10^{-5}\) & \(1.64\times 10^{-5}\) & \(1.61\times 10^{-5}\) & \(-1.29\times 10^{-6}\) & \(-1.29\times 10^{-6}\) & \(-9.48\times 10^{-7}\) & \(-1.69\times 10^{-6}\) & \(-1.68\times 10^{-6}\) & \\ m7z001 & \(-1.05\times 10^{-5}\) & \(-1.05\times 10^{-5}\) & \(-1.02\times 10^{-5}\) & \(2.19\times 10^{-5}\) & \(2.20\times 10^{-5}\) & \(2.12\times 10^{-5}\) & \(-2.31\times 10^{-4}\) & \(-2.31\times 10^{-4}\) & \\ m4z0028 & \(2.48\times 10^{-5}\) & \(2.47\times 10^{-5}\) & \(2.47\times 10^{-5}\) & \(2.01\times 10^{-5}\) & \(2.00\times 10^{-5}\) & \(2.02\times 10^{-5}\) & \(-7.94\times 10^{-6}\) & \(-7.94\times 10^{-6}\) & \(-7.93\times 10^{-6}\) \\ m6z014 & \(9.08\times 10^{-5}\) & \(8.89\times 10^{-5}\) & \(9.19\times 10^{-5}\) & \(3.37\times 10^{-6}\) & \(4.79\times 10^{-6}\) & \(2.57\times 10^{-6}\) & \(-2.68\times 10^{-4}\) & \(-2.68\times 10^{-4}\) & \(-2.69\times 10^{-4}\) \\ m8z014 & \(1.05\times 10^{-4}\) & \(1.00\times 10^{-4}\) & \(9.28\times 10^{-5}\) & \(2.37\times 10^{-5}\) & \(2.76\times 10^{-5}\) & \(2.73\times 10^{-5}\) & \(-1.79\times 10^{-3}\) & \(-1.79\times 10^{-3}\) & \(-1.80\times 10^{-3}\) \\ \end{tabular} \end{table} Table 8: Stellar yields for stable isotopes of interest for various masses and metallicities. The above discussion presents the evidence that led to the decision to exclude the excitation energies of Ref. [16] from the recommended energies of the current work. There is reasonable cause to do this at the current time, but further experiments are needed to firmly resolve this issue.
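As an illustration of the consistency check described above, the following minimal sketch performs a simple (non-Bayesian) version of the recalibration: a third-order polynomial is fit to a set of calibration states and then evaluated at measured peak centroids. The focal-plane positions and energies used here are arbitrary placeholders, not the values from this work or from Ref. [16].

```python
import numpy as np

# Placeholder calibration data: focal-plane centroid positions (channels) and
# adopted excitation energies (keV) for twelve calibration states.
# These numbers are illustrative only.
calib_position = np.linspace(120.0, 980.0, 12)
calib_energy = 11100.0 + 1.05 * calib_position + 2.0e-4 * calib_position**2

# Third-order polynomial focal-plane calibration, as described above.
coeffs = np.polyfit(calib_position, calib_energy, deg=3)

# Recalibrate the peak centroids of interest (placeholder positions).
peak_positions = np.array([305.2, 512.8, 774.1])
recalibrated_energy = np.polyval(coeffs, peak_positions)

# Comparing these energies with those from the adopted calibration shows the
# impact of misidentifying a calibration state near 11320 keV.
print(recalibrated_energy)
```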
2310.06010
Forecast Cosmological Constraints with the 1D Wavelet Scattering Transform and the Lyman-$\alpha$ forest
We make forecasts for the constraining power of the 1D Wavelet Scattering Transform (WST) when used with a Lyman-$\alpha$ forest cosmology survey. Using mock simulations and a Fisher matrix, we show that there is considerable cosmological information in the scattering transform coefficients not captured by the flux power spectrum. We estimate mock covariance matrices assuming uncorrelated Gaussian pixel noise for each quasar, at a level drawn from a simple lognormal model. The extra information comes from a smaller estimated covariance in the first-order wavelet power, and from second-order wavelet coefficients which probe non-Gaussian information in the forest. Forecast constraints on cosmological parameters from the WST are more than an order of magnitude tighter than for the power spectrum, shrinking a $4D$ parameter space by a factor of $10^6$. Should these improvements be realised with DESI, inflationary running would be constrained to test common inflationary models predicting $\alpha_s = - 6\times 10^{-4}$ and neutrino mass constraints would be improved enough for a $5-\sigma$ detection of the minimal neutrino mass.
Hurum Tohfa, Simeon Bird, Ming-Feng Ho, Mahdi Qezlou, Martin Fernandez
2023-10-09T17:59:34Z
http://arxiv.org/abs/2310.06010v2
Forecast Cosmological Constraints with the 1D Wavelet Scattering Transform and the Lyman-\(\alpha\) forest ###### Abstract We make forecasts for the constraining power of the 1D Wavelet Scattering Transform (WST) in the context of Lyman-\(\alpha\) forest cosmology. Using mock simulations and a Fisher matrix, we show that there is considerable cosmological information in the scattering transform coefficients. We estimate mock covariance matrices assuming uncorrelated Gaussian pixel noise for each quasar, at a level drawn from a simple lognormal model. The extra information comes from a smaller estimated covariance in the first-order wavelet power, and from second-order wavelet coefficients which probe non-Gaussian information in the forest. Forecast constraints on cosmological parameters from the WST are as much as an order of magnitude tighter than for the power spectrum. Should these constraints be confirmed on real data, it would substantially improve cosmological constraints on, for example, neutrino mass. ## I Introduction Many current and future cosmological surveys gain their constraining power from observations of small scales where structure is nonlinear. It is important to find ways to extract more cosmological information from these scales, so as to take full advantage of these observations. While the two-point correlation function (or power spectrum) is optimal for a Gaussian field, as predicted by perturbation theory on large scales, non-linear density fields are non-Gaussian. There may thus be substantial information within the survey that can only be captured by summary statistics other than the power spectrum. An important probe of small-scale structures is the Lyman-\(\alpha\) forest, absorption features of neutral hydrogen. The Lyman-\(\alpha\) forest allows us to explore fundamental questions about the Universe, such as the nature of dark matter, the total neutrino mass, and the physics of the inflationary era [1, 2, 3, 4]. It can also answer important astrophysical questions such as the thermal history of the intergalactic medium (IGM) [5]. However, as the Lyman-\(\alpha\) forest probes the non-linear regime, hydrodynamic simulations are needed to accurately predict its behavior under different cosmological parameters. There are a wide variety of Lyman-\(\alpha\) forest surveys, from lower resolution, larger sample surveys such as the Sloan Digital Sky Survey (SDSS) to higher resolution, smaller sample surveys such as those conducted by [6], [7], and [8]. The flux power spectrum constructed from the SDSS extended Baryon Oscillation Spectroscopic Survey (eBOSS) spectra probes redshifts of \(2.2-4.6\), while higher resolution surveys reach up to \(z=5.4\). Newer surveys, such as the Dark Energy Spectroscopic Instrument (DESI) [9, 10] and WEAVE-QSO [11], will vastly increase the sample size and aim to measure the flux power spectrum to an accuracy of a few percent at smaller scales and higher redshifts. Higher-order n-point functions have been used to attempt to extract cosmological information [12, 13, 14, 15], including for galaxy number and weak lensing surveys [e.g. 16, 17]. While useful for theoretical predictions and detecting weak deviations from Gaussianity, they suffer from increasing variance and reduced robustness to outliers in real data [18]. 
In highly non-Gaussian distributions with heavy tails, as we move further away from the center of the distribution towards higher orders, there can be significant fluctuations or noise in the data that make it difficult to extract meaningful features related to interactions and correlations. These fluctuations become more pronounced at higher orders and dilute many important features [19]. Here we investigate using the wavelet scattering transform for Lyman-\(\alpha\) forest cosmology. The wavelet scattering transform (WST) was initially proposed by Mallat for signal processing [20]. In recent years, it has been used successfully in, for example, audio signal processing, audio classification [21] and texture classification [22]. Each order of the WST is fully deterministic, mathematically robust, has clearly interpretable structures, and does not require any training of the model [23, 24]. The WST convolves signals with a complex-valued wavelet at different scales, selecting Fourier modes around a central frequency and thus coarsely separating information of different scales. It then takes the modulus of the result of this convolution operation. While this first order WST is conceptually similar to the power spectrum, an important strength of the WST is that it may be applied repeatedly to extract higher order summary statistics. The WST has been applied to weak lensing shear maps, where it has been shown to outperform the power spectrum and achieve similar constraining power to neural network techniques [25]. While forecasts suggest that Convolutional Neural Networks (CNNs) can improve cosmological inference over two-point statistics, particularly for weak lensing data [26], other forecasts have suggested that scattering architectures are able to outperform a CNN [27], and are more interpretable [28]. The WST has also been applied to mock 21-cm data, where it was found to outperform the 3D spherically averaged power spectrum at constraining astrophysical parameters [29]. Finally, 3D WST has been shown to preserve 50% more of the information in an idealised mock density field than the marked power spectrum [30]. A wavelet power spectrum was first applied to the Lyman-\(\alpha\) forest by Ref. [31], and used to extract thermal history information. More recent work has found that the constraints on thermal history from the wavelet power spectrum are similar to those from the flux power spectrum [32]. We make forecasts for an extended wavelet-based analysis which both considers cosmological parameters and uses higher order WST coefficients to extract non-Gaussian information. We use the Fisher information matrix to make forecast constraints from a mock survey for both the WST and the power spectrum. Our analysis suggests a novel approach for extracting high-order cosmological information from the Lyman-\(\alpha\) forest, which may be applied in future to obtain better parameter estimates. The letter is structured as follows: in section II, we discuss the Lyman-\(\alpha\) forest simulations that we use. We give an overview of the WST and power spectrum in section III, followed by the Fisher matrix formalism that we use for this work in section IV. We present our results and discuss them in section V and draw our final conclusions in section VI. ## II Mock data for analysis We use 10 hydrodynamic simulations, described fully in Ref. [33]. 
Since this paper is aimed at making forecasts for the constraining power of different summary statistics rather than making a comparison to observational data, we use relatively fast, small simulations containing \(2\times 256^{3}\) dark matter and Smoothed-Particle Hydrodynamics particles within a 40 Mpc/h box, shown in Fig. 1. We chose to use these smaller simulations because for them we possess a sample of simulations which are identical except for a change in a single parameter. Our larger simulations from Ref. [34] use a Latin Hypercube design which would make our analysis more complex. Simulations were performed with the massively parallel code MP-Gadget [35]. The gravitational dynamics of the simulations are calculated using a Fourier transform on large scales and on small scales by walking a distributed Barnes-Hut tree structure [36]. We use the same transfer function for dark matter and baryonic particles and initialise at \(z=99\). Each of the simulations generates snapshots from \(z=4\) to \(z=2.2\) with a spacing \(\Delta z=0.2\), matching the redshift bins of the eBOSS survey. These snapshots are then post-processed by our artificial spectral generation code "fake spectra" [37] to generate 32 000 Lyman-\(\alpha\) absorption spectra parallel to the x-axis of the simulation box. To produce spectra which more closely approximate a realistic survey, we then add Gaussian noise and smooth the simulated spectra with a Gaussian filter whose width corresponds to the spectral resolution of eBOSS. Our simulation suite varies 4 parameters. We wish to understand how the WST varies when each parameter is changed. We thus use simulations which vary one parameter at a time, generating a total of ten sets of spectra for different input parameters. Our parameters include the spectral index, \(n_{P}\), and amplitude, \(A_{P}\), of the primordial power spectrum. The power spectrum is given by the equation: \[P(k)=A_{\rm P}\left(\frac{k}{k_{P}}\right)^{n_{P}-1}, \tag{1}\] where \(k_{P}\) is a reference scale (in this case, \(k_{P}=2\pi/8=0.78\) h/Mpc), and \(A_{\rm P}\) and \(n_{P}\) control the amplitude and spectral tilt of the power spectrum, respectively. Our model also adds parameters to model uncertainty in the thermal history of the IGM [38]. We rescale the photo-heating rate by a density-dependent factor, \(\tilde{\epsilon}=H_{A}\epsilon\Delta^{H_{S}}\). Here \(\epsilon\) is the photo-heating rate, \(\Delta\) is the overdensity of the IGM, \(H_{A}\) controls the IGM temperature at mean density, and \(H_{S}\) controls the slope of the temperature-density relation. Figure 1: Rendering of gas particles, color-coded by temperature, for a \(40\,{\rm Mpc}/h\times 10\,{\rm Mpc}/h\times 10\,{\rm Mpc}/h\) slice of our fiducial simulation box. The parameters used were \(n_{P}=0.897\), \(A_{P}=1.9\times 10^{-9}\), \(H_{S}=-0.3\), \(H_{A}=0.9\). We add an extra parameter for the observed mean flux in the forest, which is proportional to the overall ionization fraction of neutral hydrogen. We rescale our spectra to have the same mean flux by multiplying the optical depth in each spectral pixel by a constant factor. The mean optical depth follows the power law redshift evolution from Ref. [39]: \[\tau=2.3\times 10^{-3}(1+z)^{3.65}. \tag{2}\] ## III WST and 1D flux power spectrum For our Lyman-\(\alpha\) spectra, the input flux field \(\mathcal{F}(x)\) is a 1D array along a sightline with velocity coordinate \(x\). The 1D flux power spectrum is defined as the square of the absolute value of each frequency component. 
So, for a spectrum \(\mathcal{F}(x)\), the power spectrum is \[P(k) =\left\langle|\widehat{\mathcal{F}}(k)|^{2}\right\rangle \tag{3}\] \[=\left\langle|\mathcal{F}(x)\star\psi_{k}(x)|^{2}\right\rangle \tag{4}\] where \(\psi_{k}=e^{-ikx}\) and \(\left\langle\cdot\right\rangle\) denotes an average over all spectra in the box. We define the WST coefficients via recursive convolution of the field \(\mathcal{F}(x)\) with a series of wavelets characterized by the number of wavelets per octave (denoted by Q) and their scale (denoted by j). Each wavelet in the wavelet family maintains an identical shape but varies in scale and orientation. After convolution, we take the modulus of the generated fields and use low pass filters to smooth out the high-frequency components of the signal. We set \(Q=1\) to employ a dilation factor of 2, making the wavelet's real-space size roughly equivalent to \(2^{J}\) pixels [24], where \(J\) denotes the largest physical scale possible. By averaging these resultant fields, we obtain scattering coefficients that describe the statistical characteristics of the input field. Up to second order, the scattering coefficients are defined as: \[S_{0} =\left\langle\mathcal{F}(x)\star\phi[0]\right\rangle \tag{5}\] \[S_{1}\left(j_{1}\right) =\left\langle|\mathcal{F}(x)\star\psi^{j_{1}}|\star\phi[1]\right\rangle \tag{6}\] \[S_{2}\left(j_{1},j_{2}\right) =\left\langle||\mathcal{F}(x)\star\psi^{j_{1}}|\star\psi^{j_{2}}|\star\phi[2]\right\rangle \tag{7}\] Here \(\psi^{j}\) are Morlet wavelet filters and \(\phi[n]\) are low-pass filters, where n denotes the order of the scattering transform, which downsample the field. We compute the coefficients for \(0\leq j\leq J\), take their modulus, and average over all sightlines, as denoted by \(\left\langle\cdot\right\rangle\). Lower '\(j\)' values correspond to smaller scales, meaning the wavelets oscillate more rapidly and capture more detailed small-scale structures. Higher-order scattering coefficients are computationally expensive [40], so we limit ourselves to scattering coefficients up to the second order. We set \(J=5\), as we found that this extracted most of the information from the simulations. The zeroth-order coefficient is acquired by applying a low-pass filter, \(\phi[0]\), to the input field, \(\mathcal{F}(x)\). It thus represents the local average of the field, which, for the Lyman-\(\alpha\) spectra, corresponds to the local intensity of the transmitted flux. We expect this to be highly degenerate with the mean flux and continuum fitting routine, and so to be conservative we focus on the first and second order coefficients. We use the publicly available KYMATIO package for generating 1D scattering coefficients with complex-valued Morlet wavelets [41]. Conventionally, following the convolution, scattering coefficients are downsampled by \(2^{j}\). After a low-pass filter, they are further downsampled by \(2^{(J-j)}\) before inverse Fourier transforming back to real space, ensuring all coefficients share the same coarsest resolution, achieved by being downsampled by \(2^{J}\). Figure 2: Dependence of the scattering coefficients described in Eq. 5 on changes in the mean flux. The top panels show the first and second order scattering coefficients and the bottom panels show the fractional change in mean flux with respect to the observed model. We see about a 2.5% change in scattering coefficients as the value of the mean flux changes by 10%. Figure 3: Change in the wavelet scattering coefficients defined in Eq. 5 as we change our four parameters: \(n_{P}\), \(A_{P}\), \(H_{S}\), \(H_{A}\). The top panels show the scattering coefficients and the bottom panels show the fractional change in the cosmological parameters with respect to our fiducial model, similar to Fig. 2. Our fiducial cosmology is defined as \(n_{P}=0.897\), \(A_{P}=1.9\times 10^{-9}\), \(H_{S}=-0.3\), \(H_{A}=0.9\). 
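To make the computation of Eqs. 5-7 concrete, the following minimal sketch (a rough illustration, not the exact pipeline used in this work) computes first- and second-order scattering coefficients for a batch of mock sightlines with the KYMATIO package; the random input array is a placeholder standing in for the simulated Lyman-\(\alpha\) spectra, while the settings \(J=5\) and \(Q=1\) follow the text.

```python
import numpy as np
from kymatio.numpy import Scattering1D

# Placeholder flux field: 100 mock sightlines of 2048 pixels each,
# standing in for the simulated Lyman-alpha spectra.
rng = np.random.default_rng(0)
flux = rng.random((100, 2048)).astype(np.float32)

# 1D scattering transform with J = 5 dyadic scales and Q = 1 wavelet
# per octave, keeping coefficients up to second order.
scattering = Scattering1D(J=5, shape=2048, Q=1, max_order=2)
coeffs = scattering(flux)        # shape: (sightlines, channels, 2048 / 2**5)

# Average over pixels and over all sightlines, as in Eqs. 5-7.
S = coeffs.mean(axis=(0, 2))

# meta()['order'] labels each channel as zeroth, first, or second order,
# so the zeroth-order coefficient can be dropped before further analysis.
order = scattering.meta()['order']
S1, S2 = S[order == 1], S[order == 2]
print(S1.shape, S2.shape)
```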
Second order coefficients are computed by applying a second convolution and modulus operation to the first order field (before low-pass filtering). A similar downsampling process to the first order coefficients is applied, downsampling the new field by \(2^{(j_{2}-j_{1})}\) during convolution with the first-order field and further downsampling by \(2^{(J-j_{2})}\) using the low-pass filter. The modulus converts fluctuations into their local strengths, which generally vary at lower frequencies than the original fluctuations. Thus, the modulus scatters information and energy from high frequencies to low frequencies [19]. The WST coefficients are Lipschitz continuous to deformation, meaning that similar fields differing by small deformations are also similar in terms of their scattering coefficient representation [24]. This makes the scattering characterization stable even when there are slight variations or noise in the data. ## IV Fisher matrix formalism We use the Fisher information approach to forecast the parameter constraints that we could achieve with a WST analysis. The Fisher matrix is defined from the second derivatives of the log-likelihood function, \(\ln\mathcal{L}(p|d,M)\), around the maximum likelihood location, where \(p\) is the parameter value of a model, and \(d\) is the data. Under this definition, we define the Fisher matrix as: \[F_{ij}=-\left\langle\frac{\partial^{2}\ln\mathcal{L}}{\partial p_{i}\,\partial p_{j}}\right\rangle=\frac{\partial\mathbf{S}}{\partial p_{i}}\cdot\Sigma^{-1}\cdot\frac{\partial\mathbf{S}}{\partial p_{j}}. \tag{8}\] Following Refs. [42] and [43], we model the covariance \(\mathbf{\Sigma}\) by adding Gaussian noise to the sightlines from our fiducial simulation. The continuum-to-noise ratio (CNR) for each simulated spectrum is sampled from a log-normal distribution, whose parameters are chosen to fit the noise in the DR16Q observed spectral sample 1[44]. We find that: Footnote 1: [https://live-sdss4org-dr16.pantheonsite.io/algorithms/qso_catalog](https://live-sdss4org-dr16.pantheonsite.io/algorithms/qso_catalog) \[\log(\text{CNR})\sim\mathcal{N}(\mu=0.53,\sigma=0.36) \tag{9}\] Each spectrum \(i\) has a CNR value, \(\text{CNR}_{i}\), drawn from the distribution in Eq. 9. Gaussian noise, \(\epsilon_{i}\), realising \(\text{CNR}_{i}\) is then added to each pixel: \[\mathcal{F}(x)_{j}^{\prime} =\mathcal{F}(x)_{j}+\epsilon_{i}, \tag{10}\] \[\epsilon_{i} \sim\mathcal{N}(\mu=0,\sigma^{2}=\text{CNR}_{i}). \tag{11}\] In this work, we do not consider the errors from incomplete QSO continuum fitting as they are subdominant to the CNR [42]. A comprehensive treatment of continuum errors is deferred to future work. We generate the covariance matrix \(\Sigma\) by taking the covariance of the scattering coefficients (or power spectrum) over noise realizations, defined as: \[\Sigma_{jk}=\frac{1}{N-1}\sum_{i=1}^{N}\left(X_{ij}-\bar{X}_{j}\right)\left(X_{ik}-\bar{X}_{k}\right)\,. \tag{12}\] Here \(X\) is the summary statistic, either the WST coefficients or the 1D flux power spectrum. We use 2000 random noise realizations. 
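As a rough illustration of how Eqs. 8-12 could be assembled in practice, the sketch below builds a noise-realization covariance and a finite-difference Fisher matrix. All array contents, derivative step sizes, and the per-spectrum noise level (taken here as 1/CNR) are illustrative placeholders rather than the values or the exact estimator used in this work; the `summary_statistic` function stands in for the averaged WST coefficients or the binned flux power spectrum.

```python
import numpy as np

def summary_statistic(spectra):
    # Placeholder summary: mean flux in 8 coarse bins, averaged over sightlines;
    # in practice this would be the WST coefficients or the binned 1D flux
    # power spectrum.
    return spectra.reshape(spectra.shape[0], 8, -1).mean(axis=(0, 2))

def add_noise(spectra, rng):
    # Per-spectrum CNR drawn from the lognormal model of Eq. 9; the per-pixel
    # noise amplitude 1/CNR is an assumption of this sketch.
    cnr = np.exp(rng.normal(0.53, 0.36, size=spectra.shape[0]))
    return spectra + rng.normal(size=spectra.shape) / cnr[:, None]

rng = np.random.default_rng(1)
fiducial = rng.random((200, 1024))                 # placeholder sightlines

# Covariance of the summary statistic over noise realizations (Eq. 12).
realizations = np.array([summary_statistic(add_noise(fiducial, rng))
                         for _ in range(2000)])
cov = np.cov(realizations, rowvar=False)

# Finite-difference derivatives dS/dp from the one-parameter-at-a-time runs.
# plus[i] / minus[i] would be summaries of simulations with parameter i shifted
# by +/- delta[i]; random placeholders are used here.
delta = np.array([0.01, 1e-10, 0.05, 0.05])        # steps for n_P, A_P, H_S, H_A
plus = rng.random((4, realizations.shape[1]))
minus = rng.random((4, realizations.shape[1]))
dS_dp = (plus - minus) / (2.0 * delta[:, None])

# Fisher matrix (Eq. 8) and per-parameter errors, neglecting degeneracies
# as in the text (sigma_X = 1 / sqrt(F_XX)).
F = dS_dp @ np.linalg.solve(cov, dS_dp.T)
errors = 1.0 / np.sqrt(np.diag(F))
print(errors)
```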
## V Results In this Section, we describe the sensitivity of the WST and the 1D flux power spectrum to the parameters \(A_{P}\), \(n_{P}\), \(H_{S}\) and \(H_{A}\), as well as the mean flux \(\tau_{\text{eff}}\). The scattering transform with \(J=5\) generates a total of 19 scattering coefficients (1 zeroth order, 6 first order, and 12 second-order coefficients). We do not include the zeroth order coefficient in our analysis. The zeroth order coefficient is the local average of the field and so in real observational data would likely be dominated by continuum fitting. Figure 2 shows the sensitivity of the WST to changes in the mean flux. The mean flux is set by post-processing the spectra. We computed the scattering transform with a mean flux offset by 10%, and looked at the first and second-order coefficients. The top panel shows the scattering coefficients and the bottom panel shows the fractional change in the WST coefficients. We find that for a 10% change in \(\tau\), the first and second-order WST coefficients each change by around 2.5%. For a similar change in the mean flux, the power spectrum changes by about 7%. Figure 3 shows how the first and second order scattering coefficients are affected as we change the cosmological parameters of our simulations. As in Figure 2, the top panel shows the scattering coefficients and the bottom panel shows the fractional change in the WST coefficients with respect to the fiducial model. We see that changes in cosmological parameters indeed affect the scattering coefficients, signifying the presence of cosmological information. Next, to compare the performance of the WST with the power spectrum, we calculated the Fisher matrix as discussed in Section IV. Figure 4 shows that the WST coefficients contain more information than the power spectrum alone, across the four cosmological parameters discussed in this paper: \(n_{P}\), \(A_{P}\), \(H_{A}\), \(H_{S}\). Even though the power spectrum and first-order WST coefficients are structurally similar, the first-order coefficients outperform the power spectrum. This may be because the power spectrum uses Fourier modes which are delocalized in real space and therefore lose spatial information. The WST, on the other hand, uses localized kernels and captures information about both frequency and location. We find that \(|\Sigma|\) for the power spectrum is larger than for the scattering transform, implying that the WST is less sensitive to noise. Since the wavelet covers the whole Fourier space, the mean of the generated first order field, \(I_{1}\), characterizes the average amplitude of Fourier modes but does not provide information about how these modes are spatially distributed or interact with each other. However, the field \(I_{1}\) itself contains information on both amplitude and phase interactions. So, when we apply a subsequent convolution to get second-order coefficients, we are able to capture the clustering behavior of the field due to the phase interaction encoded in \(I_{1}\). The second-order fields contribute about 50% extra information to the Fisher matrix. We use the Fisher matrix to summarize uncertainties in the cosmological parameters, via \(\sigma_{X}=1/\sqrt{F_{XX}}\) for a parameter \(X\). We find that the power spectrum can constrain the primordial slope \(n_{P}\) with an error of \(\sigma_{n_{P}}=0.178\). In contrast, the WST yields an improved constraint of \(\sigma_{n_{P}}=0.0059\). 
We observed similar enhancements in constraints for the other parameters, as summarized in Table 1. These constraints are generated using a Fisher matrix rather than a full likelihood calculation. Our constraints using the flux power spectrum are within an order of magnitude of the constraints found in Ref. [45] using the 1D flux power spectrum from the eBOSS Lyman-\(\alpha\) forest survey. However, that analysis includes more and larger scales, and the full redshift range from \(z=2.2\) to \(z=4.6\). Our Fisher matrix estimates are thus likely over-tight for both summary statistics, perhaps because the Fisher matrix approach neglects parameter degeneracies. \begin{table} \begin{tabular}{c c c} \hline Parameter & Power Spectrum & WST \\ \hline \(n_{P}\) & 0.178 & 0.0059 \\ \(A_{P}\times 10^{-9}\) & 0.503 & 0.0079 \\ \(H_{S}\) & 0.207 & 0.0049 \\ \(H_{A}\) & 0.258 & 0.0062 \\ \hline \end{tabular} \end{table} Table 1: Fisher matrix errors on cosmological parameter constraints obtained from both the power spectrum and the combined WST coefficients. The right panel of Figure 4 shows the redshift dependence of both summary statistics. The constraining power of both the WST coefficients and the flux power spectrum varies similarly with redshift. We find that redshift 3 contains the most information in both the WST and the flux power spectrum for all parameters except \(H_{A}\), which controls the mean IGM temperature and is most constrained by \(z=2\) data. Figure 4: (Left) Fisher information from the power spectrum and wavelet scattering transform at \(z=2\) for our four parameters. For the WST coefficients, we divide them into the contribution from first and second order coefficients, with second order coefficients generally being more constraining. (Right) The Fisher information for the combined WST coefficients and flux power spectrum for different redshift bins at \(z=2,3,4\). ## VI Conclusion In this letter, we have examined the power of the wavelet scattering transform (WST) to constrain cosmological parameters using the Lyman-\(\alpha\) forest. The wavelet basis of the WST is more local in real space than the Fourier basis of the flux power spectrum. A second advantage of the WST is that it can be applied iteratively, and the second order transform contains independent information on non-Gaussianities in the field. Using a simplified Fisher matrix analysis, we show that the first-order WST is more constraining than the flux power spectrum. We further show that the second-order WST coefficients contain more information than the first-order coefficients, indicating that they capture information in the non-Gaussian shape of the spectra. The sensitivity of the scattering coefficients to changes in the mean flux is smaller than the sensitivity of the flux power spectrum to the mean flux. We also explored the redshift dependence, showing that it was often similar for the WST and the flux power spectrum. Overall, our Fisher matrix forecasts suggest that an order-of-magnitude improvement in the constraints on the cosmological parameters \(n_{P}\) and \(A_{P}\) is achievable from the Lyman-\(\alpha\) forest using the WST. This would translate into tighter constraints on the scientific goals of the survey, such as neutrino mass and inflationary running. Our Fisher matrix formalism does not account for degeneracies between the cosmological parameters, and thus our overall constraints are optimistic. Ref. [45] discusses the parameter degeneracies for the 1D flux power spectrum, in a model closely related to ours. 
The largest correlation was between the mean optical depth \(\tau\) and the primordial power spectrum amplitude \(A_{P}\), with a correlation coefficient of \(-0.7\). Most other correlations were relatively small, with a correlation coefficient \(<0.4\). Our purpose here is to compare the WST as a summary statistic to the flux power spectrum; it is likely that the parameter correlations will be similar between the two statistics, and so our main conclusion that the 1D WST provides information not captured by the 1D flux power spectrum should be robust. Our model for adding error to the spectra (see section IV) was simplistic. For example, one of the systematic uncertainties of Lyman-\(\alpha\) analysis comes from the knowledge of the redshift-dependent spectral resolution of the instrument, which we neglect. The presence of these and other systematic errors in real data will change our presented constraints, although there is no a priori reason to suppose they should affect the WST more than the flux power spectrum. A robust analysis of systematic errors will be necessary to create a WST estimator suitable for parameter constraints from future surveys. These findings carry profound implications for future cosmological studies. As we continue to refine our understanding of the Universe, we require increasingly nuanced methods for analyzing data. The performance of the WST observed in this study suggests that it will be an effective tool for future research, including experiments such as the Dark Energy Spectroscopic Instrument (DESI) and the WEAVE-QSO survey. Since these experiments will generate an unprecedented volume of high-quality data, it is essential to refine our tools and methods to take full advantage of these ambitious experiments. The results of our research provide a promising step in this direction. However, to constrain cosmological parameters from these surveys, a variety of future work is necessary. Most importantly, we will need to construct a maximum likelihood estimator for the WST coefficients from noisy and sparse data. In addition, our Fisher matrix forecast is simplistic and we will in future work investigate the WST more thoroughly, using an emulator and Markov Chain Monte Carlo analysis. ###### Acknowledgements. SB was supported by NASA-80NSSC21K1840. MQ was supported by NSF grant AST-2107821 and the UCOP Dissertation Year Program. MFH is supported by a NASA FINESST grant No. ASTRO20-0022. Computing resources were provided by Frontera LRAC AST21005. The authors acknowledge the Frontera computing project at the Texas Advanced Computing Center (TACC) for providing HPC and storage resources that have contributed to the research results reported within this paper. Frontera is made possible by National Science Foundation award OAC-1818253. URL: [http://www.tacc.utexas.edu](http://www.tacc.utexas.edu). Analysis computations were performed using the resources of the UCR HPCC, which were funded by grants from NSF (MRI-2215705, MRI-1429826) and NIH (1S10OD016290-01A1).